|
Nice robot. Any further advancement in this field will definitely lead to a completely independent robot (of course not one equipped with a deadly machine gun) that can go around and learn about its environment. Nice link
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
I would argue that in this case Qbo did NOT pass the "mirror test"[^].
To do that Qbo would have to be shown a picture of "himself" and learn it and be able to differentiate between the picture and the reflection. (The training would have to be done very carefully so that a response indicating self-awareness could not be constructed just from a phrase assembly algorithm.)
There's nothing in the video of Qbo's response that indicates an understanding that what "he" saw was not a "representation" of him (a picture), but was actually himself.
|
|
|
|
|
I agree. Q'bo can now (after this training) understand or recognize that he is seeing a "Q'bo" in front of him, but the verbal output of "This is me" is nothing but a string variable they used during compilation.
If the robot could have seen that the object in the mirror was performing the exact same actions as it was, at the exact same time, and made the leap to understanding that the object it was seeing was itself, THEN it would have reached a major milestone towards self-awareness (a rough sketch of that idea follows below). A baby isn't told that what it is seeing in a mirror is itself; you can actually watch the comprehension wash over the baby's face as it makes this realization.
This is the major difference in how a computer learns versus how a human learns, I believe. Humans have the ability to make these "leaps" in their learning and understanding of stimuli; computers lack that ability at this point in time.
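For what it's worth, the mechanical half of that test is easy to sketch; the leap is the hard part. Below is a toy illustration of the "same actions at the same time" idea (entirely my own invention, not anything Qbo actually does, and every name in it is made up): correlate the robot's own motor commands with the motion it observes, and treat a near-perfect match as evidence that it is watching itself.

```python
# Toy sketch: decide "is that me in the mirror?" by correlating my own
# motor commands with the motion I observe. Purely illustrative; the
# signals and numbers are invented for this example.

def self_recognition_score(my_commands, observed_motion):
    """Pearson-style correlation between what I did and what I saw."""
    n = len(my_commands)
    mean_c = sum(my_commands) / n
    mean_o = sum(observed_motion) / n
    cov = sum((c - mean_c) * (o - mean_o)
              for c, o in zip(my_commands, observed_motion))
    var_c = sum((c - mean_c) ** 2 for c in my_commands)
    var_o = sum((o - mean_o) ** 2 for o in observed_motion)
    if var_c == 0 or var_o == 0:
        return 0.0
    return cov / (var_c * var_o) ** 0.5

# My arm movements over ten time steps, and what I saw.
commands = [0, 1, 3, 2, 5, 4, 4, 2, 1, 0]
mirror   = [0, 1, 3, 2, 5, 4, 4, 2, 1, 0]   # tracks me exactly
picture  = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]   # a static photo never moves

print(self_recognition_score(commands, mirror))   # ~1.0 -> "that's me"
print(self_recognition_score(commands, picture))  # 0.0  -> "not me"
```

Of course, a high score here is still just arithmetic; the interpretation, the comprehension washing over the face, is exactly the part we can't code.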
|
|
|
|
|
Binding 100,000 items to a list box can be just silly ... no way, that can be only silly!
|
|
|
|
|
Just trying to see what that has to do with thinking outside the box.
|
|
|
|
|
Asking if computers think is like asking if submarines swim.
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
|
|
|
|
I like that thought, but it's not computers but computer programs that are in question. If our creativity is a result of neural computations, can't we give computer programs the same creativity by emulating those computations? The brain must use some algorithm or set of algorithms to generate what we call intelligence and self-awareness. Though I'm not sure about that.
|
|
|
|
|
We can give computers similar creativity. By their nature, computers may require a different approach to achieve intelligence and creativity.
While computers are currently serial in nature (with limited parallelism), the brain is massively parallel, with many millions of neurons working simultaneously.
The brain also seems to employ both discrete and continuous forms of knowledge representation and processing. Neurons fire at various frequencies (continuous), with continuous impulse levels from other neurons and continuous thresholds. However, a single neuron firing is a discrete event.
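To make that concrete, here is a minimal sketch (my own toy illustration with arbitrary constants, not a claim about how any real system models neurons) of a leaky integrate-and-fire neuron: the potential accumulates continuously, but the spike itself is an all-or-nothing discrete event.

```python
# Minimal leaky integrate-and-fire neuron: continuous accumulation of
# input, discrete firing events. All constants are arbitrary toy values.

def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus  # continuous integration
        if potential >= threshold:               # discrete all-or-nothing event
            spikes.append(t)
            potential = 0.0                      # reset after firing
    return spikes

# A weak steady stimulus fires occasionally; a strong one fires often.
print(simulate([0.3] * 20))  # [3, 7, 11, 15, 19]
print(simulate([0.8] * 20))  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
```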
With such radically different architectures, it's natural to expect different algorithms may be appropriate to produce intelligence.
Whatever approach turns out to be successful, we can expect computers to eventually be millions of times faster than humans, since their hardware is extensible. A future society may need to build limitations into intelligent computers in positions of power to prevent them from ruling us. Sort of like Isaac Asimov's Three Laws of Robotics.
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
|
|
|
|
Now that I like.
|
|
|
|
|
Our intelligence is not a result of computations. We are consummate pattern-matchers. Here is a really simplified version of how it goes: When we perceive something, it causes a certain bunch of sensory neurons to fire, which correspond directly to that perception. The neurons connected to those sensory neurons fire in turn if they recognize a pattern there -- for example, some neurons only fire if they see a vertical bar traveling from left to right, or other specific patterns like that. Then the next level of connected neurons fire if they recognize a particular pattern in the level before them, and so forth. We learn by building up patterns of patterns. The match to a pattern pops up automatically, or in other words, perceiving and recalling a matching previous pattern happen because the perception and the recall are linked by sharing the same set of neurons in the middle.
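A crude way to picture that layering in code (nothing like a real brain, just a toy I made up, with invented "patterns"): low-level detectors fire on the raw input, and a higher-level detector fires only on a particular combination of the detectors below it, never looking at the raw input itself.

```python
# Toy two-level pattern hierarchy over a 3x3 "image".

image = [
    [1, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
]

def vertical_bar(img, col):
    """Level-1 detector: fires if an entire column is lit."""
    return all(row[col] == 1 for row in img)

def horizontal_bar(img, row):
    """Level-1 detector: fires if an entire row is lit."""
    return all(cell == 1 for cell in img[row])

def u_shape(img):
    """Level-2 detector: fires on a pattern of level-1 firings only."""
    return (vertical_bar(img, 0) and vertical_bar(img, 2)
            and horizontal_bar(img, 2) and not horizontal_bar(img, 0))

print(u_shape(image))  # True: the higher level recognized a "U"
```

Each added level recognizes patterns of the patterns below it, which is the "patterns of patterns" stacking described above.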
For an example of how this works, take driving. When you first got behind the wheel as a kid, everything seemed very unfamiliar. All the knobs were confusing, and you probably had to concentrate to remember which pedal was which. You probably had trouble recognizing following distances and when to turn to fit into a parking space and that kind of thing. But with practice, your brain began to recognize and store the patterns of driving, until almost all of driving became subconscious pattern-matching -- the lines on the road should be at particular distances, the feel of the brake matches to how quickly or how slowly the car comes to a stop, et cetera -- we don't have to think about any of these things because they match stored patterns in our minds. We don't have to consciously think about anything unless it breaks our expectations. Unexpected or unknown things draw our attention because they defy the patterns we know.
In contrast, a computer is terrible at pattern-matching. Many, many man-years went into the Google search algorithm, but really, what it's doing is trying to mimic the natural human ability to glance over a list and recognize what you are looking for out of it. This very basic ability has to be painstakingly coded into the computer. If you lined up a bunch of toys and asked a preschooler to hand you the "meanest one," the preschooler will be able to match his or her idea of "meanness" to the various traits of the toys and decide which one is the most mean. The computer, on the other hand, has no ability to take the concept of "mean" and expand it to apply to a toy, *unless a human writes an algorithm designed to do just that.* In other words, computers are 100% dependent on programmers for their pattern-matching intelligence. Even computers that "learn" things only learn whatever it is the programmer told them to learn. They don't independently gather data about the world around them and apply it creatively, using thought and consideration; instead, they simply follow a strict set of rules determining what data they will gather, how they will gather it, how they will interpret it, and how they will regurgitate it later.
Computers have an opposite kind of intelligence from humans. Their intelligence is related to their ability to perfectly remember exact things. Humans are terrible at remembering exact things -- most people can't even remember the rules of grammar for writing their own native language, and we can even forget things that are very important to us, such as the phone number of that hot (chick|dude|not applicable) we met last night. Computers are good at exactly remembering and carrying out particular algorithms and equations; most humans struggle with algebra. In other words, we are not good at computing, we are good at recognizing things. Computers are not good at recognizing things, but they are good at computing.
So, to answer your original question, if we want to make computers creative in the same way that humans are creative, we have to change the very basics of how they work. The closest we have to a humanlike intelligence is in "neural networks," which mimic the neural-connectivity pattern-matching I was describing, sometimes in robots and sometimes in virtual worlds. Most current neural networks are about as smart as insects. This is because building an artificial neural network of the complexity of the human brain is currently not feasible, given the state of current technology.
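For the curious, here's about the smallest possible taste of that approach: one artificial neuron (a perceptron) that learns the AND pattern from examples rather than having an AND rule coded into it. It's a toy of my own, with an arbitrary learning rate and epoch count, and no real neural-network library works this simply.

```python
# A single perceptron learning logical AND from examples.

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)
        # Nudge the weights toward the examples -- no AND rule in sight.
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

for x, target in examples:
    print(x, "->", predict(x), "(expected", target, ")")
```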
|
|
|
|
|
Well, I just had to give you +5 for that. Yes, current computer programs are like you have described; what you described there is called the standard model of vision. I'm currently researching computer vision, and I'm trying to integrate figure-ground discrimination into the algorithms as efficiently as possible. The reason computers are bad at pattern matching is that programmers just haven't figured out how to efficiently tell a computer how to do that. We might not need new hardware: such algorithms can be hardware-accelerated using Graphics Processing Units, and introducing parallel processing and better methods/algorithms will start solving the problem of perception by computers. Don't blame the computers; blame us programmers for their shortcomings.
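Just to show how primitive the baseline really is (this is my own trivial toy, not a description of my actual research): the crudest figure-ground separation is nothing more than a brightness threshold.

```python
# Naive figure-ground separation by thresholding: every pixel brighter
# than the threshold is "figure", the rest is "ground". Real
# figure-ground discrimination is vastly harder than this.

image = [
    [10,  12,  11, 10],
    [11, 200, 210, 12],
    [10, 205, 198, 11],
    [12,  10,  13, 10],
]

threshold = 100  # arbitrary toy value

mask = [[1 if pixel > threshold else 0 for pixel in row] for row in image]

for row in mask:
    print(row)
# [0, 0, 0, 0]
# [0, 1, 1, 0]
# [0, 1, 1, 0]
# [0, 0, 0, 0]  <- the bright square is "figure", the rest "ground"
```

Everything interesting (occlusion, texture, context) is exactly what a rule like this can't capture, which is where the better algorithms come in.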
|
|
|
|
|
You are sort of missing the point of the conversation here. When I said "we have to change the very basics of how they work," I didn't mean the hardware. While those ant-like robots are the most humanlike in intelligence, scaling up that hardware to human size would be technically infeasible. I meant that, if the goal is to have computers have humanlike intelligence, then the whole way the system works would have to change, hardware and software both.
You said you are "trying to integrate figure-ground discrimination into the algorithms as efficiently as possible." In other words, the way the computer you are using "thinks" requires you to tell it what patterns exist, what to do when it sees them, what to do when it doesn't see them, et cetera, et cetera. This "intelligence" of precisely following a programmer's algorithm is simply not humanlike. The extent to which that computer is humanlike depends, as you said, on your own programming ability to impose a small part of *your* human intelligence onto the computer. The computer will only be able to mimic the small portion of your intelligence that you are able to give it, and not one whit more.
The program you eventually write will not be really doing the same thing you are doing in your own head at all. You are not following any kind of algorithm when you look at a cute kitten and say, "Awwwwww." It's simply that the way your perception works, the kitten triggered a sufficient amount of a pattern linked to the "Awwww" emotion to evoke enough of said emotion that you were made consciously aware of it, and following a pattern of "what I do when I feel sufficient awwwwwww," and said pattern not being overridden by other patterns (such as "what I do when I'm in front of my boss"), you triggered the pattern for saying the sounds of "Awwwwww" in a particular tone of voice.
This networking of patterns and behavior is the basis for humanlike intelligence. So, for a computer to be humanlike, we have to throw out our current programming techniques and start fresh with an attempt to create computer architecture (by which I mean both hardware and software) with a pliable, adaptable artificial network that "learns as it goes" not because some algorithm tells it what it should be learning, but because learning and processing are one and the same. That's how a human mind works: every time you perceive something, you reinforce or change patterns simply through the act of perception. The human mind is in a state of constant change. This is why we are adaptable, creative, and not very good at following unchanging sets of rules.
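To gesture at what "learning and processing are one and the same" could look like in code (a rough Hebbian-style sketch of my own, with arbitrary numbers, not a recipe for real intelligence): in the toy below, every act of perception also changes the connection weights, so there is no separate "learn from X" step at all.

```python
# Hebbian-style toy: perceiving IS learning. Each time two inputs are
# active together, the link between them strengthens as a side effect
# of processing; there is no separate training phase.

n = 3                              # three input "neurons"
weights = [[0.0] * n for _ in range(n)]
rate = 0.1

def perceive(pattern):
    """Process a pattern; strengthening co-active links is a side effect."""
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * pattern[i] * pattern[j]

# Repeatedly perceiving "neuron 0 and neuron 1 fire together" wires
# them together -- "cells that fire together, wire together".
for _ in range(10):
    perceive([1, 1, 0])

print(weights[0][1])  # ~1.0: a strong learned association
print(weights[0][2])  # 0.0: never co-active, so no association
```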
When a computer naturally changes from everything it does (and not just because a programmer has put detailed hundreds-of-pages "learn from X" code alongside all of the "do X" code), I will say that it has intelligence similar to a living creature. And when a computer is able to sit back and think about what it has learned and draw new conclusions from it, then I will say it has humanlike intelligence. Until then, as long as there is a human imposing a mimicry of his or her own intelligence behind the computer's behavior, it will still be nothing more than a complex regurgitator.
|
|
|
|
|
I get your view, "new hardware and software", but any machine can be simulated on a digital computer before it is made, so those pliable, adaptable artificial networks that "learn as they go" can be emulated on already existing digital computers with the right coding. That's the point I'm putting across here.
And one can also use already existing GPUs to accelerate the simulation.
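Part of why GPUs fit so well: a whole layer of simulated-neuron updates collapses into one matrix-vector product, which is the operation GPUs are built to parallelize. A minimal CPU-side sketch (toy numbers, plain Python, no actual GPU code):

```python
# One layer of a simulated network as a matrix-vector product. On a GPU
# the same multiply fans out across thousands of cores; the math is
# identical. Weights and inputs are arbitrary toy values.

weights = [
    [0.2, -0.5,  0.1],   # incoming weights of output neuron 0
    [0.7,  0.3, -0.2],   # incoming weights of output neuron 1
]
inputs = [1.0, 0.5, -1.0]

def step(x):
    return 1 if x > 0 else 0   # discrete firing decision

# Each output neuron's potential is one independent dot product --
# that independence is what makes the whole layer trivially parallel.
outputs = [step(sum(w * xi for w, xi in zip(row, inputs)))
           for row in weights]

print(outputs)  # [0, 1]
```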
|
|
|
|
|
That is by far the best description of AI that I have heard. I think you are totally right in saying that the approach must be different. In fact, the result will not be a computer at all, since a computer does what you tell it to. It is not the case that a self-aware entity will do as you ask. If I ask a lion to say cheese, it will most likely eat me instead.
|
|
|
|
|
Programs are already self aware. They get offended too.
Veni, vidi, vici.
|
|
|
|
|
My vote of 5. I think they are too, only that we haven't yet given them the ability to express themselves.
|
|
|
|
|
And when they get offended... nah! The hell with that.
|
|
|
|
|
Funny, huh? It seems we have been hurting programs' feelings without ever noticing. In the future even robots will have rights, so tell your child to be ready for such issues.
|
|
|
|
|
A computer will never be able to simulate the human brain "very accurately", IMHO.
What is your definition of self aware?
I think one of the biggest hurdles/road blocks to AI is choice and random thought. You wake up in the middle of the night and you are craving chocolate ice-cream, for no good reason - random thought. You decide not to get it because you are too tired - choice.
Note: many will argue the subject of randomness. I am not going debate that subject.
"the meat from that butcher is just the dogs danglies, absolutely amazing cuts of beef." - DaveAuld (2011) "No, that is just the earthly manifestation of the Great God Retardon." - Nagy Vilmos (2011)
"It is the celestial scrotum of good luck!" - Nagy Vilmos (2011)
"But you probably have the smoothest scrotum of any grown man" - Pete O'Hanlon (2012)
|
|
|
|
|
I think self-awareness is, in my opinion, the ability to recognize one's own presence: that "I am here".
One thing that intrigues me is that one can never be aware of something without neurons firing action potentials in the brain. To me, the aspect of pseudo-random thoughts has nothing to do with our ability to be aware of ourselves; I think anything with short-term memory is capable of self-awareness. If neural computations generate self-awareness, then any computational device with a memory or log of its actions is somewhat able to be self-aware, unless we introduce some supernatural issues here.
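To illustrate the "memory or log of its actions" idea (a trivial toy of my own, and obviously not a claim that this is real self-awareness): a program that records what it does can at least answer questions about itself.

```python
# Toy "self-model": an agent that logs its own actions and can report
# on them afterwards. Whether this counts as self-awareness is exactly
# the question under debate; the code itself is trivial.

class Agent:
    def __init__(self, name):
        self.name = name
        self.log = []

    def act(self, action):
        self.log.append(action)

    def report(self):
        return f"I am {self.name}. So far I have: {', '.join(self.log)}."

qbo = Agent("Qbo")
qbo.act("looked in the mirror")
qbo.act("said 'this is me'")
print(qbo.report())
# I am Qbo. So far I have: looked in the mirror, said 'this is me'.
```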
|
|
|
|
|
A computer is only "self aware" if you tell it to be ... it doesn't do anything unless you tell it to do it. A computer doesn't wake up in the morning and grab a cup of coffee and marvel at the beautiful day outside and the pretty bird chirping 75 feet away. That is being self-aware.
BupeChombaDerrick wrote: I think self-awareness is, in my opinion, the ability to recognize one's own presence: that "I am here".
I agree with this statement, to a point.
"the meat from that butcher is just the dogs danglies, absolutely amazing cuts of beef." - DaveAuld (2011) "No, that is just the earthly manifestation of the Great God Retardon." - Nagy Vilmos (2011)
"It is the celestial scrotum of good luck!" - Nagy Vilmos (2011)
"But you probably have the smoothest scrotum of any grown man" - Pete O'Hanlon (2012)
|
|
|
|
|
Slacker007 wrote: A computer doesn't wake up in the morning and grab a cup of coffee and marvel at the beautiful day outside and the pretty bird chirping 75 feet away.
Yeah, true that. At least you agree that a computer program can be "self aware" if you tell it to be. My vote of 5.
|
|
|
|
|
Slacker007 wrote: A computer will never be able to simulate the human brain "very accurately"
I have not yet in my fairly long life encountered a person who did not regret using the word "never" in public. I will be very interested in seeing if you might be the first, if I live long enough.
Will Rogers never met me.
|
|
|
|
|
Roger Wright wrote: encountered a person who did not regret using the word "never" in public.
I rarely use the word myself. However, in this context, I know I'm right. I will be the first to put my foot in my mouth if I am ever proven wrong.
Computers don't have emotion. We do. Thus, a computer will NEVER be like a human brain, ever.
"the meat from that butcher is just the dogs danglies, absolutely amazing cuts of beef." - DaveAuld (2011) "No, that is just the earthly manifestation of the Great God Retardon." - Nagy Vilmos (2011)
"It is the celestial scrotum of good luck!" - Nagy Vilmos (2011)
"But you probably have the smoothest scrotum of any grown man" - Pete O'Hanlon (2012)
|
|
|
|
|
Human brains exist as a part of the human body in order to help the human body survive and thrive. It's a mess of complex chemical reactions that we can hardly hope to comprehend.
Computers are chunks of metal and silicon designed to compute numbers. They have no concept of surviving or thriving, and they can be understood pretty well.
We should be about as worried about gigantic, complex computer systems becoming self-aware as gigantic, complex sewer systems becoming self-aware. You never know; one day, the toilets may rebel, throw all our turds back at us, and then humanity will truly be in deep sh*t.
|
|
|
|
|