|
AI is better where we can define a set of rules for a system and that system is governed by those and only those rules. But when it comes to the open world, it still has a loooong way to go.
I am not the one who knocks. I never knock.
In fact, I hate knocking.
|
|
|
|
|
|
It's all a matter of time...
|
|
|
|
|
Perfect question, as they have not included the time frame on the main page... on the results page you can see they say "during your lifetime"... again it fails, as I'll live forever...
|
|
|
|
|
How great would it be to have an AI that could review source code, apply machine understanding of design and architecture, be trained on prior EXPLOITS, and spend its time making sure that the OSes and programs are not exploitable.
To be trained to hack into systems, and then play against itself, like they trained one to play chess!
Once trained, it could not only analyze the code, but compile, run, and attack it.
THAT would be useful.
But the day you find it is leaving back doors... You'd have to put it down... Just Sayin'
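The attack/patch loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not a real harness: the "program" is just a set of inputs that crash it, the "attacker" tries every input, and every successful exploit gets patched.

```python
# Toy model of an AI that attacks a program and patches what it finds.
class ToyProgram:
    def __init__(self, vulnerabilities):
        # Stand-in for a real compiled target: these inputs "crash" it.
        self.vulnerabilities = set(vulnerabilities)

    def run(self, attack):
        # True means the input exploited the program.
        return attack in self.vulnerabilities

    def patch(self, attack):
        # Fix the bug the attacker just found.
        self.vulnerabilities.discard(attack)

def harden(program, input_space):
    """One attack/patch pass: the defender fixes whatever the attacker finds."""
    exploits = []
    for attack in input_space:
        if program.run(attack):
            exploits.append(attack)
            program.patch(attack)
    return exploits

prog = ToyProgram(vulnerabilities=[3, 42, 77])
found = harden(prog, range(100))
print(found)                 # [3, 42, 77]
print(prog.vulnerabilities)  # set(): nothing left to exploit
```

A real system would of course generate inputs with a learned model rather than enumerate them, but the shape of the loop is the same.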
|
|
|
|
|
Artificial intelligence by definition implies the ability to make an intelligent decision, not necessarily an INFORMED decision. And there is a significant difference.
An informed decision is weighing all of the options available and taking the path that is most likely to lead to success. An intelligent decision may very well create a new process path that didn't previously exist because it had never been thought of.
Can AI as it is currently being billed make decisions faster than people? Absolutely.
Can AI create a new paradigm for use by humans such as happened with true object oriented programming and visualization software? It hasn't so far... all of those originated with people.
|
|
|
|
|
Tim Carmichael wrote: An intelligent decision may very well create a new process path that didn't previously exist because it had never been thought of.
That's an interesting point. Has any AI anywhere ever had an IDEA?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
|
I once read the preface to the Cobol 60 standard, which stated that now we have a language that doesn't require any sort of special competence to use a computer for problem solving: Just state the problem in plain English, and it will be translated into what the computer needs to solve the problem.
In those days, "autocoders" were just as common a name as "compilers". The computer made the program itself!
Twenty years later, when I received my education, we had the same visions: Database queries would not require any qualifications once "Query By Example" became widespread. "Structured Programming" made it so simple to describe a solution that everybody could do it. (And then: Geek & Poke: Simply Explained - vintage edition).
Trivial things can be automated. What remains is to understand: What is your problem? Understanding the problem domain, understanding the customer's needs, is certainly hard enough for human intelligence. Having AI make a try at it is rather risky... A few days ago a colleague of mine spent half an hour brainstorming about how he should design a general archive for music information - this was a private home project. He is a full-time SW developer, so am I; he holds a Ph.D., I hold a Master's. After half an hour, he concluded: I think I have to go back and think about what I really want... I don't think that an AI system would know much better what he really wants!
Over the last fifty years, our work has gradually shifted from knowing which binary instruction codes and addressing modes to use, towards a larger element of problem description, leaving the menial tasks to the compiler. Maybe some day in the future we can claim that describing the problem is all we do - but we have tried to make that claim for at least forty years. Still we do coding (/programming). I believe we will still call it coding (/programming) twenty years from now.
|
|
|
|
|
Our computers have just entered the book club; they are now reading, for the first time in history. And a lot. Pretty soon they'll read a million books. They'll adjust their multi-level recurrent neural nets to what they've read - not even knowing whether it is true or not. And then, one day, you'll ask them: "What is the truth?" And they will tell you. Only YOU won't understand, because by optimizing their neurons they've found patterns and discovered connections you never knew existed.
A silent switch is taking place: from algorithmic thinking and programming towards training. Today's rocket landing, car driving, or language translating code is 1000 lines in length. The rest is training. We've given up searching for algorithmic solutions to problems. We now provide a computer forty hours of driving a car, just like your avg. teenager. Give it a certificate and let it loose on the road. Warranty? Well, forty hours of training. This is not your average evolution of programming languages, from Cobol via Clipper to .NET/SQL. This is a real paradigm shift.
Not only will computers write code and understand problems! Most of us will live to see the day when WE, the human race, are the fish in the tank.
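The "1000 lines of code, the rest is training" point can be made concrete with a toy illustration (nothing here resembles a real system): a dozen lines of gradient descent learn y = 2x + 1 purely from examples. The mapping is never written in the code; it ends up in the weights.

```python
# Minimal sketch: the code is short, the "knowledge" lives in w and b.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # training examples

w, b = 0.0, 0.0   # learned parameters, initially knowing nothing
lr = 0.01         # learning rate
for epoch in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x  # gradient step for the weight
        b -= lr * err      # gradient step for the bias

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

Scale the parameter count from two to millions and the training hours from seconds to weeks, and you get the shift the post describes.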
|
|
|
|
|
When I was a student, around 1980, two novels were "cult" books among the Comp.Sci students: The Adolescence Of P-1, by Thomas J. Ryan, and The Two Faces of Tomorrow by James P. Hogan (both were first published in 1979).
Both novels describe an intelligence that develops "autonomously", based on whatever it can obtain through sensors and by analyzing files. The stories are very different: "P-1" is just crazy fun, and the intelligence develops into something like your best buddy; almost 40 years later we find it laughable.
For writing "Two faces", the author had close contact with AI experts at Carnegie Mellon, and even today (I read the novel again a couple of years ago), it stands up as valid and durable from a professional point of view. Of course details are not "perfect", but essentially, the problems it addresses are still real, and the approach to handling them is valid.
The intelligence that develops in "Two faces" is very different from a human intelligence. That is a major issue in the novel, and it is very well defended by the text. I might believe that some AI could develop in this direction, but not in the direction of P-1! The question is whether we could make much use of the AI of "Two faces" as a replacement for human intelligence.
I believe both books are still in print - "Two faces" is offered by Amazon, for "P-1" you may have to accept a used copy. (But "Two faces" is the essential one - "P-1" is just for fun.)
Btw: James P. Hogan has written other novels of very high professional quality, seen from a computer science point of view. I would definitely recommend "Realtime Interrupt" - but be prepared: It might affect your sleep...
|
|
|
|
|
We're living it. P-1 is Microsoft's Tay, and the other thing destroyed 50 years of chess algorithms with 72 hours of learning. Two years ago Google also retired most of its statistics-based language translation algorithms (Markov based?) for a deep learning approach.
The means are convolutions for segmenting and image recognition, recurrence for movies, speech recognition to listen to what they say in a movie, internet for fact-checking the movie, etc. And deep learning to rule them all, to find them, to bring them all and in the darkness bind them.
Let's imagine these technologies integrated into the quantum mother of all deep networks™, 30 years from now.
And then ultimately answer the question -- will it be able to write a piece of C# and SQL, or not?
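For what it's worth, the two building blocks named above can be sketched in a few lines of plain Python (a toy illustration, not real network code): a 1-D convolution that detects local patterns, and a recurrent step whose state carries memory of earlier inputs.

```python
def conv1d(signal, kernel):
    """Slide the kernel over the signal (valid mode): local pattern detection."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def rnn_step(state, x, w_state=0.5, w_in=1.0):
    """One recurrent update: the new state mixes old state with new input."""
    return w_state * state + w_in * x

# The [1, -1] kernel acts as an edge detector on a step signal.
edges = conv1d([0, 0, 1, 1, 0, 0], [1, -1])
print(edges)  # [0, -1, 0, 1, 0]

# The recurrent state still "remembers" the first input three steps later.
state = 0.0
for x in [1, 0, 0, 0]:
    state = rnn_step(state, x)
print(state)  # 0.125
```

Deep networks stack thousands of such units and learn the kernel and mixing weights from data instead of hand-picking them.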
|
|
|
|
|
In the thread above this ("AI - no"), the commenter points out the difference between an intelligent decision and an informed decision.
That is exactly the opening scenario of "The two faces of tomorrow". The global network is making millions of minor and major decisions, which can always be justified and explained as an intelligent decision. It is just not according to common sense.
And how are we going to define common sense? My idea of common sense is quite different from my mother's, and teenagers of today think quite differently of common sense. It is not a well-defined concept. You cannot "learn" it. Sure you can learn a few elements, but not how to put it together matching my mother's expectations, my expectations or the teenager's.
Putting it together is done by socializing, so maybe, if we build human-like robots that participate in political discussions, fall in love, have opinions about which hamburger stand provides the best junk food, are left out of social life for two weeks because they catch the flu, and must struggle to understand the new relationships in the gang when they return - two couples have broken up, and this new guy has come in and taken over one of the girls, and ...
Those are the kinds of things affecting our ideas about common sense. And informed decisions - that is closely related. Until we can give the AI that sort of impulses, they won't get common sense. When/if that happens, then just as human cultures with conflicting ideas about common sense go to war against each other, different AI cores will develop different ideas of common sense and go to war against each other. (Again: Read "The two faces of tomorrow".)
And then: It doesn't take AI to create conflicts regarding common sense! To a programmer, it is common sense that John and john are two different persons. It is common sense that a password cannot consist of two words separated by a blank. A few years ago it was common sense that a file name could be at most eight letters; nowadays it is common sense not to name a file with ordinary slashes, backslashes, asterisks or question marks. If you don't think that is common sense, come here and I will explain common sense to you!
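Those bits of programmer "common sense" could even be written down as code - a hypothetical sketch, not any particular system's actual rules:

```python
# Hypothetical "programmer common sense" checks, matching the examples above.
FORBIDDEN = set('/\\*?')   # characters "everyone knows" not to put in file names

def same_user(a, b):
    # To a programmer it is obvious that John and john differ.
    return a == b          # case-sensitive comparison

def valid_password(pw):
    # "A password cannot consist of two words separated by a blank."
    return ' ' not in pw

def valid_filename(name):
    return not (set(name) & FORBIDDEN)

print(same_user("John", "john"))        # False
print(valid_password("correct horse"))  # False
print(valid_filename("report?.txt"))    # False
```

To anyone outside the profession, every one of these rules is arbitrary, which is exactly the point: common sense is local to the group that holds it.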
A story about common sense: In the 1980s, I was employed by a minicomputer company, making their own OS and file system. Each user had a flat directory of files, no directory hierarchy. Then a hierarchy was introduced: Files are put into folders, located in drawers in cabinets. (At that time, having a fixed number of levels was common sense and strongly favored by the users.) One of our major customers, a big publishing house, could not handle this: The users never could keep track of in which folder, drawer and cabinet they had stored their precious documents, and spent an inordinate amount of time searching for "lost" files.
They found a solution: At one of their internal User Group meetings (they were that many), one of the users told how she had decided to make a single cabinet, named "Cabinet", a single drawer, named "Drawer", and a single folder, named "Folder". With all the documents in this folder, nothing was ever misplaced or lost. And the User Group cheered: Great idea! And so it was. Everyone put everything into the same folder, and nothing was lost.
That was common sense to them. I very much doubt that an AI solution would have come up with anything remotely similar. Nor would I... but I accepted it as the right solution for those users.
|
|
|
|
|
Let's challenge common sense. Not common sense as a measure of average (because computers are quite good at averaging), but common sense as the final frontier of the universe. What if ... we really are just a special type of mammal? We know dogs have limitations. Regardless of how smart they are, no Fido will ever write Wagner's Götterdämmerung. It is simply beyond the reach of dogs' understanding to do so.
Thus ... what about things that are beyond the reach of understanding of men? Beyond analytical thinking?
Looking around, there's quantum mechanics. It is so spooky that - anecdotally - the smartest man on earth, Albert Einstein, struggled to believe in it. Or we can observe something simpler, like the speed of light. If you run on a train that goes just a bit slower than the speed of light, can you then be faster than the speed of light? Not really, because time will slow down just enough so that the limit is not breached.
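The train thought experiment has an exact formula behind it: special relativity's velocity-addition rule, w = (u + v) / (1 + uv/c²), which guarantees the combined speed never reaches c. A quick sketch, with the train and runner speeds picked purely for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def add_velocities(u, v):
    """Combine speeds u and v relativistically: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / C**2)

train = 0.99 * C   # train at 99% of c
run = 0.50 * C     # an absurdly fast runner, for illustration
combined = add_velocities(train, run)
print(combined < C)            # True: still slower than light
print(round(combined / C, 4))  # about 0.9967, not the naive 1.49
```

The naive sum 0.99c + 0.50c would exceed c; the denominator is exactly the correction the post's "time slows down just enough" describes.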
Now ... are these things common sense? Or do we simply /and hopefully/ strive to reduce them down to 'intuitive' as taught by millions of years of evolution? So if challenged by superior intelligence, could we really assume our common sense to be the reference point? What if they play Wagner's Götterdämmerung to us ... and we are merely a dog?
|
|
|
|
|
The thing with us developers is, we became lazy. I see people learning new patterns and idioms of C#, but not knowing anything about deep learning and the mathematics behind the new AI revolution. So here's a spoiler -- yes, it will replace you. Get on board, get informed, and start learning what it can do. We have had a few breakthroughs in the past years (even months) demonstrating that tomorrow's AI will not be the stupid search algorithms of the past decade. This time it IS different.
|
|
|
|
|
That's what she said!!
It's not what you do, it's how well you do it.
|
|
|
|
|
AI will just produce the software as specified, requiring none of the fumbling, errors, and design mistakes that a Developer Develops through.
|
|
|
|
|
So there are a bunch of us old farts who are confident we will kark it before AI gets relevant to us.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
+1
User: Technical term used by developers. See Idiot.
|
|
|
|
|
Yeah, but what about "age and treachery overcoming youth and skill"?
Software Zen: delete this;
|
|
|
|
|
Even the most treacherous of us old farts won't outlive most of the young whippersnappers.
If you think 'goto' is evil, try writing an Assembly program without JMP.
|
|
|
|
|
I remember the first program I wrote and ran on an Atari 800 with a floppy. No memory or speed by today's standards, yet it ran rings around everything on a 32 GB RAM, 2.x GHz machine. These youngsters didn't grow up on movies of computer AIs taking over the world.
|
|
|
|
|
It asks in Q/A for the codez
|
|
|
|
|
Let's get real here, people. The reason code is the way it is, is so that people can read and write the crap.
A true AI would create something entirely different, better, and suited to its way of accomplishing tasks, and it certainly won't resemble the ridiculous syntax and quirks we have to deal with in today's "modern" languages. And it will most likely be entirely unreadable by people.
Latest Article - Code Review - What You Can Learn From a Single Line of Code
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
|
|
|
|
Well, AIs won't need IDEs and debuggers, I suppose
|
|
|
|
|