|
AI does not exist, it's just a buzzword for statistics on data sets that are too large for humans.
That may be downplaying it a bit, but that's the essence anyway.
Scenarios where AI takes over the world are sci-fi.
Computers are not sentient.
So really, what is this AI we're supposed to protect ourselves against?
Unless, of course, you're going to give a computer the codes to nuclear missiles and use statist, sorry, AI, to decide whether or not to fire them.
No doubt we should have regulated Excel in the same way, yet we never did.
|
|
|
|
|
The previous large wave of AI, often associated with the Japanese '5th generation project' in the early 1980s, was quite easy to define: It was based on predicate logic, inference, the Prolog programming language ... It stood out as something clearly distinct and identifiable. Even earlier AI waves were identified by Lisp or pattern matching.
What distinguishes the current AI wave? "Big data"? How big? Is a terabyte enough to be intelligent, or does it take a petabyte? Maybe several petabytes?
Fifty years ago, people were convinced that a circuit of a billion transistors (if you could imagine such a circuit, which you probably couldn't) would most certainly develop its own self-awareness, personality and emotions. (Pamela McCorduck's 'Machines Who Think' was published in 1979, 43 years ago.) Today, we are equally convinced that petabytes are bound to grow into real AI.
Well, petabytes certainly are something, but I am far from convinced that they are 'intelligence'.
|
|
|
|
|
Are the datasets large?
How big is your brain's data set? 16-20 hours daily of nonstop audio/visual/touch/smell/taste input, plus 3 billion years of DNA changes.
Generated images: some millions of millions of image sets.
Bigness is relative.
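The "bigness is relative" point can be made concrete with a back-of-envelope calculation. The numbers below (sensory bandwidth, waking hours) are rough illustrative assumptions, not established measurements:

```python
# Order-of-magnitude sketch: how much raw sensory data might a human
# take in over a lifetime? All figures here are assumptions.

# Assumption: visual input dominates; optic-nerve bandwidth is often
# quoted at roughly ~10 Mbit/s. Treat that as the total sensory rate.
sensory_bits_per_second = 10_000_000

waking_hours_per_day = 18            # midpoint of the 16-20 hours above
seconds_per_day = waking_hours_per_day * 3600

bytes_per_day = sensory_bits_per_second * seconds_per_day / 8
bytes_per_year = bytes_per_day * 365
bytes_per_80_years = bytes_per_year * 80

print(f"per day:    {bytes_per_day / 1e9:.0f} GB")
print(f"per year:   {bytes_per_year / 1e12:.0f} TB")
print(f"per 80 yrs: {bytes_per_80_years / 1e15:.1f} PB")
```

Under these assumptions a lifetime of raw sensory input lands in the low petabytes, which is roughly the scale of today's large training sets; the brain, of course, does not store it raw.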
|
|
|
|
|
So the only regulation needed would be: "AI cannot supersede a human decision; a human will be held accountable for any decision made through AI".
So when the Social Credit system rolls in and sends people to gulags, and is then overthrown and another Nuremberg happens, the people who allowed it will be gently detached from their heads.
Also when some self-driving incendiary device inevitably kills someone on the road because their coat looked like the sky, so it didn't brake.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
"The video footage of the crime scene was analyzed by computer. It says that it recognizes you."
"But look into my face: that guy in the video is not me."
"The computer says it's you."
Such dialogues may easily happen thanks to the "intelligence" of the people using AI. In my opinion, that's the most important area for early regulation: hold people responsible when they decide to do something while using AI. Of course, first teach them about the things that can go wrong with AI - there are so many "ridiculous" examples available. Have them pass a test after the training, and only then let them use AI in sensitive areas.
Only afterwards comes regulation for situations where AI decides autonomously.
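The "hold the human responsible" rule described above could be supported technically as well as legally: every AI-assisted decision gets an audit record naming the human who signed off on it. A minimal sketch (the `AIDecisionRecord` class and its fields are hypothetical, not any real system's API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """Audit record: an AI suggestion is only acted on after a named
    human explicitly accepts or rejects responsibility for it."""
    ai_suggestion: str
    model_id: str
    operator: Optional[str] = None
    accepted: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def sign_off(self, operator: str, accept: bool) -> None:
        # The human decision supersedes the AI output either way;
        # the record preserves who decided, and what they decided.
        self.operator = operator
        self.accepted = accept

# The operator overrides the face-recognition "match" from the dialogue above.
record = AIDecisionRecord(ai_suggestion="match: suspect #42",
                          model_id="face-rec-v1")
record.sign_off(operator="Officer Smith", accept=False)
```

The point of the design is that the record is useless without a named operator, so "the computer says it's you" always traces back to a person who chose to believe the computer.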
Oh sanctissimi Wilhelmus, Theodorus, et Fredericus!
|
|
|
|
|
That's not even just AI, but the idiocy of people implicitly "trusting the computer". If you know you have money but the bank says "nuh-uh, the computer says you have none", you'd demand some sort of ledger accounting/review.
Too much already is just this ridiculous, with zero requirement that machine output be subject to explicit scrutiny.
|
|
|
|
|
For example, GDPR (as linked in the question) is a good idea - it forces developers (via the companies they work for) to adopt sensible practices around the handling of sensitive personal information, and provides a "big stick" with which to threaten or beat those who erroneously assume they know what they're doing.
That doesn't mean it's good legislation, just that it's well-intentioned legislation, which is about all you can expect from a monolithic bureaucracy that doesn't understand technology. So what you get is monolithic legislation that makes life difficult, because only the bureaucrats who wrote it understand it ...
I think you need to define what problems or potential problems AI legislation is intended to prevent before you can even consider legislating around how we handle it.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|