|
obermd wrote: authors who are putting disclaimers on the front of their eBooks that if the company wants to train using their work, they need to contact the author for approval and potentially payment
I'm betting the net result of that will be that the LLMs start putting disclaimers in their responses to say that if anyone wants to use the response to train an LLM, they should contact the company for approval and potentially payment.
Because of course the "AI" companies aren't going to start reading and respecting those disclaimers; they're just going to point their scraping tools at the book, and blindly shovel it all into their models.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I'll play - what's an llm?
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
ChatGPT is an LLM
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Large Language Model - basically what's getting called AI these days (ChatGPT, Copilot, etc.)
TTFN - Kent
|
|
|
|
|
Upon reading the article, perhaps Limited Learning Model is a more appropriate term.
Or maybe Lagging behind in the Leap towards Maturity.
modified 23-May-24 22:25pm.
|
|
|
|
|
|
Exactly!
One of the problems I see is that the weakest link in the chat software chain is context apprehension and continuity across probes.
To be fair, that's common amongst humans also. I'm 67 and I work with colleagues that are, on average early 30-somethings. They (almost, to be fair) never get my 60's, 70's and 80's cultural references.
I don't fault them for that, however. Culturally speaking, my internal LLM has been trained on datasets they've never seen.
This is the common look I get:
Cheers,
Mike Fidler
"I intend to live forever - so far, so good." Steven Wright
"I almost had a psychic girlfriend but she left me before we met." Also Steven Wright
"I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
|
|
|
|
|
Quote: Culturally speaking, my internal LLM has been trained on datasets they've never seen. Good point ...
BR
|
|
|
|
|
that's HILARIOUS. Really.
I used to support Belgians. What an interesting group of people. Anyway, one day I responded to an email with "You are not even in the same ballpark." Americans - I'm sure you know the reference.
30 seconds after sending the response, my phone lit up with this person asking me, "wtf are you talking about?" In a heavy Belgian accent. I started laughing so hard at myself I had tears rolling down my cheeks which offended him which made me laugh even harder. After I got my breath back, wiped off my eyes and regained proper breathing I explained... where upon HE started laughing.
Fast forward to an ice hockey trip where I happened to be the "equipment manager" of the team... this means you get the luxury of toting around 200 lbs of gear all over the country. Yay me, but it did solve my sciatica. I fired off a geek comment in the locker room about "smoke makes computers run, don't let out the smoke..." My two boys on the team wanted to hide, the other kids are like "what?" Meh, they didn't get it.
I s*** you not, we're at home 3 days later and the family PC blows a power supply right in front of the two hockey players. There's that "pop", everything turns off and a small mushroom cloud of smoke forms over the pc. To my credit, I kept my face completely dead pan - "told you... " and walked away...
I've given up on the generational issues...
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
What I read into this:
If you order an LLM to babble, it will babble. Took them a long time to say nothing.
Nothing to see here...
|
|
|
|
|
I suspect the source of the LLM problem described - setting aside all the "no one cares" and "I don't have a conduit to speak with anyone who might care" issues - is architectural.
To make an obvious "well DUH!" statement, there are multiple components here, just in the chatbot's simple Input-Process-Output structure. The issue the article's author describes could originate in any one or more of them.
To me, a bicameral or multicameral approach/architecture is needed. If these chatbot systems had an output-monitoring AI that could "learn" to detect garbage in and/or garbage out, that could help mitigate the issue.
I'm just flying by the seat of my pants here. I am fully prepared to be wrong, and if that's the case, please be gentle.
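To make the "bicameral" idea above concrete, here is a minimal sketch of what such an architecture could look like: a second monitor model reviews the primary model's draft before it reaches the user. Both model functions here are hypothetical stand-ins (a real monitor would itself be a learned model, not a length check), so treat this purely as an illustration of the wiring, not a real API.

```python
def primary_model(prompt: str) -> str:
    # Stand-in for the chatbot's main generation step (hypothetical).
    return f"Answer to: {prompt}"

def monitor_model(prompt: str, draft: str) -> bool:
    # Stand-in for a learned garbage-in/garbage-out detector.
    # A trivial placeholder heuristic: reject empty or runaway outputs.
    return bool(draft.strip()) and len(draft) < 10_000

def respond(prompt: str) -> str:
    # The "two chambers": generate, then independently review.
    draft = primary_model(prompt)
    if monitor_model(prompt, draft):
        return draft
    return "I'm not confident in that answer; please rephrase the question."
```

The point of the split is that the monitor can be trained, tuned, and audited separately from the generator, so a failure in one chamber doesn't automatically reach the user.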
Cheers,
Mike Fidler
"I intend to live forever - so far, so good." Steven Wright
"I almost had a psychic girlfriend but she left me before we met." Also Steven Wright
"I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
|
|
|
|
|
Man I hope they don't start getting "smart" (terribly dumb) in that way.
The core problem is still basic fundamental trust in computing really, and how it's just totally misplaced, and the bottom line is you're trusting a computer or a person.
I think only a fool would choose the former, because that's just doubling the risk: you're actually trusting both the computer AND the person who made it do... whatever.
I was about two years into one of my first programming gigs. A bunch of it involved MS Access dbs and financial reporting. It slowly dawned on me that somehow I was the one who had to make sure things were 'correct'. Like there were people taking these reports and just rolling with them.
I was barely in my 20s and didn't have the experience to even know if some numbers were ballpark correct.
It was absolutely terrifying. Because I realized that no, this wasn't at all a unique situation to me/that company, the same scenario was playing out basically the world over.
|
|
|
|
|
And I'm laughing way harder than I have any right to.
your code - YouTube[^]
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I'm still waiting for some friend to send me a little MP5.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
The shortest horror story: On Error Resume Next
|
|
|
|
|
I'm not picky; a regular size would be just as easy!
|
|
|
|
|
And I hope the renaissance will come in my lifetime...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
build the game or build.com?
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I use DuckDuckGo exclusively and it is down right now (since 4:21am Eastern) -- see image of their tweet[^] or see the actual tweet at: x.com[^]
I also just tried to do an Image search from Edge and got this error from Bing[^].
What's up with Search Engines?
modified 23-May-24 9:29am.
|
|
|
|
|
raddevus wrote: What's up with Search Engines?
In general terms? Too many ads, too much AI-generated BS, and too much keyword-stuffing SEO-optimised crap filling the first page(s) of results for them to be much use any more.
At least DDG doesn't suffer too much from the first two.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
|
Haha, that brings back fun memories.
|
|
|
|
|
Very interesting. Looks like the problem with DuckDuckGo was actually related to Bing.
The Brave browser tweeted this (image of tweet)[^].
Was actually a Bing API issue.
|
|
|
|
|
|
Wordle 1,069 4/6*
⬛⬛⬛⬛⬛
🟩🟨🟩⬛⬛
🟩⬛🟩⬛🟩
🟩🟩🟩🟩🟩
|
|
|
|