As someone who works on integrating AI: it’s failing badly.
At best, it’s good for transcription, at least until it hallucinates and adds things to your medical record that don’t exist. Which it does. And when providers don’t check for errors (which few do regularly), congrats: you now have a medical record of whatever it hallucinated today.
And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.
They can’t consistently do anything more complex without making errors, and most people are frankly too dumb or lazy to properly verify the outputs. And that’s why this bubble is so huge.
It is going to pop, messily.
This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same underlying technologies (machine learning and neural networks), and we threw them away in favor of wildly inefficient, general-purpose LLMs that can’t do a single thing right. All because of marketing hype convincing billionaires they won’t need to pay people anymore.
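To make the contrast concrete, here’s roughly what “purpose-built” means in practice: a tiny, task-specific model that does one narrow job cheaply on a CPU. The ticket-routing task, example data, and labels below are all made up for illustration; this is a sketch, not anyone’s production system.

```python
# A minimal example of a purpose-built tool: a small task-specific text
# classifier. The data and labels are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: route support tickets to the right queue.
tickets = [
    "my card was charged twice",
    "refund has not arrived",
    "app crashes on login",
    "cannot reset my password",
]
queues = ["billing", "billing", "technical", "technical"]

# One small model, one narrow job; trains in milliseconds on a laptop.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, queues)

print(model.predict(["I was charged twice this month"]))  # expected: ['billing']
```

A model like this can’t hallucinate a datasheet or invent a diagnosis; it can only ever answer the one question it was trained on.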
This 1 million%.
The fact that coding is a big chunk of the use cases means that the tech sector is essentially high on their own supply.
Summarizing and aggregating data alone isn’t a substitute for the smoke-and-mirrors confidence that is a consulting firm. It just lets the firms that can lean on branding charge more hours for the same output, and adds “integrating AI” as another bucket of vomit to fling.
I tried having it identify an unknown integrated circuit. It hallucinated a chip. It kept giving me non-existent datasheets and 404 links to digikey/mouser/etc.
This is my main argument. I need to check the output for correctness anyway. Might as well just do it myself in the first place.
Honestly I mostly use it as a jumping off point for my code or to help me sound more coherent when writing emails.
People are happy to accept the wrong answer without even checking lol
This is exactly why I love duckduckgo’s AI results built into search. It appears when it’s relevant (and yes, you can nuke it from orbit so it never ever appears), and it always gives citations (2 websites) so I can go check whether it’s right or not. Sometimes it works wonders when regular search results aren’t relevant. Sometimes it fails hard. I can tell one from the other because I can always check the sources.
Insurance companies, oh no, insurance companies!!! AArrrggghhh!!!
Well, from this description it’s still usable for things too complex to just brute-force with Monte Carlo, as long as the results can be verified. It may even be efficient. But that seems narrow.
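A minimal sketch of that pattern (generate cheaply, verify expensively), in case it isn’t clear. Everything here is a hypothetical stand-in: the proposer is just a random number generator and the “hard problem” is just divisibility by 7, standing in for a cheap suggestion model and an expensive trusted check.

```python
import random

def propose_candidates(n):
    """Stand-in for the cheap, unreliable model: fast and plentiful,
    but any individual suggestion may be wrong."""
    return [random.randint(1, 100) for _ in range(n)]

def verify(candidate):
    """Stand-in for the expensive, trustworthy check: slow, but its
    yes/no answer can be relied on."""
    return candidate % 7 == 0

# Only verified candidates survive; the proposer's error rate doesn't
# matter as long as verification is sound and proposals are cheap.
accepted = [c for c in propose_candidates(1000) if verify(c)]
print(f"kept {len(accepted)} of 1000 proposals")
```

The scheme only pays off when verifying an answer is much cheaper than finding one from scratch, which is exactly why the niche seems narrow.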
BTW, this even applies to ethical automated combat drones. I know one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion, and something more complex/expensive for verification, ultimately with a human in charge, then it’s possible to both increase the efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to avoid those).
But how does this work help next quarter’s profits?
If each unplanned death that isn’t the result of an operator’s mistake led to the confiscation of one month’s profit (not margin), then I’d think it would help very much.
As someone who is actually an AI tool developer (I just use existing models) - it’s absolutely NOT failing.
Lemmy is ironically incredibly tech illiterate.
It can be working and good and still be a bubble - you know that right? A lot of AI is overvalued but to say it’s “failing badly” is absurd and really helps absolutely no one.
I disagree with all these self-hosting, Linux-running, passionate open-source advocates, so they must be technology illiterate.
According to whom? No one’s running their instance here. I’m a software dev with over 20 years of FOSS experience, and imo lemmy’s user base is a somewhat illiterate bunch of contrarians when it comes to popular tech discussions.
We’re clearly not going to agree here without objective data, so unless you’re willing to provide that, have a good day. Bye.