Anti-AI technologists only:
If and when the AI bubble finally collapses due to financial strain, and LLMs become synonymous with several open-source, open-weight, ethically trained models that run on your modest local hardware, would you use them?
This is a world without OpenAI, Anthropic, and those massive AI data centres. Maybe some smaller shops spin up no-code wrapper tools, but nothing like the surveillance-capital shit we have today.
Basically, if the ethics, ecology, and extractive capitalism bits go away, would you consider working with these tools?
-
Follow that^ poll with this poll
-
@mayintoronto "If things were nice, would you also do nice things", I mean sure. Kinda big if.
But in that hypothetical world, I guess I'd see whether there's a use case. LLMs are inherently prone to generating errors, and no amount of development will get rid of that, because producing text without any grasp of truth value is what they *do* — which limits their usefulness no matter what.
-
@WeirdWriter @mayintoronto Unfortunately, "AI" (which has always been a bullshit term) has become entirely synonymous with LLMs. "AI" sucked to begin with. Let's talk instead about "machine learning." LLMs are irredeemable. The entire concept is fundamentally flawed. It's a parlor trick gone horribly wrong.
-
@mayintoronto I question the premises!
Like, why would you think the bubble popping would lead to open-source, lightweight, ethical replacements? It hasn't happened with, say, cryptocurrency: it's just the same stuff, but less of it. And, like, why would you assume OpenAI, Anthropic, etc., just disappear instead of pivoting to something else slightly different, like non-generative machine learning? Why would governments stop finding surveillance tools necessary?
Also, the negative effects — the resource usage, the massive exploitative industry of people labelling and filtering data — cannot be handwaved away, because they are a big reason why the tech works the way it does. They are intrinsically connected. It's like asking: "You don't want to own a horse? What if there was a magical pony that never pooped and could fly, what then?" It's not a serious question.
All that aside, I personally have no use for the things LLMs are "good" at. I genuinely want to know how to program and learn why it works, and that is more important to me than getting a working program that I don't understand. I am perfectly capable of writing grammatically correct and well-formatted text on my own. And I don't need a chatbot companion.
-
@mayintoronto On the premise that the three issues were truly solved, I'd try it. Not sure anything would stick, though — it'd be throwing spaghetti at the wall.