Is it OK to run software written with the help of AI/LLMs on the Fediverse?
-
@evan @tris @mcc @arthfach I don't mean to distract too much, given that this part of the thread is about environmental impact, but I do want to make sure it's not lost that while AI is indeed unconscionable on an environmental basis, AI is *also* unconscionable on an anti-fascism basis, on a labor rights basis, on a mental health basis, and on the basis of resisting the enclosure of common culture.
-
@evan @tris @mcc @arthfach As mentioned, I do think your estimates of how much of US electrical usage goes to AI are quite low-ball, and don't seem to be in line with other estimates that I've read, such as from US national labs.
That said, your claim that AI is not unconscionable on the basis of environmental impact is a pretty extraordinary one? Just because there are bigger problems, we get to go on and cause a new problem that *we did not need to cause*?
-
@evan @tris @mcc @arthfach I'm not sure that's true? We don't have good numbers for AI energy usage due to corporate secrecy, but LBNL puts it at 1% of all US electrical usage as of 2023, and almost 10% projected by 2028.
But even if it were, your original poll was about running AI products on the fediverse, not about running dairy-based software on the fediverse, so I'm not sure that's the most salient argument in favor of AI?
-
@ragectl @aneel @mcc @arthfach Working that analogy backwards, in @evan's original poll, a hypothetical *person* is considering the decision as to whether or not to use AI to damage open source software, open social networks, and targets of fascism more generally — it's that hypothetical person whom we should be tolerant or intolerant of *because* they are adopting AI.
-
@ragectl @aneel @mcc @evan @arthfach The somewhat uncharitable comparison would be "guns don't kill people, people kill people." Which is true, of course: guns do not have agency and are incapable of taking action on their own.
But that's not an argument against gun control; if the NRA took *their own argument* seriously, they'd be in favor of regulations targeting the people who use guns, colloquially known as gun control laws.
(cont'd)
-
@reflex @arthfach @mcc @davidgerard I support the right to block folks and to duck out of conversations that have gotten too heated, but I'll just say that I thought yours was a very salient point. "Live and let live" is a privileged position in general, let alone in the middle of a fascism/tech merger (to borrow Naomi Klein's phrasing, with apologies to her).
Being privileged isn't a sin in and of itself, but I think it's important to be aware of how that affects one's positions?
-
@ragectl @aneel @mcc @evan @arthfach I don't think we need to anthropomorphize AI in order to learn from the Paradox of Tolerance. We should not be tolerant of *people* who make the decision to help AI enclose the commons. That may sound a bit extreme, but the current situation is extreme, and our intolerance can and should be proportional.
It's good to hold our friends, peers, colleagues, and even loved ones to some kind of moral standards, just as we ask them to hold us accountable.
-
@evan @mcc @arthfach That's even before considering that, amongst social media networks, the fediverse is somewhere that it's especially possible and meaningful to do the right thing.
Instance admins defederate from hate speech and spam instances all the time (to your point earlier). We rejected opt-out Bluesky bridging in favor of opt-in. Mastodon itself is developed without LLMs, and via a strong and well-reasoned policy.
We can do the right thing here; we don't need to preemptively give up.
-
@evan @mcc @arthfach I get that it's complex sometimes. I fall pretty squarely on the fedipact side of things, given how maliciously Meta has acted in the past and how maliciously they are currently acting, but I understand (don't agree with, but understand) arguments like "there's lots of people there."
But AI is a bit unique here, both in the extreme of the harms presented, and in the extreme lack of utility. There's plenty of reason to reject AI and literally no good reason to adopt it.
-
@evan @mcc @arthfach Which, fair enough, goodness knows I don't always pick the best of analogies.
That said, I don't think "different human language" is a good analogy for AI, either. It is something that carries a moral weight, and a considerable one at that. I think the invocation of the Paradox of Tolerance is spot on: to what degree should an open network invite in malicious actors trying to enclose the commons and hurt OSS development?
I think a pretty reasonable answer is "not at all."
-
@evan @mcc @arthfach With respect to boundaries of control, how does that relate to consent? If I e-mail a draft of a short story I'm working on to a colleague to be critiqued, does that consent then extend to allowing Microsoft or Google to do whatever they want with my creative labor? Said companies claim that it does, but I wouldn't think that's a common understanding, nor consistent with a liberal worldview, as you say.
Putting analytical tools off the table at the outset weakens us.
-
@evan @mcc @arthfach Is that the liberal worldview? As you even say, that view has limits — why should those not include "using the wrong operating system or device"?
That's an odd framing, in that it invites us to think of AI as being *especially* outside the "liberal worldview," but it really isn't. We do reject "using the wrong operating system or device" when that would present an immediate risk to human rights, safety, or similar. AI is another case of that analysis.
-
@evan @mcc @arthfach All of the above are some flavor of "AI encloses the common good for the benefit of the worst people on the planet, and imposes untenable externalities along the way."
With that in mind, for the fediverse and social networking in particular, the fediverse is a particularly vulnerable common good that bad actors have already tried to disrupt — AI can and should be viewed as one more such attack, and I do not see any reason I or anyone else should help the attackers out.
-
• Artists, software engineers, etc. can say no to unethical projects; AI cannot (see the Trump admin using AI to generate white supremacist images, or Claude being used in the "kill chain").
To OSS *in particular*:
• AI introduces untenable dependence on proprietary services.
• AI code cannot be adequately reviewed for defects (see @glyph's excellent post on the subject).
• AI introduces unknown legal risk; it is probably mild, but IANAL and don't know how to assert that.
-
@evan @mcc @arthfach So. The ethical dimension to AI *in general*, not specific to the fediverse, OSS, or social networking:
• It's predominantly developed by and profits fascists.
• It's founded on eugenicist thought.
• The environmental cost is untenable.
• Large models cannot exist without compromising consent and labor rights.
• AI is used to attack labor rights further (think automated scab).
• The risk of mental health effects is poorly understood so far. (cont'd)
-
@evan @mcc @arthfach So, presuming that having morals, acting on them, and expressing those actions through software engineering is something worth doing (a point I suspect you'd agree with, given your other actions in the world), the question then becomes what the moral effects of using AI at all are, and how those effects might be mitigated or exacerbated in the context of open source software and federated networks.
That analysis doesn't look good for AI, to put it mildly.
-
@evan @mcc @arthfach There's a lot in this thread to unpack, but the very first thing I want to address is your "live and let live" point.
That's very much a "take my ball and go home" kind of argument — the very reason things like open source and the fediverse exist is because enough people believe that there is a moral or ethical argument for them, not only a practical one. We rightly tend to reject the idea that proprietary software and open-source software are morally equivalent.