• 1 Post
  • 22 Comments
Joined 2 years ago
Cake day: June 30th, 2023





  • It’s possible, but I’ve followed some public comment processes for regulatory matters before, and large volumes of comments make them take far longer, because manual work is involved. If a politician wants actual people to still manually consider the contents of their inbox (which they absolutely should), using AI instead of a form letter makes that much harder to do. AI talking to AI to determine what the public thinks and wants will probably lose a lot in translation, and if it relies on service-based AI, it gives the companies running those services another rather direct way to influence political outcomes.

    Given all that, I’m not sure what advantage there is to balance against it, either. Compared to sending a copy of the form letter, where you can assume they’ll at least count how many people have done so, what’s even the benefit of having an LLM rewrite it first?


  • Well, the person you responded to above was talking about sending more than one, which is the worst part. But even if you’re only using AI to rephrase the canned response for your single comment, that makes it more difficult for them to actually read and consider the different points people might be raising, because now there are lots of messages that are essentially the canned response in content and intent, but take more effort to group together. The people going through them will probably also be able to tell that AI was used, which could call into question whether someone sent more than one even if you didn’t.




  • Part of the headache here is that this situation inherently props up a few monopolistic platforms, rather than allowing people to use whatever payment system is available in their own countries. Some of this can be worked around using cryptocurrencies – famously, the Mitra project leverages Monero for this very purpose, although I’m told it now can accept other forms of payment as well.

    Hell yeah, I didn’t know about Mitra. It sounds like a Patreon-esque kind of deal, judging by what the payments part is for.






  • LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …

    Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

    But take away language from a large language model, and you are left with literally nothing at all.

    The author seems to be assuming that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, focus specifically on language while other parts of the brain handle reasoning), but that isn’t really how it works. LLMs have to internally model more than just the structure of language, because text contains information that goes beyond language structure. The existence of multimodal models makes this fairly obvious: they train on more input types than just text, so whatever they’re doing internally is more abstract than language alone.

    Not to say the research on the human brain they’re talking about is wrong; it’s just that the way they try to tie it into AI doesn’t make any sense.


  • What about a way to donate money (held in reserve for that purpose?) after the fact for specific commits, plus a way to indicate which things you’d be most likely to donate to going forward if they’re completed? Payments would be less reliable, since there’d be no guarantee that any given contribution results in a payout, but there’d also be no disincentive to work on things, and there would be a general idea of what donators want. Doing it that way would also eliminate the need for a manual escrow process.
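    To make the idea concrete, here is a minimal sketch of what such a retroactive-donation pool might look like. Everything here (the `DonationPool` class, its method names, and the flat float amounts) is a hypothetical illustration of the comment above, not a real system or API:

    ```python
    # Hypothetical sketch of the retroactive-donation idea: money is held in
    # reserve, paid out only after specific commits land, and donors can give
    # a non-binding signal about what they'd likely fund next.
    from collections import defaultdict


    class DonationPool:
        def __init__(self):
            self.reserved = defaultdict(float)   # donor -> money held in reserve
            self.interest = defaultdict(set)     # feature -> donors signalling interest
            self.payouts = defaultdict(float)    # commit id -> total donated after the fact

        def reserve(self, donor, amount):
            # Donor sets money aside up front; nothing is promised to anyone yet.
            self.reserved[donor] += amount

        def signal_interest(self, donor, feature):
            # Non-binding: just gives maintainers a general idea of demand.
            self.interest[feature].add(donor)

        def donate_for_commit(self, donor, commit, amount):
            # Paid only after the work is done, so no manual escrow is needed,
            # but there is also no guarantee a contribution gets a payout.
            if self.reserved[donor] < amount:
                raise ValueError("not enough reserved funds")
            self.reserved[donor] -= amount
            self.payouts[commit] += amount


    pool = DonationPool()
    pool.reserve("alice", 50.0)
    pool.signal_interest("alice", "dark-mode")
    pool.donate_for_commit("alice", "abc123", 20.0)
    print(pool.payouts["abc123"])   # 20.0
    print(pool.reserved["alice"])   # 30.0
    ```

    The key trade-off the comment describes shows up directly in the structure: `signal_interest` carries no money, and `donate_for_commit` only moves funds once a commit exists, so there is nothing to escrow and nothing to refund.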