

Nice to see someone actually trying it themselves to do their own analysis despite having reservations


Stuff like this makes me wonder, at what point is it bad enough that the truisms about leaving medical advice to licensed healthcare professionals become wrong, and everyone would be better off turning to anything else instead of engaging with the system? Are we not there yet? How much further would there be to go?


Rambling about something for long enough that people should be able to tell is how I do it.


It’s possible, but I’ve followed some public comment processes for regulatory stuff before, and large volumes of comments make them take way longer, because there is manual work involved. If a politician wants to still have actual people manually consider the contents of their inbox (which they absolutely should), using AI instead of a form letter will make that much harder for them to do. AI talking to AI to determine what the public thinks and wants is probably going to lose a lot in translation, and if it uses service-based AI, it will give the companies running it another rather direct way to influence political outcomes.
Given all that, I’m not sure what the advantage is to balance against it either. As opposed to sending a copy of the form letter, where you can assume they will at least count how many people have done that, what’s even the benefit of having an LLM rewrite it first?


Well, the person you responded to above was talking about sending more than one, which is the worst part. But even if you are only using AI to rephrase the canned response for your singular comment, that creates a situation where it is more difficult for them to actually read and consider the different points people might be bringing up, because now there are lots of messages that are basically just the canned response in content and intent but more effort to group together. Also, the people going through them will probably be able to tell AI is being used, which could call into question whether you were sending more than one even if you weren’t.


I don’t hate AI and think it’s fine to use for a bunch of things, but using it to falsify the level of public engagement on a political issue is a clear misuse. It’s easy to see how that could make democracy work less well, or backfire and be used as an argument that all the public sentiment about the issue is astroturfed.


Sounds like an additional reason to be doing it in a way where participants can’t be debanked by payments middlemen


Part of the headache here is that this situation inherently props up a few monopolistic platforms, rather than allowing people to use whatever payment system is available in their own countries. Some of this can be worked around using cryptocurrencies – famously, the Mitra project leverages Monero for this very purpose, although I’m told it now can accept other forms of payment as well.
Hell yeah, I didn’t know about Mitra. It sounds like a Patreon-esque kind of deal, which is what the payments part is for.


That kind of painting seems more likely to come alive
Quickly and effortlessly get some music playing that can act as a backdrop for your real activity such as working, driving, cooking, hosting friends, etc. Keep it rolling indefinitely.
“Discover” new music by statistical means based on your average tastes.
This is the main thing I want out of music software tbh.


I think maybe they wouldn’t if they’re trying to scale their operations to scanning through millions of sites and your site is just one of them.


LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …
Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.
But take away language from a large language model, and you are left with literally nothing at all.
The author seems to be making the assumption that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, supposedly focus on language specifically while other parts of the brain do the reasoning), but that isn’t really how it works. LLMs have to internally model more than just the structure of language, because text contains information that isn’t just about the structure of language. The existence of multimodal models makes this kind of obvious: they train on more input types than just text, so whatever a model is doing internally is more abstract than only being about language.
Not to say the research on the human brain they’re talking about is wrong, it’s just that the way they are trying to tie it in to AI doesn’t make any sense.


What about a way to donate money (held in reserve for that purpose?) after the fact for specific commits, and then a way to indicate which things you’d be most likely to donate to going forward if they are completed? This would mean less reliable payments, since there wouldn’t be a guarantee that any given contribution results in a payout, but there also wouldn’t be any disincentive to work on things, and there would be a general idea of what donors want. Plus, doing it that way would eliminate the need for a manual escrow process.
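A rough sketch of how the bookkeeping for that idea could look (all names here are made up; this only models the accounting, not actual payment processing):

```python
from collections import defaultdict

class DonationPool:
    """Hypothetical model of after-the-fact donations plus non-binding signals."""

    def __init__(self):
        self.reserves = defaultdict(float)  # donor -> money held in reserve
        self.payouts = defaultdict(float)   # commit id -> total donated to it
        self.signals = defaultdict(int)     # proposed work -> interest count

    def deposit(self, donor, amount):
        """Donor sets money aside; nothing is promised to anyone yet."""
        self.reserves[donor] += amount

    def donate_for_commit(self, donor, commit_id, amount):
        """Allocate reserved money to a specific, already-completed commit."""
        if self.reserves[donor] < amount:
            raise ValueError("not enough in reserve")
        self.reserves[donor] -= amount
        self.payouts[commit_id] += amount

    def signal_interest(self, proposal):
        """Non-binding indication of what the donor would likely pay for next."""
        self.signals[proposal] += 1

pool = DonationPool()
pool.deposit("alice", 50.0)
pool.donate_for_commit("alice", "abc123", 20.0)
pool.signal_interest("dark mode")
print(pool.payouts["abc123"])   # 20.0
print(pool.reserves["alice"])   # 30.0
```

Because donations only ever flow to completed commits, there is no promise to enforce and so no escrow step; the `signals` table is the soft, advisory version of a bounty board.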


Maybe a little, but I think it fits pretty well, if you look at this from a “fuck copyright” angle. It’s easy to see the problems with what Disney is doing here and in general.


I bet they also hope to ultimately corral all fanart into spaces they directly control.


Even if they are trying to hack me, it’s only polite. Plus, on the very remote chance they somehow find this and care, they would have slightly more info about me.



Tried setting this up, caught a few already


Barring civilians from using encryption and software deemed dangerous is a new level imo. These are the tools we have to fight this stuff, maintaining those rights is a big deal.
There’s at least some difference between “this has happened” and “this is currently likely to happen,” since if the method is known, it would have been fixed. I’ve gotten viruses before from just visiting websites, but that was decades ago and there’s no way the same method would work now.