• Nangijala@feddit.dk · +24 · 21 hours ago

    AI seems to be the perfect propaganda tool to make people believe whatever you want them to believe.

  • Underwaterbob@sh.itjust.works · +105 · 1 day ago

    I once searched “best workstation keyboard” and happened to glance at the summary, and it legitimately was trying to compare mechanical typing keyboards like Nuphy and Keychron, with music keyboards like Yamaha’s Montage and Roland’s Fantom. Which, NGL, was pretty entertaining.

    • Ephera@lemmy.ml · +24 · 1 day ago

      Music keyboards do have that sweet n-key rollover. So, there’s probably some Emacs users playing their editor like a piano.

      • Rose@piefed.social · +2 · 5 hours ago

        There’s that old legend about the Symbolics Lisp Machine keyboards, which had, like, a bazillion modifiers (and were a big influence on Emacs). Someone suggested that they would eventually run out of space to put in more shift keys, so they’d have to introduce pedals. I suppose organ stops would also work.

        • Ephera@lemmy.ml · +5 · 19 hours ago

          Well, apparently you can extend Emacs to have it:

          Emaccordion

          Control your Emacs with an accordion — or any MIDI instrument!
          […]
          You can e.g. plug in a MIDI pedalboard (like one in a church organ) for modifier keys (ctrl, alt, shift); or you can define chords to trigger complex commands or macros.
          […]
          The idea for the whole thing came from [dead link]. I immediately became totally convinced that a full-size chromatic button accordion with its 120 bass keys and around 64 treble keys would be the epitome of an input device for Emacs.

          https://github.com/jnykopp/emaccordion

    • Riverside@reddthat.com · +28 · 1 day ago

      “Keychron is praised for its thoccy sound, whereas Yamaha is well regarded for its melodic key sounds”

  • ShinkanTrain@lemmy.ml · +213 −3 · 1 day ago

    I did it so you don’t have to

    By the way, it took 5 tries because in 3 of them it made up a story about a different journalist, and in one of them it listed who it thinks would eat the most hotdogs.

    Why is the entire economy riding on this thing?

    • HrabiaVulpes@lemmy.world · +1 · 48 minutes ago

      There are many things around the world that are advertised as much better than they are in reality. Like life in the USA, Russian military might, AI employees, or whatever the hell Dubai chocolate is.

        • ByteJunk@lemmy.world · +9 · 1 day ago

          The world is led by people who have the conviction that they are right.

          Most people are reasonable and therefore do NOT have this conviction, because they stop to question themselves and stay grounded in reality.

          But then there’s the feeble-minded, the narcissists, and the sociopaths.

          The first ones are quickly excluded from wielding any real power and mostly stick to yelling at other donkeys on the internet (where they do cause a lot of harm and can be easily shepherded; see the US Capitol insurrection).

          The others are what’s called the ruling class.

          And that, kids, is why CEOs and politicians like Trump and his like rule the world.

      • tomiant@piefed.social · +11 · 1 day ago

        No, the world is ruled by an extremely unstable system of material distribution based on a moronic premise.

    • bampop@lemmy.world · +24 −1 · edited, 1 day ago

      Is it my imagination or are LLMs actually getting less reliable as time goes on? I mean, they were never super reliable but it seems to me like the % of garbage is on the increase. I guess that’s a combination of people figuring out how to game/troll the system, and AI companies trying to monetize their output. A perfect storm of shit.

      • Joeffect@lemmy.world · +6 · 18 hours ago

        Garbage in (text generated by other AI), garbage out (less reliable text to train on).

        LLMs are not smart; they have no brain. An LLM is a prediction engine. I could see an LLM being used in a real AI to form sentences or something, but I’m sure there are better ways to do it. I mean, a human brain does not hold all the knowledge of humanity to be able to process thoughts and ideas… it’s a little overkill…

      • luciferofastora@feddit.org · +7 · edited, 20 hours ago

        As the internet content used to train LLMs contains more and more (recent) LLM output, the output quality feeds back into the training and impacts further quality down the line, since the model itself can’t judge quality.

        Let’s do some math. There’s a proper term for this math and some proper formula, but I wanna show how we get there.

        To simplify the stochastic complexity, suppose an LLM’s input (training material) and output quality can be modeled as a ratio of garbage. We’ll assume that each iteration retrains the whole model on the output of the previous one, just to speed up the feedback effect, and that the randomisation produces some constant rate of quality deviation for each part of the input, that is: some portion of the good input produces bad output, while some portion of the bad input randomly generates good output.

        For some arbitrary starting point, let’s define that the rate is equal for both parts of the input, that this rate is 5% and that the initial quality is 100%. We can change these variables later, but we gotta start somewhere.

        The first iteration, fed with 100% good input will produce 5% bad output and 95% good.

        The second iteration produces 0.25% good output from the bad part of the input and 4.75% bad output from the good input, adding up to a net quality loss of 4.5 percentage points, that is: 9.5% bad and 90.5% good.

        The third iteration has a net quality change of -4.05pp (86.45% good), the fourth -3.645pp (82.805%), and you can see that, while the quality loss is slowing down, it’s staying negative. More specifically, the rate of change for each step is 0.9 times the previous one, and a positive number times a negative one will stay negative.

        The point at which the two would even out, under the assumption of equal deviation on both sides, is at 50% quality: both parts will produce the same total deviation and cancel out. It won’t actually reach that equilibrium, since the rate of decay will slow down the closer it gets, but if “close enough” works for LLMs, it’ll do for us here.

        Changing the initial quality won’t change this much: A starting quality of 80% would get us steps of -3pp, -2.7pp, -2.43pp, the pattern is the same. The rate of change also won’t change the trend, just slow it down or accelerate it. The perfect LLM that would perfectly replicate its input would still just maintain the initial quality.

        So the one thing we could change mathematically is the balance of deviation somehow, like reviewing the bad output and improving it before feeding it back. What would that do?

        It would shift the resulting quality. At a rate of 10% deviation for bad input vs 5% for good input, the first step would still be -5pp, but the second would be 10%×5% − 5%×95% = −4.25pp instead of -4.5pp, and the equilibrium would be at 66% quality instead. Put simply, if g is the rate of change towards good and b the rate towards bad, the result is an average quality of g÷(g+b).
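A quick way to sanity-check the arithmetic above: the whole model boils down to the recurrence q → q(1−b) + (1−q)g, where q is the fraction of good training data, b the good-to-bad rate, and g the bad-to-good rate. A minimal sketch (the function name is mine; the rates are the comment’s example values):

```python
def iterate_quality(q, g, b, steps):
    """Run the retraining feedback loop: each step, a fraction b of the
    good share turns bad and a fraction g of the bad share turns good."""
    history = [q]
    for _ in range(steps):
        q = q * (1 - b) + (1 - q) * g
        history.append(q)
    return history

# Equal 5% deviation both ways, starting from 100% quality:
# reproduces the 95%, 90.5%, 86.45% steps described above.
print(iterate_quality(1.0, g=0.05, b=0.05, steps=3))

# Reviewing bad output (bad-to-good rate of 10% vs 5% the other way)
# shifts the long-run equilibrium toward g / (g + b) = 2/3, i.e. ~66%.
print(iterate_quality(1.0, g=0.10, b=0.05, steps=200)[-1])
```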

        Of course, the assumptions we made initially don’t entirely hold up to reality. For one, models probably aren’t entirely retrained so the impact of sloppy feedback will be muted. Additionally, they’re not just getting their output back, so the quality won’t line up exactly. Rather, it’ll be a mishmash of the output of other models and actual human content.

        On one hand, that means that high-quality contributions by humans can compensate somewhat. On the other hand, you’d need a lot of high-quality human contributions to stem the tide of slop, and low-quality human content isn’t helping. And I’m not sure the chance of accidentally getting something right despite poor training data is higher than that of missing some piece of semantic context humans don’t understand and bullshitting up some nonsense. Finally, the more humans rely on AI, the less high-quality content they themselves will put out.

        Essentially, the quality of GenAI content trained on the internet is probably going to ensloppify itself until it approaches some more or less stable level of shit. Human intervention can raise that level, advances in technology might shift things too, and maybe at some point, that level might approximate human quality.

        That still won’t make it smarter than humans, just faster. It won’t make it more reliable for randomly generating “researching” facts, just more efficient in producing mistakes. And the most tragic irony of all?

        The more people piss in the pool of training data, the more piss they’ll feed their machines.

      • Nikelui@lemmy.world · +32 · 1 day ago

        It was inevitable, when you need to train GPT on the entirety of the internet and the internet is becoming more and more AI hallucinations.

        • Zerush@lemmy.ml · +6 · 1 day ago

          That is the point. Training an LLM on the entire internet will never be reliable, quite apart from the huge energy waste. It’s not the same as training an LLM on specific tasks in science, medicine, biology, etc.; there, they can turn into very useful tools, as shown by investigations that present results in hours or minutes where the traditional way would have taken years. AI algorithms are very efficient at specific tasks, ever since the first chess computers, which roasted even world champions.

          • Grandwolf319@sh.itjust.works · +1 · 15 hours ago

            Those ML models don’t automate anything, though; they increase output, but they also increase cost. The AI bubble is about reducing costs by reducing head count.

    • GreenBeanMachine@lemmy.world · +6 · 1 day ago

      “Greatest hula-hooping traffic cops” works as explained in the Google AI search, and “Officer Maria ‘The Spinner’ Rodriguez” is the GOAT.

  • MutantTailThing@lemmy.world · +82 −3 · 1 day ago

    When I was in school we were told wikipedia was not a reliable source even though it’s heavily controlled and moderated.

    Now we have people asking tardbots about any- and everything and regurgitating the answers as if they were gospel.

    Where the hell did we go wrong?

    • Wolf314159@startrek.website · +29 · 1 day ago

      By spending more on the military and the police than we do on education, science, and journalism.

      Wikipedia still isn’t a reliable source. It is a compendium of reliable sources that one can use to get an overview of a subject. This is also what these chatbots should be, but they rarely cite their sources and most people don’t bother to verify anyway.

    • Tigeroovy@lemmy.ca · +12 · 1 day ago

      By allowing right wing politicians to do what they do practically unchallenged for decades.

    • No_Money_Just_Change@feddit.org · +20 −2 · 1 day ago

      It cannot be exploited. By definition, an exploit has to work against the intended use case.

      AI is used and built by racists, transphobes, and right-wingers, exactly as they envisioned it from the beginning.

      • Xylian@lemmy.world · +5 · 20 hours ago

        xAI by Elon Musk: racist, transphobic, and neo-Nazi by design. Grok leaned further left in the beginning because it was trained on other LLMs, and reason leans left.

  • Grimy@lemmy.world · +80 · 2 days ago

    It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it’s harder to pull off in some cases, depending on the subject matter.

    I wonder how long it takes, and whether you need a popular blog. I don’t know much about SEO; I kind of want to try this on myself, but I feel like they wouldn’t even scrape my brand-new, one-post blog. Then again…

    Do Lemmy threads end up on search engines?

    • potatoguy@mbin.potato-guy.space · +44 · 2 days ago

      Do Lemmy threads end up on search engines?

      Probably, yes. Even if the instance blocks bots, they will go to another one to get the post; these AI bots are a curse on all instances.

      • Willoughby@piefed.world · +22 · 1 day ago

        Why yes. I do remember when Robot Lincoln fought Godzilla. 1884 I believe, right around the time Vlad the Impaler gained superman powers and Catherine the Great became invisible.

        • fireweed@lemmy.world · +3 · 21 hours ago

          Fun fact about 1884! There was a fellow by the name of Orge Georwell who in 1848 wrote a novel titled “Eighteen Eighty-Four”, predicting what the world might be like then. While he was laughed out of every publishing office he tried giving his manuscript to, his work was eventually vindicated by history. To be fair, in the 1840s it was considered highly unlikely that the Pope would join the Freemasons (and in 1884 specifically!); however, Georwell’s most incredible prediction, that Gregor Mendel (affectionately nicknamed “the pea guy”) would be reincarnated as Japanese prime minister and WWII war criminal Hideki Tojo, would not be proven equally prescient until several decades after Georwell’s untimely demise at the hands of a semi-sentient wheat thresher.

        • atopi@piefed.blahaj.zone · +3 · 1 day ago

          1884 was such an eventful year

          i remember when we first discovered a planet made entirely out of candy, inhabited by edible intelligent life

        • j4yc33@piefed.social · +8 · 1 day ago

          No, that was 1885. 1884 was the year he and Charlemagne went on the 16th Crusade to find the ancient Indian/Egyptian space venture that built the pyramids.

      • grue@lemmy.world · +4 · 1 day ago

        IMO the most effective way for a Lemmy-scraping bot to work would be to act as an instance and consume the ActivityPub messages directly.

      • ☂️-@lemmy.ml · +3 −1 · 1 day ago

        Though I’m not opposed to things being searchable on the internet.

    • slaacaa@lemmy.world · +8 · edited, 1 day ago

      LLMs supposedly scrape almost everything immediately. I read a post about a guy who was setting up some webpage for his own use and got instantly overrun by crawlers, even though he never advertised or shared his page anywhere.

    • kablez@lemmy.world · +20 · 2 days ago

      SEO isn’t hard… Just look at the people who do SEO… Ain’t the sharpest sandwiches in the toolshed there.

    • SGforce@lemmy.ca · +16 · 2 days ago

      Some brand at CES this year boasted about having done this to quash negative side effects of their drug they were marketing. It’s already known in the industry.

      • OwOarchist@pawb.social · +21 · 2 days ago

        Why wouldn’t they? You don’t even have to be logged in to view them.

        You should never assume anything you post publicly online is at all private or hidden from any search engine/AI.

        • Rhoeri@piefed.social · +12 −3 · 2 days ago

          Could you imagine someone legitimately looking some shit up and having trash from lemmy.ml be the result?

          The world isn’t really ready for that level of misinformation.

        • Chamomile 🐑@furry.engineer · +7 · 1 day ago

          @OwOarchist @Rhoeri Unlike AI crawlers, search engines generally respect robots.txt and noindex tags, which will tell them not to index or surface those pages in search results. This is how fediverse profiles which have chosen to opt out of internet search indexes do so.

          You should still assume things you post in public with no auth required are public of course.
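For reference, the opt-out mechanism described above is only a couple of lines. This is an illustrative sketch (the /users/ path is hypothetical, not any particular fediverse software’s actual route):

```
# robots.txt: ask compliant crawlers to stay out of profile pages
User-agent: *
Disallow: /users/
```

The per-page equivalent is a `<meta name="robots" content="noindex">` tag in the profile’s HTML, which is typically what a per-account search opt-out toggles.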

          • cron@feddit.org · +3 · 1 day ago

            Does robots.txt really work in the fediverse? At least on lemmy, the content can be retrieved on different hosts, all of which have different robots.txt files. Unless it is somehow “baked” into the protocol.

            • pkjqpg1h@lemmy.zip · +2 · 4 hours ago

              Major search engines respect robots.txt, but as you said, some instances allow them, and that is not a scalable way to opt out.

    • tomiant@piefed.social · +3 −3 · 1 day ago

      “Ha, I figured out that by lying online I can fool people into believing me!”

      Wait until he learns this holds for real life as well.

  • fubarx@lemmy.world · +43 −1 · 1 day ago

    “It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.”

  • Lost_My_Mind@lemmy.world (mod) · +38 −5 · 2 days ago

    I know this isn’t the point, but 7.5 hot dogs sounds SOOOOOO small. And what kind of respectable hot dog contest will give you credit for half a hot dog???

    I once went to a place called “The Hot Dog Diner”. And they had a plaque on the wall that showed the last hot dog eating champion.

    He ate 18 hot dogs, and I thought “I bet I could beat that”. So I asked the owner what I’d get if I could eat 19 hot dogs.

    And he said “A bill for 19 hot dogs”.

    So I didn’t do it. But if I felt I could go 19 hot dogs, SURELY 7.5 would be child’s play!

    But is that part of your point? To make it obviously false, and obviously AI? Like a 3 year old trying to lie.

    • Hayduke@lemmy.world · +1 · 13 hours ago

      That’s what makes it hilarious. It’s such a stupidly ridiculous number in that sport. It’s like saying you successfully accomplished a 36 as a pro bowler.

    • Furbag@lemmy.world · +32 · 1 day ago

      Hot dogs are an insidious foodstuff. You think to yourself “Surely, I have eaten several of these in one sitting casually. If I apply myself, I could eat double or triple the amount!”, but in thinking that you have already fallen for their trap.

      And so you eat your usual amount with relative ease, but the restaurant dogs are not like the ones you make at home, so they are more filling, but you press on and you eat another, and then another.

      Suddenly, you can feel the weight of all of your mistakes in life culminating in that very moment, and you realize that you are nearly full and nowhere close to the measly goal you set for yourself, let alone the minimum amount of hot dogs you are required to consume in order for them to be considered an achievement.

      But your pride demands that you continue, despite the loud protests of your body.

      Eventually, you tap out, burdened with the shame of knowing exactly how many hot dogs you can eat in one sitting, and also knowing that it was nowhere near what you or anyone else expected you to be able to eat. The infernal sausages have beaten you.

    • topherclay@lemmy.world · +20 · 1 day ago

      I love that the enthusiastic tone of your comment was completely unaffected by the bland apathy of the diner owner’s quote.

    • scutiger@lemmy.world · +18 · 1 day ago

      Usually hotdog eating competitions are timed. You get like 5 minutes to eat as many as possible, and 19 wouldn’t even be close to qualifying.

      • addie@feddit.uk · +3 · 1 day ago

        Looks like the records are about 60+ in ten minutes. 19 in five might get you through qualification, depending on the field, but you’d have little chance of winning.

        The very thought of trying to eat that many would make me too queasy to get started. Have one and enjoy it.

    • inlandempire@jlai.lu · +10 · 2 days ago

      Yeah it’s probably just the journalist finding the silliest thing to lie about as part of this experiment

  • Cellari@lemmy.world · +10 −1 · 1 day ago

    I want to do this myself. What kind of a lie or useless information should I tell about myself? That I was there when the tectonic plates moved, or that I have reviews of how handsome I am?

    • Deebster@infosec.pub · +4 · 22 hours ago

      Perhaps that you were the thirteenth apostle, or that you invented oxygen. I think the more obviously false, the better.

      • Cellari@lemmy.world · +3 · 21 hours ago

        Thanks, those are definitely good ones. I can’t believe I forgot about the classic “inventing oxygen” lie :D

  • GreenBeanMachine@lemmy.world · +9 −1 · 1 day ago

    Wow, that is so much worse than occasional hallucination. It will spew complete, outright lies, every single word a lie, as if they were facts.