• ThePantser@sh.itjust.works
    2 months ago

    Still sounds like the AI is an idiot that did and said things it shouldn’t. But it still did it, and as a representative of the company it should be held to the same standards as an employee; otherwise it’s fraud. Nobody hacked the system: the customer was just chatting, the “employee” fucked up, and the owner can take it out of their pay… oh right, it’s a slave made to replace real paid humans.

    • leftzero@lemmy.dbzer0.com
      2 months ago

      The “AI” isn’t an idiot.

      It isn’t even intelligence, nor, arguably, artificial (since LLM models are grown, not built).

      It’s just a fancy autocomplete engine simulating a conversation based on statistical information about language, but without any trace of comprehension of the words and sentences it’s producing.

      It’s working as correctly as it possibly can. The business was simply scammed into using a tool (a toy, really) that by definition can’t be suited for the job they intended it to do.
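
      The “fancy autocomplete” point can be illustrated with a toy sketch (a drastic simplification, not how a real LLM works): a bigram model that picks the next word purely from frequency counts over the text it was fed, with no notion of what any word means. The corpus and function names here are made up for illustration.

      ```python
      from collections import Counter, defaultdict

      # Toy bigram "language model": the next word is chosen purely from
      # frequency statistics, with zero comprehension of meaning.
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each other word.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def most_likely_next(word):
          # Return the statistically most frequent continuation.
          return follows[word].most_common(1)[0][0]

      print(most_likely_next("the"))  # "cat" follows "the" most often here
      ```

      An LLM is vastly bigger and conditions on far more context, but the underlying operation is the same kind of statistical next-token prediction, which is why it can produce fluent text that it doesn’t “understand”.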

    • Denjin@feddit.uk
      2 months ago

      I don’t disagree, but the issue is when and where it’s appropriate to use an LLM to interact with customers, and when it isn’t. If you present an LLM to the public, it will be manipulated by people who set out to make it do something it shouldn’t.

      This also happens with human employees, but it’s generally harder to pull off, so it’s less common. This sort of behaviour is called social engineering, and it’s used by fraudsters and scammers to get people to do what they want (typically handing over their bank details). The principle is the same: you’re manipulating someone (or, in this case, something) into doing something they/it shouldn’t.

      Just because we don’t like the fact that the business owner deployed an LLM in a manner they probably shouldn’t have doesn’t mean the customer isn’t the one in the wrong, or that they didn’t void whatever contract they had through their actions. Whether it’s a human or an LLM on the other end of the chat makes no difference.