• glitching@lemmy.ml · +36/−1 · 14 hours ago

    what works on my normie “what’s the big deal?” crowd is the following analogy:

    this is akin to clearing out your snowed-in driveway with a twin-engine afterburner from an F-14 Tomcat - holy shit, it actually cleaned the thing! saved me a buncha time! and it cost pennies!

    yeah, but:

    • gas is subsidised by Northrop Grumman’s investors for the first year or so; afterwards it’ll cost you dearly. and the maintenance, madonn’
    • there are no city snow cleaners no more and no store is selling shovels
    • it obliterated all the trees and grass and critters and shit and damaged parked cars
    • you polluted the shit out of your sight line, nothing will grow there for generations and the runoff poisoned every body of water this shit touches
    • now you gotta clean the burned shit
    • nobody in your neighborhood can get no gas no more; there’s some two boroughs over, but it’s 3x the price
    • nobody can fix their cars with cheap parts, all raw materials go towards NG’s production/maintenance
    • the “explode” and “afterburner” buttons are kinda close together so the former happens eventually
    • everybody says you’re fucking loco and to fucking stop with that shit, but every time you press the blast button, a pleasant “you’re so awesome!” voice booms and it makes you feel very special
  • Artwork@lemmy.world · +90/−5 · edited · 19 hours ago

    Will you PLEASE stop saying “coding is a practical use case”? This is the third appeal I’ve made on this subject. (Do you read your comments?) If you want bug-ridden code with security issues, which is not extensible and which no-one understands, then sure, it’s a practical use case. Just like if you want nonsensical articles with invented facts, then article writing is a practical use case. But as I’ve pointed out already, no reputable publication is now using LLMs to write their articles. Why is that? Because it obviously doesn’t work.

    Let’s face it the only reason you’re saying “coding is a practical use case” is because you yourself don’t code, and don’t understand it. I can’t see another reason why you would assume the problems experienced in other domains somehow don’t apply to coding. Newsflash: they do. And software engineering definitely doesn’t need the slop any more than anyone else. So I hope this is my final appeal: please stop perpetuating this myth. If you want more information on the problems of using LLMs to code, I can talk at great length about it - feel free to reach out. Thanks…

    The point is, there has always been a trade-off between the speed of development and quality of engineering (confidence in the code, robustness of the app etc.) I don’t see LLMs as either changing this trade-off or shifting the needle (greater quality in a shorter time), because they are probabilistic and can’t be relied upon to produce the best solution - or even a correct solution - every time. So you’re going to have to pick your way through every single line it generates in order to have the same confidence you would have if you wrote it - and this is unlikely to save time because understanding someone else’s code is always more difficult and time-consuming than writing it yourself. When I hear people say it is “making them 10x more productive” at coding, I think, “and also 10x as unsure what you’ve actually produced”…

    You’ll also need to correct it when it does something you don’t want. Now this is pretty interesting, if you think about it. Imagine you provide an LLM a prompt, and the LLM produces something but not exactly what you want. What is the advice on this? “Provide a more specific prompt!” Ok, so then we write a more specific prompt - the results are better, but it still falls short. What now? “Keep making the prompt more specific!” Ok but wait - eventually won’t I be supplying the same number of tokens to the LLM as it is going to generate as the solution? Because if I’m perfectly specific about what I want, then isn’t this just the same as actually writing the solution myself using a computer language? Indeed, isn’t this the purpose behind computer languages in the first place?..
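    The convergence argument above can be made concrete with a toy illustration (everything below is my own example, not the original commenter's; the task, prompt wording, and function name are invented):

```python
# A "perfectly specific" prompt ends up restating the solution.
# Hypothetical task: deduplicate a list while preserving order.

prompt = (
    "Write a Python function dedupe(items) that returns a new list "
    "containing each element of items exactly once, in first-seen order, "
    "using a set to track what has already been emitted."
)

# ...which is already a line-by-line description of:
def dedupe(items):
    """Return items with duplicates removed, keeping first-seen order."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedupe([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```

    At full specificity, the prompt and the code carry roughly the same information; the programming language is just the more precise notation for it.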

    We software developers very often pull chunks of code from various locations - not just stackoverflow. Very often they are chunks of code we wrote ourselves, that we then adapt to the new system we are inserting it into. This is great, because we don’t need to make an effort to understand the code we’re inserting - we already understand it, because we wrote it…

    “You should consider combing through Hacker News to see how people are actually making successful use of LLMs” - the problem with this is there are really a lot of hype-driven stories out there that are basically made up. I’ve caught some that are obvious - e.g. see my comment on this post: https://substack.com/home/post/p-185469925 (archived) - which then makes me quite sceptical of many of the others. I’m not really sure why this kind of fabrication has become so prevalent - I find it very strange - but there’s certainly a lot of it going on. At the end of the day I’m going to trust my own experiences actually trying to use these tools, and not stories about them that I can’t verify.

    ~ Tom Gracey

    Source

    Absolutely… Thank you, from the very depths of my heart and soul… dear Tom Gracey, programmer, artist… for the marvel you do… for the wisest attitude, for the belief in the human… in effort… in art…

    • Slotos@feddit.nl · +20 · 19 hours ago

      Let’s face it the only reason you’re saying “coding is a practical use case” is because you yourself don’t code, and don’t understand it.

      Usually only the last statement is true.

    • Tyrq@lemmy.dbzer0.com · +4 · 14 hours ago

      Just ask those people to read Engrish; they’ll stand a chance of understanding the issue. It’s put together with clues about how the language works, and it can copy-paste pieces, but without the knowledge to string it all together cohesively. Maybe not the best example, but coding is a language like English is a language, and we take a lot of our knowledge for granted when it comes to our intimate relationship with language.

    • BartyDeCanter@lemmy.sdf.org · +11/−4 · 17 hours ago

      I think there is quite a bit more subtlety than that.

      Yes, just asking an LLM, even the latest versions, to write some code goes from “wow, that’s pretty good” to “eh, not great but ok for something I’m going to use once and then not care about” to “fucking terrible” as the size goes up. And all the agents in the world don’t really make it better.

      But… there are a few use cases that I have found interesting.

      1. Project management, planning, and learning new languages/domains when using a core prompt as described at: https://www.codingwithjesse.com/blog/coding-with-llms-can-still-be-fun/

      I like to add:

      - Do not offer to write code unless the user specifically requests it. You are a teacher and reviewer, not a developer 
      - Include checks for idiomatic use of language features when reviewing 
      - The user has a strong background in C, C++, and Python. Make analogies to those languages when reviewing code in other languages
      

      as well when I’m using it to help me learn a new language.
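      Wired into a chat-style API, a core prompt like the one above ends up as the system message. A minimal sketch (the helper name and payload shape are my illustration; any chat-completions-style client would accept the result):

```python
# Sketch: assembling a "core prompt" for a teach-and-review session.
# The rules come from the comment above; how you send them (hosted API,
# local model, etc.) is up to you -- here we only build the message
# payload that chat-style APIs conventionally accept.

CORE_PROMPT = "\n".join([
    "Do not offer to write code unless the user specifically requests it. "
    "You are a teacher and reviewer, not a developer.",
    "Include checks for idiomatic use of language features when reviewing.",
    "The user has a strong background in C, C++, and Python. "
    "Make analogies to those languages when reviewing code in other languages.",
])

def make_messages(user_request: str) -> list[dict]:
    """Build a chat payload with the core prompt as the system message."""
    return [
        {"role": "system", "content": CORE_PROMPT},
        {"role": "user", "content": user_request},
    ]

msgs = make_messages("Review this Rust function for idiomatic error handling.")
print(msgs[0]["role"])  # → system
```

      Keeping the rules in one system message means every turn of the session is constrained the same way, instead of hoping the model remembers an instruction from ten turns ago.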

      2. Reviews of solo projects. I like working with someone else to review my code and plans at work, particularly when I’m working in a domain or language that I don’t have experience in. But for solo projects I don’t have someone to give me reviews, so asking a coding LLM “Review this project, looking for architectural design issues, idiomatic language use, and completeness. Do not offer to fix anything, just create an issue list.” is really helpful. I can take the list, ignore anything I disagree with, and use it for a polishing round.
    • setsubyou@lemmy.world · +5/−15 · 16 hours ago

      If you want bug ridden code with security issues which is not extensible and which no-one understands, then sure, it’s a practical use case.

      This assumes you never review it, meaning it’s at best an argument against vibe coding. It’s not an argument against using LLMs for coding in general.

      Additionally, I’ve been writing software for a living for almost 30 years, and I could say the exact same thing about a lot of human generated code I’ve reviewed during that time. I don’t even know how often I’ve explained basic stuff like “security goes in the backend, not in the frontend” to humans.

      Let’s face it the only reason you’re saying “coding is a practical use case” is because you yourself don’t code, and don’t understand it.

      I certainly do code and if I don’t understand what the LLM outputs it doesn’t go in the project.

      I can’t see another reason why you would assume the problems experienced in other domains somehow don’t apply to coding.

      I’m a software engineer, I can’t judge LLMs in most other domains. I also don’t think there are no problems. A tool doesn’t have to be 100% problem free to be useful as long as you recognize the limitations.

      So you’re going to have to pick your way through every single line it generates in order to have the same confidence you would have if you wrote it

      I don’t see a problem with this. The post even mentions pulling code from stackoverflow, which is the same. But nobody ever argued that it has no uses in coding because you still have to read the code.

      Honestly at this point any article just flat out dismissing LLMs for coding only reads to me like the author isn’t even trying to stay up to date. Which is understandable if they don’t like AI but makes posting about it a bit pointless.

      A year ago I would have had a similar opinion as the author but in the last 3-4 months specifically, it feels like AI-based tools made a huge leap. I went from using short snippets for learning to letting AI implement entire features and being actually happy with the result.

      There is however still a pretty big difference between what it produces for common problems vs. what it produces for specialized difficult ones. It’s also inherently better at some languages than others based on the availability of up-to-date training material. So you need some amount of breadth in your projects to accurately judge it.

      If you only try some AI service in free mode on one thing every month, for example, you’ll just have this very polarized opinion that’s either “AI is useless” or “AI can do everything”, but you won’t have a good idea of what it can and can’t do.

      • The_Decryptor@aussie.zone · +10 · 14 hours ago

        A year ago I would have had a similar opinion as the author but in the last 3-4 months specifically, it feels like AI-based tools made a huge leap.

        I’ve seen this claim made basically weekly for the last couple of years. If we were actually getting “generational leaps” monthly, these LLMs would be capable of doing what people claim they can.

        • setsubyou@lemmy.world · +1/−2 · 7 hours ago

          It’s just my experience as someone who was pretty much forced to use AI for coding by my employer for the last few years. For the longest time it was completely useless. And then it suddenly wasn’t. I’m sure you’ll keep hearing this kind of story though, because people have different requirements and AI assisted coding or even agents don’t have to start working for everybody at the same time.

      • ZILtoid1991@lemmy.world · +7 · 13 hours ago

        This assumes you never review it

        Too many people assume that since genAI is a machine, it’ll never make any mistakes.

      • Passerby6497@lemmy.world · +1 · 9 hours ago

        A year ago I would have had a similar opinion as the author but in the last 3-4 months specifically, it feels like AI-based tools made a huge leap. I went from using short snippets for learning to letting AI implement entire features and being actually happy with the result.

        Maybe if you’re only working with languages and features that are well documented and have a lot of examples out there. I’ve been trying to use LLM coding to assist me with a process automation at work, and the results are a couple steps up from dog vomit more often than not.

        AI code assistants aren’t making big strides, you’re likely just seeing them refine common scenarios to points where it becomes very usable for your specific use cases.

        • setsubyou@lemmy.world · +1 · 7 hours ago

          Sure. How much the language or features change is also important. For example Claude can build entire iPhone apps in Swift but you bet they’re going to be full of warnings about things that are illegal now and you bet if there’s any concurrency stuff it’s going to be a wild mix of everything async that ever existed in Swift. It makes sense too because LLMs are trained on code that’s, on average, outdated.

          But what it’s good at and what it’s not good at is just part of what you need to know when using AI, just like with any other tool. I have projects too where it can at best replace google, so I don’t try to make it implement those by itself.

    • astro@leminal.space · +10/−33 · 19 hours ago

      Sounds like Tom tried LLM-assisted coding once about 6 model release cycles ago and hasn’t revisited it.

      • ZILtoid1991@lemmy.world · +7 · 13 hours ago

        Oh, you must try the newest version of <insert model name here> with <insert genAI IDE name here>, letting it do most of your job while you only do code reviews, and otherwise you’ll have to learn how to prompt it

        ultra copium.jpg

        • Passerby6497@lemmy.world · +4 · 9 hours ago

          LLM coding is great and we need to make sure everyone knows it, despite their own experiences

          soul.yml for some of these people lol

      • pancake@lemmygrad.ml · +4 · 17 hours ago

        Just ask any model to write minimally complex formally verified code and watch it crash and burn.
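        For context, “formally verified” means the code carries a machine-checked proof of its correctness, which is an unforgiving bar. Even a toy example (Lean 4, core library only; my illustration, not the commenter’s) shows why plausible-looking output isn’t enough:

```lean
-- "Formally verified" means every claim must type-check as a proof.
-- Even this trivial theorem fails to compile unless the proof term
-- is exactly right -- there is no partial credit for plausible output.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

        Scale that up to a real data structure with invariants and the proof obligations multiply, which is exactly where probabilistic generation tends to fall apart.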

  • Lvxferre [he/him]@mander.xyz · +20 · 16 hours ago

    Very, very marginal use cases that don’t warrant the amount of investment and could’ve been achieved with smaller models, without cooking the planet as much.

    • Tollana1234567@lemmy.today · +8 · 16 hours ago

      hey, that’s why they’re peddling it to India to host the datacenters; India is already cooking, and Modi wants to give tax holidays.

        • Passerby6497@lemmy.world · +5 · 9 hours ago

          Effectively, it’s the government paying you to move there by allowing you to not pay any taxes. So if your company is paying $2m/yr in taxes, you’re being paid $2m+ to move your operations. You’ll lose some of that to construction costs, but you’ll likely more than make it up by paying (compared to what previous employees cost) slave wages to locals.

        • ulterno@programming.dev · +3 · 12 hours ago

          It’s a thing to encourage foreign investors.
          Honestly, if the Modi govt. is still furthering that strategy, I think that they are out of good ideas and should hand over to another.

  • FaceDeer@fedia.io · +18/−41 · 19 hours ago

    Odd, no matter how many people keep insisting it’s a scam and it doesn’t work, it nevertheless keeps on working when I use it.

    Maybe they’re not using it right.

    • AmbitiousProcess (they/them)@piefed.social · +41 · 19 hours ago

      “doesn’t work” doesn’t mean the AI literally does not produce any output or do anything, it means it has so many flaws it’s just a fundamentally bad technology to be using.

      And don’t worry, I’ve got sources.

      LLMs still routinely hallucinate, and even implementations being used by AI safety researchers can’t help but automatically wipe email inboxes without permission. They atrophy your brain the longer you use them, cause both general and emotional dependency, and deskill you at your job. They produce content rated worse by both humans and the AI models searching for trustworthy sources. And to top it all off, scaling laws are already failing to improve AI models enough to fix these problems, companies aren’t seeing returns, the economy gained essentially nothing from AI investment, usage, and growth, and public perception among the people most affected by AI is only getting worse while the people financially incentivized to keep building it insist it’s going to get better - all while datacenters accelerate global warming and LLMs keep killing people.

      I don’t know about you, but I’d rather not support a technology that makes you get fundamentally worse at most cognitive tasks, damages the planet, burns money that could otherwise go to something more valuable, all while randomly killing mentally vulnerable people.

      • ikt@aussie.zone · +7/−19 · 18 hours ago

        “doesn’t work” doesn’t mean the AI literally does not produce any output or do anything, it means it has so many flaws it’s just a fundamentally bad technology to be using.

        that’s odd i use it daily and it works fine

        • AmbitiousProcess (they/them)@piefed.social · +20/−1 · 17 hours ago

          The doctors who used it daily said it worked fine, and it did. Then those doctors became 20% less capable at identifying tumors in their patients.

          The Meta AI security researcher literally said, and I quote: “It’s been working well with my non-important email very well so far and gained my trust on email tasks” when asked why she’d give it access to her primary email, where it subsequently started trashing her whole inbox.

          All of the participants in the cognitive debt paper’s research had the AI actually produce the results they were looking for, but they all became less capable mentally as a result.

          And when a woman in South Korea killed two men using advice given to her by ChatGPT, it worked fine for her, didn’t it?

          That’s not to say your use of AI makes you a murderer. Far from it. But we have quite well documented evidence of LLMs simply making people dumber. You are not an exception to that, unless your brain biologically operates entirely differently from everyone else’s.

          When you use neurons less, the connections become weaker, and fewer new connections get made. When you offload work to something else, like an LLM, you stop training your brain to get better, and you let parts of it slowly die.

          Using AI is like using a hydraulic robot to bench press for you. You’re going to move the weights, but your muscle mass ain’t growing.

          The more you outsource the very function of thinking to a chatbot, the more reliant your brain will become on that chatbot to think as well as it used to, and when that chatbot regularly hallucinates faulty answers and logic, ignores best practices, inefficiently implements solutions, and gets things wrong, your brain is not improving as a result of that.

          This doesn’t mean you should never use AI. I use it to automatically clean up the transcriptions of my voice notes sometimes, and all that does is save me time from correcting the output of the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way. But if I used it to try and do everything for me, not only would it have made a ton of mistakes, but I’d then be even less capable of fixing them.

          • Victor@lemmy.world · +6 · 16 hours ago

            I use it to automatically clean up the transcriptions of my voice notes sometimes, and all that does is save me time from correcting the output of the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way.

            But still, it does deskill you at that task, lest we forget. So if that was a meaningful task at which you wanted to stay adept, you would lose that meaningful skill. AI consistently deskills us at everything we ask it to do instead of doing it ourselves. Anything we are not doing, we are getting worse at doing.

    • GreenKnight23@lemmy.world · +13/−2 · 18 hours ago

      I have a theory that supporters of genAI or LLMs are lonely angry neets who just want a sense of control in their radically tumultuous lives.

      care to weigh in on my theory? when did AI start helping out with this moment in your life?

      • ikt@aussie.zone · +6/−4 · 18 hours ago

        there’s like 100 million+ users of ai, that’s a lot of neets

        • theunknownmuncher@lemmy.world · +2 · 7 hours ago

          the overwhelming majority of chatbot users run on average about 5 prompts per week, or less than one prompt per day, according to OpenAI’s own usage stats.

        • FaceDeer@fedia.io · +9/−5 · 17 hours ago

          “Why are they pushing AI? Nobody wants this!” Meanwhile chatgpt.com is the fifth-most-visited website in the world.

          But I suppose people can just wrap themselves in a social media bubble where anyone who says something positive about AI gets downvoted through the floor, and then their view of the world gets curated to look a bit more like how they want it to be.

          • theunknownmuncher@lemmy.world · +1 · 7 hours ago

            the overwhelming majority of chatbot users run on average about 5 prompts per week, or less than one prompt per day, according to OpenAI’s own usage stats.

            • FaceDeer@fedia.io · +1 · 6 hours ago

              Okay. Not sure the relevance, though. They’re not forced to use it, they choose to go to that site and write those prompts because they want to.

          • Australis13@fedia.io · +7 · 16 hours ago

            There’s a big difference between a website you can choose to engage with and LLMs jammed into your device’s operating system or programming IDE that make you jump through hoops just to disable them (or into your email, where you’re told your emails will be used for training, and opting out means turning off all the smart features, including the ones that aren’t LLM-based).

            There would be certain use cases I’d be open to, but at least give me a choice when it’s deployed: whether it’s on or off, what it has access to, and easy ways to change those settings.

            • FaceDeer@fedia.io · +2/−3 · 15 hours ago

              Right. The website that people choose to engage with shows that people are choosing to engage with AI without being forced to. It shows that the demand for AI is organic and real. Lots of people want to use AI.

              • lath@piefed.social · +4 · 13 hours ago

                Of course they do. People want comfort and AI as it is marketed is the ultimate comfort. Doesn’t change the harm it does at all, but lots of people are eager to dismiss the harm as long as their comfort is assured.

          • ikt@aussie.zone · +5/−2 · 17 hours ago

            yeah exactly, I also love their ‘it produces NOTHING but GARBAGE’, as if I can’t see exactly what it’s producing every time I make a query which I do multiple times a day 🤯

            • Passerby6497@lemmy.world · +4 · 9 hours ago

              You’re right, it’s just mostly garbage output. I eventually get to a moderately usable answer a lot of the time, but more often than not I have to constantly tweak the prompt or tell it to follow the goddamned system prompts I give it, and it still feeds me obvious bullshit on 1/4-1/2 of the responses.

              Maybe you’re working in a common area where the AI doesn’t have to work hard to give you good outputs, but the AI is trash for the tasks I give it.

            • BartyDeCanter@lemmy.sdf.org · +4 · 17 hours ago

              Yeah, I have deep reservations about the various AI companies, the environmental impacts of the industry, and many of the other issues that people are bringing up here. And, I have still found a few very practical uses.

              My partner was fighting with their insurance company about getting reimbursed for several thousand dollars of medical expenses. After a couple of rounds of rejections I had them send me the paperwork, insurance information, and rejection letters and then asked ChatGPT what we should say to get them to reimburse us. It came up with a letter that had the right legal mumbo jumbo to convince the insurance company to agree and pay us. Yes, I could have hired a lawyer, but the legal fees would have eaten up most of the money. And I guess I could have gone to law school, gotten a specialization in insurance law, and figured it out myself. But that also would have cost more time and money.

              I still think “AI” is overhyped and has a lot of ethical issues, but there are also some very practical uses.