Workers should learn AI skills and companies should adopt AI because it’s a “cognitive amplifier,” claims Satya Nadella.

In other words: please help us, use our AI.

  • kameecoding@lemmy.world · 6 hours ago

    I will try to have a balanced take here:

    The positives:

    • there are some uses for this “AI”
    • like an IDE, it can speed up development, especially for menial but important tasks such as unit test coverage.
    • it can be useful for rewording things into the puke-inducing corpo slang you sometimes have to use.
    • it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb the material down, get the core concept, and go from there

    The negatives:

    • the positives don’t justify the environmental externalities of all these AI companies
    • the positives don’t justify the PC hardware/silicon price hikes
    • shoehorning this into everything is capital R retarded.
    • AI is a fucking bubble keeping the US economy inflated instead of letting it crash like it should have a while ago
    • outside of a paid product like Copilot, there is simply very little commercially viable use-case for all this public cloud infrastructure other than targeting you with more ads, which you can’t block because they’re embedded in the text output.

    Overall, I wish the AI bubble would just burst already.

    • shalafi@lemmy.world · 25 minutes ago

      You’ve captured my exact take more closely than I could have written it myself.

      Only thing I’d add is using it to screw around with personal photos. ChatGPT is cleaning up some ’80s pics of my wife that were atrocious. I have rudimentary Photoshop skills, but we’d never have these clean pics without AI. OTOH, I’d gladly drop that ability to reclaim all the negatives.

    • ViatorOmnium@piefed.social · 6 hours ago

      menial but important tasks such as unit test coverage

      This is one of the cases where AI is worse. LLMs will generate the tests based on how the code works and not how it is supposed to work. Granted, lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they are doing at some point.
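
      To illustrate the distinction, here is a minimal, hypothetical Python sketch (the function and numbers are made up for illustration, not anyone’s real code): a “freeze the results” test pins whatever the code currently returns, while a spec-based test encodes what the code is supposed to return.

      ```python
      # Hypothetical function under test (made up for illustration).
      def apply_discount(price: float, customer_years: int) -> float:
          # Bug: loyal customers were meant to get 10% off, but this gives only 1%.
          return price * 0.99 if customer_years >= 5 else price

      # "Freeze the results" style: pins the current behaviour, bug and all.
      def test_apply_discount_frozen():
          assert apply_discount(100.0, 5) == 99.0  # passes, and the bug is now locked in

      # Spec-based style: encodes the intended behaviour, so the bug is caught.
      def test_apply_discount_spec():
          assert apply_discount(100.0, 5) == 90.0  # fails until the bug is fixed
      ```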

      • kameecoding@lemmy.world · 2 hours ago

        Granted, lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage,

        I’d be interested in what you mean by this. Aren’t all unit tests just freezing the result? A method is an algorithm: for certain inputs you expect certain outputs, so you unit test those inputs and their matching outputs, and you add coverage for edge cases because it’s cheap to do with unit tests. These tests “freeze the results”, or rather lock them in, so you know that piece of code always works as expected; it’s “frozen/locked in”.

      • kameecoding@lemmy.world · 2 hours ago

        LLMs will generate the tests based on how the code works and not how it is supposed to work.

        You can tell it to generate the tests based on how the code is supposed to work, you know.

      • Buddahriffic@lemmy.world · 3 hours ago

        You could have it write unit tests as black-box tests, where you only give it access to the function signature. Though even then, it still needs to understand what the test results should be, which will vary from case to case.
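
        As a rough sketch of that black-box approach (hypothetical function and contract, made up for illustration; assumes pytest), the expected values come only from the signature and docstring, never from reading the implementation:

        ```python
        import pytest

        def normalize_whitespace(text: str) -> str:
            """Collapse runs of whitespace into single spaces and trim both ends."""
            # The test author treats only the signature and docstring above as known.
            return " ".join(text.split())

        # Black-box tests: expectations derived from the stated contract alone.
        @pytest.mark.parametrize("raw, expected", [
            ("  hello   world ", "hello world"),
            ("\tone\ntwo\t", "one two"),
            ("", ""),
        ])
        def test_normalize_whitespace_contract(raw, expected):
            assert normalize_whitespace(raw) == expected
        ```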

      • JoeBigelow@lemmy.ca · 6 hours ago

        I think machine learning has vast potential in this area, specifically for things like running iterative tests in a laboratory or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don’t need to speak and understand bleep bloop and can tell it what I want in plain language. But beyond that, LLMs seem useless to me outside of fancy search uses. They should be the initial processing layer that figures out what type of actual AI (ML) to utilize to accomplish the task. I just want an automator that I can direct in plain language; why is that not what’s happening? I know that I don’t know enough to have an opinion, but I do anyway!
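
        A toy sketch of that routing idea (entirely hypothetical; the LLM call is stubbed out as a keyword classifier): the language model only decides which specialised tool handles the plain-language request, and ordinary code or task-specific models do the actual work.

        ```python
        # Hypothetical "translation layer" router: the LLM (stubbed here) only
        # classifies intent; dedicated handlers do the real work.

        def classify_intent(request: str) -> str:
            # Stand-in for an LLM call that returns one of a fixed set of intents.
            text = request.lower()
            if "test" in text:
                return "run_tests"
            if "summarize" in text or "summary" in text:
                return "summarize_data"
            return "unknown"

        def run_tests(request: str) -> str:
            return "Dispatched to the test runner."

        def summarize_data(request: str) -> str:
            return "Dispatched to the statistics/ML pipeline."

        HANDLERS = {"run_tests": run_tests, "summarize_data": summarize_data}

        def handle(request: str) -> str:
            intent = classify_intent(request)
            handler = HANDLERS.get(intent)
            return handler(request) if handler else "Sorry, I can't automate that yet."

        print(handle("Please summarize last week's sensor data"))
        ```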

    • rumba@lemmy.zip · 6 hours ago

      They f’d up with the electricity rates and hardware price hikes. They had been getting away with it by not inconveniencing enough laymen.

      • shalafi@lemmy.world · 30 minutes ago

        Very few laymen have noticed or give a shit about RAM prices. My young friend across the street and I are likely the only people on the block who know what RAM does, let alone are able to build a PC.

        Business purchasing is where we might see some backlash soon. I’ve bought all the IT goods, hardware and software, for my last two companies, and I’d be screaming.

        Boss: What the hell? Weren’t we getting these laptops for $1,200 last year?!

    • arendjr@programming.dev · 6 hours ago

      So I’m the literal author of the Philosophy of Balance, and I don’t see any reason why LLMs are deserving of a balanced take.

      This is how the Philosophy of Balance works: We should strive…

      • for balance within ourselves
      • for balance with those around us
      • and ultimately, for balance with Life and the Universe at large

      But here’s the thing: LLMs and the technocratic elite funding them are a net negative to humanity and the world at large. Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.

      Pick a side.

      • kameecoding@lemmy.world · 2 hours ago

        You are presupposing that your opinion about LLMs is absolutely correct, and then of course you arrive at your predetermined conclusion.

        What about the free LLMs available out of China and other places, which democratize the technology?

        Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.

        Thanks for not being dramatic, lol.

        • arendjr@programming.dev · 2 hours ago

          Your comment is fair. I try to follow my own philosophy, so I picked a side and I stand for it. I feel strongly about it, which is why I may use hyperbole at times.

          Yet I understand it’s not everybody’s opinion, so I try to respect those people even when I don’t necessarily respect their positions. It’s a tough line to draw sometimes.

    • Schal330@lemmy.world · 5 hours ago

      it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb the material down, get the core concept, and go from there

      I agree with this point so much. I’m probably a real thicko, and being able to use it to explain concepts in a different way or provide analogies has been so helpful for my learning.

      I hate the impact of AI use, and I hope we will see greater efficiencies in the near future so there is less resource consumption.