• MagicShel@lemmy.zip · 21 hours ago

    A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

    What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

    A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

    So they did. Why are we talking about ChatGPT then? You could just leave that part out. It’s useless. Obviously a fake photo has been manipulated. Why bother asking?

      • plantfanatic@sh.itjust.works · 13 hours ago

        Wait, you’re surprised it did what you asked of it?

        There’s a massive difference between asking if something is fake, and telling it it is and asking why.

        A person would make the same type of guesses and explanations if given the same task.

        All this shows is that you and a lot of other people just don’t know enough about AI to have a conversation about it.

        It even says “suggests”; it’s making no claim that the image is real or fake. The lack of basic comprehension is the issue here.

        • Weslee@lemmy.world · 1 hour ago

          I think if a person were asked to do the same, they would actually look at the image and make genuine remarks. Look at the points it has highlighted: the boxes are placed around random spots, and the notes attached to them are unrelated (e.g. yellow talks about branches when there are no branches near the yellow box; red talks about a bent guardrail when the red box sits on an undamaged section of guardrail).

          It has just made up points that “sound correct”. Anyone actually looking at this can tell there is no intelligence behind it.

          • plantfanatic@sh.itjust.works · 12 hours ago

            Why would it have to? It and the person doing the task already know to do any task put in front of them. It’s one of a hundred photos, for all either of them knows.

            You are adding context and instructions that don’t exist. The situation would be that both are doing whatever task is presented to them. A human who asked would fail and be removed; they failed order number one.

            You could also set up a situation where the AI and the human were both capable of asking. The AI won’t do what it’s not asked to; that’s the comprehension that’s lacking.

            • sem@piefed.blahaj.zone · 6 hours ago

              When people use a conversational tool, they expect it to act human, which it INTENTIONALLY DOES but without the sanity of a real human.

        • Deestan@lemmy.world · 13 hours ago

          Wait, you’re surprised it did what you asked of it?

          No. Stop making things up to complain about. Or at least leave me out of it.

          • plantfanatic@sh.itjust.works · 12 hours ago

            Then what are you doing? Complaining that it did exactly what you instructed it to do?

            What else did you expect?

            I get that circle-jerking against AI is hip and fun, but this isn’t even one of the valid errors it makes. This is just pure human error lmfao.

            • WhyJiffie@sh.itjust.works · 3 hours ago

              Clearly, they asked it the question an average Joe would ask, and it has shown again that it’s full of overly confident lies. It did not just reinforce the user’s original belief that the photo is fake; it also hallucinated a bunch of professional-sounding statements that are false if you take the time to check them. Most people won’t check, though, and will straight up believe what it spits out and think, “oh, this is so smart! Outrageous that people call me dumb for asking it for life advice!”

    • Wren@lemmy.today · 16 hours ago

      My best guess is SEO. Journalism that mentions ChatGPT gets more hits. It might be that they did use a specialist or specialized software, and the editor was like “Say it was ChatGPT, otherwise people get confused, and we get more views. No one’s going to fact-check whether or not someone used ChatGPT.”

      That’s just my wild, somewhat informed speculation.

    • BanMe@lemmy.world · 20 hours ago

      I am guessing the reporter wanted to remind people that tools exist for this; however, the reporter isn’t tech-savvy enough to realize ChatGPT isn’t one of them.

      • 9bananas@feddit.org · 17 hours ago

        afaik, there actually aren’t any reliable tools for this.

        the highest accuracy rate I’ve seen reported for “AI detectors” is somewhere around 60%; barely better than a random guess…

        edit: that figure is for text/LLMs, to be fair.

        kinda doubt images are much better though… happy to hear otherwise, if there are better ones!

    • IcyToes@sh.itjust.works · 20 hours ago

      They needed time for their journalists to get there. They’re too busy on the beaches counting migrant boat crossings.

    • Tuukka R@piefed.ee · 16 hours ago

      Here’s hoping that the reporter then looked at the image and noticed, “oh, true! That’s an obvious spot there!”

    • Railcar8095@lemmy.world · 21 hours ago

      Devil’s advocate: the AI might be an agent that detects tampering, with an NLP frontend.

      Not all AI is LLMs.

      • MagicShel@lemmy.zip · 20 hours ago

        A “chatbot” is not a specialized AI.

        (I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI or even chatbots. They have their uses. This is not using them appropriately.

        • Railcar8095@lemmy.world · 20 hours ago

          A chatbot can be the user-facing side of a specialized agent.

          That’s actually how the original chatbots worked. Siri didn’t know how to get the weather; it was able to classify the question as a weather question, parse out the time and location, and decide which APIs to call in those cases.
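          That classify-then-dispatch design is easy to sketch. Below is a toy illustration of the idea only (not Siri’s actual code; every function name and keyword list here is made up for the example):

```python
import re

def classify_intent(question: str) -> str:
    """Crude keyword-based intent classifier. Real assistants used trained
    classifiers, not keyword lists, but the role is the same."""
    q = question.lower()
    if any(w in q for w in ("weather", "rain", "temperature")):
        return "weather"
    if any(w in q for w in ("timer", "alarm", "remind")):
        return "timer"
    return "unknown"

def extract_location(question: str) -> str:
    """Toy slot-filling: grab the words after 'in', defaulting to 'here'."""
    m = re.search(r"\bin ([A-Za-z ]+?)(?:\?|$)", question)
    return m.group(1).strip() if m else "here"

def handle(question: str) -> str:
    """Route the classified intent to a (stubbed-out) backend API."""
    intent = classify_intent(question)
    if intent == "weather":
        location = extract_location(question)
        # a real assistant would call a weather API here
        return f"Looking up the weather in {location}"
    return "Sorry, I can't help with that."
```

          The LLM-era version swaps the keyword matcher for a model, but the shape is the same: classify the request, extract the slots, call a real API.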

          • MagicShel@lemmy.zip · 20 hours ago

            Okay, I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I’m wrong, what is the message to the audience? That ChatGPT can investigate just as well as the BBC. Which may well be the case, but it oughtn’t be.

            My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.

            • squaresinger@lemmy.world · 15 hours ago

              ChatGPT is a frontend for specialized modules.

              If you ask it to do maths, for example, it will not do it via the LLM but run it through a maths module.

              I don’t know for a fact whether it has a photo analysis module, but I’d be surprised if it didn’t.
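              (How ChatGPT routes requests internally isn’t public, so treat the above as a claim rather than a spec. But a deterministic “maths module” of the kind described is straightforward: instead of letting a language model predict digits token by token, you parse and evaluate the expression exactly. A minimal sketch, with names invented for the example:)

```python
import ast
import operator

# Map AST operator nodes to exact arithmetic functions.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str) -> float:
    """Safely evaluate +, -, *, / expressions via the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")  # reject anything else
    return walk(ast.parse(expr, mode="eval"))
```

              The point of such a module is that it is either exactly right or it refuses; it never produces a confident-but-wrong answer the way pure next-token prediction can.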

            • Railcar8095@lemmy.world · 20 hours ago

              It’s not like BBC is a single person with no skill other than a driving license and at least one functional eye.

              Hell, they don’t even need to go, just call the local services.

              To me it’s more likely that they have a specialized tool than that an LLM correctly detected tampering with the photo.

              But if you say it’s unlikely you’re wrong, then I must be wrong I guess.

              • MagicShel@lemmy.zip · 20 hours ago

                what is the message to the audience? That ChatGPT can investigate just as well as BBC.

                What about this part?

                Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.

                Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.

                  • MagicShel@lemmy.zip · 19 hours ago

                    “AI chatbot”. Which means ChatGPT to 99% of people, almost certainly including the journalist, who doesn’t live under a rock. They are just avoiding naming it.

                • Riskable@programming.dev · 18 hours ago

                  I don’t think it’s irresponsible to suggest to readers that they can use an AI chatbot to examine any given image to see if it was AI-generated. Even the lowest-performing multi-modal chatbots (e.g. Grok and ChatGPT) can do that pretty effectively.

                  Also: Why stop at one? Try a whole bunch! Especially if you’re a reporter working for the BBC!

                  It’s not like they give an answer of “yes, definitely fake” or “no, definitely real.” They will analyze the image and give you some information about it, such as tell-tale signs that an image could have been faked.

                  But why speculate? Try it right fucking now: Ask ChatGPT or Gemini (the current king at such things BTW… For the next month at least hahaha) if any given image is fake. It only takes a minute or two to test it out with a bunch of images!

                  Then come back and tell us that’s irresponsible with some screenshots demonstrating why.

                  • MagicShel@lemmy.zip · 18 hours ago

                    I don’t need to do that. And what’s more, it wouldn’t be any kind of proof because I can bias the results just by how I phrase the query. I’ve been using AI for 6 years and use it on a near-daily basis. I’m very familiar with what it can do and what it can’t.

                    Between bias and randomness, you will have images that are evaluated as both fake and real at different times to different people. What use is that?

    • HugeNerd@lemmy.ca · 16 hours ago

      But the stories of Russians under my bed stealing my washing machine’s CPU are totally real.

      • ArcaneSlime@lemmy.dbzer0.com · 19 hours ago

        This is true, but also there’s no way this wouldn’t have been reported rather quickly. Not just online; within 5 minutes someone would have been all:

        “Oi 999? The bridge on Crumpet Lane 'as fallen down, I can’t get to me Chippy!”

        Or

        “Oi wot was that loud bang outside me flat?! Made me spill me vindaloo! Holy Smeg the bridge collapsed!”

        Or, like, isn’t the UK the most surveilled country, with their camera system? Is this bridge not on camera already? For that, the AI naming the location would probably be handy too; I’d just be surprised they don’t have it on security cams.

        • Riskable@programming.dev · 18 hours ago

          Or, like, isn’t the UK the most surveilled country, with their camera system?

          Ahahah! That’s a good one!

          You think all those cameras are accessible to everyone or even the municipal authorities? Think again!

          All those cameras are mostly useless—even for law enforcement (the only ones with access). It’s not like anyone is watching them in real time and the recordings—if they even have any—are like any IT request: Open a ticket and wait. How long? I have no idea.

          Try it: If you live in the UK, find some camera in a public location and call the police to ask them, “is there an accident at (location camera is directly pointing at)?”

          They will ask you all sorts of questions before answering you (just tell them you heard it through the grapevine or something) but ultimately, they will send someone out to investigate because accessing the camera is too much of a pain in the ass.

          It’s the same situation here in the US. I know because the UK uses the same damned cameras and recording tech. It sucks! They’re always looking for ways to make it easier to use and every rollout of new software actually makes it harder and more complicated!

          How easy is the ticket system at your work? Now throw in dozens of extra government-mandated fields 🤣

          Never forget: The UK invented bureaucracy and needless paperwork!