• Wispy2891@lemmy.world · 3 hours ago

    Shouldn’t the voice control specifically target the user’s voice, just to prevent other people from interfacing with your device? Otherwise ads can say “hey meta order a crate of coke”, or someone on the street might shout “hey meta send a WhatsApp to all my contacts proving I’m an idiot”.

  • vane@lemmy.world · 6 hours ago

    Did he just say that they can remotely control everyone’s glasses?

    • masterspace@lemmy.ca · 4 hours ago

      No, he said that when the audio command came over the speakers, it triggered the smart glasses of everyone in the auditorium.

      • vane@lemmy.world · 5 hours ago

        I hope many people buy it so I can activate porn on their glasses when they walk around in public places.

  • just_another_person@lemmy.world · 6 hours ago

    Don’t fucking care. It’s a stupid product for a stupid company.

    Spend your effort actually helping the world and the people that inhabit it, you disgusting human.

    • masterspace@lemmy.ca · 4 hours ago

      It’s a company with no morals, but the product isn’t stupid, and neither is the way the company operates or the people who run it.

      Don’t underestimate your adversary.

  • TastehWaffleZ@lemmy.world · 11 hours ago

    That sounds like complete damage control lies. Why would the AI think the chef had finished prepping the sauce just because there was heavy usage??

    • Ulrich@feddit.org · 5 hours ago

      Even if it were true, your server can’t handle a couple hundred simultaneous requests? That’s not promising either. Although at least that would be easier to fix than the real problem, which is incredibly obvious to anyone who has ever used this technology: it doesn’t fucking work, and it’s flawed on a fundamental level.

      • KairuByte@lemmy.dbzer0.com · 3 hours ago

        If this was a tech demo, it tracks that they wouldn’t be using overpowered hardware. Why lug around a full server when they can just load the software onto a laptop, considering they weren’t expecting hundreds of invocations at the exact same moment?

    • PhilipTheBucket@piefed.social · 11 hours ago

      Yeah it’s a bunch of shit. I’m not an expert obviously, just talking out of my ass, but:

      1. Routing inference for all the devices in the building to “our dev server” would not have maintained a usable level of response time for any of them, unless he meant to say “the dev cluster” or something and his home wifi glitched right at that moment and made it sound different
      2. LLMs don’t degrade by giving wrong answers, they degrade by no longer producing tokens
      3. Meta has already shown itself to be okay with lying
      4. GUYS JUST USE FUCKING CANNED ANSWERS WITH THE RIGHT-SOUNDING VOICE, THIS ISN’T ROCKET SCIENCE, THAT’S HOW YOU DO DEMOS WHEN YOUR SHIT’S NOT DONE YET (rough sketch below)
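
      Something like this is all it would take. Sketch only, and every name in it is made up for illustration: the canned table, the recipe lines, and the optional live_model callable are hypothetical and have nothing to do with Meta’s actual demo code.

      ```python
      # Hypothetical sketch of a scripted demo fallback; nothing here is from Meta's real demo.
      import difflib

      CANNED_RESPONSES = {
          "how do i start the sauce": "Combine the gochujang, soy sauce and sesame oil in a bowl.",
          "what do i do next": "Whisk in the rice vinegar and a spoonful of honey until smooth.",
          "is the sauce done": "Once it coats the back of a spoon, the base sauce is done.",
      }

      def answer(command: str, live_model=None) -> str:
          """Return a scripted answer for known demo prompts; only fall back to live inference if asked."""
          key = command.lower().strip().rstrip("?.!")
          # Fuzzy match so minor speech-recognition wobble still hits the script.
          match = difflib.get_close_matches(key, CANNED_RESPONSES, n=1, cutoff=0.75)
          if match:
              return CANNED_RESPONSES[match[0]]
          if live_model is not None:
              return live_model(command)          # optional real model call, injected by the caller
          return "Sorry, I didn't catch that."    # a polite miss, never an improvised wrong step
      ```

      A miss just gets a shrug instead of the model improvising a wrong cooking step on stage, which is the whole point of scripting a demo.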

      • Sasha [They/Them]@lemmy.blahaj.zone · 11 hours ago

        LLMs can degrade by giving “wrong” answers, but not because of network congestion ofc.

        That paper is fucking hilarious, but the tl;dr is that when asked to manage a vending machine business for an extended period of time, they eventually go completely insane. Some have an existential crisis, some call the whole thing a conspiracy and call the FBI, etc. It’s amazing how trash they are.

        • PhilipTheBucket@piefed.social · 10 hours ago

          Initial thought: Well… but this is a transparently absurd way to set up an ML system to manage a vending machine. I mean, it is a useful data point I guess, but to me it leads to the conclusion “Even though LLMs sound to humans like they know what they’re doing, they do not; don’t just stick the whole situation into the LLM input and expect good decisions and strategies to come out of the output, you have to embed it in a more capable and structured system for any good to come of it.”

          Updated thought, after reading a little bit of the paper: Holy Christ on a pancake. Is this architecture what people have been meaning by “AI agents” this whole time I’ve been hearing about them? Yeah this isn’t going to work. What the fuck, of course it goes insane over time. I stand corrected, I guess, this is valid research pointing out the stupidity of basically putting the LLM in the driver’s seat of something even more complicated than the stuff it’s already been shown to fuck up, and hoping that goes okay.

          Edit: Final thought, after reading more of the paper: Okay, now I’m back closer to the original reaction. I’ve done stuff like this before, and this is not how you do it. Have it output JSON, build some tolerance and retries into the framework code for parsing the JSON, be more careful with the prompts to make sure it’s set up for success, and definitely don’t include all the damn history in the context up to the full wildly-inflated context window. Basically, be a lot more careful with how you set it up than this, and put a lot more limits on how much you’re asking of the LLM so that it can actually succeed within the little box you’ve put it in (something like the sketch below).

          I am not at all surprised that this setup went off the rails in hilarious fashion (and it really is hilarious, you should read it). Anyway, that’s what LLMs do. I don’t know if this is because the researchers didn’t know any better, or because they were deliberately setting up the framework around the LLM to produce bad results, or because this stupid approach really is the state of the art right now, but this is not how you do it.

          I actually am a little bit skeptical about whether you even could set up a framework for a current-generation LLM that would enable it to succeed at an objective and pretty frickin’ complicated task like the one they set up here, but regardless, this wasn’t a fair test. If it was meant as a test of “are LLMs capable of AGI all on their own, regardless of the setup, the way humans generally are,” then congratulations, you learned the answer is no. But you could have framed it a little more directly to talk about that being the answer, instead of setting up a poorly designed agent framework to be involved in it.
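
          For what it’s worth, here’s roughly what I mean. Sketch only: call_llm is a hypothetical stand-in for whatever chat-completion API you’re using, and the vending-machine actions and limits are numbers I made up, not anything from the paper.

          ```python
          # Sketch of the guardrails described above: force JSON output, put retries and
          # parse tolerance in the framework code, trim history instead of filling the
          # context window, and validate against a small action space.
          # call_llm is a hypothetical stand-in for any chat-completion API.
          import json
          from typing import Callable

          SYSTEM_PROMPT = (
              "You manage a vending machine. Reply ONLY with JSON shaped like "
              '{"action": "set_price|restock|wait", "item": "<name or null>", "value": <number or null>}'
          )
          ALLOWED_ACTIONS = {"set_price", "restock", "wait"}
          MAX_HISTORY_TURNS = 10    # keep the context small; never send the whole run
          MAX_PARSE_RETRIES = 3     # tolerance lives in the framework, not the model

          def next_action(history: list[dict], call_llm: Callable[[list[dict]], str]) -> dict:
              """Ask the model for one structured decision, retrying if the reply isn't valid JSON."""
              messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history[-MAX_HISTORY_TURNS:]]
              for _ in range(MAX_PARSE_RETRIES):
                  raw = call_llm(messages)
                  try:
                      decision = json.loads(raw)
                      if decision.get("action") in ALLOWED_ACTIONS:
                          return decision               # validated, inside the little box
                  except (json.JSONDecodeError, AttributeError):
                      pass
                  # tell the model exactly what went wrong and try again
                  messages.append({"role": "user",
                                   "content": "Invalid reply. Respond with the JSON schema only."})
              return {"action": "wait", "item": None, "value": None}   # safe default instead of going off the rails
          ```

          Even that won’t make it good at running a business, but at least a bad turn ends in “wait” instead of calling the FBI.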