• 5 Posts
  • 123 Comments
Joined 1 year ago
Cake day: June 4th, 2023

  • First, I don’t see an issue with a “store brand” if it does what you need.

    Secondly, who is the name brand for, say, a power strip, a USB hub, a USB-C charger, or cables? Or do you buy Monster audio cables? An SD card reader? A microfiber cloth? What about regular bath towels?

    Somewhat more controversial: what about things that are inherently disposable, like latex gloves or laundry detergent?

    I went from All Free and Clear from Sam’s Club, which took up space and got me about 120 packets for 20 dollars, to detergent sheets, which are much smaller and came 300 for 7 dollars. You use the same number of sheets as you would packets, and the clothes come out the same.

    But yes, try searching for something like an electric candle lighter on both sites and tell me which are the “quality non-knockoff” options on Amazon. 90 percent of them are on Temu too, for less.


  • I mean, most people don’t consider the ease of changing a light bulb (something they may never have to do) a deal breaker for a car. I haven’t had to change a headlight since cars went to LEDs, and that includes my last car over 7 years of owning it.

    I think we should insist on making things repairable, but should focus on the things that come up frequently.

    Because everything is a tradeoff, things like how often the car is likely to need repair, how much it costs, its day-to-day functionality, looks, and gas mileage will come before a once-a-decade job that you’ll either pay a shop to do or avoid by trading the car in before it’s an issue.


  • It’s also the anti-commodity stuff IP law has been enabling. If Hershey makes crap chocolate, there is little stopping you from buying Lindt, say. But if Microsoft makes a bad OS, there’s a lot stopping you from switching to Linux or whatever.

    What’s worse is stuff like DRM and computers getting into equipment for which you could otherwise use any of a bevy of products. Think ink cartridges.

    Then there are the secret formulas, like today’s transmission fluids, where, say, Honda states in the manual that you have to use Honda fluid for the transmission to keep working. I don’t know whether that’s actually true, but I’m loath to run the 8,000 USD experiment on my transmission.

    You’d think the government could mandate standards, but we don’t have anything like that.


  • Yes, definitely. Many of my fellow NLP researchers would disagree with those researchers and philosophers (not sure why we should care about the latter’s opinions on LLMs).

    I’m not sure what you’re saying here - do you mean you do or don’t think LLMs are “stochastic parrots”?

    In any case, the reason I would care about philosophers’ opinions on LLMs is mostly that LLMs are already making “the masses” think they’re potentially sentient and/or deserve personhood. What’s more concerning is that the academics who more or less define what thinking even is seem confused by LLMs, if you take the “stochastic parrot” POV. This eventually has real-world effects - it might take a decade or two, but these things spread.

    I think this is a crazy idea right now, but I also think that, going into the future, we’ll eventually need something like a TNG “Measure of a Man” trial for some AI, and I’d want to get that sort of thing right.



  • I think it’s very clear that this “stochastic parrot” idea is less and less accepted by researchers and philosophers - though maybe that’s just the podcasts I listen to…

    “It’s not capable of knowledge in the sense that humans are. All it does is probabilistically predict which sequence of words might best respond to a prompt.”

    I think we need to be careful about assuming we understand what human knowledge is, and about the connotations of the word “sense” there. If you mean GPT-4 doesn’t have knowledge the way humans do, like a car doesn’t have motion the way a human does, then I think we agree. But if you mean that GPT-4 cannot reason and access and present information, that’s just false on the face of it from simply using the tool, IMO.

    It’s also untrue that it’s predicting words - it’s using tokens, which are more like concepts than words, so I’d argue it’s already closer to humans. And to the extent it is just predicting stuff, that really calls into question the value of most of the school essays it now writes so well…


  • Well, LLMs can and do provide feedback about confidence in colloquial terms. One thing we could do is give the model some signal of how good its training data is for a given situation - LLMs already seem to know they aren’t up to date and only know things up to a certain date. I don’t see why this couldn’t be expanded so they’d answer much like many humans would - i.e., “I think bla bla, but I only know very little about this topic,” or “I haven’t actually heard about this topic; my hunch would be bla bla.”

    Presumably, as was said, other models with different training data might have a stronger sense of certainty if their data covers the topic better, and the multi-cycle approach would be useful there.


  • Well, what you could do is run a DNS server so you don’t need to deal with IPs. You could likely change whatever server’s port to 443 or 80, depending on whether you’re internal-only or need SSL. Also, something like ZeroTier won’t route your whole connection through your home internet if you set it up correctly - look into split tunneling. With ZeroTier, only the traffic for the ZeroTier network you create for your devices gets routed over it.
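
    To make the DNS part concrete, here’s a minimal sketch using dnsmasq - the hostname and both IP addresses are made-up examples, so substitute your own:

    ```
    # /etc/dnsmasq.conf (illustrative sketch, not a full config)
    # Map a friendly name to the server's LAN or ZeroTier address
    host-record=media.home.lan,10.147.17.10
    # Forward all other queries to an upstream public resolver
    server=1.1.1.1
    ```

    Point your devices’ DNS at the machine running dnsmasq and you can reach the server as media.home.lan instead of memorizing a raw IP.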