• 0 Posts
  • 240 Comments
Cake day: June 22nd, 2023

  • LLMs use the entirety of a copyrighted work for their training, which fails the “amount and substantiality” factor.

    That factor is relative to what is reproduced, not to what is ingested. A company is allowed to scrape the web all they want as long as they don’t republish it.

    By their very nature, LLMs would significantly devalue the work of every artist, author, journalist, and publishing organization, on an industry-wide scale, which fails the “Effect upon work’s value” factor.

    I would argue that LLMs devalue the author’s potential for future work, not the original work they were trained on.

    Those two alone would be enough for any sane judge to rule that training LLMs would not qualify as fair use, but then you also have OpenAI and other commercial AI companies offering the use of these models for commercial, for-profit purposes, which also fails the “Purpose and character of the use” factor.

    Again, that’s the practice of OpenAI, but not inherent to LLMs.

    You could maybe argue that training LLMs is transformative,

    It’s honestly absurd to try and argue that they’re not transformative.



  • you are literally doing what i mean when i say you are making assumptions with no evidence. there is, again, no reason to believe that “driving more efficiently” will result from mass-adoption of automated vehicles–and even granting they do, your assumption that this wouldn’t be gobbled up by induced demand is intuitively disprovable. even the argumentation here parallels other cases where induced demand happens! “build[ing] new roads or widen[ing] existing ones” is a measure that is almost always justified by an underlying belief that we need to improve efficiency and productivity in existing traffic flows,[^1] and obviously traffic flow does not improve in such cases.

    I’m doing nothing other than questioning where the induced demand is coming from. What is inducing it, if not increased efficiency?

    The whole point of induced demand in highways is that adding capacity in the form of lanes induces demand. So if our highways are already full, and that capacity isn’t coming from increased AV efficiency, then where is it coming from? If there’s no increase in road capacity, then what is inducing demand?

    but granting that you’re correct on all of that somehow: more efficiency (and less congestion) would be worse than inducing demand. “efficiency” in the case of traffic means more traffic flow at faster speeds, which is less safe for everyone—not more.[^2] in general: people drive faster, more recklessly, and less attentively when you give them more space to work with (especially on open roadways with no calming measures like freeways, which are the sorts of roads autonomous vehicles seem to do best on). there is no reason to believe they would do this better in an autonomous vehicle, which if anything incentivizes many of those behaviors by giving people a false sense of security (in part because of advertising and overhyping to that end!).

    You are describing how humans drive, not AVs. AVs always obey the speed limit and traffic calming signs.

    you asserted these as “other secondary effects to AVs”–i’m not sure why you would do that and then be surprised when people challenge your assertion. but i’m glad we agree: these don’t exist, and they’re not benefits of mass adoption nor would they likely occur in a mass adoption scenario.

    We haven’t agreed on anything. I said I was open to your reasoning as to why those effects wouldn’t happen, and then you didn’t provide any.

    the vast majority of road safety is a product of engineering and not a product of human driving ability, what car you drive or its capabilities, or other variables of that nature. almost all of the problems with, for example, American roadways are design problems that incentivize unsafe behaviors in the first place (and as a result inform everything from the ubiquity of speeding to downstream consumer preferences in cars). to put it bluntly: you cannot and will not fix road safety through automated vehicles, doubly so with your specific touted advantages in this conversation.

    You think you can eliminate all accidents through road design?

    You are literally ignoring every single accident caused by distracted driving, impatient driving, impaired driving, tired driving, etc.

    Yeah, road design in America should be better, but AVs should still also replace crappy, reckless humans. Those two ideas are not mutually exclusive.


  • this is at obvious odds with the current state of self-driving technology itself–which is (as i noted in the other comment) subject to routine overhyping and also has rather minimal oversight and regulation generally

    All cool tech things are overhyped. If your judgment for whether or not a technology is going to be useful is “if it sounds at all overhyped, then it will flop,” then you would never predict that any technology would change the world, ever.

    And no, quite frankly, those assertions are objectively false. Waymo’s and Cruise’s driverless programs are both monitored by the DMV, which is why it revoked Cruise’s license when it found them hiding crash data. Waymo has never been found to do so, or even been accused of doing so. Notice that in the lawsuit you linked, Waymo was happy to publish accident and safety data but did not want to publish data about how its vehicles handle edge cases, which would give its rivals information on how they operate, and the courts agreed with them.

    https://arstechnica.com/cars/2023/12/human-drivers-crash-a-lot-more-than-waymos-software-data-shows/

    Since their inception, Waymo vehicles have driven 5.3 million driverless miles in Phoenix, 1.8 million driverless miles in San Francisco, and a few thousand driverless miles in Los Angeles through the end of October 2023. And during all those miles, there were three crashes serious enough to cause injuries:

    In July, a Waymo in Tempe, Arizona, braked to avoid hitting a downed branch, leading to a three-car pileup. A Waymo passenger was not wearing a seatbelt (they were sitting on the buckled seatbelt instead) and sustained injuries that Waymo described as minor.

    In August, a Waymo at an intersection “began to proceed forward” but then “slowed to a stop” and was hit from behind by an SUV. The SUV left the scene without exchanging information, and a Waymo passenger reported minor injuries.

    In October, a Waymo vehicle in Chandler, Arizona, was traveling in the left lane when it detected another vehicle approaching from behind at high speed. The Waymo tried to accelerate to avoid a collision but got hit from behind. Again, there was an injury, but Waymo described it as minor.

    The two Arizona injuries over 5.3 million miles work out to 0.38 injuries per million vehicle miles. One San Francisco injury over 1.75 million miles equals 0.57 injuries per million vehicle miles. An important question is whether that’s more or less than you’d expect from a human-driven vehicle.

    After making certain adjustments—including the fact that driverless Waymo vehicles do not travel on freeways—Waymo calculates that comparable human drivers reported 1.29 injury crashes per million miles in Phoenix and 3.79 injury crashes per million miles in San Francisco. In other words, human drivers get into injury crashes three times as often as Waymo in the Phoenix area and six times as often in San Francisco.

    Waymo argues that these figures actually understate the gap because human drivers don’t report all crashes. Independent studies have estimated that about a third of injury crashes go unreported. After adjusting for these and other reporting biases, Waymo estimates that human-driven vehicles actually get into five times as many injury crashes in Phoenix and nine times as many in San Francisco.

    To help evaluate the study, I talked to David Zuby, the chief research officer at the Insurance Institute for Highway Safety. The IIHS is a well-respected nonprofit that is funded by the insurance industry, which has a strong interest in promoting automotive safety.

    While Zuby had some quibbles with some details of Waymo’s methodology, he was generally positive about the study. Zuby agrees with Waymo that human drivers underreport crashes relative to Waymo. But it’s hard to estimate this underreporting rate with any precision. Ultimately, Zuby believes that the true rate of crashes for human-driven vehicles lies somewhere between Waymo’s adjusted and unadjusted figures.
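    The rate arithmetic in the quoted article can be sanity-checked with a quick sketch. This is a minimal illustration using only the mileage, injury counts, and human-driver baselines quoted above; the variable names are my own:

```python
# Injury-crash rates per million vehicle miles, from the figures quoted above.
waymo = {
    "Phoenix": {"miles_millions": 5.3, "injury_crashes": 2},
    "San Francisco": {"miles_millions": 1.75, "injury_crashes": 1},
}
# Waymo's estimated human-driver baselines (reported injury crashes per million miles).
human_reported = {"Phoenix": 1.29, "San Francisco": 3.79}

for city, d in waymo.items():
    rate = d["injury_crashes"] / d["miles_millions"]   # Waymo's observed rate
    ratio = human_reported[city] / rate                # how much worse humans fare
    print(f"{city}: Waymo {rate:.2f}/M miles, "
          f"humans {human_reported[city]:.2f}/M miles ({ratio:.1f}x more often)")
```

    Running the numbers reproduces the article’s claims: roughly 0.38 and 0.57 injuries per million miles for Waymo, with human drivers crashing about three times as often in Phoenix and six times as often in San Francisco (before any underreporting adjustment).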


  • they can. induced demand is omnipresent in basically all vehicular infrastructure and vehicular improvements and there’s no reason to think this would differ with autonomous vehicles

    Yes, I have no doubt there would be induced demand, but that extra demand wouldn’t be at the cost of anything. Induced demand is a problem when we, for instance, build new roads or widen existing ones, because then more people drive and they clog up the same as they were before. That’s a bad thing because the cost of adding this capacity is that we have to tear down nature and existing city to add lanes, and then we have more capacity that sits at a standstill leading to more emissions.

    But if AVs add more capacity to our roads, that will be entirely because they are driving more efficiently. We’ll have the same number of cars on the road at any given time; they’ll just be moving faster on average rather than idling in traffic jams made by humans. Which means that there will be only relatively minor emissions increases during peak times, fewer emissions during non-peak, and we won’t be tearing anything down to build more giant highways.

    okay but: literally none of this follows from mass-adoption of autonomous vehicles. this is a logical leap you are making with no supporting evidence—there is, and i cannot stress this enough, no evidence that if mass-adoption occurs any of this will follow

    You’re asking for something that does not exist. How am I supposed to provide you evidence proving what the results of mass adoption of AVs will be when there has never been a mass adoption of AVs?

    and in general the technology is subject to far more fabulism and exaggeration (like this!) than legitimate technological advancement or improvement of society.

    Again, it’s never actually been rolled out on a mass scale. It’s a technology still being actively developed. Neither of us knows what the end results will be, but I put forth plausible reasoning behind my speculation; if you have plausible reasoning why those things won’t come to pass, I’m all ears. For instance, what is your reasoning for believing that AVs could never be fundamentally safer than human drivers, who are frequently tired, angry, distracted, impaired, impatient, etc.?


  • And here we see decades of automobile industry propaganda in action. There is only the car, or no mobility whatsoever.

    Please cite where I said that.

    You remember how everybody was just trapped inside their houses for centuries until the Ford factories started cranking out Model Ts?

    Um, yes. Obviously I don’t remember it directly, but that is what is in the history books.

    Most Americans lived in small rural communities and seldom left their farm and immediate community. When they travelled at all, it would be by horse and buggy, and it would take forever to get to the nearest train station, and then forever again from the end of the line to wherever they had to go. If people lived farther away, you would see them once every couple of years and otherwise write them letters. Cars fundamentally changed how much the average person travels in their life, by huge orders of magnitude, and society is now oriented around individual families and communities being much more spread out. I think this is flawed, but I also think it’s unlikely to change given the realities of basic things like housing costs making it unaffordable to live where your parents did.

    We should build out robust train networks to eliminate as many car trips as possible, but at the same time, the idea that you’ll eliminate cars completely is, quite frankly, completely divorced from reality. I personally do not own a car and have spent a used-car amount of money on a cargo bike to avoid having to buy one. But guess what? There is still a very clear limit on the size of object I can transport (smaller than virtually any piece of furniture), it ranges from unpleasant to infeasible to use in the rain depending on the load, and it is flat-out unusable in the winter with snow and ice, so I end up using a car-share service semi-regularly. I’ve thought about putting on bigger wheels, extending the bed, adding better suspension, a roof, and another set of wheels for balance, but then I’ve invented a car. And that’s not to mention driving out to nature preserves for camping, hiking, rock climbing, mountain biking, etc., nor visiting family and friends who live out in the country, nowhere near any bus stops or train stations.

    As long as cars exist, AVs will be better than human drivers, and literally no one has ever presented a remotely feasible and practical plan for eliminating cars.


  • This is a fundamentally flawed argument.

    First of all, if people are getting to where they want to go faster, easier, and happier, that is a good thing. If you want to argue that everyone needs to be a hermit who never leaves home and orders everything on Amazon then you will never get your way because people fundamentally want to travel to see the outdoors and nature around them, to see their family and friends, and just to adventure. Eliminating vehicle deaths by making travel impossible is not a noble goal.

    Secondly, it’s based on the idea that people can even drive more than they already do. Road congestion in most major cities is already the limiting factor that pushes people to bike, walk, or take transit. Even if AVs make it easier and cheaper to take a car, you’re still not going to do it during rush hour when you can bike.

    Thirdly, it’s based on the idea that AVs are only going to be slightly safer than human drivers. We have no reason to think that’s the case. Humans are fucking terrible drivers, and it’s highly likely that AVs will be several orders of magnitude safer than the average human driver.

    Fourthly, it ignores other secondary effects of AVs, like suddenly not needing nearly as much parking, which frees up parking-lot real estate and, more importantly, on-street parking, creating more room for actual traffic to move. Their increased patience also avoids the constant traffic jams humans cause by tailgating someone and then slamming on the brakes.




  • Making a copy is free. Making the original is not.

    Yes, exactly. Do you see how that is different from the world of physical objects and energy? That is not the case for a physical object. Even once you design something and build a factory to produce it, the first item off the line takes the same amount of resources as the last one.

    Capitalism is based on the idea that things are scarce. If I have something, you can’t have it, and if you want it, then I have to give up my thing, so we end up trading. Information does not work that way. We can freely copy a piece of information as much as we want. Which is why monopolies and capitalism are a bad system of rewarding creators. They inherently cause us to impose scarcity where there is no need for it, because in capitalism things that are abundant do not have value. Capitalism fundamentally fails to function when there is abundance of resources, which is why copyright was a dumb system for the digital age. Rather than recognize that we now live in an age of information abundance, we spend billions of dollars trying to impose artificial scarcity.


  • they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.

    This is the part that’s flawed. They have actively targeted neural network applications with hardware and driver support since 2012.

    Yes, they got lucky in that generative AI turned out to be massively popular, and required massively parallel computing capabilities, but luck is one part opportunity and one part preparedness. The reason they were able to capitalize is because they had the best graphics cards on the market and then specifically targeted AI applications.



  • Well it is one thing to automate a repetitive task in your job, and quite another to eliminate entire professions.

    No, it is not. That is literally how those jobs are eliminated. Thirty years ago, CAD came out and helped automate drafting tasks to the point that a team of 20 drafters turned into 1 or 2, and eventually into engineers drafting their own drawings.

    What you call “menial bullshit” is the entire livelihood and profession of quite a few people, speaking of taxis for one.

    Congratulations, but despite you wanting to look at it through rose-coloured glasses, that does not change the fact that it is objectively menial bullshit.

    What are all these people going to do when taxi driving is relegated to robots?

    Find other entry-level jobs. If we eliminate *all* entry-level jobs through automation, then we will need to implement some form of basic income, as there will not be enough useful work for everyone to do. That would be a great problem to have.

    Will the state have enough cash to support them and help them upskill or whatever is needed to survive and prosper?

    Yes, the state has access to literally all of the profits from automation via taxes and redistribution.

    A technological utopia is a promise from the 1950s. Hasn’t been realized yet. Isn’t on the horizon anytime soon. Careful that in dreaming up utopias we don’t build dystopias.

    Oh wow, you’re saying that if human beings can’t create something in 70 years, then that means it’s impossible and we’ll never create it?

    Again, the only way to get to a utopia is to have all of the pieces in place, which necessitates a lot of automation and much more advanced technology than we already have. We’re only barely at the point where we can start to practice biology and medicine in a meaningful way, and that’s only because electronic computers completely eliminated the former profession of human “computer.”

    Be careful that you don’t keep yourself stuck in our current dystopia out of fear of change.


  • Better system for WHOM? Tech-bros that want to steal my content as their own?

    A better system for EVERYONE. One where we all have access to all creative works, rather than spending billions on engineers and lawyers to create walled gardens and DRM and artificial scarcity. What if literally all the money we spent on all of that instead went to artist royalties?

    But tech-bros that want my work to train their LLMs - they can fuck right off. There are legal thresholds that constitute “fair use” - Is it used for an academic purpose? Is it used for a non-profit use? Is the portion that is being used a small part or the whole thing? LLM software fails all of these tests.

    No. It doesn’t.

    They can literally pass all of those tests.

    You are confusing OpenAI keeping their LLM closed source and charging access to it, with LLMs in general. The open source models that Microsoft and Meta publish for instance, pass literally all of the criteria you just stated.




  • I think that’s a huge risk, but we’ve only ever seen a single, very specific type of intelligence, our own / that of animals that are pretty closely related to us.

    Movies like Ex Machina and Her do a good job of pointing out that there is nothing that inherently means that an AI will be anything like us, even if they can appear that way or pass at tasks.

    It’s entirely possible that we could develop an AI so specifically trained that it would provide the best script-editing notes but be incapable of anything else, including self-reflection or feeling loss.




    We are human beings. The comparison is false on its face because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.

    If you fundamentally do not think that artificial intelligences can be created, the onus is on you to explain why it’s impossible to replicate the circuitry of our brains. Everything we’ve seen in science thus far has shown that we are merely physical beings that can be recreated physically.

    Otherwise, I asked you to examine a thought experiment where you are trying to build an artificial intelligence, not necessarily an LLM.

    This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

    Or you are overcomplicating yourself to seem more important and special. Definitely no way that most people would be biased towards that, is there?

    Moreover, human beings make their own choices, they aren’t actual tools.

    Oh please do go ahead and show us your proof that free will exists! Thank god you finally solved that one! I heard people were really stressing about it for a while!

    They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.

    “I don’t know how this works but it’s math and that scares me so I’ll minimize it!”