• 0 Posts
  • 91 Comments
Joined 1 year ago
Cake day: June 11th, 2023




  • Eeeh, I still think diving into the technical weeds is the wrong way to approach it. Their argument is that training isn’t copyright violation, not that sufficient training dilutes the violation.

    Even if it were trained on only one source, it’s quite unlikely that it would generate copyright-infringing output. The output would be vastly less intelligible, likely to the point of overtly garbled words and sentences lacking much in the way of grammar.

    Whether what they’re doing is technically an infringement, or how the system technically works, is entirely beside the discussion of whether it should be infringement or permitted.


  • Basing your argument around how the model or training system works doesn’t seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don’t work, how humans learn, and what “learning” and “knowledge” actually are.

    I’m a human as far as I know, and it’s trivial for me to regurgitate my training data. I regularly say things that are either direct references to things I’ve heard, or accidental copies of them, sometimes with errors.
    Would you argue that I’m just a statistical collage of the things I’ve experienced, seen, or read? My brain has as many copies of my training data in it as the AI model, namely zero, but “Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey Mouse and said ‘to be or not to be, that is the question, for tis nobler in the heart’ or something”. Direct copies of someone else’s work, as well as multiple copyright infringements.
    I’m also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.

    Arguing about how the model works, or about its deficiencies, to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work the way you think humans do? Or one that “properly” extracts the information in a way that isn’t just statistically inferred patterns, whatever that distinction is? Does that suddenly make it different?

    You don’t need to get bogged down in the muck of the technical: even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regard to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without the consent of the author.
    Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

    Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don’t want a shrimp boat in your swimming pool. I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy, I care that it ruins the whole thing for the people it exists for in the first place.

    I think all the AI stuff is cool, fun, and interesting. I also think that letting it train on everything regardless of the creators’ wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn’t labeled or cited.
    If they can find a way to do and use the cool stuff without making things worse, they should focus on that.


  • As written the headline is pretty bad, but it seems their argument is that they should be able to train on publicly available copyrighted information, like blog posts and social media, and not on private copyrighted information like movies or books.

    You can certainly argue that “downloading public copyrighted information for the purposes of model training” should be treated differently from “downloading public copyrighted information for the intended use of the copyright holder”, but it feels disingenuous to put this comment itself, to which someone holds a copyright, into the same category as something not shared publicly, like a paid article or a book.

    Personally, I think it’s a lot like search engines. If you make something public, someone can analyze it, link to it, or take other derivative actions, but they can’t copy it and share the copy with others.



  • In the sense that they have a manager? Sure. In the sense that there’s one individual dictating the design of the software? I’ve never even been on a team with that dynamic, to say nothing of the entire codebase.

    Modern software teams tend to eschew design by decree.

    What’s the dynamic that you think teams typically use?


  • I’m not sure I’d construe a manual you can find, or a variety of guides, as a negative. :) Most days my usage of git consists of “pull, commit, push, merge” in different orders. You might be overestimating how much effort goes into managing the tool.

    Most of my professional experience has been on projects made up of multiple teams of 4-6 developers each, with anywhere from 5 to 40 teams in total. I’m not entirely sure what you mean about git not mirroring the development patterns of most “real life” projects.
    “Real” projects are frequently developed by groups of people working on the same goal adjacent to other groups working on related but distinct goals.


  • We very clearly work in different professional environments. :)

    In no particular order: Administering a git server is similarly trivial. A repository is a folder (easy to back up, easy to repair, easy to host), and setting up a new server is usually a matter of ssh key management. You don’t even need to install SQLite or anything beyond the git package. Or, because the tool has wide support, you can install any of a wide selection of tools that manage it for you, or use a free hosting service, or a paid one.
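
    To illustrate (a sketch of my own, with a made-up host and paths, assuming git is installed on both machines):

    ```python
    # A git "server" is just a bare repository on a host you can ssh into.
    import subprocess

    # On the server: create a bare repository (just a folder of git metadata).
    subprocess.run(["git", "init", "--bare", "/srv/git/myproject.git"], check=True)

    # On a client: clone it over ssh. Access control is plain ssh key management.
    subprocess.run(
        ["git", "clone", "ssh://git@example.com/srv/git/myproject.git"],
        check=True,
    )
    ```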

    I’m startled that you would say you can’t think of anyone who would care. In my entire professional experience, developer stories about bad jobs have often included details about using old or esoteric VCS systems, usually met with “ew” or “wtf” comments. It sets the flavor of the story.
    Personally, in a business environment, I would take using anything except git for the org as a red flag. It’s a sign that someone in leadership at the company values doing things unrelated to the core mission “their way” above doing it the easy or “paved path” way.

    The standard tool is indeed not constant. Before git existed, using CVS would have been the better choice, as well as for years afterwards, until git had clearly usurped it. Most projects aren’t in the position Linux was in when it made the switch to git.

    You joke that no one really “knows” git, but… this is literally the first time I’ve ever seen a fossil command. I just searched for “fossil manual” and I got analog watches. It’s not even available in any of my systems’ package managers.
    Developer familiarity is a big advantage that I think you’re downplaying in comparison to “there are metadata files in .git”, which I don’t know has ever been relevant to me in any significant way.
    (Also, I thought the different systems all work basically the same? 😛)

    I’d handily agree people should be using the best tool for the job. Familiarity and ease of use are significant factors in what makes a tool better.
    Ability to integrate with other tools is also a major factor. Setting up continuous integration or code review tools with git is trivial with any number of different systems.

    What are any of the tools you’re using doing better than git? The biggest selling point you’ve shared for fossil is that it’s functionally similar to git and that it has better merging. I can’t find anything related to merge conflicts outside of years-old forum posts, and barely anything relating to merges at all, so I’m not entirely certain what makes it “better”.

    If its biggest advantage is that it’s similar enough to git that you can pick it up fast, why wouldn’t I just use git?


  • Like I said, there are always factors.

    For a company starting from scratch, though, the size of the tool’s user base becomes a vastly more significant factor.
    Using a tool that radically limits your integration capabilities is a poor choice, to say nothing of most likely needing to onboard every new employee to an entirely new VCS.

    In recent memory, I don’t know that I’ve encountered anyone using svn who wasn’t interested in moving, so “developer experience” would be a reason to move.




  • File1, file2, file_3.new, etc. would be bizarrely stupid. A home-rolled solution involving rsync, tar, gzip, cron jobs, or inotify would also be bizarrely stupid.

    https://en.wikipedia.org/wiki/List_of_version-control_software As a more serious answer: anything on that list marked as anything other than “active”. So DCVS, Visual SourceSafe, or BitKeeper. Anything that’s not getting bug fixes or maintenance.

    Anything that doesn’t have significant enough usage to give confidence that bugs or glitches are being caught by common usage would be risky, since you don’t want to be the person to find that edge case.

    There are things other than git that aren’t wrong, but I see little compelling reason not to use the most ubiquitous tool.


  • There’s a difference between “can’t code” and “can’t work”.

    A lot of people use git for version control: super good idea, basically anything else is at best unorthodox, at worst bizarrely stupid.
    A lot of people also use GitHub for repository hosting, continuous integration, code review, deployment, packaging, etc., etc. This is more of an opinion thing than a standard practice thing, and there are plenty of other ways to get the same tools, either all in one package or from a variety of different ones, self-hosted, in the cloud, or some hybrid in between.

    If GitHub goes down, you can make code changes and everything to your heart’s content. But you might not be able to run your full integration testing pipeline on it, get a code review, or package your software.

    If your local build process pulls packages from GitHub or refreshes a remote repository automatically, it can also powerfully mess that up, but that has nothing to do with git. You can use “ctrl-c/v” backups and still have a build process that tips over when GitHub goes down.


  • https://daniel.haxx.se/blog/2020/12/17/curl-supports-nasa/

    https://daniel.haxx.se/blog/2023/02/07/closing-the-nasa-loop/

    Their process for validating software doesn’t have a box for “open source” and basically assumes it’s either purchased or contracted. So someone in risk assessment just gets a list of software libraries and goes down it, checking that they have the required forms.

    As the referenced talk mentions, the people using the software understand that all the testing and everything is entirely on them, and that sending these messages is bothersome and unfair, and they’re working on it. Unfortunately, NASA is also a massive government bureaucracy and so process changes are slow, at best.
    The TLAs don’t generally help NASA, and getting them involved would unfortunately only result in more messages being sent.

    As for contributions, I think that turns into an even worse can of worms, since software developed by or for the US government generally isn’t just open source, but public domain. I think you’d end up with a big mess of licensing horror if you tried to get money or official relationships involved. It’s why SQLite is public domain, since it was developed at the behest of the US government.

    Mostly just context for what you said. NASA isn’t being arrogant, they’re being gigantic. One group does its due diligence in-house while another branch goes down a checklist, sees they don’t have a form, and pops off an email, embarrassing the hell out of the first group.

    The time limit thing is weird, but it’s a common practice in bureaucracies, public or private. You stick a timeline on the request to convey your level of urgency and to establish some manner of timeline for the other person to work with. Read the line again, but extremely literally: “we have a time frame of 5 days for a response”. “Our audit timeline guessed that it would take a business week for you to reply, so if you take longer we’re behind schedule”. The threatening version is “your response is required on or before five business days from the date of this message”.
    The presumption is that the person on the other end is also working through a task queue that they don’t have much personal investment in, and is generally good-natured, so you’re telling them “I don’t expect you to jump on this immediately, but if you can find a moment to reply this week, it would keep anyone from bothering me, and me from needing to send another email or try to find a phone number”.



  • Paul Eggert is the primary maintainer of tzdb, and has been for the past 20 years.
    tzdb is the database that maintains all of the information about time zones, time zone changes, leap seconds, and everything else. It’s present on just about every computer on the planet and plays an important role in making sure all of the things do time correctly.

    If he gets hit by a bus, ICANN is responsible for finding someone else to maintain the list.
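
    To see tzdb in action (my own illustration, using Python’s standard zoneinfo module, which reads the system’s copy of the database):

    ```python
    # Offsets, DST transitions, and historical changes all come from tzdb.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    t = datetime(2024, 3, 10, 12, 0, tzinfo=ZoneInfo("America/New_York"))
    print(t.isoformat())                              # 2024-03-10T12:00:00-04:00
    print(t.astimezone(ZoneInfo("UTC")).isoformat())  # 2024-03-10T16:00:00+00:00
    ```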

    SQLite is the most widely used database engine, and it is primarily developed by a small handful of people.
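
    Part of why it’s everywhere: the whole engine is embedded in your program, and a database is a single file (or just memory). A quick sketch using Python’s built-in bindings:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")  # no server process, no setup
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    for row in conn.execute("SELECT id, body FROM notes"):
        print(row)  # (1, 'hello')
    conn.close()
    ```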

    ImageMagick is probably the most iconic example. Primarily developed by John Cristy since 1987, it’s used in a hilarious number of places for basic image operations. When a security bug was found in it a while back, basically every server needed to be patched, because they all do something with images.
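
    The kind of “basic image operation” servers lean on it for looks roughly like this (a sketch of my own: it assumes ImageMagick 7’s magick CLI is installed, and the file names are made up):

    ```python
    import subprocess

    # Shrink anything larger than 800x600 to fit ("-resize 800x600>") and
    # strip metadata -- a typical server-side thumbnailing step.
    subprocess.run(
        ["magick", "input.png", "-resize", "800x600>", "-strip", "thumb.jpg"],
        check=True,
    )
    ```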