• 4 Posts
  • 141 Comments
Joined 2 years ago
Cake day: December 31st, 2023


  • I haven’t tried any Anthropic models personally.

    So far, between the free online chats from OpenAI and DeepSeek and the smaller models I’ve run on my own machine, the most useful approach has been to treat them as an overeager student that lacks the first-hand experience needed to see the big picture: I ask questions I’m pretty sure I already know the answer to and see whether 1) the model “understands” what I’m getting at and 2) it can surprise me with a viewpoint I hadn’t thought of before.

    Using them to double-check my own ideas seems to be marginally useful, especially when there’s no qualified human being whose attention I can borrow. Using them as a sort of semantic web search can sometimes get me what I’m looking for faster than Google. If anything, they’re an opportunity to exercise critical thinking; if I can tell where it’s getting things wrong I can be fairly confident that my own understanding of the problem/subject is pretty solid.

    Vibe coding, though? I have yet to see it work out. Maybe as some starting slop so that I can get to work refactoring code (and get the ideas flowing) instead of staring at a blank file.


  • Learned helplessness is an insidious foe, and one that market forces have tended to side with over the past 20 years (probably for far longer than that, but as I was a mere child back then I wouldn’t claim it with as much certainty).

    It’s an “easy way” for those like you and me who have more or less already built up the know-how over countless small steps, but if you’ve never known “life” outside of these corporate surveillance playgrounds I imagine it seems very scary and deserted.


  • A small GUI to automate generating PDFs from CSV files.

    There’s a small non-profit in my area helping people operate localized energy distribution (as producers and consumers). Each month, they receive a zip file containing CSV files with the raw kilowatt-hours produced and consumed by each participant over the past month. So far the non-profit has been manually importing these CSVs into LibreOffice to generate graphs and tables, then exporting the whole thing as an individualized PDF for each participant. Now that they’re starting to help more than 2-3 operations, it’s become worthwhile to automate that process.
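    The CSV-reading half of that could be sketched like this, in plain Rust with no external crates. The column layout here (participant, produced kWh, consumed kWh) is entirely hypothetical — the real files’ layout isn’t shown above — but the per-participant aggregation step would look similar:

    ```rust
    use std::collections::BTreeMap;

    // Hypothetical row layout: "participant,produced_kwh,consumed_kwh".
    // The real distributor CSVs may differ; adjust the parsing accordingly.
    #[derive(Debug, PartialEq)]
    struct Reading {
        participant: String,
        produced_kwh: f64,
        consumed_kwh: f64,
    }

    fn parse_line(line: &str) -> Option<Reading> {
        let mut fields = line.split(',');
        let participant = fields.next()?.trim().to_string();
        let produced_kwh: f64 = fields.next()?.trim().parse().ok()?;
        let consumed_kwh: f64 = fields.next()?.trim().parse().ok()?;
        Some(Reading { participant, produced_kwh, consumed_kwh })
    }

    /// Sum produced/consumed totals per participant (header row skipped).
    fn totals(csv: &str) -> BTreeMap<String, (f64, f64)> {
        let mut map = BTreeMap::new();
        for r in csv.lines().skip(1).filter_map(parse_line) {
            let entry = map.entry(r.participant).or_insert((0.0, 0.0));
            entry.0 += r.produced_kwh;
            entry.1 += r.consumed_kwh;
        }
        map
    }

    fn main() {
        let csv = "participant,produced_kwh,consumed_kwh\n\
                   alice,3.2,1.1\n\
                   bob,0.0,2.5\n\
                   alice,1.8,0.4\n";
        for (who, (prod, cons)) in totals(csv) {
            println!("{who}: produced {prod:.1} kWh, consumed {cons:.1} kWh");
        }
    }
    ```

    From there, the per-participant totals (or full time series) get handed to the template engine to lay out the tables and plots.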

    I’ve been writing it in Rust for a few reasons. First, I wanted cross-compilation to just work, and at this point I’m more familiar with Rust than Go; second, I recently read a blog post evaluating Rust GUI solutions in terms of accessibility and IME compatibility on Windows. I started off looking for a “direct” PDF-writing library but eventually switched to using Typst to generate the PDFs from templates I write. Typst being written in Rust has made it pretty straightforward to bundle its engine into the program.

    I’m currently working on allowing the import of multiple sets of data so that the generated PDFs can show line plots of the electricity production and consumption over several months.


    1. chunk_size := file_size / cpu_cores. Compile the regex.

    2. Spawn cpu_cores workers:
      2.a. Worker #n starts at n * chunk_size bytes. If n > 0, skip bytes until a newline is encountered.
      2.b. The worker feeds bytes from its chunk into the regex. When a match is found, write it to the output (stdout or a file, whichever performs better). When a newline is encountered, reset the regex state automaton.
      2.c. After chunk_size bytes have been read, continue until the next newline so the whole file is covered by the parallel search.

    Optionally, keep track of byte offsets and attach them to the matches when outputting, to facilitate de-duplicating and/or navigating to a given match in the file.

    To avoid interleaved output, have each worker write to a separate file, and only combine these files once all the workers have finished.

    As others have said, it’s going to be hard to get more speedup than this, and you will ultimately be limited by your storage’s read speed and throughput if the whole file cannot fit into memory.
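    Here is a rough sketch of the chunking scheme above in std-only Rust. A literal substring stands in for the compiled regex (to keep it dependency-free), the input is in memory rather than streamed from disk, and per-worker result vectors play the role of the separate output files — so this illustrates the boundary handling, not the full pipeline:

    ```rust
    use std::thread;

    /// Find lines containing `pattern`, splitting the haystack into one chunk
    /// per worker. Returns (byte offset, line) pairs.
    fn parallel_search(haystack: &str, pattern: &str, workers: usize) -> Vec<(usize, String)> {
        let bytes = haystack.as_bytes();
        let chunk_size = (bytes.len() / workers).max(1);

        let results: Vec<Vec<(usize, String)>> = thread::scope(|s| {
            let handles: Vec<_> = (0..workers)
                .map(|n| {
                    s.spawn(move || {
                        // Step 2.a: start at n * chunk_size; workers after the
                        // first skip forward past the next newline so that no
                        // line is ever split between two workers.
                        let mut start = n * chunk_size;
                        if start >= bytes.len() { return Vec::new(); }
                        if n > 0 {
                            match bytes[start..].iter().position(|&b| b == b'\n') {
                                Some(p) => start += p + 1,
                                None => return Vec::new(),
                            }
                        }
                        // Step 2.c: read past the nominal chunk end until the
                        // line straddling the boundary is complete.
                        let nominal_end = ((n + 1) * chunk_size).min(bytes.len());
                        let end = bytes[nominal_end..]
                            .iter()
                            .position(|&b| b == b'\n')
                            .map(|p| nominal_end + p + 1)
                            .unwrap_or(bytes.len());
                        // Step 2.b: scan whole lines, recording offset + text.
                        let mut found = Vec::new();
                        let mut offset = start;
                        for line in haystack[start..end].lines() {
                            if line.contains(pattern) {
                                found.push((offset, line.to_string()));
                            }
                            offset += line.len() + 1;
                        }
                        found
                    })
                })
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });

        // Combine per-worker outputs only after every worker has finished,
        // mirroring the separate-output-files suggestion above.
        results.into_iter().flatten().collect()
    }
    ```

    Note the symmetry at each boundary: worker n stops at the first newline at or after its nominal end, and worker n+1 skips past that same newline, so every line is scanned exactly once.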


  • It’s been a while since I set up my runner, and I have it on my personal desktop (which is wayyyyyy beefier than the VPS I host my Forgejo instance on), but I’m pretty sure I was able to specify that only my user account can trigger actions to be run on this runner. What I’m getting at is that there is a decent amount of granularity in Forgejo Actions permissions; you should be able to find a balance that suits you between “no actions at all” and “anyone can run any code they desire on your server”.


  • I would find it easier to agree with this article if it didn’t gloss over the shit quality LLM-generated code can have, or the fact that some of us love our craft because of how efficient and robust we are capable of making the code we write. It’s not just about blood, sweat, and tears vs a quick prompt; it’s about knowing that the program produced to go buy groceries isn’t going to make the machine run three times around the block balancing the eggs on its forehead before walking in the front door.

    And I hate hate hate how I shudder at having written “it’s not (just) X, it’s Y”, but I refuse to strike that form of writing from my repertoire just because today’s behemoth stochastic parrots are “fond” of it.


  • So which is it? Are developers 55% more productive, or are they losing 20% of their time to inefficiencies and burning out at record rates?

    The answer: executives are measuring—and reporting—what makes their stock price rise, not what’s actually happening on the ground.

    Or if you want to get slightly more conspiratorial: the execs are all buying shares in OpenAI, Nvidia, and the like - so now they’re more interested in ordering people to use LLM tools so that these stocks rise in price, even if it means sabotaging their own company.


  • I have the same preference for personal projects, but when I was working on a corporate team it was really useful to have the “run configs” for IntelliJ checked in so that each new team member didn’t need to set them up by themselves. Some of the setup needed to get the Python debugger properly connected to the project could get quite gnarly.