I write about technology at theluddite.org

  • 4 Posts
  • 46 Comments
Joined 1 year ago
Cake day: June 7th, 2023


  • Investment giant Goldman Sachs published a research paper

    Goldman Sachs researchers also say that

    It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product, but dressed up in a science-lookin’ paper, leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM performs compared to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s actually junk science.






  • I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I’m much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it’s missing a theory of power. The attention economy isn’t an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I’m always quoting lol):

    Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.

    In other words, just because algorithmic social media becomes uninteresting doesn’t mean the death of the attention economy as such, because the attention economy is, in some form, innate to humanity. Today it’s algorithmic feeds, but 500 years ago it was royal ownership of printing presses.

    I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he’s by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I’ll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human that makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage its resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people who could be like Veritasium will be drowned out by the heaps of bullshit clogging up the web.

    This has an analogy in our physical world. As more and more of our physical world looks the same, as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the fewer and fewer remaining parts that are special, like say Venice, become too valuable for their own survival. They become “touristy,” which is itself a sort of ironically homogenized, commodified authenticity.

    edit: oops I got Tom’s name wrong lol fixed



    I have worked at two different startups where the boss explicitly didn’t want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that he only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee we had liked. Don’t even get me started on the people the CEO wouldn’t have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

    It’s very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.


  • This has been ramping up for years. The first time that I was asked to do “homework” for an interview was probably in 2014 or so. Since then, it’s gone from “make a quick prototype” to assignments that clearly take several full work days. The last time I job hunted, I’d politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.

    My experience with these interactions is that they’re not looking for the most qualified applicants, but filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. Those are the kind of employees they need in order to keep up their internal corporate identity of being the good guys as tech goes from universally beloved to generally reviled.


  • AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

    The media needs to stop falling for this. This is a “pre-print,” aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do “research” on their own product. It doesn’t matter what the conclusion is, whether it’s very cool and going to save us or very scary and we should all be afraid, so long as it’s attention-grabbing.

    If the media wants to report on it, fine, but don’t legitimize it by pretending that it’s “researchers” when it’s the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.


  • Whenever one of these stories comes up, there’s always a lot of discussion about whether these suits are reasonable or fair, or whether it’s really legally the companies’ fault, and so on. If that’s your inclination, I propose that you consider it from the other side: Big companies use every tool in their arsenal to get what they want, regardless of whether it’s right or fair or good. If we want to take them on, we have to do the same. We call it a justice system, but in reality it’s just a fight over who gets to wield the state’s monopoly on violence to coerce other people into doing what they want, and any notions of justice or fairness are window dressing. That’s how power actually works. It doesn’t care about good faith vs bad faith arguments, and we can’t limit ourselves to only using our institutions within their veneer of rule of law when taking on powerful, exclusively self-interested, and completely antisocial institutions with no such scruples.


  • theluddite@lemmy.mltoTechnology@lemmy.ml · 10 months ago

    Honestly I almost never have to deal with any of those things, because there’s always a more fundamental problem. Engineering as a discipline exists to solve problems, but most of these companies have no mechanism to sit down and articulate what problems they are trying to solve at a very fundamental level, and then really break them down and talk about them. The vast majority of architecture decisions in software get made by someone thinking something like “I want to use this new ops tool” or “well everyone uses react so that’s what I’ll use.”

    My running joke is that every client has figured out a new, computationally expensive way to generate a series of forms. Most of my job is just stripping everything out. I’ve replaced so many extremely complex, multi-service deploy pipelines with 18 lines of bash, or reduced AWS budgets by one, sometimes two, orders of magnitude. I’ve had clients go from spending $1,500/month on AWS with serverless and Lambda and whatever other alphabet soup of bullshit services that make no sense to 20 fucking dollars.
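    For a sense of what those few lines of bash can look like, here’s a minimal sketch of a build-and-sync deploy script. The host, paths, and service name are placeholders I made up for illustration, not from any real client setup:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a short deploy script that can replace a
# multi-service pipeline: build locally, rsync the artifacts to one
# server, restart the service. Host, paths, and service name are
# placeholders, not a real setup.
set -euo pipefail

build() {
  # Stand-in build step: stage the source tree into ./dist
  rm -rf dist && mkdir dist
  cp -r src/. dist/
}

deploy() {
  local host="${1:?usage: deploy user@host}"
  rsync -az --delete dist/ "$host:/srv/app/"
  ssh "$host" 'sudo systemctl restart app.service'
}

# Self-contained demo so the sketch runs: create a trivial source
# tree and build it. With a real host you'd then call deploy.
mkdir -p src && echo '<h1>hello</h1>' > src/index.html
build
# deploy 'deploy@example.com'   # uncomment to actually ship
```

    The whole pipeline is one file you can read top to bottom, which is most of the point.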

    It’s just mind-blowing how stupid our industry is. Everyone always thinks I’m some sort of genius performance engineer for knowing bash and replacing their entire front-end react framework repo that builds to several GB with server-side templating from 2011 that loads a 45kb page. Suddenly people on mobile can actually use the site! Incredible! Turns out your series of forms doesn’t need several million lines of javascript.
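    As a sketch of what that 2011-style server-side templating amounts to, the server just interpolates values into an HTML string and sends the finished page; no client-side framework involved. The form and field names here are invented for the example:

```shell
# Hypothetical sketch of server-side templating: interpolate values
# into an HTML page on the server and send the finished markup.
# The entire response is a few hundred bytes of plain HTML.
render_form() {
  local action="$1" name="${2:-}"
  cat <<EOF
<html><body>
  <form method="post" action="${action}">
    <label>Name <input name="name" value="${name}"></label>
    <button>Submit</button>
  </form>
</body></html>
EOF
}

# Render one page and show how small it is.
page="$(render_form /signup)"
echo "${#page} bytes"   # a complete, usable page, well under a kilobyte
```

    The browser gets working HTML on the first response, which is why the page loads fine on mobile.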

    I don’t do this kind of work as much anymore, but up until about a year ago, it was my bread and butter…



  • theluddite@lemmy.mltoTechnology@lemmy.ml · 10 months ago

    Yeah, I totally see that. I want to clarify: It’s not that I don’t think it’s useful at all. It’s that our industry has fully internalized venture capital’s value system and they’re going to use this new tool to slam on the gas as hard as they can, because that’s all we ever do. Every single software ecosystem is built around as fast as possible, everything else be damned.


  • theluddite@lemmy.mltoTechnology@lemmy.ml · 10 months ago

    Yeah, I think helping people who don’t know how to code and letting them dabble is a great use case. I fully encourage that.

    I don’t think it’s actually good for generating scaffolding in terms of helping people write quality software, but I do agree with you that that’s how people are going to use it, and then the expectation is going to become that you have to do things that fast. It’s kind of mind-boggling to me that anyone would look at the software industry and decide that our problem is that we don’t move fast enough. Moving too fast for speed’s own sake is already the cause of so many of our problems.


  • theluddite@lemmy.mltoTechnology@lemmy.ml · 10 months ago

    I do software consulting for a living. A lot of my practice is small organizations hiring me because their entire tech stack is a bunch of shortcuts taped together into one giant teetering monument to moving as fast as possible, and they managed to do all of that while still having to write every line of code.

    In 3-4 years, I’m going to be hearing from clients about how they hired an undergrad who was really into AI to do the core of their codebase and everyone is afraid to even log into the server because the slightest breeze might collapse the entire thing.

    LLM coding is going to be like every other industrial automation process in our society. We can now make a shittier thing way faster, without thinking of the consequences.


  • I am totally in favor of criticizing researchers for doing science that actually serves corporate interests. I wrote a whole thing doing that just last week. I actually fully agree with the main point made by the researchers here, that people in fields like machine vision are often unwilling to grapple with the real-world impacts of their work, but I think complaining that they use the word "object" for humans is distracting, and a bit of a misfire. "Object detection" is just the term of art for recognizing anything, humans included, and of course humans are the object that interests us most. It's a bit like complaining that I objectified humans by calling them a "thing" when I included humans in "anything" in my previous sentence.

    Again, I fully agree with much of their main thesis. This is a really important point:

    As co-author Luca Soldaini said on a call with 404 Media, even in the seemingly benign context of computer vision enabled cameras on self-driving cars, which are ostensibly there to detect and prevent collision with human beings, computer vision is often eventually used for surveillance.

    “The way I see it is that even benign applications like that, because data that involves humans is collected by an automatic car, even if you're doing this for object detection, you're gonna have images of humans, of pedestrians, or people inside the car—in practice collecting data from folks without their consent,” Soldaini said.

    Soldaini also pointed to instances when this data was eventually used for surveillance, like police requesting self-driving car footage for video evidence.

    And I do agree that sometimes, it's wise to update our language to be more respectful, but I'm not convinced that in this instance it's the smoking gun they're portraying it as. The structures that make this technology evil here are very well understood, and they matter much more than the fairly banal language we're using to describe the tech.


  • I post our stuff on lemmy because I’m an active user of lemmy and I like it here. I find posting here is more likely to lead to real discussions, as opposed to say Twitter, which sucks, but is where I’d be if I was blasting self-promotion. It’s not like lemmy communities drive major traffic.

    Isn’t that exactly what lemmy is for? It’s what I used to love about Reddit 10 years ago, or Stumble Upon, or Digg, or any of the even older internet aggregators and forums: People would put their small, independent stuff on it. It’s what got me into the internet. I used to go on forums and aggregators to read interesting stuff, or see cool projects, or find weird webcomics, or play strange niche web games, or be traumatized by fucked up memes. Now the entire internet is just “5 big websites, each consisting of pics from the other 4” or whatever the quip is, and it’s fucking boring.

    So yes, I and a few others are theluddite.org. It’s an independent site written by leftists working in tech and academia, mostly aimed at other people in tech and academia, but also for everyone. It’s not like I’m hiding it; it literally says so in my bio. We are not professional opinion-havers, unlike “mainstream” sources; I personally write code for a living every day, which is something that surprisingly few tech commentators have ever done. That makes it possible for me to write about major topics discussed in the media, like Google’s ad monopoly, in a firsthand way that doesn’t really exist elsewhere, even on topics as well trodden as that one.

    And yes, we post our stuff on the fediverse, because the fediverse rules. It is how we think the internet should be. We are also self-hosted, publish an RSS feed, don’t run any ads or tracking (and often write about how bad those things are for the internet) because that’s also how we think the internet is supposed to work.