Rephrasing a common quote - talk is cheap, that’s why I talk a lot.

  • 0 Posts
  • 841 Comments
Joined 3 years ago
Cake day: July 9th, 2023

  • There’s one funny thing about this - everyone is treating what’s happening as some tomfoolery that will end after an election.

    This is institutional. I don’t live in your country, so I might be mistaken, but even if pretentious naming like “Department of War” goes away, autonomous weapons remain. It’s also easy to play a fool, or to employ a fool as a talking head, but most people are not fools, especially those with power. There will be wars.

    I’m not excited.

    Actually, I might be. I just had a thought that I may have a more specific interpretation of what happened to one girl. She had sort of a “dark triad” personality, and a nasty one, but I’m starting to think that what I was hearing about her, without specifics, wasn’t connected to what she did to me (which I’d forgive, despite her probably not caring at all). It was about her once having met herself - that is, having seen a sadistic event and enjoyed it, and having been shaken ever since. The people who carried those indirect and vague messages to me seemed to think very badly of her because of that reaction alone, but I don’t know - I might be the reason she even saw it, and she wasn’t looking for it. She loved Exupéry’s “Citadelle”, and if you read it, you might notice that the same man who wrote “The Little Prince” definitely had some sadistic leanings.

    And that was because another girl had taken an interest in me, and then was disappointed. That other girl was a real, depraved sadist. That’s how this one got involved - an acquaintance of mine from another place. And this one took some interest in me too.

    And I’m thinking that perhaps I too have something sadistic in my personality, if they liked me. The girl I started with is a pacifist. One of my family members is a veteran (basically special forces, so the kind of service that’s usually not a transformation but a discovery of personal traits) and a pacifist. And I’m a pacifist (I think everyone should be armed so they can react to violence, but I’m against any initiation of violence). It just seems to make sense that if you notice something sadistic in yourself early on, you become interested in pacifism, as well as in other ways of considering and containing your (and others’) inner Mr Hyde.

    So, getting back to names and what matters.

    The change that has already happened is contained in the human psyche. Your society might have a hidden but slowly surfacing desire that will have to be fulfilled. I think Freud, too, described in sexual terms (as was his usual trick) what he was feeling about the start of World War One.

    You have a Department of War and have probably even gotten used to that name.


  • Somewhat funny - I actually realized this dynamic when watching Star Trek. Whenever they need to do something illegal, they simply put their badges on the desk, and just like magic they are no longer bound by Federation ethics.

    That’s the main reason I don’t like the “good people in uniform as beacons of virtue” trope. That always happens. Every time I see that on screen I immediately imagine the morally inverted version of the same plot.

    At least in Babylon 5 such a decision is irreversible and carries real weight for the main characters.

    And in SG-1, despite it being sort of a piece of military propaganda, that doesn’t happen too easily either.

    But there the main characters are not beacons of anything; they are just people with their own way.


  • You clearly don’t understand how finance works or don’t understand how leveraged these incestuous deals are. It’s perfectly possible for AI to make killbots and for an AI economic crash to happen.

    You might want to consult a history book. There are a few recurring themes there; silent leges inter arma (in times of war, the law falls silent) and vae victis (woe to the vanquished) capture most of them. New weapons might change the intensity of wars all around the world, because they let those who own them avoid loss of life entirely, while those who don’t own them pay with lives to deal damage that doesn’t even upset their adversary. That will bring enormous profits - just not to everyone, only to those who conquer. Finance is not all you need for this subject.

    On a humanist note: in “drone army against drone army” wars of the future, loss of life might be so small that pain and death in war are reduced to cases of deliberate sadism. Meaning that … again, there’ll be more war.

    The industry needs to make trillions of dollars to pay off its creditors and to deliver the profit its investors need to make this worthwhile. That only happens if most white-collar workers are replaced with AI.

    No, because profits are not only made from replacing existing mechanisms, but also from building new ones.

    Specifically, most people don’t use computers as true meta-machines. They use them as platforms for running specialized applications.

    But LLMs, however expensive in resources, change that. They make computers meta-machines for everyone.

    And also, in some races you want to be further from the rear, not closer to the front. If this technology promises a profound crash in any case (say, because it brings about planet-wide totalitarianism), those investments might represent a rush to avoid being eaten completely in the future. Losing less, not gaining more.





  • People are talking about AI killbots and an upcoming crash at the same time, and complaining about AI slop and vibe coding.

    Sorry, but if something is usable for making killbots, there will be no crash. AI slop proves that it’s useful for someone to make slop. And vibe coding proves that someone is shipping working things to production with those tools. Saying that quality suffers is like saying that cob houses are not comparable to brick houses and vice versa. Both exist. There are places where cob-related building technologies are still common.

    But the most important reason is the first one: if some technique gives you a more convenient and sharper stick for killing someone from another tribe, then it stays in the tribe’s cherished wisdom.

    As for LLMs consuming too many resources … You might have noticed there’s huge room for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes one, there may come a moment when all the arguments - that operations cost more in resources than they return in profit, and that this is funded by investors - are suddenly no longer true.

    I have been converted. Converted back, one might say; there was a time, around 2011-2014.


  • Which will happen regardless.

    Also, where there are AI safeguards, they are usually in place because of chains of command and authorization, and those mattered so much because the most likely applications of any AI during the Cold War had a very steep damage curve.

    Small killbots don’t have such a damage curve. If they kill someone by mistake, the rest of the population simply learns to be careful and not draw the attention of those operating them. The reasons that apply to nukes and radars - where you need chains of specific, clearly authorized people to answer for why half the world melted - won’t force anyone to put such limits here.