Rephrasing a common quote - talk is cheap, that’s why I talk a lot.

  • 0 Posts
  • 688 Comments
Joined 3 years ago
Cake day: July 9th, 2023


  • That’s not really the point; it’s more about censorship than full autonomy.

    I mean, if it weren’t for sanctions already preventing one from using most (usually all) payment methods from Russia, such a disconnection would cause problems.

    Now it won’t, so yes, living in Russia I can easily believe that waking up one day to a countrywide version of the Elektrostal town forum instead of the Internet is possible.

    In that case I’m packing what I need and leaving in whatever direction is still open. If any directions are open.



  • It’s already having deeper consequences: if their purchases affect RAM and storage prices, then it’s yielding bigger results than half a year ago.

    I agree about “good enough”. I felt that “good enough” moment in 2006, and even more so in 2009. Some people remember the Amiga Workbench stage of 1999 as “good enough”.

    I don’t think it matters which of these is closer to the equilibrium; we’ll learn empirically.

    But I do feel better about it causing a hard redistribution of power from the consumer sector to the datacenter sector. That’s not a bad thing, because most of that consumer sector was built on the bullshit you are describing. It didn’t need to be, but all the potent avenues of the sector’s development were strangled by the RIAA, “protect the children”, “there are wrong people saying wrong things on the Internet” and other such pressures. And also by Steve Jobs and his idea that you don’t need ergonomics or usefulness, just a sci-fi look and a brand. I think that will take years to rectify, even though people are slowly getting tired of the “touchscreens are the future, physical buttons are a fossil” narrative.

    Draining that bullshit means we’ll have a better, healthier consumer sector eventually. And perhaps in 10 years or so something interesting will be happening there. Life is about change and movement.


  • Talking about computers, definitely yes, functionally. The socially important problems had solutions: imperfect, but replaceable ones.

    We had publishing to the whole world via Usenet and the Web, file exchange with the whole world via plenty of FTP servers, a way to find those files and published pages via search engines (the real ones, which just indexed file attributes and page contents); our social identities were ICQ numbers and email addresses, our way to repost something was sending a link, and our way to rate and discover good things was web directories curated by people.

    For evaluating something on the Web, a vote is simply not a universal unit: every vote is a different person. So upvotes and downvotes make raw numbers the basis of ratings, which means the least useful things get the biggest ratings, because everything useful is offensive to someone.

    The only downside that environment had was that making a webpage, hosting a website, or hosting anything else wasn’t easy enough.

    If I were imagining a solution, it would look like an all-in-one suite like Hotline, but based on how the Web was then, including an intuitive editor for pages (something more like QuarkXPress), with hosting and mirroring being transparent. A p2p system with cryptographic identities, but manual choice of what to host. With a p2p contact directory, but many trees of trust inside that directory, where one tree of trust is like one email provider or one XMPP server for identities, which you subscribe to. With “domains” (of a sort) done similarly to that contact directory. With good old Kademlia for finding contacts, domains, groups, and separate pages, posts, or files. And besides Kademlia, possibly some kind of interchangeable client-server components, like storage areas, trackers, and relays, to help with offline messaging and NATs (a rough sketch of the lookup side is below).

    OK, my thought drifted away. Intuitive management of anything creative in such a system is honestly the main flaw of how it was in 1999. I even wonder whether that “agentic AI” they are talking about has a place in such an application suite.
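    Just to make the lookup side concrete, a minimal Python sketch, assuming SHA-256-derived node IDs and Kademlia’s plain XOR metric. The names (Identity, TrustRoot, xor_distance) and the way a trust root endorses identities are my own illustration, not any existing implementation.

    ```python
    # Illustrative sketch only: a Kademlia-style XOR metric plus a "tree of trust"
    # for identities, as described above. Names are hypothetical.
    import hashlib
    from dataclasses import dataclass, field


    def node_id(public_key: bytes) -> int:
        """Derive a 256-bit DHT node ID from a public key (assumption: SHA-256)."""
        return int.from_bytes(hashlib.sha256(public_key).digest(), "big")


    def xor_distance(a: int, b: int) -> int:
        """Kademlia's distance metric: bitwise XOR of the two IDs."""
        return a ^ b


    @dataclass
    class Identity:
        """A cryptographic identity: the public key is the identity itself."""
        public_key: bytes
        display_name: str

        @property
        def dht_id(self) -> int:
            return node_id(self.public_key)


    @dataclass
    class TrustRoot:
        """One 'tree of trust' in the contact directory, akin to one email or
        XMPP provider: it vouches for the identities subscribed under it."""
        name: str
        members: list = field(default_factory=list)

        def endorse(self, identity: Identity) -> None:
            self.members.append(identity)

        def closest_to(self, target: int, k: int = 3) -> list:
            """The k endorsed identities closest to a target key in XOR distance,
            i.e. the peers you would ask first in a lookup."""
            return sorted(self.members, key=lambda i: xor_distance(i.dht_id, target))[:k]


    if __name__ == "__main__":
        root = TrustRoot("example-provider")
        for name in ("alice", "bob", "carol"):
            root.endorse(Identity(public_key=name.encode(), display_name=name))
        target = node_id(b"some-page-or-file")
        print([i.display_name for i in root.closest_to(target)])
    ```

    A real suite would of course need routing tables, signatures and the storage/relay parts on top; this only shows how identities and lookups could hang together.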


  • Once again you are talking about programmers in general and not security researchers.

    Really, have a look at what it takes to write binary code, let alone reverse engineer complicated code that somebody else wrote.

    I have had a look. I’ve also done some solving of simple crackmes and such. I’m definitely not competent, but to find a well-hidden security backdoor you’ll have to examine behavior, which requires certain skills, and then you’ll have to look at the executable code. Having the source is good, of course, but less so if the backdoor is deliberately made to look normal.

    I agree with that statement of Linus’, though. Actually, I think I’m mistaken on the attribution; OpenBSD’s Theo de Raadt is more likely to be the author.

    I stand by my opinion that it’s a bad look for a privacy- and security-oriented piece of software to restrict non-“experts” from inspecting the very thing that is supposed to ensure that privacy and security.

    Yes, I agree that it’s better when the source is available. But if you overvalue the effect, it might be worse. Take Linux again: plenty of people use thousands of pieces of FOSS software and trust the resulting system far more than Windows. If the level of trust placed in each were exactly the same, one could say Linux is safer. But we know that people sometimes do things with Linux they would never do with Windows, because they overvalue the effect of it being FOSS. It’s FOSS, but you still had better not store 10 years of home videos unencrypted on the laptop you carry around, things like that.


  • If open-source, a lot more eyes could be on it

    On the source code. Exactly the same number of eyes on the binary.

    Anyway, there’s a joke (by Linus Torvalds, I think, but maybe I am wrong) that most of the eyes that could look at the code are attached to hands typing the thing about “more eyes”.

    and therefore the chances of intentionally implemented vulnerabilities

    Source code being available is obviously beneficial for learning how a program works as a whole, or for participating in its development, but for finding hidden things I’m not so sure.


  • Well, again, taking an unpopular but valid point of view: how useful is having the source code, really, for finding vulnerabilities? Is it really harder to hide an intentional backdoor in plain sight in the source code than in something that’s only distributed as binaries? I have no relevant experience, but I’ve listened to a lecture by someone from Kaspersky lab saying exactly that.

    Having commonly available source code is good for development and for learning how something works, but security flaws include that subgroup of deliberate backdoors, and those are a different matter (a toy example of “in plain sight” is below).
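    To show what I mean by “in plain sight”, here is a made-up Python fragment; the function name, the legacy_compat flag and the whole scenario are hypothetical, just a sketch of how a backdoor can read like a harmless compatibility fallback.

    ```python
    # Hypothetical example of a backdoor hiding in plain sight: the code reads like
    # a harmless compatibility fallback, but the second branch accepts any token
    # whose length merely matches, skipping the actual comparison.
    import hmac


    def verify_token(supplied: str, expected: str, legacy_compat: bool = True) -> bool:
        # The "real" check: constant-time comparison of the two tokens.
        if hmac.compare_digest(supplied, expected):
            return True
        # Looks like support for an older token format; actually lets any
        # same-length token through when legacy_compat is left at its default.
        if legacy_compat and len(supplied) == len(expected):
            return True
        return False


    if __name__ == "__main__":
        print(verify_token("correct-token", "correct-token"))   # True, as expected
        print(verify_token("xxxxxxx-xxxxx", "correct-token"))   # True: the backdoor
    ```

    In a diff this looks like defensive coding; whether a reviewer catches it depends far more on their skill and suspicion than on the source merely being available.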



  • Yes, and also: if something was normal in the 80s, it won’t stop being possible in the 2030s. In some sense our civilization is now just reveling in a sea of computational power used wastefully.

    There was a moment when I moved from an old PC with 512 MB of RAM, which seemed nice but was becoming a bit weak for games and such, to a newer Core 2 Duo PC with 2 GB of RAM. I felt it could do anything I would ever need. And the web aside, it still can do most of it.

    And that old PC, compared to a machine that was good for 1999, was very powerful. And 1999 is around The Matrix and The Phantom Menace, and the X-Wing: Alliance game, and ICQ’s growing popularity.

    More and more resources spent for the same or less social satisfaction. People like talking in money and graphs and industry slang, but honestly social satisfaction is a far better optimized mechanism than those.

    Adopting a kitten still seems more satisfying than computing, but in 1999 the gap was subjectively smaller than it is now.



  • With Moore’s law

    It’s not really a law and it’s not true anymore.

    there is absolutely no reason why centralizing computers should make sense

    There is, and it’s not too different from central heating.

    You could live 20 years with the same dumb terminal, while on the remote side you could rent better and better hardware.

    I like p2p networks and think socially it’s better to have a decentralized way of providing such resources and paying for them.

    But the benefit of renting computing resources is obvious. Except I prefer the idea of having a local system and offloading some tasks to remote machines, rather than renting a remote system.

    I mean, of course they want the world working their way and that’s what they are offering. If those thinking differently can’t compete, then that’s how it happens.


  • Suppose I were optimistic about this technology but pessimistic about its current stage of development; then I’d expect this to be a cure. It’s a problem they’ll have to solve. A test they’ll have to pass.

    If somewhere inside those things someone builds a mechanism that constructs a graph of syllogisms, no kind of poisoned input data will be able to hurt them (a toy sketch of what I mean is below).

    So, this is a good thing, but when people say it’s a rebellion, it’s not.
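    I have no idea how such a mechanism would really be wired into these models; purely as a toy illustration under that assumption, a “graph of syllogisms” could be as small as forward chaining over implications, where a conclusion is only accepted if a chain of rules derives it from trusted premises. Everything here is made up.

    ```python
    # Toy illustration only: a tiny "graph of syllogisms" as forward chaining over
    # implications. A conclusion is accepted only if a chain of rules derives it
    # from trusted premises, so an unsupported (poisoned) claim is simply rejected.
    from collections import defaultdict


    class SyllogismGraph:
        def __init__(self):
            self.rules = defaultdict(list)  # premise -> list of conclusions

        def add_rule(self, premise: str, conclusion: str) -> None:
            self.rules[premise].append(conclusion)

        def derivable(self, premises: set, goal: str) -> bool:
            """Return True only if `goal` follows from the premises via the rules."""
            known = set(premises)
            frontier = list(premises)
            while frontier:
                fact = frontier.pop()
                for conclusion in self.rules.get(fact, []):
                    if conclusion not in known:
                        known.add(conclusion)
                        frontier.append(conclusion)
            return goal in known


    if __name__ == "__main__":
        g = SyllogismGraph()
        g.add_rule("is_human", "is_mortal")
        g.add_rule("is_mortal", "will_die")
        print(g.derivable({"is_human"}, "will_die"))      # True: supported by a chain
        print(g.derivable({"is_human"}, "is_immortal"))   # False: no chain, rejected
    ```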