Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 442 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Flatseal: well that’s normal, it can’t control Flatpak’s access controls if it is itself sandboxed. Even if it were sandboxed, it could just grant itself everything.

    For Xournal: it’s probably because it doesn’t support portals or whatever, so it can’t use the open file dialog to get permissions. So it needs to be able to get to your files somehow to open them.

    In both cases, it just means its permission model is more like that of regular applications you’d get from your package manager. If you install Xournal with apt/dnf/pacman, it also won’t be sandboxed.

    The point of sandboxing is that you can run applications you don’t trust too much, or significantly reduce the blast radius if, say, your browser gets breached: then it has another barrier to overcome to reach anything other than the browser’s own data. The lack of sandboxing doesn’t inherently imply the app is evil or will hack you. It just means it doesn’t have the extra protection around it. So like, probably don’t open sketchy PDFs in it, but I wouldn’t stop using the app solely because it lacks sandboxing.


  • There shouldn’t be any issues with that. Most distros handle “install side by side” situations out of the box.

    A data partition probably doesn’t matter. Nobara might use snapshots for updates so you can roll back (not sure), but that also shouldn’t horribly break things for /home.

    The thing btrfs does well is that root and home can be on the same partition as different subvolumes. Technically you can even have multiple distros on a single btrfs partition by means of subvolumes, so there’s no unusable wasted space.

    I would do btrfs. Mint won’t care about the filesystem having more features than it needs, and there are so many advantages to btrfs.

    E: I might keep the homes separate and explicitly share only the folders you want to keep in sync. Mint’s configurations could impact Nobara’s configurations and vice-versa, especially if versions of things differ: maybe Nobara will upgrade some configs and make them unusable with the older packages from Mint. You can just symlink your downloads and documents and whatever to a common shared data partition or subvolume dedicated to that use case.


  • Depends on how good the ISP router is. I’ve had one that had most of the advanced settings available, so I didn’t feel the need to change it. For a while I had offloaded DHCP, DNS and VPN to a Raspberry Pi. It’s very much possible to make do with the ISP router. That ISP would let you pass the public IP through to a box on your network, which lets you do a lot of stuff without going into bridge mode: I could make my server the target while still letting the router do the routing, so if my server was down it didn’t take the whole network with it.

    Then I got a bad one that wouldn’t even let you set up port forwards unless the device was registered over DHCP, so my static stuff and VMs didn’t work. Got my EdgeRouter X back online to get my stuff done.

    I do use VLANs and stuff now so it makes sense for me to use my own router. With everything getting breached these days, I have a VLAN just for my computers, another one for smart but trusted-ish devices (the TV’s gotta reach the NAS), one for IoT that’s completely shielded off.


    What you’re missing out on depends a lot on which features you could make use of that you don’t currently have. If you have like 3 devices using the network, like I did when I lived alone, yeah, you’re probably not going to miss the VLANs. But maybe you want to do ad blocking network-wide. Maybe you’d want to better prioritize interactive traffic like VoIP, video calls or games. Maybe you want a reverse proxy or VPN that works even if your home server is down. Maybe you want your kids to not hog all the bandwidth. There’s a lot of things a router can do.

    So if the ISP router does everything you want and you’re happy with its performance, it’s fine. Just keep it in mind: when you start thinking “I wish it had X and Y features”, maybe consider an upgrade then.

    If you have the option of not getting a router from your ISP, I would definitely recommend bringing your own. If they provide it regardless and you’d be replacing it through unofficial means, eh, if it works well…




  • I get about 350-400 Mbps both ways, which AFAIK is what my UniFi AC-Lite tops out at since it’s WiFi 5, it’s only got 2 antennas, and it maxes out at 80MHz channels. I get about 200-250 on my phone (1+8T), which I think is single stream.

    Everything indicates that’s about as good as it gets with the set of hardware I have. Signal is solid, latency is solid.

    You’ll need 802.11ax or newer, more MIMO streams, and/or wider 160MHz/320MHz channels to get higher speeds.
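
    For reference, a rough sketch of where those numbers come from (the per-stream rate is the real 802.11ac MCS 9 figure at 80MHz; the ~50% real-world factor is just a rule of thumb):

    ```ts
    // 802.11ac (WiFi 5), 80MHz channel, MCS 9: ~433.3 Mbps PHY rate per spatial stream.
    const PHY_RATE_80MHZ_MCS9 = 433.3;

    // Real-world TCP throughput tends to land around half the PHY rate
    // after airtime overhead, ACKs and retransmits.
    const REAL_WORLD_FACTOR = 0.5;

    function expectedThroughput(streams: number): number {
      return PHY_RATE_80MHZ_MCS9 * streams * REAL_WORLD_FACTOR;
    }

    console.log(expectedThroughput(2)); // ~433 Mbps: the 350-400 range on a 2x2 AC-Lite
    console.log(expectedThroughput(1)); // ~217 Mbps: the 200-250 range on a single-stream phone
    ```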


  • I’m not saying to use native toolkits like Qt or GTK; those indeed have problems. What React Native does is somewhere in between: it’s an abstraction that produces decent results across platforms, including the web.

    It uses slightly higher-level abstractions that work a lot like the web for rendering: you still get your boxes and a subset of CSS properties. But on web it’ll compile to flexbox or grids, while on Android it’ll compile to something like a LinearLayout or some other kind of layout the OS understands. On web a <Text> will compile to a <span>; on Android it’ll compile to a native text element. On mobile, where you need the performance the most, the alternative is rendering a web page that eventually ends up doing the same thing to display it natively anyway, but with all the downsides of a web view.
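
    As a rough sketch of what that looks like in practice (standard react-native APIs; the component and props here are made up for illustration, and react-native-web is what provides the web mapping):

    ```tsx
    import { View, Text, StyleSheet } from 'react-native';

    // One component tree: on Android/iOS these map to native views and
    // text elements; on the web (via react-native-web) they render as
    // <div>/<span> with flexbox styles.
    export function Row({ label, value }: { label: string; value: string }) {
      return (
        <View style={styles.row}>
          <Text style={styles.label}>{label}</Text>
          <Text>{value}</Text>
        </View>
      );
    }

    const styles = StyleSheet.create({
      row: { flexDirection: 'row', justifyContent: 'space-between', padding: 8 },
      label: { fontWeight: 'bold' },
    });
    ```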

    This performs way better with basically no downside for the web version, and it has the majority of the flexibility one needs for responsive layouts while being way more lightweight when you do target native. On native you can just render it all yourself for really cheap, like any native toolkit would. You’re your own toolkit.

    They will never look native, but at least all the rendering will be native. Most companies have their own custom UI theme anyway; native widgets rarely get used.

    We’re talking Electron replacement after all, it’s not like apps made with it look anything like native. But if at least they performed like native apps by skipping the web views and all the baggage they bring with them, that’d be great.


  • For the end user, its main weakness is that complex pages can be pretty slow to render if not coded well. It’s not that bad, though. You wouldn’t be like “oh, this is a React site, yuck”; they’re all like that these days, for the reasons you’d expect.

    As for React Native, its main issue is the communication between the JavaScript browser-ish environment and the Java/Kotlin native environment, which can be costly because everything has to be serialized (meaning, converted to some type of data structure both sides can understand) and deserialized, so complex screen updates don’t scale too well.
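
    A classic place that cost shows up is scroll events (standard react-native APIs; the handler and component are just illustrative). Each event has to cross the bridge, so you throttle how often the native side sends them:

    ```tsx
    import type { ReactNode } from 'react';
    import { ScrollView } from 'react-native';

    export function Feed({ children }: { children: ReactNode }) {
      return (
        <ScrollView
          // Every onScroll event is serialized across the JS <-> native
          // bridge; unthrottled, that can be one message per frame.
          onScroll={(e) => console.log(e.nativeEvent.contentOffset.y)}
          scrollEventThrottle={16} // at most ~one event per 16ms
        >
          {children}
        </ScrollView>
      );
    }
    ```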

    It’s easy for developers to accidentally trigger much bigger and much more expensive re-renders than expected. If you see second-long page hangs on some websites as new content loads in, that’s usually what happened.


    For developers, it’s complicated; you kind of need to experience it to understand the footguns.

    React was born to solve one particular problem at Facebook: how can we make it so any developer can jump on any part of the UI code and add features without breaking everything? One of the most complicated aspects of a website is state management, in other words, making sure every part of the page is updated when something changes. For example, if you read a message in your inbox, the unread count needs to update in a couple of places on the page. That’s hard because you need to make sure everything that can change that count is in agreement with everything that displays that count.

    React solves that problem by hiding it away from you. Its model is simple: given a set of inputs, you have a function that outputs how to display them. Every time a value changes, React re-renders every component that used that value, compares the result with the previous one, and then modifies the page with the updated data. That’s why it’s called React: it reacts to changes and actions.
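
    A minimal sketch of that model (plain React; the inbox component here is made up, mirroring the example above):

    ```tsx
    import { useState } from 'react';

    // Both the header badge and the sidebar read the same value. When
    // markRead() updates it, React re-runs this function, diffs the
    // output against the previous render, and patches only what changed.
    export function Inbox() {
      const [unread, setUnread] = useState(3);
      const markRead = () => setUnread((n) => Math.max(0, n - 1));
      return (
        <div>
          <header>Inbox ({unread})</header>
          <aside>{unread} unread messages</aside>
          <button onClick={markRead}>Read one</button>
        </div>
      );
    }
    ```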

    The downside is that if you’re not very careful, you can place state in a non-ideal spot, and that cascades into re-rendering the entire page every time the value updates. At scale it usually works out relatively okay, and it’s not like rendering the whole page is that expensive. There’s an upper cap on how bad it can be, since it won’t let you do re-render loops, but it can be slow.

    I regularly see startups with 25MB of JavaScript caused by React abuse and by favoring new features over tracking down excessive renders. They load the same data 5 times because “this should only render once” turned out to be false, but hey, it displays correctly. I commonly see entire forms re-rendered on every character you type because the data is stored in the form’s state, so it has to re-render that entire tree.
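
    The form case looks something like this (a contrived sketch; the field names are made up):

    ```tsx
    import { useState, type ChangeEvent } from 'react';

    // Anti-pattern: one state object for the whole form means every
    // keystroke in any field re-renders the entire form tree.
    export function SignupForm() {
      const [form, setForm] = useState({ name: '', email: '', bio: '' });
      const set = (key: string) => (e: ChangeEvent<HTMLInputElement>) =>
        setForm({ ...form, [key]: e.target.value });
      return (
        <form>
          <input value={form.name} onChange={set('name')} />
          <input value={form.email} onChange={set('email')} />
          <input value={form.bio} onChange={set('bio')} />
          {/* ...and everything else in here re-renders on each keypress */}
        </form>
      );
    }
    ```

    Keeping each field’s state local to its own component, or using uncontrolled inputs, avoids the cascade.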

    But it’s not that bad. It’s entirely possible to make great and snappy sites with React. Arguably its problem isn’t React itself but how strongly it’s associated with horrible websites because of how tolerant of bad code it is. It’s maybe a little too easy to learn; it gives bad developers an undeserved sense of confidence.

    E: And we now have better solutions to this, such as signals, which SolidJS, Vue and Svelte make heavy use of. Most of the advantages, with fewer problems.
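
    For comparison, a signal in SolidJS (standard solid-js API; the component is made up):

    ```tsx
    import { createSignal } from 'solid-js';

    // The component function runs once; only the text node that reads
    // unread() is updated when the signal changes, so there's no
    // re-render cascade to hunt down.
    export function Badge() {
      const [unread, setUnread] = createSignal(3);
      return (
        <button onClick={() => setUnread((n) => Math.max(0, n - 1))}>
          Inbox ({unread()})
        </button>
      );
    }
    ```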


    Anyway, that part isn’t really relevant to why I don’t like React. The point is: skip the web, you don’t really need the web. React Native skipped the whole HTML part; it’s still JSX, just with native-app-style components for UI building. The web backend worked very well: your boxes became divs with some styles, and it pretty much just worked. Do that but entirely in Rust, since Rust can run natively on all platforms. Rust gets to skip all the compromises RN needed, and skip the embedded browser entirely. Make it desktop-first, then make the web version; it’ll run just as well and might even generate better code than if a human wrote it. Making the web look native sucks, but making native fit the web is a lot easier than it looks. Letting go of HTML and CSS was a good call from React Native.


  • I wish we went the other way around: build for native and compile to HTML/CSS/WASM.

    For me the disadvantage of Electron is that, well, it doesn’t have any advantage or performance improvement over the browser version for 99% of use cases, and when you shove that onto a mobile phone it performs as horribly as the web version.

    People already use higher-level components that end up shitting out HTML and CSS anyway, so why not skip the middleman and just render the box optimally from the start? Web browsers have become good, but if you can skip parsing HTML and CSS entirely, and also skip maintaining their state, that’s even better.

    I had the misfortune of developing a React Native app, and I’d say thinking in terms of rows and columns and boxes was nice. Most of RN’s problems are because it still runs JS, so you have to bundle a JS runtime and have the native messaging bridge, and of course it’s tied to the turd that is React. But zero complaints about the UI part when it doesn’t involve the bridge: very smooth and snappy, much more so than the browser. And the browser version was no different from standard React in performance.

    I like that it’s not yet another Chromium one at least.


  • It’ll depend a lot on your experience. I can just install Arch without reading the wiki at all in about 5 minutes for something fairly vanilla. If you’re comfortable with Linux, following the wiki won’t be too hard; it took me maybe 2-3 hours on my first install before I had my DE and everything all set up (12 years ago). If you’ve never used Linux before and take the deep dive, it could take hours or days depending on how fast you can absorb all that information.

    “Easy” is very subjective; there’s stuff that’s so dumbed down for the sake of “easy” that it makes my life harder when I need to do more complex things. I know people for whom linear algebra in 11 dimensions is easy to do and solve. Easy is relative to your own personal experience level and what you’re trying to accomplish.

    Install it in a VM as a test run, you’ll see by yourself.




  • The problem with Fedora and especially the atomic versions is that when you Google “how to do X on Linux” you pretty much always get information for Ubuntu and Debian derivatives. The atomic versions have it mildly harder because now you also have to learn how immutable distros work, and you can’t just make install something from GitHub (not that it’s recommended to do so, but if you just want your WiFi to work and that’s all you could find, it’s your best option).

    It’s not as bad as it used to be thanks to Flatpak and stuff, but if you’re really a complete noob the best experience will be the one you can Google and get a working answer as easily as possible.

    Once you’re familiar and ready to upgrade, then it makes sense to move to other distros like Fedora, Nobara, Bazzite, Kinoite and whatnot.

    I don’t like Ubuntu; I feel like Mint is to Ubuntu what Manjaro is to Arch, and Pop!_OS is okay when it doesn’t uninstall your DE while installing Steam. But I still recommend those 3 to noobs because everyone knows how to get things working on them, and the guides are mostly interchangeable as well. Purely because it’s easy to search for help with those. I just tell them: when you’re tired of the bugs and comfortable enough with Linux, go distrohop a bit to find your more permanent home.


  • API documentation isn’t a tutorial; it’s there to tell you what the arguments are, what the function does, what to expect as output, and just generally what’s available.

    I actually have the opposite problem: it infuriates me when a project’s documentation is purely a bunch of examples and you have to guess anytime you want to do anything off the tutorial’s paved path. Tell me everything that’s available so I can piece together something for what I need; I don’t want that info in chapter 12 of the example of building a web store. I’ve been coding for nearly two decades now, I’m not going to follow a shopping cart tutorial on the off chance that’s where you explain how the framework defines many-to-many relationships.

    I believe an ideal world has both covered: you need full API documentation that’s straight to the point, so experienced people know about all the options and functions available, but also a bunch of examples and a tutorial for those who are new and need to get started and to learn how to use the library in general.

    Your case is probably a bit atypical as PyTorch and AI stuff in general is inherently pretty complex. It likely assumes you know your calculus and linear algebra and stuff like that so that’d make the API docs extra dense.


  • Yeah mine’s doing that too, and my dmesg is flooded with USB disconnect and reconnects.

    The thing is probably overheating and shutting off. I believe I’ve seen videos of them catching fire too, though I’m not sure if it was that one or another webcam that looks similar.

    Mine’s on a USB hub with buttons for each port so I just leave its port off until I need the camera and only turn it on when needed.





  • Everyone’s making money on IPv4, so there isn’t the incentive just yet to really switch to IPv6 or even invest in supporting it. Major clouds now charge per IP, residential ISPs are starting to make having a public IP a feature they charge extra for (otherwise you get CGNAT), and mobile carriers don’t want people to host stuff and are quite happy with CGNAT.

    And then there’s the implementation part, where everyone seems to go out of their way to do it wrong and cause trouble. My OVH server, for example, gets assigned a non-routed /56, and I can only use about 8 of them before their router starts ignoring the rest.

    At home I have to do 6rd over PPPoE over VLAN, which keeps my router from doing hardware-accelerated routing, so I lose 3/4 of my connection speed on top of the resulting tiny MTU, which it turns out IPv6 doesn’t like. Then a few days later their 6rd endpoint changes and your connection dies until restarted manually, and somehow you end up with another IPv6 block, and ugh, it’s just so horribly broken.

    I want IPv6 to work but damn, ISPs aren’t making it easy to adopt it in the first place.


  • What distro I’m using isn’t that helpful of a question because it’s largely a matter of taste and technical needs. I use Arch in large part because I do some rather exotic things that would be harder to set up on most mainstream distros, whereas Arch just gives me a completely blank slate to configure my system exactly the way I want it to work. My desktop also has some server duties: it runs VMs, it has multiple GPUs, and it drives my TV room independently of my main workstation area.

    I usually recommend whichever distro gets you the closest to having everything the way you like out of the box as a starting point just because it’s less frustrating when most things works out of the box. The Arch experience is nothing works out of the box because it doesn’t even come with a box. Arch isn’t necessarily a bad choice even for beginners, but the learning curve is much steeper as a result and some people do like to just learn everything whereas some others prefer to start with the shallow part of the pool rather than diving it headfirst. It’s not like you have to commit to any distribution forever, you can start with something simple to use, learn your way around Linux and then you can upgrade to another distribution as your needs and wants evolves.