• 0 Posts
  • 54 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Do you use autocomplete? AI, in some of the various ways it's being posited, is just spicy autocomplete. You can run a pretty decent local AI on SSE2 instructions alone.

    Now you don’t have to accept spicy autocomplete, just like you don’t have to accept plain-jane autocomplete. The choice is yours; Mozilla isn’t planning on spinning extra cycles in your CPU or GPU if you don’t want them spun.

    But I distinctly remember the grumbles when Firefox brought local db ops into the browser to give it memory for forms. Lots of people didn’t like the notion of filling out a bank form or something and then that popping into a SQLite db.

    So, your opinion, I don’t blame you. I don’t agree with your opinion, but I don’t blame you. Completely normal reaction. Don’t let folks tell you different. Just like we need the gas pedal for new things, we need the brake as well. I would hate to see you go and leave Firefox, BUT I would really hate you having to feel like something was forced upon you and you just had to grin and bear it.




  • And just so we’re clear, I’m not saying everything Leah said is golden. Humans are human and say things that don’t jibe 100% of the time. It’s entirely possible for both folks to have handled a situation in a manner that is less than ideal. All I’m indicating is for you to step back for a second. It will absolutely help you out here.

    Ideally you can perhaps look at this from Leah’s point of view. But that’s solely up to you. The best thing for you, though, is to just bring it down a notch. That’s the only thing I’m pretty sure is a good idea right now. What comes after that, only you can best determine. But I honestly think some deep breaths are what’s immediately needed.

    I’m pretty sure that after that you’ll have it handled. And I don’t know how old you are, but I’ll say that panic-hyping a situation only gets worse as you age. So developing ways to deal with it is just part of growing up, even for 30-to-50-year-olds. This notion that we’re done “growing” at some magical number is bunk.

    I had my car start stuttering on the highway once and thought for sure that I was going to die. My brain just spiraled a situation where I needed to just pull over and see what was wrong into a fight-or-flight response. Ultimately, it was just a loose hose and I fixed it. But for a moment there I was panicking myself way past a point of being reasonable.

    It just happens and sometimes we just need to force ourselves to take a pause. That’s all the advice I think I can give you here. I think once you chill for a bit, you’re smart enough to figure out the what’s next part.


  • when I was really just frustrated

    Buddy that all reads as harassing. The IRC logs are especially a bad look for you, because you said:

    im looking to add this board to my resume

    And now that entire chat log is tied to it.

    I’m not sure why you thought hounding someone and harping about it for nearly eight hours on IRC was a good idea. But now you’ve come to the Fediverse to find some absolution or something.

    You can be frustrated, that’s fine. But when that frustration turns into hanging on the bell for as long as that chat log shows, and then two hours later you come here with this, that is past frustration.

    Leah also indicated:

    if i give in to you now, you will try to harass/abuse me again in the future.

    And Leah has a point. You’ve shown no sign of taking a moment to collect yourself. I get you are upset. Sometimes the best way to handle upset is to just shut up for a day or two. And trust me, I struggle with doing that myself.

    Like everything you’ve done in your frustration, I’ve been down that road. And I’m pretty sure in your head you are telling yourself, “but the difference is that…” because that’s exactly what I’d say to someone telling me this. That my situation is different somehow and that I must rectify this injustice immediately!

    and if it was bullying, I apologize then.

    What you need to do is two things. One, learn from this so that in the future you can do… Two, chill out. I think you’ll find in more professional environments “sorry” is okay, but “I have learned from my mistakes and will do better” is preferred.

    This whole thing could have been max three messages on IRC. “Why wasn’t I credited? What was wrong with my submission? How do I improve going forward?” The end.

    I think the biggest thing here for me is that in open projects, leads are fielding multiple people and working on their stuff. Every message you send is “Hey stop what you are doing and pay attention to me!” So you really want to be respectful of their time by really trying to be succinct on whatever is bugging you.

    And you are on the contrib page.

    All round good guy, an honest and loyal fan.

    And I think you’re wondering how “testing” vs “developed” looks on your resume? But that chat log is now going to be front and center no matter what’s said on the contrib page. It really doesn’t matter if you got “developed” pasted on the contrib page.

    All of these Mastodon interactions and IRC logs aren’t a good look. It’s not the end of the world. I think everyone has felt frustration like this before, like there’s some magical set of words to say that’ll fix everything. But you’ve got to let it go. You’re just digging yourself deeper with posts like this. And you don’t have to let it go forever; it’s just that you’ve added enough friction to make this surface-of-the-sun hot. You need to let it cool, come back refreshed, and maybe see if you can repair the relationship you have with the team.

    But you’ve got to understand. Your post here paints one picture, and your interactions with Leah on Mastodon and IRC are something else. And that difference between the two is especially not good, as it comes off as a lot of sourness and bitterness over this “slight” that you perceived as such an injustice.

    And hell’s bells. If you sit on this for seventy-two hours and you still feel massively wronged, go fork you a project and call it FOSSITboot or whatever and show everyone your prowess. If you’ve got skills to pay the bills, then if you build it they will come.

    Lots of love for you, but just take a moment from everything. I assure you, it’ll do you wonders to decompress.


  • It absolutely could. Heck, RPMs and DEBs pulled from random sites can do the exact same thing. Even source code can hide something if not checked. There’s even a very famous hack presented by Ken Thompson in 1984, “Reflections on Trusting Trust”, that really speaks to the underlying thing: “what is trust?”

    And that’s really what this gets into. The means of delivery change as the years go by, but the underlying principle of trust is the thing that stays the same. In general, Canonical does somewhat review apps published to Snapcraft. However, that review does not mean you are protected, and this is very clearly indicated within the TOS.

    14.1 Your use of the Snap Store is at your sole risk

    So yeah, don’t load up software you, yourself, cannot review. But at the same time, there’s a whole thing of trust here that’s going to need to be reviewed. Not “Oh, you can never trust Canonical ever again!” but a pretty straightforward, systematic review of that trust:

    • How did this happen?
    • Where was this missed in the review?
    • How can we prevent this particular thing that allowed this to happen in the future?
    • How do we indicate this to the users?
    • How do we empower them to verify that such has been done by Canonical?

    No one should take this as “this is why you shouldn’t trust Ubuntu!” Because, as you and others have said, this could happen to anyone. This should be taken as a call for Canonical to review how they put things on Snapcraft and what they can do to ensure users have the tools to verify that at least this specific issue doesn’t happen again. We cannot prevent every attack, but we can do our best to prevent repeating the same attack.

    It’s all about building trust. And yeah, Flathub and AppImageHub can, and should, take a lesson from this to preemptively prevent this kind of thing from happening there. I know there’s a propensity to wag the finger in the distro wars, tribalism runs deep, but anything like this should be looked at as an opportunity for all to review that very important aspect of “trust”. It’s one of the reasons open source is very important: so that we can all openly learn from each other.





  • Both are vendor-specific implementations of processing on GPUs. This is in opposition to open standards like OpenCL, which a lot of the exascale big boys out there mostly use.

    nVidia spent a lot of cash on “outreach” to get CUDA into a lot of various packages in R, Python, and whatnot. That did a lot of displacement of OpenCL stuff. These libraries are what a lot of folks spin up on, as most of the legwork is done for them in the library. With the exascale rigs, you literally have a team that does nothing but code very specific things for the machine in front of them, so yeah, they go with the thing that is the most portable, but that doesn’t exactly yield libraries for us mere mortals to use.

    AMD has only recently had the cash to start paying folks to write libs for their stuff. So we’re starting to see it come to Python libs and whatnot. Likely, once it becomes a fight of CUDA v. ROCm, people will start heading back over to OpenCL. The “worth it” of vendor lock-in for CUDA and ROCm will diminish more and more over time. But as it stands, with CUDA you do get a good bit of “squeezing that extra bit of steam out of your GPU” by selling your soul to nVidia.

    That last part also plays into the “why” of CUDA and ROCm. If you happen to NOT have a rig with 10,000 GPUs, then the difference between getting 98% of your GPU and 99.999% of your GPU means a lot to you. If you do have 10,000 GPUs, a 1% inefficiency is okay; across 10,000 GPUs the loss is barely noticeable and not worth giving up the portability of OpenCL.




  • This is one of the specific issues raised by those who’ve worked with Wayland, and it’s echoed in Nate’s other post that you mentioned.

    Wayland has not been without its problems, it’s true. Because it was invented by shell-shocked X developers, in my opinion it went too far in the other direction.

    I tend to disagree. Had, say, the XDG stuff been specified in the protocol, handlers for some of it would have been required in things that honestly don’t need them. I don’t think infotainment systems need a concept of copy/paste, but having to write:

    Some_Sort_Of_Return handle_copy(wl_surface *srf, wl_buffer *buf) {
        // Completely ignore this
        return 0;
    }

    Some_Sort_Of_Return handle_paste(wl_surface *srf, wl_buffer *buf) {
        // Completely ignore this
        return 0;
    }
    
    

    is really missing the point of starting fresh, and is bytes in the binary that didn’t need to be there. And while my example is pretty minimal for shits and giggles, IRL this would have been a great way to introduce “randomness” and “breakage” for those just wanting to ignore this entire aspect.

    But it’s one of those agree-to-disagree things. I think the level of hands-off Wayland went was the correct amount. And now that we have things like wlroots, even better, because if you want to start there, you can, and add what you need. XDG is XDG, and if that’s what you want, you can have it. But if you want your own way (because eff working nicely with GNOME and KDE, if that’s your cup of tea) you’ve got all the rope in the world you will ever need.

    I get what Nate is saying, but things like XDG are just what happened with ICCCM. And when Wayland came in super lightweight, it allowed the inevitability of XDG lots of room to specify. ICCCM had to contort to fit around X. I don’t know, but the way I like to think about it is like unsalted butter. Yes, my potato is likely going to need salt and butter. But I like unsalted butter, because then if I want a lightly salted potato, I’m not stuck starting from salted butter’s level of salt.

    I don’t know, maybe I’m just weird like that.


  • Over on Nate’s other blog entry he indicates this:

    The fundamental X11 development model was to have a heavyweight window server–called Xorg–which would handle everything, and everyone would use it. Well, in theory there could be others, and at various points in time there were, but in practice writing a new one that isn’t a fork of an old one is nearly impossible

    And I think this is something people tend to forget. X11 as a protocol is complex, and writing an implementation of it is difficult to say the least. Because of this, we’ve all kind of relied on Xorg’s implementation, and things like KDE and GNOME piggyback on top of that. However, nothing (outside of the pure complexity) prevented KWin (just as an example) from implementing its own X server. KWin having its own X server would give it specific things that would better handle what KWin specifically needed.

    A good parallel is how crazy insane the HTML5 spec has become, and how now pretty much only Google can write a browser for that spec (with, thankfully, Firefox also keeping up), with everyone else just cloning that browser and putting their specific spin on it. But if a deep enough core change happens, that’s likely to find its way into all of the spins. And that was some of the issue with X. Good example here: because of the specific way X works, an “OK” button is actually implemented by your toolkit as a child window. Menus? Those are windows too. In fact, pretty much no toolkit uses primitives anymore. It’s all windows with lots and lots of text attributes. And your toolkit (Qt, Gtk, WINGs, EFL, etc.) handles all those attributes so that events like “clicking a mouse button” work like you had clicked a button, and not a window that’s drawn to look like a button.

    That’s all because these toolkits want to do things that X won’t explicitly allow them to do. Now the various DEs could just write an X server that has their concept of what a button should do, how it should look, etc. And that would work, except that, say, you fire up GIMP, which uses Gtk, and Gtk has its own idea of how that widget should look and work, and boom, things break with the KDE X server. That’s because of the way X11 is defined. There’s this middle man that always sits there dictating how things work. Clients draw to you, not to the screen, in X. And that’s fundamentally how X and Wayland are different.

    I think people think of Wayland in the same way as X11: that there’s this Xorg that exists and we’ll all be using it and configuring it. And that’s not wholly true. In X we have the X server, and in that department we had Xorg/XFree86 (and some other minor bit players). The analog for that in Wayland (roughly, because Wayland ≠ X) is the compositor. Of which we have Mutter, Clayland, KWin, Weston, Enlightenment, and so on. That’s more than the just-one we’re used to, because the Wayland protocol is simple enough to allow these multiple implementations.

    The skinny is that a Compositor needs to at the very least provide these:

    • wl_display - This is the protocol itself.
    • wl_registry - A place to register objects that come into the compositor.
    • wl_surface - A place for things to draw.
    • wl_buffer - When those things draw there should be one of these for them to pack the data into.
    • wl_output - Where rubber hits the road pretty much, wl_surface should display wl_buffer onto this thing.
    • wl_keyboard/wl_touch/etc - The things that will interact with the other things.
    • wl_seat - The bringing together of the above into something a human being is interacting with.

    And that’s about it. The specifics of how to interface with hardware and whatnot are mostly left to the kernel. In fact, pretty much all compositors are just doing everything in EGL; that is, KWin’s wl_buffer (just a random example here) is an eglCreatePbufferSurface with other stuff specific to what KWin needs, and that’s it. I would assume Mutter is pretty much the same case here. This gets a ton of the formality stuff that X11 required out of the way and allows compositors more direct access to the underlying hardware. Which was pretty much the case for all of the window managers since 2010-ish anyway: all of them basically window-manage in OpenGL, because OpenGL allowed them to skip a lot of X. Of course there is GLX (that one bit where X and OpenGL cross), but that’s so much better than dealing with Xlib and everything it requires, which would routinely call for “creative” workarounds.
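    The registry idea from that list can be sketched as a toy model. To be clear, this is not the real wire protocol or the libwayland API, just the shape of the interaction: the compositor advertises global objects, and a client binds only the ones it cares about:

```python
# Toy model of the wl_registry pattern (not real Wayland code):
# the compositor advertises named globals; clients bind what they need.

class Registry:
    def __init__(self):
        self._globals = {}

    def advertise(self, name, interface):
        # Compositor side: announce that a global object exists.
        self._globals[name] = interface

    def bind(self, interface):
        # Client side: look up a global by its interface name.
        for name, iface in self._globals.items():
            if iface == interface:
                return name
        raise KeyError(interface)

registry = Registry()
registry.advertise(1, "wl_compositor")
registry.advertise(2, "wl_seat")
print(registry.bind("wl_seat"))  # 2
```

    A client that never binds wl_data_device (clipboard) or wl_keyboard simply never sees those objects, which is the hands-off quality being described.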

    This is what’s great about Wayland: it allows KWin to focus on what KWin needs and Mutter to focus on what Mutter needs, but provides enough generic interface that Qt applications will show up on Mutter just fine. Wayland goes out of its way to get out of the way. BUT that means things we’ve enjoyed previously aren’t there, like clipboards, screen recording, etc. X dictated those things; for Wayland, they’re outside of scope.


  • What’s getting yanked is that older phones won’t connect to Android Auto-enabled vehicles if the phone is running Android Nougat. It must be running Android Oreo or later.

    For those not remembering, Nougat was released in 2016 and went out of support in 2019. By the most recent metric (Dec. 2022) about 4% of all Android devices currently run Nougat. So this will affect all fifteen of the people still running this OS.

    Most devices that were originally sold with Nougat have an upgrade path to Oreo. The bigger problem is folks who purchased devices with Marshmallow (orig. 2015) or Lollipop (orig. 2014) who stopped receiving upgrades past Nougat. These are the devices that will most likely be impacted by this change.

    Personally, I like to keep my devices for at least five years, so them deprecating 2016 and earlier is okay with me.





  • I am so sorry this got so long. I’m absolutely horrible at brevity.

    Applications use things called libraries to provide particular functions rather than implementing those functions themselves. So, “handle HTTP request” as an example: you can just use an HTTP library to handle it for you so you can focus on developing your application.

    As time progresses, libraries change and release new versions. Most of the time one version is compatible with the next. Sometimes, especially when there is a major version change, the two versions are incompatible. If an application relied on that library and a major incompatible change was made, the application also needs to be changed for the new version of the library.
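    For instance, under the common semantic-versioning convention (a simplification here; real compatibility rules vary by project), you can judge compatibility from the version number alone:

```python
# Sketch of the semantic-versioning convention: a major-version bump
# signals an incompatible change; minor/patch bumps stay compatible.

def compatible(installed, required):
    # installed and required are (major, minor, patch) tuples.
    return installed[0] == required[0] and installed[1:] >= required[1:]

print(compatible((2, 5, 0), (2, 3, 0)))  # True: same major, newer minor
print(compatible((3, 0, 0), (2, 3, 0)))  # False: major version changed
```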

    A Linux distro usually selects the version of each library that they are going to ship with their release and maintains it via updates. However, your distro provider and some neat program you might use are usually two different people. So the neat program you use might have changed their application to be compatible with a library that might not make it into your distro until the next release.

    At that point you have one of two options: wait until your distro provides the updated library, or go it alone and update the library yourself (and since libraries can depend on other libraries, you could be opening a whole Pandora’s box here). The go-it-alone route also means that you have to turn off your distro’s updates, because they’ll just overwrite everything you’ve done library-wise.
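    That Pandora’s box is just transitive dependencies. A hypothetical sketch (made-up package names) of how upgrading one library drags in a chain of others:

```python
# Hypothetical dependency graph: upgrading one library can drag in
# everything it depends on, transitively.
deps = {
    "neat-app": ["libhttp"],
    "libhttp": ["libtls"],
    "libtls": ["libc"],
    "libc": [],
}

def pulls_in(pkg, graph, seen=None):
    """Everything pkg depends on, directly or indirectly."""
    seen = set() if seen is None else seen
    for dep in graph[pkg]:
        if dep not in seen:
            seen.add(dep)
            pulls_in(dep, graph, seen)
    return seen

print(sorted(pulls_in("neat-app", deps)))  # ['libc', 'libhttp', 'libtls']
```

    Swap one entry for a newer version and everything beneath it may need to move too, which is exactly the mess distro maintainers resolve for you.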

    This is where snaps, flatpaks, and AppImages come into play. In a very basic sense, they provide a means for a program to include all the libraries it’ll need to run, without those libraries conflicting with your current setup from the distro. You might hear them called “containerized programs”; they’re not exactly the Docker-style “container”, but from an isolation perspective, that’s mostly correct. So that neat application that relies on the newest libraries can be put into a snap, flatpak, or AppImage, and you can run it with those new libraries with no need for your distro to provide them or for you to go it alone.

    I won’t bore you with the technical differences between the formats; I’ll mostly focus on what I usually hear is the objectionable issue with snaps. Snap is a format developed by Canonical. All of these formats have a means of distribution, that is, how you get the program installed and how it is updated. Because, you know, getting regular updates of your program is still really important. With snaps, Canonical uses a cryptographic signature to indicate that the distribution of the program has come from their Snap Store. And that’s the main issue folks have taken with snaps.

    So unlike the other formats, snaps are only really useful when they are acquired from Canonical’s Snap Store. You can bypass the checking of the cryptographic signature via the command line, but Ubuntu will not automatically check for updates on software installed via that method; you must check for updates manually. In contrast, anyone can build and maintain their own flatpak “store” or central repository. Only Canonical can distribute snaps and provide all of the nice features of distribution, like automatic updates.
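    The gist of that policy difference can be sketched like this. This is emphatically not snapd’s real code, and the key name is made up; it only models the trade-off described above, where skipping the signature check also takes you off the automatic-update path:

```python
# Hypothetical model of the install policy (not snapd's actual logic):
# store-signed packages stay on the automatic-update path, sideloaded
# ones do not.

TRUSTED_KEY = "canonical-store-key"  # made-up identifier

def install(package, signature, dangerous=False):
    if signature == TRUSTED_KEY:
        # Normal path: signature verified, updates are automatic.
        return {"installed": True, "auto_updates": True}
    if dangerous:
        # Sideload path: signature check skipped, updates become manual.
        return {"installed": True, "auto_updates": False}
    raise ValueError("unsigned package refused")

print(install("neat-app", "canonical-store-key"))
print(install("neat-app", None, dangerous=True))
```

    The flatpak model differs in who can hold the trusted key: anyone can run a repository, so the same verified-and-auto-updated path isn’t tied to a single vendor.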

    So that’s the main gripe. There are technical issues as well between the formats, which I won’t get into. But the main high-level argument is the conflict between the idea of “open and free to all” that is usually associated with the Linux community (and FOSS [free and open-source software] in general) and the “only Canonical can distribute” that comes with snaps. So, as @sederx indicated, if that’s not an argument that resonates with you, the debate is pretty moot.

    There are some user-level differences too: some snaps can run a bit slower than a native program, though Canonical has updated things to address some of that; Flatpak sandboxing can make it difficult to access files on your system, but flatpak permissions can be edited with things like Flatseal; etc. It’s what I would file into the “papercut” box of problems. But for some, those papercuts matter and ultimately turn people off from the whole Linux thing. So there are arguments that come from that as well, but that’s so universal, just different in how the papercut happens, that I file it as a debate between containerized and native applications rather than a debate about formats.