I just read “Google Continues Working On “Magma” For Mesa Cross-Platform System Call Interface” on Phoronix and didn’t get it. That made me realise my knowledge and understanding of these things is barely existent. I did write an MS Paint clone on Linux in C++ a really long time ago, and the whole thing used OpenGL (it looked like crap), but since then… nothing.

So my understanding is that the graphics card (or the CPU if there’s no graphics card) writes to a component which is connected to a screen, and every cycle (every 1/60 of a second at 60Hz) the contents are sent to or read by the screen. OpenGL provided a common interface to do so, but has been considered outdated for a while and replaced by Vulkan. Then there are libraries either built on top of or parallel to OpenGL. Vulkan can be parallel to OpenGL or fall back to it if that’s the only one supported, IIRC.
However, I’m not sure if OpenGL is implemented at the hardware level (on the graphics card), software level, or both.

Furthermore, I don’t understand where Magma, Meta, and MESA come in.

Maybe my core understanding is wrong or just outdated. I can’t tell. Can anybody explain?

Anti Commercial-AI license

  • vividspecter@aussie.zone

    The other points have been answered, so I’ll try and give a surface view of Magma. It’s basically an abstraction layer for virtual GPU drivers used in VMs. Currently, you need specific implementations to handle all of the pathways between different types of VM guests and hosts, which gets complicated fast and duplicates a lot of work. The idea is that Magma abstracts this away, so host and guest GPU drivers only need to interface with Magma. That means you can swap out different host OSes/GPU drivers and different guest OSes/GPU drivers, and as long as they interface with Magma, they should “just work”.

    Of course, whether it will work out that way in practice remains to be seen. I think Google is using it internally but it’s not in Mesa yet, so it may not even roll out widely. You can follow the MR if you want more detail or to see its progress.

    If you’re wondering why Google is implementing this, it appears to be for Fuchsia and Android, and for compatibility between those two and with desktop Linux, with Windows support as an additional value-add. Chromebooks in particular should benefit from this, since ChromeOS is being retired, I believe.

    And as an aside, unlike some of the traditional GPU implementations you’d find in VMs, these are or will be pretty much just the normal graphics driver that you’d use on the host. They are generally called “native contexts” and have been implemented for AMD and Intel at least, but only on non-Windows systems for now. These implementations alone, once they are widely supported, should result in near-native GPU performance in VMs, without having to use GPU passthrough (i.e. passing through a physical GPU to the VM guest). So even without Magma there’s some promising stuff happening, albeit mainly on the Linux host -> Linux guest pathway.

  • Redkey@programming.dev

    I’m not too knowledgeable about the detailed workings of the latest hardware and APIs, but I’ll outline a bit of history that may make things easier to absorb.

    Back in the early 1980s, IBM was still setting the base designs and interfaces for PCs. The last video card they released which became an accepted standard was VGA. It was a standard because no matter whether the system your software was running on had an original IBM VGA card or a clone, you knew that calling interrupt X with parameters Y and Z would have the same result. You knew that in 320x200 mode (you knew that there would be a 320x200 mode) you could write to the display buffer at memory location ABC, and that what you wrote needed to be bytes that indexed a colour table at another fixed address in the memory space, and that the ordering of pixels in memory was left-to-right, then top-to-bottom. It was all very direct, without any middleware or software APIs.

    But IBM dragged their feet over releasing a new video card to replace VGA. They believed that VGA still had plenty of life in it. The clone manufacturers started adding little extras to their VGA clones. More resolutions, extra hardware backbuffers, extended palettes, and the like. Eventually the clone manufacturers got sick of waiting and started releasing what became known as “Super VGA” cards. They were backwards compatible with VGA BIOS interrupts and data structures, but offered even further enhancements over VGA.

    The problem for software support was that it was a bit of a wild west in terms of interfaces. The market quickly solidified around a handful of “standard” SVGA resolutions and colour depths, but under the hood every card had quite different programming interfaces, even between different cards from the same manufacturer. For a while, programmers figured out tricky ways to detect which card a user had installed, and/or let the user select their card in an ANSI text-based setup utility.

    Eventually, VESA standards were created, and various libraries and drivers were produced that took a lot of this load off the shoulders of application and game programmers. We could make a standardised call to the VESA library, and it would have (virtually) every video card perform the same action (if possible, or return an error code if not). The VESA libraries could also tell us where and in what format the card expected to receive its writes, so we could keep most of the speed of direct access. This was mostly still in MS-DOS, although Windows also had video drivers (for its own use, not exposed to third-party software) at the time.

    Fast-forward to the introduction of hardware 3D acceleration into consumer PCs. This was after the release of Windows 95 (sorry, I’m going to be PC-centric here, but 1: it’s what I know, and 2: I doubt that Apple was driving much of this as they have always had proprietary systems), and using software drivers to support most hardware had become the norm. Naturally, the 3D accelerators used drivers as well, but we were nearly back to that SVGA wild west again; almost every hardware manufacturer was trying to introduce their own driver API as “the standard” for 3D graphics on PC, naturally favouring their own hardware’s design. On the actual cards, data still had to be written to specific addresses in specific formats, but the manufacturers had recognized the need for a software abstraction layer.

    OpenGL on PC evolved from an effort to create a unified API for professional graphics workstations. PC hardware manufacturers eventually settled on OpenGL as a standard which their drivers would support. At around the same time, Microsoft had seen the writing on the wall with regard to games in Windows (they sucked), and had started working on the “WinG” graphics API back in Windows 3.1, which after a time became DirectX. Originally, DirectX only supported 2D video operations, but Microsoft worked with hardware manufacturers to add 3D acceleration support.

    So we still had a bunch of different hardware designs, but they still had a lot of fundamental similarities. That allowed for a standard API that could easily translate for all of them. And this is how the hardware and APIs have continued to evolve hand-in-hand. From fixed pipelines in early OpenGL/DirectX, to less-dedicated hardware units in later versions, to the extremely generalized parallel hardware that caused the introduction of Vulkan, Metal, and the latest DirectX versions.

    To sum up, all of these graphics APIs represent a standard “language” for software to use when talking to graphics drivers, which then translate those API calls into the correctly-formatted writes and reads that actually make the graphics hardware jump. That’s why we sometimes have issues when a manufacturer’s drivers don’t implement the API correctly, or the API specification turns out to have a point which isn’t defined clearly enough and some drivers interpret it one way, while other drivers interpret the same API call slightly differently.

  • Kissaki@programming.dev

    OpenGL is an API standard. It defines data structures, operation interfaces, and behavior.

    Mesa 3D is an implementation of OpenGL. Applications that use OpenGL can call into it to draw stuff.

    Vulkan is a newer API standard. It was designed with a lot of newer hardware and hardware capabilities in mind, and it significantly reduces the scope of what the API itself is supposed to do compared to OpenGL, essentially giving API users many more opportunities to control graphics pipeline behavior for better efficiency and performance. Libraries and frameworks exist that provide more convenience, prepared setups, or opinionated usage patterns on top of Vulkan.

    DirectX had a similar shift with DirectX version 12, which also implemented closer-to-hardware APIs similar to Vulkan vs OpenGL.

  • kayzeekayzee@lemmy.blahaj.zone

    My only experience is with gpu-side OpenGL, so here goes:

    Your gpu is a separate device designed to run simple tasks with a staggering amount of parallelization. What does that mean? Basically every vertex and pixel on your screen needs to be processed before it can be displayed, and the gpu has a bunch of small cores that do all of that for every single frame your monitor outputs. A programmer defines all this using shaders. In OpenGL, the shader language is called GLSL.

    In the OpenGL graphics pipeline, the cpu-side code defines which effects apply to which geometry in what order. For example, you may want to render every opaque object first, and then draw the translucent objects on top with semi-transparent blending (a very common technique). Maybe you’d want a different shadow map for each light-emitting object. Maybe you’d want a setting to define how much bloom to draw to the screen. Maybe you want to provide textures for the gpu to access. The possibilities are endless.
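
    As a rough illustration of that cpu-side ordering, here’s a minimal C++/OpenGL sketch of one frame: opaque geometry first, then translucent geometry blended on top. A live GL context and a function loader are assumed, and drawOpaqueMeshes/drawTranslucentMeshes are hypothetical helpers standing in for your real draw calls.

        #include <GL/glew.h>   // any GL function loader works; a context must already be current

        // Hypothetical helpers defined elsewhere: each binds its buffers/shaders
        // and issues glDrawElements/glDrawArrays calls for its set of meshes.
        void drawOpaqueMeshes();
        void drawTranslucentMeshes();   // expected to be sorted back-to-front

        void renderFrame() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // 1) Opaque geometry: depth test and depth writes on, no blending.
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
            drawOpaqueMeshes();

            // 2) Translucent geometry: blend over what's already there, keep
            //    testing against the opaque depth buffer but don't write new depth.
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);
            drawTranslucentMeshes();
            glDepthMask(GL_TRUE);
        }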

    On the gpu-side, we write code in shaders. The shaders, written in GLSL, get compiled by your device-specific drivers into the machine code your hardware uses. In OpenGL there are several types of shader, but there are two main ones: Vertex and Fragment shaders.

    Vertex shaders run first. They run on every vertex in the scene and do the math that puts each vertex in the correct location. You can also assign varying values specific to each vertex that get passed down the pipeline to the next shaders.

    Between the vertex and fragment shaders, the gpu automatically saves performance by removing any vertex that ends up off-screen, or any triangle that’s definitely not visible to the camera (this is called culling), and then fills in each triangle with pixels called fragments (in a process called rasterization). Each fragment will also have access to the varying values of its three vertices, interpolated across the face of the triangle (i.e. the closest vertex will have the most influence).

    After this, the fragment shaders are run on every pixel/“fragment” on screen - this is where you’d render effects like lighting and shadows and apply textures. The fragment shaders determine the color of the pixel as it appears on your screen.
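
    To make those two stages concrete, here’s a minimal sketch of a vertex/fragment pair as GLSL source in C++ string literals, plus the call that hands the source to the driver for compilation. The names (aPos, vColor, uMVP) are just illustrative, and program linking and error checking are omitted.

        #include <GL/glew.h>   // GL declarations; assumes a context is already current

        // Vertex shader: runs once per vertex, positions it, and passes a
        // per-vertex colour down the pipeline as a "varying".
        static const char* kVertexSrc = R"(#version 330 core
        layout(location = 0) in vec3 aPos;    // vertex position from a buffer
        layout(location = 1) in vec3 aColor;  // per-vertex attribute
        uniform mat4 uMVP;                    // transform supplied from the cpu side
        out vec3 vColor;                      // varying: interpolated for the fragment shader
        void main() {
            vColor = aColor;
            gl_Position = uMVP * vec4(aPos, 1.0);
        })";

        // Fragment shader: runs once per rasterized fragment; vColor arrives
        // already interpolated across the triangle, and the output becomes the
        // pixel's colour in the framebuffer.
        static const char* kFragmentSrc = R"(#version 330 core
        in vec3 vColor;
        out vec4 fragColor;
        void main() {
            fragColor = vec4(vColor, 1.0);
        })";

        // The driver compiles the GLSL into whatever machine code the installed GPU uses.
        GLuint compileShader(GLenum stage, const char* src) {
            GLuint shader = glCreateShader(stage);  // GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
            glShaderSource(shader, 1, &src, nullptr);
            glCompileShader(shader);
            return shader;
        }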

    There are other specialized shaders you can add too! But your gpu needs to be new enough to support them:

    • Compute shaders let you define work groups of threads to do parallel math that’s not directly related to the pixels on screen (see the sketch after this list).
    • Tessellation shaders let you break larger geometry down into smaller pieces before rasterization.
    • Geometry shaders process each triangle and let you do things like clone geometry, which isn’t possible in vertex shaders.
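
    As an example of the compute path mentioned above, here’s a hedged sketch (OpenGL 4.3+ assumed; buffer creation and program linking omitted) of a compute shader that simply doubles every value in a storage buffer:

        // Compute shader source: 64 threads per work group, each handling one element.
        static const char* kComputeSrc = R"(#version 430
        layout(local_size_x = 64) in;
        layout(std430, binding = 0) buffer Data { float values[]; };
        void main() {
            uint i = gl_GlobalInvocationID.x;
            values[i] *= 2.0;
        })";

        // After compiling/linking kComputeSrc and binding a shader storage buffer:
        //   glUseProgram(computeProgram);
        //   glDispatchCompute(elementCount / 64, 1, 1);        // launch the work groups
        //   glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);    // make the writes visible
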
  • zurohki@aussie.zone

    Not an expert, but I’ll give it a shot. That way someone will speak up to correct me. 🐸

    With an AMD GPU on Linux, you’ve got your kernel amdgpu driver which talks to the hardware, loads firmware, etc.

    Sitting on top of that is Mesa, which provides OpenGL and Vulkan drivers. Your application talks to the OpenGL driver, which talks to the kernel driver, which talks to the hardware.

    Windows has its own graphics stack, which has video card drivers and DirectX drivers.

    Metal is Apple’s proprietary Vulkan knock-off, which seems to exist to force game devs to write games that only run on MacOS. This hasn’t really worked.

    Magma seems to be about inserting a layer between the kernel driver and Mesa, so you can use Mesa OpenGL drivers on top of Magma on Windows kernel drivers? That’s not really something most people are looking to do.

    • Pycorax@sh.itjust.works

      Since we’re talking about Apple, there’s an upcoming library and spec called WebGPU that, contrary to its name, is a higher-level, cross-platform graphics library. It’s an interesting idea: write once, and depending on your platform it uses the corresponding platform’s preferred backend (e.g. Direct3D on Windows, Metal on Macs, etc.). It was supposed to be promising and provide an easy way for any existing dev to hop in, until they had to give up on SPIR-V support and come up with a Metal-like shading language just to appease Apple so that they would support it, due to Apple’s existing legal disagreements with Khronos.

      And from what I’ve seen of WGSL, it’s nowhere near as nice as GLSL or HLSL.

      So yea, if you need any more evidence of Apple’s shitty attitude in the space.

    • Aatube@kbin.melroy.org

      I think it’s more like Vulkan was a Metal knockoff. Metal released (for mobile) June 2014 and Vulkan research kicked off July 2014. Vulkan was only announced March 2015, and I would think it took more than three months of work for Apple to release Metal for macOS June 2015. And then Vulkan’s specs and SDK were released in February 2016. Though I doubt Apple was pushing for Metal to become an open cross-platform standard either.

      • kautau@lemmy.world

        AMD donated the Mantle API to the Khronos group (which Apple has been a part of since 2008, focused on OpenCL), and that group developed Vulkan. Of course Apple has a proprietary version. They have their own silicon and their own OS, so why wouldn’t they have their own graphics layer? Neither of them is a knockoff. Vulkan is an open standard and widespread across many applications; Metal is proprietary and applies purely to Apple OSes.

  • MonkderVierte@lemmy.zip

    Slightly related: there are things like SDL_gui, which build directly on SDL2. But SDL is a library for interfacing with media, kind of an abstraction. How… does that work?

    I already know about immediate vs. retained mode, and that toolkits like Dear ImGui are more barebones (and often used in games), while kits like Qt/Slint have a Model/View pattern and a data-handling and/or messaging system.

    But the above just doesn’t fit.

    Edit: apparently SDL handles draw calls too?

    • Redkey@programming.dev

      In my (admittedly limited) experience, SDL/SDL2 is more of a general-purpose library for dealing with different operating systems, not for abstracting graphics APIs. While it does include a graphics abstraction layer for doing simple 2D graphics, many people use it to have the OS set up a window, process, and whatever other housekeeping is needed, and instantiate and attach a graphics surface to that window. Then they communicate with that graphics surface directly, using the appropriate graphics API rather than SDL. I’ve done it with OpenGL, but my impression is that using Vulkan is very similar.
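
      For example, here’s a minimal sketch of that split in C++ (assuming the SDL2 development headers and a working OpenGL driver are installed): SDL handles the OS housekeeping and the window, and the actual drawing talks to the GL context directly.

          #include <SDL2/SDL.h>
          #include <SDL2/SDL_opengl.h>   // pulls in the system's OpenGL declarations

          int main() {
              SDL_Init(SDL_INIT_VIDEO);                          // OS-specific setup handled by SDL

              SDL_Window* window = SDL_CreateWindow("demo",
                  SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                  800, 600, SDL_WINDOW_OPENGL);                  // ask for a GL-capable window
              SDL_GLContext ctx = SDL_GL_CreateContext(window);  // attach a GL surface/context to it

              bool running = true;
              while (running) {
                  SDL_Event event;
                  while (SDL_PollEvent(&event)) {                // input/window events still go through SDL
                      if (event.type == SDL_QUIT) running = false;
                  }

                  glClearColor(0.1f, 0.2f, 0.3f, 1.0f);          // ...but drawing goes straight to OpenGL
                  glClear(GL_COLOR_BUFFER_BIT);
                  SDL_GL_SwapWindow(window);                     // SDL presents the finished frame
              }

              SDL_GL_DeleteContext(ctx);
              SDL_DestroyWindow(window);
              SDL_Quit();
              return 0;
          }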

      SDL_gui appears to sit on top of SDL/SDL2’s 2D graphics abstraction to draw custom interactive UI elements. I presume it also grabs input through SDL and runs the whole show, just outputting a queue of events for your program to process.