I just read “Google Continues Working On “Magma” For Mesa Cross-Platform System Call Interface” on Phoronix and didn’t get it. That made me realise my knowledge and understanding of these things is barely existent. I did write an MS Paint clone on Linux in C++ a really long time ago, and the whole thing used OpenGL (it looked like crap), but since then… nothing.

So my understanding is that the graphics card (or the CPU if there’s no graphics card) writes to a component which is connected to a screen, and every cycle (every 1/60 seconds if 60Hz) the contents are sent to or read by the screen. OpenGL provided a common interface to do so, but has been outdated for… a while and replaced by Vulkan. Then there are libraries either built on top of or parallel to OpenGL. Vulkan can be parallel or use OpenGL if that’s the only one supported IIRC.
However, I’m not sure if OpenGL is implemented at the hardware level (on the graphics card), software level, or both.

Furthermore, I don’t understand where Magma, Meta, and MESA come in.

Maybe my core understanding is wrong or just outdated. I can’t tell. Can anybody explain?

Anti Commercial-AI license

  • kayzeekayzee@lemmy.blahaj.zone

    My only experience is with gpu-side OpenGL, so here goes:

    Your gpu is a separate device designed to run simple tasks with a staggering amount of parallelization. What does that mean? Basically every vertex and pixel on your screen needs to be processed before it can be displayed, and the gpu has a bunch of small cores that do all of that for every single frame your monitor outputs. A programmer defines all this using shaders. In OpenGL, the shader language is called GLSL.

    In the OpenGL graphics pipeline, the cpu-side code defines which effects apply to which geometry in what order. For example, you may want to render every opaque object first, and then draw the translucent objects on top with semi-transparency (sorting the translucent geometry to draw last is a very common technique). Maybe you’d want a different shadow map for each light-emitting object. Maybe you’d want a setting to define how much bloom to draw to the screen. Maybe you want to provide textures for the gpu to access. The possibilities are endless.

    On the gpu-side, we write code in shaders. The shaders, written in GLSL, get compiled by your device-specific drivers into the machine code your hardware uses. OpenGL has several types of shader, but the two main ones are vertex and fragment shaders.

    Vertex shaders run first. They run on every vertex in the scene and do the math that puts each vertex in the correct location. You can also assign varying values specific to each vertex that get passed down the pipeline to the next shaders.
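
    To make that concrete, here’s a minimal vertex shader sketch in GLSL (assuming a desktop GL 3.3 context; the attribute and uniform names like aPosition and uModel are just illustrative, not anything OpenGL mandates). It transforms each vertex into clip space and passes a couple of varying values down the pipeline:

        #version 330 core

        // Per-vertex inputs supplied by the cpu-side code (names are illustrative).
        layout(location = 0) in vec3 aPosition;
        layout(location = 1) in vec3 aNormal;
        layout(location = 2) in vec2 aTexCoord;

        // Uniforms set from the cpu side before the draw call.
        uniform mat4 uModel;       // object -> world
        uniform mat4 uView;        // world  -> camera
        uniform mat4 uProjection;  // camera -> clip space

        // Varying outputs, interpolated across each triangle for the fragment shader.
        out vec3 vNormal;
        out vec2 vTexCoord;

        void main() {
            // Rotate the normal into world space (fine as long as there's no non-uniform scaling).
            vNormal   = mat3(uModel) * aNormal;
            vTexCoord = aTexCoord;
            // gl_Position is the built-in clip-space output every vertex shader must write.
            gl_Position = uProjection * uView * uModel * vec4(aPosition, 1.0);
        }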

    Between the vertex and fragment shaders, the gpu automatically saves performance by removing any vertex that ends up off-screen, or any triangle that’s definitely not visible to the camera (this is called culling), and then fills in each triangle with pixels called fragments (in a process called rasterization). Each fragment also has access to the varying values of its three vertices, interpolated across the face of the triangle (i.e. the closest vertex has the most influence).

    After this, the fragment shaders are run on every pixel/“fragment” on screen - this is where you’d render effects like lighting and shadows and apply textures. The fragment shaders determine the color of the pixel as it appears on your screen.
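
    A matching fragment shader sketch, again with made-up uniform names (uAlbedo, uLightDir) and only the simplest possible diffuse lighting, might look like this:

        #version 330 core

        // Interpolated varyings from the vertex shader.
        in vec3 vNormal;
        in vec2 vTexCoord;

        // Uniforms the cpu side binds before drawing (names are illustrative).
        uniform sampler2D uAlbedo;    // the object's texture
        uniform vec3      uLightDir;  // normalized direction towards the light

        out vec4 fragColor;  // the final color written to the framebuffer

        void main() {
            vec3 baseColor = texture(uAlbedo, vTexCoord).rgb;
            // Lambertian (diffuse) term: brighter where the surface faces the light.
            float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);
            vec3 lit = baseColor * (0.1 + 0.9 * diffuse);  // 0.1 is a small ambient term
            fragColor = vec4(lit, 1.0);
        }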

    There are other specialized shaders you can add too! But your gpu needs to be new enough to support them:

    • Compute shaders let you define work groups of threads to do parallel math that’s not directly related to the pixels on screen (there’s a small sketch of one after this list).
    • Tessellation shaders let you break larger geometry down into smaller pieces before rasterization.
    • Geometry shaders process each triangle and let you do things like clone geometry, which isn’t possible in vertex shaders.
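
    As an example, here’s a minimal compute shader sketch in GLSL (compute shaders need GL 4.3 or newer; the buffer layout and binding index are just assumptions for illustration). Each thread doubles one element of a plain float buffer - no vertices or pixels involved at all:

        #version 430 core

        // One work group of 64 threads; the cpu side decides how many groups to
        // dispatch, e.g. glDispatchCompute(ceil(N / 64.0), 1, 1).
        layout(local_size_x = 64) in;

        // A shader storage buffer bound by the cpu side (binding index is illustrative).
        layout(std430, binding = 0) buffer Data {
            float values[];
        };

        void main() {
            uint i = gl_GlobalInvocationID.x;
            // Guard against the last, partially-filled work group.
            if (i >= uint(values.length())) return;
            // Each thread scales exactly one element.
            values[i] *= 2.0;
        }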