• MonkderVierte@lemmy.zip · 18 hours ago

    Today, a single user request might touch 15 services, 3 databases, 2 caches, and a message queue.

    And is the user experience any better?

    • PushButton@lemmy.world · 7 hours ago

      Add Docker, Kubernetes, OpenTelemetry, Prometheus, PagerDuty, and Grafana so that your 5 users can scale your TODO CRUD app like they deserve!

      We all know that’s the only viable way to build software in 2026.

    • FishFace@piefed.social · 12 hours ago

      Probably yes, because such an aggressively architected system probably serves millions of users and would fall over if it were simpler.

      Not to say nothing is ever too complicated, but there are valid reasons why these things exist.

  • azertyfun@sh.itjust.works · 22 hours ago

    I don’t disagree with the point being made, but I think the author is underselling the value of OpenTelemetry tracing here.

    OTEL tracing is not mere plumbing. The SDKs are opinionated and do provide very useful context out of the box (related spans/requests, thrown exceptions, built-in support for common libraries). The data model is easy to use and contextful by default.

    It’s more useful if the application developer properly sets attributes as demonstrated, but even a half-assed tracing implementation is still an incredibly valuable addition to logging for production use.
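
    To illustrate, here’s a minimal Python sketch (the service name and attributes are invented for the example, and the console exporter stands in for a real backend): the SDK parents nested spans automatically and records uncaught exceptions on the span without extra code.

    ```python
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire the SDK to an exporter (console here; OTLP in production).
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name

    def process_order(order_id: str) -> None:
        # Spans opened inside this block become children automatically,
        # and an exception raised here is recorded on the span for you.
        with tracer.start_as_current_span("process_order") as span:
            span.set_attribute("order.id", order_id)  # illustrative attribute
            with tracer.start_as_current_span("charge_card"):
                pass  # child span, correlated with its parent out of the box
    ```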

  • FishFace@piefed.social · 12 hours ago

    We switched to structured logging a while ago and it’s very useful. We have used Sentry for ages though, and that was a bigger improvement. Many hard-to-debug problems have actually been trivialised by it, due to the context it provides.

    This is for a monolithic application though, I dunno how this would scale for microservices.

    The goal of logging a single event per request seems very ambitious imo. But maybe there are things out there that make it easy to glom logs onto a single context object and transfer it between services.

    Honestly though, I think you get a lot of the way there if every structured log line related to a request includes a request id, so you can just filter on that.
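
    A minimal sketch of that idea with the Python stdlib (the field names and the header choice are my assumptions, nothing prescribed): stash a request id in a ContextVar when the request arrives, and have the formatter stamp it on every line.

    ```python
    import json
    import logging
    import uuid
    from contextvars import ContextVar

    # One id per request; set at the edge, readable anywhere on the task.
    request_id: ContextVar[str] = ContextVar("request_id", default="-")

    class JsonFormatter(logging.Formatter):
        def format(self, record: logging.LogRecord) -> str:
            return json.dumps({
                "level": record.levelname,
                "message": record.getMessage(),
                "request_id": request_id.get(),  # every line carries the id
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])
    log = logging.getLogger("app")

    def handle_request(incoming_id: str | None = None) -> None:
        # Reuse the id a calling service sent (e.g. an X-Request-Id header)
        # so filtering works across service boundaries too.
        request_id.set(incoming_id or str(uuid.uuid4()))
        log.info("request started")  # -> {"level": "INFO", ..., "request_id": ...}
    ```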

  • calcopiritus@lemmy.world · 23 hours ago

    Generally agree. Except:

    Logs that are a “debug diary” are not useless. Their purpose is to debug. That’s why there are log levels. If you are not interested in that, filter by log levels above debug.

    Also, I see the different formats for fields as a necessary evil. Generally, more logs (at verbose log levels) = more good. Which means they should be as frictionless to write as possible. Forcing a specific format just means fewer logs get written.

    The JSON (or any other consistent format) logs seem like a good idea, but I would keep them to a single log level (maybe info + error?). So if you want wide events, you filter by those log levels to get the full compact picture. But if you are following a debug log chain, it seems a pain to have to search for the “message” field in a potentially order-independent format instead of just reading the log (there’s a sketch of this split after the TL;DR).

    TL;DR

    Log levels have different purposes, and so they should have different requirements.
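
    A sketch of that split, assuming nothing beyond the Python stdlib (the convention of passing structured context via `extra` is my own choice): info and above come out as one compact JSON event, while debug stays plain text you can read top to bottom.

    ```python
    import json
    import logging

    class SplitFormatter(logging.Formatter):
        """JSON for info+ (searchable wide events), plain text for debug."""
        def format(self, record: logging.LogRecord) -> str:
            if record.levelno >= logging.INFO:
                event = {"level": record.levelname, "message": record.getMessage()}
                event.update(getattr(record, "fields", {}))  # structured context
                return json.dumps(event)
            return f"DEBUG {record.getMessage()}"  # frictionless diary line

    handler = logging.StreamHandler()
    handler.setFormatter(SplitFormatter())
    logging.basicConfig(level=logging.DEBUG, handlers=[handler])
    log = logging.getLogger("app")

    log.debug("retrying connection, attempt 2")  # stays human-readable
    log.info("order processed", extra={"fields": {"order_id": 42, "duration_ms": 130}})
    ```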

    • public_image_ltd@lemmy.world · 22 hours ago

      Might be a stupid question, but wouldn’t it be a good job for fucking AI to read these and tell you where the interesting parts are?

      • calcopiritus@lemmy.world · 15 hours ago

        Logs’ purpose is to tell you what actually happened in the system. I don’t think it is a good idea to use something that “hallucinates” to tell you what really happened.

      • kippinitreal@lemmy.world · 22 hours ago

        An LLM would be great at parsing all that data, but I think you miss OP’s point. AI can be useful to automate mundane jobs, i.e. jobs you can’t get away from. OP’s point, in my view, is that verbose logs are noisy & difficult to parse because you’re logging everything unnecessarily. If you log interesting things and mark them with context & logging levels, then you can dive in as deep as you need, when you need. Why add the complexity (& other hazards) of AI when you can fix the root of the problem yourself first?