I need to send 1 million HTTP requests concurrently, in batches, and read the responses. No more than 100 requests in flight at a time.

Which approach is better, recommended, idiomatic?

  • Send 100, wait for all of them to finish, send another 100, wait for them to finish… and so on

  • Send 100. As soon as a request among the 100 finishes, add a new one into the pool. “Done - add a new one. Done - add a new one”. As a stream.
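As a point of comparison, the first option (fixed batches) could be sketched in Python like this. This is only an illustration under assumptions: an `asyncio.sleep` stands in for the real HTTP call, and the URLs are placeholders.

```python
import asyncio

# Placeholder for a real HTTP call (e.g. via aiohttp); here we just sleep.
async def fetch(url: str) -> str:
    await asyncio.sleep(0.001)
    return f"response for {url}"

async def fetch_in_batches(urls: list[str], batch_size: int = 100) -> list[str]:
    results: list[str] = []
    for start in range(0, len(urls), batch_size):
        batch = urls[start:start + batch_size]
        # The whole batch must finish before the next batch starts,
        # so one slow request stalls the other 99 slots.
        results.extend(await asyncio.gather(*(fetch(u) for u in batch)))
    return results

if __name__ == "__main__":
    out = asyncio.run(fetch_in_batches([f"https://example.com/{i}" for i in range(250)]))
    print(len(out))  # 250
```

The comment in the loop is the usual argument against this option: the batch only moves as fast as its slowest request.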

  • deegeese@sopuli.xyz · 23 points · 9 months ago

    That’s not 1M concurrent requests.

    That’s 100 concurrent requests for a queue of 1M tasks.

    Work queue and thread pool is the normal way, but it’s possible to get fancy with optimizations.

    Basically you fire 100 requests and when one completes you immediately fire another.
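A minimal sketch of that fire-one-as-one-completes pattern, using Python's asyncio with a semaphore capping concurrency at 100. The `fetch` coroutine here is a stand-in; a real program would call an async HTTP client (such as aiohttp) instead of sleeping.

```python
import asyncio
import random

# Stand-in for a real HTTP request; a real program would use an
# async HTTP client here instead of sleeping.
async def fetch(url: str) -> str:
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return f"response for {url}"

async def bounded_fetch(sem: asyncio.Semaphore, url: str) -> str:
    # At most `limit` fetches hold the semaphore at once; the moment
    # one finishes, a waiting task takes its slot -- the streaming option.
    async with sem:
        return await fetch(url)

async def fetch_all(urls: list[str], limit: int = 100) -> list[str]:
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(bounded_fetch(sem, u) for u in urls))

if __name__ == "__main__":
    urls = [f"https://example.com/{i}" for i in range(1000)]
    print(len(asyncio.run(fetch_all(urls))))  # 1000
```

One caveat: this creates one task per URL up front, which is fine at this scale but for a full million it may be worth using a fixed set of worker tasks pulling from an `asyncio.Queue` instead, to bound memory.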

  • dark_stang@beehaw.org · 18 points · 9 months ago

    Not enough info. What are you actually trying to accomplish here? If you’re stress testing and trying to measure how fast a server can process all those requests, use something like JMeter. You can tell it to run 100 concurrent threads with 10,000 requests each, then call it a day.

    • cuenca@lemm.ee (OP) · 1 point · 9 months ago

      Not enough info. What are you trying to actually accomplish here by asking me this question?

      • Fal@yiffit.net · 21 points · 9 months ago

        What the shit kind of response is this. We’re trying to get enough info to answer your question

        • cuenca@lemm.ee (OP) · 1 point · 8 months ago (edited)

          What the shit kind of response is this. I’m trying to GET help for my question.

        • Gamma@beehaw.org · 4 points · 9 months ago

          They did the same a few days ago and deleted the post before reposting

          • cuenca@lemm.ee (OP) · 1 point · 8 months ago (edited)

            Yes, they did. The fucking commenters here are really something.

        • cuenca@lemm.ee (OP) · 1 point · 8 months ago

          Kmz, I’m JUST trying to GET help but they’re being a bunch of cocks?

          • xor@lemmy.blahaj.zone · 1 point · 8 months ago (edited)

            They’re trying to work out what problem you’re trying to solve, so they can give you actually useful advice for your - frankly - very vague question

            “What are you trying to achieve” is a perfectly reasonable question to ask about a deeply under-specified problem

            Edit: here’s my theory:

            This is a homework or interview question you’ve been asked, that depends on specific context that you haven’t included (because you don’t know what context is even relevant)

            You don’t want to admit that’s why you’re asking, because you know that defeats the point of you being asked in the first place.

            Hence, you’re being absurdly hostile to someone trying to help, because you can’t answer their question without admitting you’re trying to cheat

    • douglasg14b@beehaw.org · 1 point · 9 months ago (edited)

      For most users, JMeter is difficult to approach.

      Something like autocannon or Ddosify may be nicer.

  • Borger@lemmy.blahaj.zone · 7 points · 9 months ago (edited)

    The second option. With the first option you’ll end up in situations where you have spare compute/network resources that aren’t being utilised, because all the remaining requests in the current batch of 100 are being handled by other threads / worker processes.
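In a threaded rather than async program, the same behaviour falls out of a worker pool, since each worker picks up the next URL the moment it finishes its current one. A sketch with Python's `concurrent.futures`, where the `fetch` function is again a placeholder for a real blocking HTTP request:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder for a blocking HTTP request (e.g. urllib.request.urlopen).
def fetch(url: str) -> str:
    time.sleep(0.001)
    return f"response for {url}"

def fetch_all(urls: list[str], max_workers: int = 100) -> list[str]:
    # Each worker takes the next URL as soon as it finishes its
    # current one, so no slot sits idle waiting for a batch boundary.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))

if __name__ == "__main__":
    print(len(fetch_all([f"https://example.com/{i}" for i in range(500)])))  # 500
```

`pool.map` returns results in input order while still handing work to whichever worker frees up first.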

      • catacomb@beehaw.org · 2 points · 9 months ago

        Where did you get 100 from? I’m just asking if it’s a real limit or a guess at “some manageable number” under one million.

        It can be worth experimenting and tuning this value. You might even find that less than 100 works better.

  • Gamma@beehaw.org · 5 points · 9 months ago

    Careful everyone, they didn’t specify programming languages! Don’t even THINK of providing a few lines of Python that would answer the question 🐍

  • Hirom@beehaw.org · 2 points · 9 months ago (edited)

    Rewrite the application to be less greedy in the number of requests it submits to the server, and make (better) use of caching. That’ll probably lower the number of concurrent requests that have to be handled.