With my 14.5TB hard drive nearing capacity, and wanting to hold off a bit longer before shelling out for a 60TB RAID array, I’ve been trying to replace as many x264 releases in my collection as possible with x265 releases of equivalent quality. While popular movies are usually available in x265, less popular titles and TV shows tend to have fewer x265 options, with low-quality MeGusta encodes often being the only x265 choice.

While x265 playback is more demanding than x264 playback, its device compatibility is far closer to x264’s than that of the newer x266 (VVC) codec. Is there a reason many release groups still opt for x264 over x265?

  • cmnybo@discuss.tchncs.de · 8 months ago

    A lot of TV shows are direct rips from streaming services, and those services don’t use H.265 because of the ridiculous licensing that comes with it.

    I suspect AV1 will become much more popular for streaming in a few years as hardware support becomes more common. It’s an open, royalty-free codec, so licensing shouldn’t be an issue. Then we will see a lot more AV1 releases.

      • Shimitar@feddit.it · 8 months ago

        In my experience, you always gain space savings going to AV1 from 264, and from 265 as well. For me it’s always been significant savings at the same quality level.

        Ofc YMMV, and use a very recent ffmpeg with the best AV1 libraries.
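
        As a minimal sketch of such a CPU-based AV1 encode (filenames and quality settings here are illustrative, assuming an ffmpeg build with the SVT-AV1 encoder compiled in):

        ```shell
        # Software AV1 encode with SVT-AV1.
        # -crf controls quality (lower = better quality, bigger file);
        # -preset trades speed for compression efficiency (lower = slower, smaller).
        ffmpeg -i input.mkv \
          -c:v libsvtav1 -crf 30 -preset 4 \
          -c:a copy \
          output.mkv
        ```

        Copying the audio stream (`-c:a copy`) avoids a needless lossy-to-lossy audio transcode.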

  • Shimitar@feddit.it · 8 months ago

    Some notes: don’t use a GPU to re-encode; you will lose quality.

    Don’t worry about long encoding times, especially if the objective is long-term storage.

    Power consumption might be significant. I run mine when the sun shines and my photovoltaic system picks up the tab.

    And go AV1: it’s open source, and the big players seem pretty committed to it, much more than to H.265.

      • cuppaconcrete@aussie.zone · 8 months ago

        Yeah, that caught my eye too; seems odd. Most compression/encoding schemes benefit from a large dictionary, but I don’t think it would be constrained by a GPU sometimes having less total RAM than the main system - in most cases that would mean a dictionary larger than the video file itself. I’m curious.

        • db2@lemmy.world · 8 months ago

          The way it was explained to me once is that the ASIC in the GPU makes assumptions that are baked into the chip. That made sense, because they can’t reasonably “hardcode” for every possible variation of input the chip will get.

          The great thing, though, is that if you’re transcoding, you can use the GPU for the decoding half, which works fine and frees up more CPU for the encoding half.
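
          As a sketch of that split with ffmpeg (the CUDA hwaccel and filenames are assumptions; substitute the hwaccel your GPU supports):

          ```shell
          # GPU handles decoding (-hwaccel); decoded frames come back to
          # system memory, and the CPU does the quality-critical AV1 encode.
          ffmpeg -hwaccel cuda -i input.mkv \
            -c:v libsvtav1 -crf 30 -preset 5 \
            -c:a copy \
            output.mkv
          ```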

  • LainTrain@lemmy.dbzer0.com · 8 months ago

    RARBG was so good for this; their releases were of consistently good quality.

    If you search for ORARBG on the therarbg site, you can still find some OG releases rather than random YIFY crap.