• 0 Posts
  • 34 Comments
Joined 8 months ago
Cake day: February 10th, 2024

  • Anbernic devices in particular are known to ship with an SD card that’s preloaded with a fairly large game library. I own an RG351M which did indeed include a cheap card loaded with both the OS and a collection of games by Nintendo, Sega, and many others, plus some strange ROM hacks. I immediately swapped that card out for a higher-quality card running a better CFW and my own files.

    Most other notable names in the emulation handheld space, like Retroid, Ayn, and Ayaneo, expect users to provide their own files instead, which I’d say makes more sense.


  • USB-C video is usually DisplayPort Alt Mode, which uses a completely different data rate and protocol from USB.

    Even with old 2016-era hardware, a computer and USB-C cable that both only support 5 Gbps USB (such as USB 3.1 Gen 1) can often easily carry an uncompressed 4K 60Hz video stream over that same cable, using about 15.7 Gbps of DisplayPort 1.2 bandwidth (rough math below). DP 2.0 can go far higher than that.

    Some less common video-over-USB devices/docks use DisplayLink instead, which is indeed contained within USB packets and bound by the USB data rate, but it uses lossy compression so those uncompressed numbers aren’t directly comparable.
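    For anyone curious where that ~15.7 Gbps figure comes from, here’s a rough back-of-envelope calculation. It assumes 10-bit RGB color and approximate CVT-R2 reduced-blanking timings (about 3920 × 2222 total pixels per frame), so treat it as an estimate rather than a spec quote:

    ```python
    # Rough DisplayPort bandwidth estimate for 4K 60Hz video,
    # assuming 10-bit RGB and CVT-R2 reduced-blanking timings.
    h_total = 3840 + 80       # active width + reduced horizontal blanking
    v_total = 2222            # approximate total lines incl. vertical blanking
    refresh_hz = 60
    bits_per_pixel = 3 * 10   # RGB at 10 bits per channel

    pixel_clock = h_total * v_total * refresh_hz      # ~523 MHz
    video_bandwidth = pixel_clock * bits_per_pixel    # bits per second

    print(f"pixel clock: {pixel_clock / 1e6:.1f} MHz")
    print(f"uncompressed video: {video_bandwidth / 1e9:.1f} Gbps")
    # ~15.7 Gbps, which fits in DP 1.2 HBR2's ~17.28 Gbps of usable bandwidth
    # while the same cable's USB data path stays limited to 5 Gbps.
    ```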


  • zarenki@lemmy.ml to Technology@lemmy.world · Some basic info about USB

    For that portable monitor, you should just need a cable with USB-C plugs on both ends which supports USB 3.0+ (could be branded as SuperSpeed, 5 Gbps, etc.). Nothing more complicated than that.

    The baseline for a cable with USB-C on both ends should be PD up to 60 W (3 A) and data transfers at USB 2.0 (480 Mbps) speeds.

    Most cables stick with that baseline because it’s enough to charge phones and most people won’t use USB-C cables for anything else. Omitting the extra capabilities lets cables be not only cheaper but also longer and thinner.

    DisplayPort support uses the same extra data pins that are needed for USB 3.0 data transfers, so in terms of cable support they should be equivalent. There also exist higher-power cables rated for 100 W or 240 W, but there’s no way a portable monitor would need that.



  • The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.

    I feel like that purpose has already been undermined by various changes to copyright law since its inception, such as the DMCA and the lengthening of the copyright term from 14 years to 95. Freedom to remix existing works is an important part of creative expression, and current law stifles it for any work released within a person’s lifetime. (Even Disney knew this: the animated Pinocchio movie wouldn’t exist if copyright could have lasted more than 56 years back then.)

    Either way, giving bots the ‘right’ to remix things that were made less than a year ago while depriving humans of the right to release anything too similar to a 94-year-old work seems ridiculous on both ends.


  • a variable-length integer encoding that somewhat resembles what they do in UTF-8. It means for strings < 128 chrs, the length is a single byte. Longer than that and more bytes get used as necessary.

    What you used might be similar to unsigned LEB128, which is used in DWARF, WebAssembly, Android’s DEX format, and protobuf. It essentially encodes 7 bits of the number in each byte, with the high bit set to 1 in every byte except the last one of the number.

    Though unlike UTF-8, the number’s length isn’t encoded in the first byte but is instead implied by the final byte, arguably making the encoding more like a terminated string. A quick sketch of the idea is below.
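    A minimal Python sketch of unsigned LEB128, just to show the general idea (not taken from any of those formats’ reference code):

    ```python
    def uleb128_encode(n: int) -> bytes:
        """Encode a non-negative integer as unsigned LEB128."""
        out = bytearray()
        while True:
            byte = n & 0x7F              # low 7 bits of the value
            n >>= 7
            if n:
                out.append(byte | 0x80)  # more bytes follow: set the high bit
            else:
                out.append(byte)         # final byte: high bit stays 0
                return bytes(out)

    def uleb128_decode(data: bytes) -> tuple[int, int]:
        """Decode an unsigned LEB128 value; returns (value, bytes consumed)."""
        value = 0
        for i, byte in enumerate(data):
            value |= (byte & 0x7F) << (7 * i)   # little-endian groups of 7 bits
            if not byte & 0x80:                 # high bit clear marks the last byte
                return value, i + 1
        raise ValueError("truncated LEB128 sequence")

    # 0-127 fit in one byte; 128 takes two (b'\x80\x01'), and so on.
    assert uleb128_encode(127) == b'\x7f'
    assert uleb128_decode(uleb128_encode(624485)) == (624485, 3)
    ```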




  • Legitimately playing 4K Blu-ray video on a PC without cracking the DRM involves an insane combination of requirements:

    • Windows 10 (not 11)
    • An Intel processor from gen 7 through gen 10 (nothing newer, because Intel ditched SGX in 2021)
    • Intel integrated graphics (no Nvidia/AMD)
    • A monitor that supports HDCP 2.2 for DRM (some 4K ones don’t)
    • An approved optical drive
    • Proprietary playback software which costs about $100 USD, separate from the cost of hardware and Windows
    • Miscellaneous other requirements for the motherboard features, bios settings, etc.

    Meanwhile MakeMKV can rip them on basically any Windows/Linux/Mac system with a compatible BDXL drive.


  • Likewise, I’m far less hesitant to accept buying digital console games than video, because I can generally expect that once I download a game on one device, I’ll pull out that same device whenever I want to play it, and it’ll keep working offline and even after the servers are gone, until the hardware fails. Modern games’ physical releases rely so heavily on updates and DLC that the cart/disc you get isn’t complete anyway; buying physical effectively becomes a digital game with an extra point of failure (and partial resellability). PC gaming complicates things, but at least some games are available there completely DRM-free.

    With video content sold online, streaming directly from some server is always the focus. As soon as the server disconnects, you can no longer watch by default. Even if a service lets you pre-download within its app and watch offline (which probably won’t keep working indefinitely without check-ins anyway), that defeats the portability expectation of watching your videos on any device interchangeably.

    Blu-ray video isn’t ideal considering you cannot watch it on a phone, tablet, or Linux system without cracking its DRM, but it’s still far better for lasting access than anything else major movie/TV studios are willing to let consumers access without piracy.


  • This board has the StarFive JH7110 SoC. That processor has previously appeared in very low-power single-board computers like the StarFive VisionFive 2 (2022) and the Milk-V Mars (2023), a Raspberry Pi clone that can be bought for as little as $40. Its storage limitations (SD/eMMC rather than NVMe) show how much this isn’t meant for laptop use.

    It’s also very underpowered for a laptop, even considering that it’s intended for developers and doesn’t need to be remotely performance-competitive. This has just 4 RV64GC cores, while the cheapest Intel board options Framework offers have 12 cores (4P+8E), and any modern RISC-V core is far simpler, with less die area, than even an Intel E-core. These cores also lack the RISC-V vector extension.


  • A standard called SystemReady exists. For systems that actually follow it, you can have a single ARM OS installation image that you copy to a USB drive, boot through UEFI, and run with no problems on an Ampere server, an NXP device, an Nvidia Jetson system, and more.

    Unfortunately it’s a pretty new standard, only around since 2020, and Qualcomm in particular is a major holdout that hasn’t been using it.

    Just like on x86, you still need the OS to have drivers for the particular device you’re installing on, but this standard at least lets you have a unified image, and many ARM vendors have been getting better about upstreaming open-source drivers into the Linux kernel.


  • On the contrary, I would expect the sample to skew more towards people who have a heavily customized X session and strong opinions about window managers, while drastically underrepresenting average GNOME users who stick with the default Wayland session. Someone who likes their custom setup can still be waiting for a Wayland equivalent, while casual Ubuntu users have been defaulted to Wayland on new non-Nvidia installs since early 2021.



  • A ground-up overhaul of the copyright system would make things so much worse, not better, considering who currently holds the power. In the US, for example, the MPA, RIAA, Entertainment Software Association, Association of American Publishers, and others wouldn’t want public libraries or the used market to exist at all; they would push to make every single transfer of “ownership” of any media involve a payment to the rights holder. Lawmakers are far more likely to accommodate those groups’ desires than the public good.

    The worst parts of the current copyright system are the most recent. Both the DMCA and the extension of the US copyright term to 95 years took effect in 1998, and the early 2000s saw many other countries passing laws to bring their copyright systems closer to the US’s in various ways, such as the WIPO Copyright Treaty which took effect in 2002 and the EU’s 2006 Copyright Directive. Just about the only positive news we’ve seen in US copyright law since then is in the temporary exemptions to the DMCA’s anti-circumvention rules (Section 1201), which are revisited every few years. Copyright law was far less hostile to consumers and the public before the 90s than it is now, and up until 1976 it was expected that most media someone consumed would enter the public domain within their lifetime.

    The digital era makes market relevance more ephemeral than ever, and yet the laws written for the digital era moved copyright in the opposite direction. Movie studios simultaneously judge whether a film succeeded almost exclusively by its first week of ticket sales and claim that depriving the public domain for 95 years is necessary. Nothing should be able to justify more than 20 years of copyright. Media formats don’t even last as long as copyright; CDs and DVDs rot, game cartridges die, servers shut down, and even books printed on today’s low-quality paper will fall apart.

    Some of it is absurd to me, like the way something can be online but geographically restricted.

    This is a consequence of contract terms more so than copyright. One issue in copyright law that it does connect to, though, is that whether the rightsholder keeps a work reasonably available on the market has no impact on whether the work retains copyright protection. If copyright law did hypothetically include that limitation, providers would become far more likely to make sure all content is available in all countries, but even then things could still vary in terms of which content is on which platform.


  • There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

    I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power-efficient, though an ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 anyway.


  • Yes.

    My home server has dropbear-initramfs installed so that after a reboot I can reach the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption with a dataset that unlocks using a keyfile stored on that rootfs, and I keep backups of an encrypted copy of that keyfile.

    I don’t think there’s a substantial performance impact but I’ve never bothered benchmarking.



  • I’m not sure if this is required. Any decent e-mail server uses TLS to communicate these days, so everything in transit is already encrypted.

    In transit, yes, but not end-to-end.

    One feature that Proton advertises: when you send an email from one Proton Mail account to another Proton address, the message is automatically encrypted such that (assuming you trust their client-side code for webmail/bridge) Proton’s servers never have access to the message contents for even a moment.

    When incoming mail from outside hits Proton’s SMTP server, though, Proton technically could (but claims not to) log the unencrypted message contents before encrypting them with the recipient’s public key and storing them. That undermines the promise of Proton not having access to your emails. If both parties in an email conversation agree to use PGP encryption (sketched below), they could avoid that risk, and no mail server on either end would see anything more than metadata and the initial exchange of public keys, but most humans won’t bother with that key exchange and almost no automated mailers would.

    Some standard way of automatically asking a mail server “Does user@proton.me have a PGP public key?” would help on this front, as long as the server doesn’t reject senders who ignore the feature and send SMTP/TLS as normal without PGP. This still requires trusting that the server doesn’t hand out an incorrect public key, but any suspicious behavior on that front would be far more noticeable than server-side logging would be. Users who deem that unacceptable could still use a separate set of PGP keys.
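    For what it’s worth, here’s a rough sketch of what that client-side PGP step looks like. The addresses and server names are made up, and it assumes the python-gnupg package with the recipient’s public key already imported into (and trusted by) the local keyring; it’s only meant to show that the servers relay ciphertext, not Proton’s actual implementation:

    ```python
    import smtplib
    from email.message import EmailMessage

    import gnupg  # python-gnupg, a wrapper around a local GnuPG install

    gpg = gnupg.GPG()  # uses the default local keyring

    sender = "alice@example.com"    # hypothetical addresses
    recipient = "bob@example.net"   # Bob's public key must already be imported

    plaintext = "Meeting notes: nothing the mail servers need to read."

    # Encrypt on the client, before anything is handed to SMTP.
    # Only Bob's private key can decrypt this; every server in between
    # (and at rest on either end) only ever sees ciphertext.
    encrypted = gpg.encrypt(plaintext, recipients=[recipient])
    assert encrypted.ok, encrypted.status

    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Encrypted note"   # headers/metadata stay visible
    msg.set_content(str(encrypted))     # ASCII-armored PGP message as the body

    # TLS protects the hop to the server; the PGP layer protects the content
    # end-to-end, including while it sits on either provider's servers.
    with smtplib.SMTP("smtp.example.com", 587) as smtp:
        smtp.starttls()
        smtp.login(sender, "app-password")  # placeholder credentials
        smtp.send_message(msg)
    ```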


  • They say the reason for needing their bridge is the encryption at rest, but I feel like the better way to push email privacy forward would be to publish (or, better yet, coordinate with other groups on drafting) a public standard that both clients and competing email servers could adopt: an email-syncing protocol for that sort of zero-access encryption, where users have to give their client a key file. A bridge would be easier to swallow as a fallback option until there’s wider client support, rather than as the only way.

    A similar standard for server-to-server communication, like for automatic PGP key negotiation, would be nice too.

    Still, Proton has an easy-to-access data export that doesn’t require a bridge client or subscription or anything. I think that’s required by GDPR. It’s manual enough that it’s not an effective way to keep up-to-date backups in case you ever abruptly lose access, but it’s good enough for migrating to another provider.