• 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: July 5th, 2023


  • One thing that I discovered about charging PS3 pads, which doesn’t seem to be mentioned a lot, is that they appear (my guess, unconfirmed) to require proper USB current negotiation before they will start charging. In fact, I’ve found multiple sources saying that they can be charged from any USB power source, which isn’t true.

    The original USB standard says that a host should start a connection by offering only 100mA of current, and that the device can then request more, in 100mA steps, up to 500mA. I assume that the PS3’s USB ports support this, as do pretty much all computer USB ports. But the majority of wall-plug USB chargers don’t; they just allow a maximum current draw of 500mA (or more) from the start and ignore requests for increases. (There’s a small sketch at the end of this comment showing the “request” side of this.)

    It seems like most equipment manufacturers ignored this part of the spec. The host needs current-limiting circuitry in any case, so many chargers don’t bother with the circuitry needed to respond to requests, and even ports that do respond are usually allowing the maximum draw the whole time and simply approving every request.

    However, I think that the PS3 pads actually wait for an “OK” response before continuing, which the majority of wall chargers (especially the cheap ones) never send. I had to use the PS3 or a PC (direct connection, not through a hub) to charge my pads until I found a cheap PS3 controller charging dock that works with any supply.
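
    To make that “negotiation” a little more concrete, here’s a small Python sketch (using the pyusb library; the vendor/product IDs are the commonly reported ones for a DualShock 3, so treat them as assumptions) that reads the current a pad advertises in its USB configuration descriptor, which is the request half of the handshake:

        # Rough sketch: read the current a device asks for from its USB
        # configuration descriptor. Needs pyusb installed and permission to
        # access the device (e.g. udev rules or root on Linux).
        import usb.core

        # Commonly reported IDs for a Sony DualShock 3; adjust if yours differ.
        pad = usb.core.find(idVendor=0x054C, idProduct=0x0268)
        if pad is None:
            raise SystemExit("No DualShock 3 found on the bus")

        cfg = pad.get_active_configuration()
        # bMaxPower is stored in units of 2 mA in the USB 2.0 descriptor.
        print(f"Pad requests up to {cfg.bMaxPower * 2} mA from the host")

    A dumb charger never reads that descriptor or configures the device at all, which would fit with the pad sitting there waiting for a go-ahead that never comes.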


  • I have a stack of Logitech F310 controllers, and I’ve never had them fail to work on any system (Windows, Linux, Android). They’re not “pro gamer” or anything, fairly basic, but they’ve always responded smoothly for me even after many years of use. They’re inexpensive, wired, and have an XInput/DirectInput switch on the back (at least mine do; that feature may have been removed by now).

    The F310 (what I use) is wired and has no rumble feedback.

    The F510 is wired and has rumble feedback, but I’ve never used one.

    The F710 is wireless 2.4GHz (not Bluetooth) and has rumble feedback. I have two of these, and in my experience neither of them connects reliably, even under Windows with the official software installed.


  • I would also probably try to plug USB drives in once a year or so if I were being diligent, but in reality I recently found a handful of USB flash drives that I’d stored in a box in my parents’ unattached garage, and every one of them could be read completely without any issues. They ran the gamut of build quality from expensive, name-brand drives to no-name dollar-store keychains. They’d been sitting in that box, untouched, for a little over nine years, and I’m pretty sure that some of them hadn’t been used for several years even before that.

    I wouldn’t rely on it for critical data, but USB flash might not be so terrible.


  • I’m in a similar boat to you. I ripped almost all of my CDs to 320kbps mp3s for portability, but then I wanted to put all of them (a substantial number), plus a bunch more (my partner’s collection), on a physically tiny USB stick I already had, to just leave plugged into our car stereo’s spare port. I had to shrink the files somehow to make them all fit, so I used ffmpeg and a little bash file logic to keep the files as mp3s but reduce the bitrate (there’s a rough sketch of the idea at the end of this comment).

    128kbps mp3 is passable for most music, which is why the commercial industry focused on it in the early days. However, if your music has much “dirty” sound in it, like loud drums and cymbals or overdriven electric guitars, 128kbps tends to alias them somewhat and make them sound weird. If you stick to mp3 I’d recommend at least 160kbps, or better, 192kbps. If you can use variable bit rate, that can be even better.

    Of course, even 320kbps mp3 isn’t going to satisfy audiophiles, but it sounds like you just want to have all your music with you at all times as a better alternative to radio, and your storage space is limited, similar to me.

    As regards transcoding, you may run into some aliasing issues if you try to switch from one codec to another without also dropping a considerable amount of detail. But unless I’ve misunderstood how most lossy audio compression works, taking an mp3 from a higher to a lower bitrate isn’t transcoding, and should give you the same result as encoding the original lossless source at the lower bitrate. Psychoacoustic models split a sound source into thousands of tiny component sounds, and keep only the top X “most important” components. If you later reduce that to the top Y most important components by reducing the bitrate (while using the same codec), shouldn’t that be the same as just taking the top Y most important components from the original, full group?
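
    The script was something along these lines, although what follows is a Python sketch of the idea rather than my actual bash version, and the folder names and the 192kbps target are just examples:

        # Sketch: walk a folder of mp3s and re-encode each one at a lower
        # bitrate with ffmpeg. Folder names are placeholders.
        import subprocess
        from pathlib import Path

        SRC = Path("music_320k")      # existing 320kbps rips
        DST = Path("music_for_car")   # smaller copies for the USB stick

        for src in SRC.rglob("*.mp3"):
            out = DST / src.relative_to(SRC)
            out.parent.mkdir(parents=True, exist_ok=True)
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(src),   # -y: overwrite without asking
                 "-codec:a", "libmp3lame",
                 "-b:a", "192k",                   # or "-q:a", "2" for VBR
                 str(out)],
                check=True,
            )

    With libmp3lame, ffmpeg’s -q:a values map to LAME’s VBR presets; -q:a 2 averages out at roughly 190kbps.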


  • I’m not too knowledgeable about the detailed workings of the latest hardware and APIs, but I’ll outline a bit of history that may make things easier to absorb.

    Back in the 1980s, IBM was still setting the base designs and interfaces for PCs. The last video card they released that became an accepted standard was the VGA. It was a standard because no matter whether the system your software was running on had an original IBM VGA card or a clone, you knew that calling interrupt X with parameters Y and Z would have the same result. You knew that there would be a 320x200 mode, and that in it you could write to the display buffer at memory location ABC, that what you wrote needed to be bytes indexing a colour table at another fixed address in the memory space, and that the ordering of pixels in memory was left-to-right, then top-to-bottom. It was all very direct, without any middleware or software APIs.

    But IBM dragged their feet over releasing a new video card to replace VGA, believing that VGA still had plenty of life in it. The clone manufacturers started adding little extras to their VGA clones: more resolutions, extra hardware backbuffers, extended palettes, and the like. Eventually they got sick of waiting and started releasing what became known as “Super VGA” cards, which were backwards compatible with the VGA BIOS interrupts and data structures but offered even further enhancements.

    The problem for software support was that it was a bit of a wild west in terms of interfaces. The market quickly solidified around a handful of “standard” SVGA resolutions and colour depths, but under the hood every card had quite different programming interfaces, even between different cards from the same manufacturer. For a while, programmers figured out tricky ways to detect which card a user had installed, and/or let the user select their card in an ANSI text-based setup utility.

    Eventually, the VESA standards (notably the VESA BIOS Extensions, or VBE) were created, and various libraries and drivers were produced that took a lot of this load off the shoulders of application and game programmers. We could make a standardised call to the VESA interface, and (virtually) every video card would perform the same action, or return an error code if it couldn’t. The VESA interface could also tell us where and in what format the card expected to receive its writes, so we kept most of the speed of direct access. This was mostly still under MS-DOS, although Windows also had video drivers at the time (used internally, through GDI, rather than exposed directly to third-party software).

    Fast-forward to the introduction of hardware 3D acceleration into consumer PCs. This was after the release of Windows 95 (sorry, I’m going to be PC-centric here, but 1: it’s what I know, and 2: I doubt that Apple was driving much of this as they have always had proprietary systems), and using software drivers to support most hardware had become the norm. Naturally, the 3D accelerators used drivers as well, but we were nearly back to that SVGA wild west again; almost every hardware manufacturer was trying to introduce their own driver API as “the standard” for 3D graphics on PC, naturally favouring their own hardware’s design. On the actual cards, data still had to be written to specific addresses in specific formats, but the manufacturers had recognized the need for a software abstraction layer.

    OpenGL on PC evolved from an effort to create a unified API for professional graphics workstations. PC hardware manufacturers eventually settled on OpenGL as a standard that their drivers would support. Around the same time, Microsoft had seen the writing on the wall with regard to games in Windows (they sucked), and had started working on the “WinG” graphics API back in Windows 3.1; after a time that effort became DirectX. Originally, DirectX only supported 2D video operations, but Microsoft worked with hardware manufacturers to add 3D acceleration support.

    So we still had a bunch of different hardware designs, but they shared enough fundamental similarities that a standard API could translate easily for all of them. And this is how the hardware and the APIs have continued to evolve hand-in-hand: from fixed pipelines in early OpenGL/DirectX, to less-dedicated hardware units in later versions, to the extremely generalized parallel hardware that prompted the introduction of Vulkan, Metal, and the latest DirectX versions.

    To sum up, all of these graphics APIs represent a standard “language” for software to use when talking to graphics drivers, which then translate those API calls into the correctly-formatted writes and reads that actually make the graphics hardware jump. That’s why we sometimes have issues when a manufacturer’s drivers don’t implement the API correctly, or when the specification turns out to have a point that isn’t defined clearly enough, so different drivers interpret the same call in slightly different ways.


  • In my (admittedly limited) experience, SDL/SDL2 is more of a general-purpose library for dealing with different operating systems than for abstracting graphics APIs. While it does include a simple 2D graphics abstraction layer, many people use it just to have the OS set up a window and a process, handle input and other housekeeping, and attach a graphics context to that window. From then on they talk to that context directly, using the appropriate graphics API rather than SDL. I’ve done it with OpenGL, but my impression is that using Vulkan is very similar. (There’s a minimal sketch of this flow at the end of this comment.)

    SDL_gui appears to sit on top of SDL/SDL2’s 2D graphics abstraction to draw custom interactive UI elements. I presume it also grabs input through SDL and runs the whole show, just outputting a queue of events for your program to process.
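
    Here’s roughly what that split looks like. I’ve written it with the Python bindings (PySDL2 plus PyOpenGL) just to keep it short, but the calls have the same names in C: SDL owns the window and the GL context, and everything after that is plain OpenGL.

        # Minimal sketch: SDL handles the OS-level window/context housekeeping,
        # then we talk to OpenGL directly. Uses PySDL2 + PyOpenGL.
        import sdl2
        from OpenGL import GL

        sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
        window = sdl2.SDL_CreateWindow(
            b"SDL window, OpenGL drawing",
            sdl2.SDL_WINDOWPOS_CENTERED, sdl2.SDL_WINDOWPOS_CENTERED,
            640, 480, sdl2.SDL_WINDOW_OPENGL,
        )
        context = sdl2.SDL_GL_CreateContext(window)

        # From here on SDL is only needed for buffer swaps and input events.
        GL.glClearColor(0.1, 0.2, 0.3, 1.0)
        GL.glClear(GL.GL_COLOR_BUFFER_BIT)
        sdl2.SDL_GL_SwapWindow(window)

        sdl2.SDL_Delay(2000)  # keep the window up long enough to see it
        sdl2.SDL_GL_DeleteContext(context)
        sdl2.SDL_DestroyWindow(window)
        sdl2.SDL_Quit()

    Input works the same way: you keep pulling events out of SDL’s queue while all of your drawing goes straight to the graphics API.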



  • I’m not sure how common they are outside Japan, but I have a little (about 12" I think) Panasonic “Let’s Note” that I use quite a lot as a lightweight coding (and retro/indie gaming :D) device that I can throw in even my smallest bag when there’s a chance I’ll have to kill more than a few minutes. They’re designed to be a little bit rugged. I had Ubuntu on it previously, now Mint, and the only problem I’ve had is that Linux somehow sees two screen brightness systems, and by default it connects the screen brightness keys to the wrong (i.e. nonexistent) one. Once I traced the problem it was a quick and painless fix.

    They seem to be sold worldwide, so you may be able to get one cheaply second-hand. One thing to be careful about is the fact that in order to keep the physical size down, the RAM is soldered to the board. Mine is an older model (5th-gen Intel Core), and has 4GB soldered on but also one SODIMM slot, so I was able to upgrade to 12GB total. But I’ve noticed that on most later models they got rid of the RAM slots entirely, so whatever RAM it comes with is what you’re stuck with.




  • Redkey@programming.dev to Linux@lemmy.ml · Virus · 3 months ago

    Before everyone had Internet at home? Well, there were bulletin boards, but even without those? Yeah, swapping floppies was how they got around. I got hit a few times as a teen, but the worst one actually came from a legitimate copy of a game I bought secondhand. It got into the boot sector and I had to nuke the HDD from orbit to get rid of that one. I’m just glad that software BIOS updates weren’t a thing yet.




  • MDN is great, especially for finding current best practice, but I’ve always found their material much more useful for reference once I’m already familiar with the general usage of whatever I’m trying to use. I often find it difficult to get to grips with something new just with MDN.

    I usually go and read W3Schools first. Much of it is a bit out of date, but not so much that it’s useless, and I find the tutorials much easier to digest. Once I’m comfortable with the basics, I switch to MDN to get up to speed on current practice.

    And OP, it sounds like you’re already wary of this, but don’t let yourself be tricked into using a hodge-podge of libraries for every little thing. A lot of JS programmers will tell you that you “need” some library or other to provide a function that you can often replicate with just two or three lines of raw JS, if you know what you’re doing.

    I think the JS library addiction stems from the bad old days of browser incompatibility, when almost everything had to be wrapped in layers of complex compatibility shims.




  • “Icons that are based on English puns and wordplay are easily understood by speakers of other languages.”

    This reminded me of one of those Top Gear “drive across a foreign country in weird vehicles” specials where Jeremy Clarkson needed to borrow a cable to jump-start his car, and laboriously mimed out jumping for “jump”, and walking a dog for “lead”, to a perplexed local. Richard Hammond was cracking up but finally managed to point out what a fool Clarkson was being.

    “Geolocation is an accurate way to predict the user’s language.”

    And as an addendum to this, in 2025 nobody should be using Windows’ “Language for non-Unicode programs” setting (the legacy ANSI code page) to guess the user’s preferred language. That’s a pre-WinXP kludge. I’m specifically looking at you, Intel integrated graphics software writers, but you have plenty of company, don’t worry.


  • Why be like that? Whether you think their position is silly or not, this person obviously gets called out on this a lot. And rather than pitch a fit over being needled about it for the umpteenth time, they responded with links that ought to satisfy any genuine curiosity. Considering the times I’ve seen an empty “Go educate yourself!” as a response from petulant children, I’d say buddy did us a solid. They don’t owe us a personalized response.


  • Yep, surveillance_records.person_id contains the same values as surveillance_records.id, which is incorrect. I looked at the GitHub repo and there’s already a report filed for it.

    What I don’t understand (and apparently this is my problem, not a bug) is how we’re supposed to narrow the list down to three suspects in the next-to-last step, as the “Case Solved” text describes (yeah, I cheated). The interviews with the two witnesses give a partial hotel name and a check-in date, but that returns dozens of results. The ending message congratulates us for reducing that list by using the surveillance records in some way, but I can’t see how. The only other detail I have is “The guy looked nervous”, which doesn’t seem to have any connection with the surveillance records.