Melody Fwygon

Beehaw alt of @melody@lemmy.one

@fwygon on discord

  • 1 Post
  • 85 Comments
Joined 2 years ago
Cake day: July 10th, 2023

  • If they’re adding an extortion/blackmail component that isn’t your bog-standard “oh, your files are now encrypted” malware, I’d say they’re getting desperate to extort the few victims they manage to infect with this crap.

    Ransomware is common enough now that people back up their data regularly and are fairly resistant to it, especially when the criminal demands far more money than the data they took hostage is worth to the victim. And since cloud services are ubiquitous, critical documents and photos are likely already backed up safely; the ransomware fails if all the user does is find someone techy to nuke the whole system and reinstall everything from the cloud backup.

    Using browser activity and webcam spying might seem clever, but it’s just a reaching maneuver to extort people who would ordinarily shrug off a ransomware infection yet still have poor enough opsec online to be hurt badly by such blackmail.



  • No. Not really anyways.

    HOWEVER… the AIs in question MUST BE competent enough. What counts as “competent enough” is likely to be flexible, and possibly even debatable, depending on the situation.

    What needs to be true is that the AI must not be capable of making the same mistakes a human could, and any mistakes the AI COULD POSSIBLY make must be ones that any human could reasonably and easily catch.

    Unfortunately, the above IS NOT TRUE of current LLM-type AI implementations. These LLMs have no consciousness and no ability to reason beyond what any computer could. They have no creativity, despite being able to parse language and guess the next word.

    If you learned only the rules, grammar, and vocabulary of a specific language and were given absolutely zero context or cultural and historical teaching, that is what an LLM would look like. That by itself is not enough to replace jobs.

    Is that fact enough to stop heartless corporations from trying it? Hell. The. Fuck. No. They will try it anyways; they will ‘fuck around and find out’ on the off chance that it may save them money. They don’t care that lying to sell the product is literally the job of the company selling the ‘AI product’. The fact that some companies are that desperate to save cash is telling in and of itself about the state of the world right now…but that’s another topic for another day and another threaded post in another subcommunity on Beehaw.


  • In this case; the UK is reaching too far. Genuinely speaking; they don’t have the right to fine you if you don’t live or operate in that country. 4chan never did have any legal presence in the UK; even if it did accept ‘donations’ from UK citizens.

    At worst; the UK can block 4chan from being accessed in their country and seize any money sent to 4chan by their citizenry in the future. I doubt anyone would care if that’s what they did.

    The US even states specifically in its constitution that no citizen shall have laws imposed on them by another country that restrict their freedoms.


  • > If it’s a tool, you aren’t necessarily able to control what it does under your direction.

    This is false. A tool, by definition, is controlled by the user of said tool. AI is controlled by user input. Any AI that cannot be controlled by that input is said to be “misaligned” and is considered a broken tool. OpenAI lays out clearly what its AI is trained to do and not do. It is not responsible if you use the tool it created in a way that is not recommended.

    Any AI prompt fits the definition of a tool:

    From Merriam-Webster:

    2b: an element of a computer program (such as a graphics application) that activates and controls a particular function

    In my opinion, the AI should not be equipped to bypass its guardrails even when prompted to do so. A hammer did not tell you to use it as a drill; its user decided to do that.

    The user alone has the creativity to use the tool to achieve their goal.


    > AI reads your input and guesses what to output. It’s just really good at that. It has no concept on the actual meaning of those words and how they will be interpreted.

    Yep. AI is a tool. The user is still responsible for the right-ness or wrong-ness of how they choose to use it.

    > Lots of blame could be thrown around instead of addressing the larger issue.

    You’re moving the goalposts here to absolve the parents’ lack of care. That isn’t right.

    > As for jail breaking AI…I don’t think a private corporation should have any input in what I say, believe or think. The hammer manufacturers can’t stop me from using it as a drill. This whole argument goes back to the old who do you blame question…the gun manufactures, the gun stores or the murderers with the guns.

    Oh look, more goalpost movement and even reframing my argument, which was simple: The AI should not assist the user in jailbreaking itself.

    Seriously; do not reply again. Your arguments did not work.



  • It looks harmless on the surface, but it is still, in fact, boiling the frog.

    Thankfully the rollout seems fairly slow; that should be enough time for those of you who find this concerning to switch to a custom ROM which eschews this safeguard.

    With luck this will even be something we can turn off. I would certainly demand the ability to turn this security setting OFF, even if it ships “Default - ON” to protect normal users who do not usually need to sideload unsigned apps.

    I don’t like it myself. If we are not given a choice; I will likely flash my device over to an Open Source ROM that respects my privacy more.

    For developers, this might be a good time to make sure that there are people who can “register” semi-anonymously and share the signing keys. Genuinely, I think something could be figured out, and private registrations could become a thing, where one person capable of registering simply vouches for a number of developers they personally know by sharing the necessary signing keys so that they too can contribute to an app project.

    I think the whole implementation can’t be immune to key sharing, and I do think it’s possible to have one dev deal with the devil…Google in this case.

    While I understand some projects will rightfully not want to hand information over to Google; usually because they’re being legally attacked by Google; I believe it will be possible to simply use wider shared keys to misdirect and deflect any unwanted legal action.


  • With the obvious exclusions mentioned up front here, where you should see them first:
    • IGNORANCE, regardless of whether it was willful or blissful unawareness of the dangers
    • AI researchers…and other research interests
    • Science involving intelligence
    • Other Computer Science tinkering and experimenting…

    I can’t imagine why anyone would allow an AI to interact with files that have not been thoroughly backed up and secured on a disk that is detached from any system the AI is running on.

    Secondly, I cannot imagine why one would ever permit the AI to use move commands when pulling files from a directory outside the directory you explicitly designate as the AI’s workspace (a rough sketch of that kind of restriction follows at the end of this comment).

    Third, why not make sure all the files are in the right places yourself? It takes maybe 5 minutes tops to crack open a file explorer window and do the file operations exactly as you intended them; that way you ensure a ‘copy’ operation, and not a ‘move’ operation, is used on the files, while doing any versioning, backing up or checkpointing that is desired.

    Last of all, why would someone use an LLM to issue simple commands to a machine that they could easily handle with one CLI command or one GUI interaction? If you can type an entire sentence in natural language to an AI, and you are skilled enough to set up and use that AI agent as a tool, why not simply type the command you intended, or do the GUI interaction necessary to do the task?
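    To make the second point concrete, here is a minimal sketch in Python of the kind of file tool I would be willing to hand an agent. Everything here (the WORKSPACE path, the agent_copy name) is hypothetical and just illustrates the restriction I mean: copy only, never move, and never write outside an explicitly designated workspace.

    ```python
    import shutil
    from pathlib import Path

    # Hypothetical example: the only file operation exposed to the agent.
    # It copies (never moves), and it only writes inside WORKSPACE.
    WORKSPACE = Path("/home/user/ai-workspace").resolve()

    def agent_copy(src: str, dst: str) -> Path:
        """Copy a file into the agent's workspace; refuse anything else."""
        src_path = Path(src).resolve()
        dst_path = (WORKSPACE / dst).resolve()

        # The destination must stay inside the workspace, even if 'dst'
        # tries tricks like '../../'.
        if not dst_path.is_relative_to(WORKSPACE):
            raise PermissionError(f"{dst_path} is outside the workspace")

        dst_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_path, dst_path)  # copy, never shutil.move
        return dst_path
    ```

    The point is not the specific code; it is that the agent never gets a ‘move’ primitive at all, so the worst it can do is clutter its own workspace.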


  • Again, I must reiterate how wrong you are.

    People can and do travel and move to different countries with their consoles. There can be multiple accounts per console. People can feasibly have two consoles right next to each other connected to different networks and swap carts between them. People can change consoles because they upgraded or because they have multiple consoles in the household. And people can and do resell carts all the time.

    These situations do not matter, because the detection logic is very simple: is cartridge A with serial ABC in more places than is reasonably expected of that cartridge? With physical copies that limit is exactly 1 place, 1 system at a time, irrespective of who it’s registered to or who owns it. If a cartridge has been in more than one place at one time, your system cert is logged and inserted into the next ban wave / wave of system cert revocations. This revocation goes live on Nintendo’s servers. Your system will not get the Online Service kiss of death until after that happens.

    Other checks, such as location, account, and how often it happens, can and may run after this one to automatically limit false positives and prevent you from being instantly banned. But their system works, and it is consistent about which condition triggers it: the identity of a physical or digital game title being in more places than it is licensed to be in (actively caught piracy). A toy sketch of that check follows at the end of this comment.

    > And there is no way to differentiate those scenarios even if you can/could track each cart individually.

    Except that they can, and do. See other comments around for the how and why…it’s related to Nintendo Gold Points.

    > There could be a record of which consoles have played which carts, but that gives you exactly zero information about how many owners the cart has had.

    There absolutely is. An unmodified Switch console reports this sort of telemetry to Nintendo on a regular basis, and it’s clear that they can ban your system based on bad Title IDs (basically fake title headers, or dumped cartridge headers used to conceal flash cartridge usage).

    > Switch accounts aren’t associated to consoles and physical game entitlements aren’t associated to accounts. Any account can be in any console at any time and instantly show in in multiple places and while you could account for travel times it’s a pretty pointless thing to do that, to my knowledge, Nintendo is not doing.

    They don’t have to be. Nintendo just has to log that your System Certificate reported a new title. That System Certificate is used in all traffic to Nintendo, as it authenticates your system to its network.
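    To illustrate the heuristic I am describing (this is purely my speculation about the logic; nothing here is Nintendo’s actual code or data), here is a toy sketch in Python: flag any cartridge serial that gets reported by more than one system certificate within the same short window.

    ```python
    from collections import defaultdict

    # Toy illustration of the speculated detection heuristic.
    # Data shapes and names are hypothetical.

    # Each telemetry report: (cart_serial, system_cert_id, timestamp_seconds)
    reports = [
        ("CART-DEF", "SYS-XYZ", 1000),
        ("CART-DEF", "SYS-GHI", 1005),  # same cart seen on a second console
        ("CART-JKL", "SYS-XYZ", 1010),
    ]

    WINDOW = 60  # "1 place, 1 system at a time" tolerance, in seconds

    def flag_systems(reports, window=WINDOW):
        """Return system certs whose cart was also seen elsewhere within the window."""
        seen = defaultdict(list)  # cart_serial -> [(timestamp, system_cert_id)]
        flagged = set()
        for cart, system, ts in sorted(reports, key=lambda r: r[2]):
            for prev_ts, prev_system in seen[cart]:
                if prev_system != system and ts - prev_ts <= window:
                    # A physical copy is licensed to exist in exactly one
                    # place at a time: queue both certs for the next ban wave.
                    flagged.update({system, prev_system})
            seen[cart].append((ts, system))
        return flagged

    print(flag_systems(reports))  # flags SYS-XYZ and SYS-GHI
    ```

    Secondary checks (location, account, frequency) would then run on the flagged set to weed out false positives, which matches the ordering described above.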


  • > They literally have no way to do so. There is no tool in the toolset to distinguish a cart someone else bought at the store from your own carts you bought at the store and then moved from a Switch 1 to a Switch 2.

    This is absolutely not true; it is entirely possible, and even suspected, that individual game carts are signed with unique serial IDs or even full certificates or cryptographic signatures.

    I think it’s more likely the previous owner did dump the cart on to a MIG Switch or similar ROM cart. While the NS1 cannot tell the difference; it can still be updated to do so.

    I think it’s likely that, in order to play titles online, your Switch 1 has to read the cart serial number from the cart, package it up, and sign it with the certificate from the system. So if that Nintendo Switch 1 already transferred the title out to a Switch 2, there would be a record on file with Nintendo saying “NS1 with serial XYZ transferred title cart ABC with serial DEF to Switch 2 with serial GHI”. Then, when you put that cart into a different Switch 2, it notices and informs Nintendo of the new title and cart serial…and Nintendo immediately picks up on the change of ownership (see the toy sketch at the end of this comment).

    That might not raise red flags if you handed the cart over to your friend next door; but it certainly might raise red flags if you air-mailed the cart over to your buddy a few countries over.
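    Again, purely to illustrate the record-keeping I am speculating about (hypothetical names and shapes, not Nintendo’s real schema or services), the check could be as simple as comparing the console that reports a cart against the console the transfer record says should currently hold that title:

    ```python
    # Hypothetical illustration of the speculated transfer-record check.

    # "NS1 with serial XYZ transferred title ABC on cart serial DEF to Switch 2 GHI"
    transfer_records = {
        ("TITLE-ABC", "CARTSERIAL-DEF"): "SWITCH2-GHI",  # expected current holder
    }

    def check_cart_report(title_id, cart_serial, reporting_console):
        """Flag a report when a transferred cart shows up on a console other
        than the one the transfer record says it was moved to."""
        expected = transfer_records.get((title_id, cart_serial))
        if expected is not None and expected != reporting_console:
            return (f"ownership change: {cart_serial} expected on {expected}, "
                    f"reported by {reporting_console}")
        return "ok"

    print(check_cart_report("TITLE-ABC", "CARTSERIAL-DEF", "SWITCH2-JKL"))
    ```

    Whether that flag becomes a red one would then depend on the softer signals mentioned above, like how far and how fast the cart appears to have travelled.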


  • In general, I disable read receipts wherever possible. On the rare platforms that don’t allow this, I also warn people that “seeing a ‘read receipt’ indicator does not mean I was available to reply.”

    In general; people who hang on to this little indicator are also committing a larger social faux pas, and you should { [(yellow/red) flag] / address / handle } it accordingly based on your relationship to that person, your goals and the situation.

    Whether that means ‘calling them out’, kindly explaining what the indicator actually means, or explaining your approach to communications, the behavior of expecting something to happen the moment a read receipt arrives needs to be discouraged, in my personal opinion.


  • I also think it’s important to point out separately that this software could be pared down into an easy-to-use package that lets you input your relevant taxable financial data; then, when it comes time to file your taxes, you print out a long report and take it to your tax professional, who can now easily read that report and use the information to file your taxes appropriately.

    Heck, that report might even make it drop-dead simple to fill out your own tax forms, should you decide that’s what you’d like to do; you could feed the relevant tax forms into the software and it would fill them out to the best of its ability using the data provided to it.


  • I don’t see the problem. This can be forked, enhanced, updated, and modified and that version released with an appropriate copyleft license.

    Sure; some companies will do the same with utterly horrific copyright licenses. That’s fine; as it’s in the public domain. We just need some group to work to provide an appropriate copyleft licensed version.

    Ideally we should be getting on this fast; but we all know FOSS work isn’t going to be fast if it’s done by volunteers. Some funds might need raising.

    In essence, CC0 is an ultimate copyleft license, as it does not even preclude copyrighting improvements in derivative works.



  • When it’s explained simply, it seems to make more sense to emotive thinkers.

    An example:

    “[Manufacturer] is [telling you that/acting like] you cannot choose apps outside of their [store/collection/catalog]; even if you, as an adult, would trust that app or need it to save your own sanity, health or life.”

    When I tell an Apple user that; they suck their teeth and try to make [noises/excuses] but in the end they do relent and admit that does suck. Not only can they not refute it logically, they cannot refute it emotionally.

    When I show them how I’ve riced out the experience of my Android Smartphone and how I can use my phone rapidly without encumbrance because I have everything at my fingertips in a workflow that comes native to me…they get jealous!

    Sadly, where I lose them is when I tell them about all the work I put in to achieve it. I have to break the news that going to the store and buying a phone with freedoms just like mine isn’t possible. Perhaps that’s where we need to attack these things.

    Basically; we need to make freedom look sexy. There will invariably be things we can do with our freed devices and software systems that they cannot hope to achieve. We have to endeavor to make that difference as pronounced and noticeable as possible. When we do; that’s when FLOSS communities swell and grow. When Linux got good at gaming with Proton; the numbers swelled. Linux became “sexy” because it could game. If we give users something they can have over their peers who don’t seek freedom respecting software, they will flock to it in droves…and the companies will be driven into the poorhouse for failing to meet the user demands.