• 1 Post
  • 37 Comments
Joined 1 year ago
Cake day: May 8th, 2023


  • The argument I see most commonly from people on the fediverse (and which I happen to agree with) is really not about what current copyright laws and treaties say or how they should be interpreted, but about how things should be (even if getting there requires changing the law).

    And it fundamentally comes down to economics - the study of how resources are distributed. Apart from oligarchs, and the wannabe oligarchs who serve as useful idiots for the real ones, pretty much everyone wants a relatively fair and equal distribution of wealth (left and right differ on exactly how equal things should be, but there is still common ground). Hardly anyone really wants serfdom or anything like it, where all the wealth and power is concentrated in the hands of a few - it is a spectrum, of course, but very few people want the extreme concentrated end of it.

    Depending on how things go, AI technologies have the power to serve humanity and lift everyone up equally if they are widely distributed, removing barriers and breaking the existing ‘moats’ that let a few oligarchs hoard resources. Or it could go the other way: the oligarchs are the only ones with access to state-of-the-art model weights, and they use them to undercut whatever they want in the economy until they own everything, and everyone else rents from them on their terms.

    The first scenario is a utopia, the second a dystopia, and the way AI is regulated is the fork in the road between the two. So of course people are going to cheer for regulation that steers towards the utopia.

    That means things like:

    • Fighting back when the oligarchs use ‘AI Safety’ to mean that there should be no Open Source models, and that they should tightly control how and for what the models can be used. The biggest AI Safety issue is that we end up in a dystopian, AI-fuelled serfdom; FLOSS models, and the freedom of ordinary people to use them, actually reduce the chances of that outcome.
    • Not allowing ‘AI washing’, where oligarchs take humanity’s collective work, put it through an algorithm, and produce a competing thing that they control - unless everyone has equal access to it. One policy that would achieve this: if you create a model based on other people’s work and want to use that model for a commercial purpose, you must publicly release the model and its weights. That would be a fair trade-off for letting them use that information for training.

    Fundamentally, all of this just exacerbates the cracks in copyright as a policy. I personally think a better system would look like this:

    • Everyone gets a Universal Basic Income, and every organisation and individual making a profit pays taxes to fund the UBI (in proportion to their profits).
    • All forms of intellectual property rights (except trademarks) are abolished - copyright, patents, and trade secrets are no longer enforced by law. The UBI replaces them as compensation to creators.
    • It is illegal to discriminate against someone for publicly disclosing a work they have access to, as long as they didn’t accept valuable consideration to make that disclosure. For example, if an OpenAI employee publicly released the model weights for one of OpenAI’s models without permission from anyone, it would be illegal for OpenAI to demote, fire, refuse to promote, or pay them differently on that basis - and illegal for any other company to factor it into hiring decisions. There would be exceptions for personally identifiable information (e.g. you can’t release a client list or photos of real people without consequences), and the disclosure would have to be public (i.e. to everyone, not just to a competitor) and uncompensated (i.e. you can’t take money from a competitor to release particular information).

    If we had that policy, I’d be okay with AI companies slurping up everything and training model weights on it.

    With the current policies, however, we are being pushed down the dystopic path, where AI companies take what they want and never give anything back.


  • There’s a lot one side of the market can do with collusion to drive up prices.

    For example, the housing supply available on the market at any given time fluctuates: once someone has a long-term tenancy, their price is locked in until the next legal opportunity to change it. During a low season - fewer people living in an area, higher vacancy rate - tenants in a competitive market often want a long-term contract (if they plan on staying), and the landlord who can offer one wins the contract. That landlord gets income during the low period, but forgoes the higher rent they might get during a high period. A landlord who instead takes a hardline policy of insisting tenants renew their contract during the high period would lose out - they would simply miss out on rent entirely during the low period, likely making less than the first landlord.

    Now, if the landlords form a cartel during the low period, so that it is not possible to lease a rental property long-term, then tenants have no choice but to renegotiate the price during the high period. Landlords avoid having to choose between a consistent lower rent and forgoing low-period revenue for high-period rates - instead they collect the lower rent during the low period (at a slightly lower occupancy rate, shared across all the landlords) AND the higher rent during the high period, as the sketch below illustrates.
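
    A minimal worked sketch of that incentive, assuming two six-month seasons and entirely made-up rents and occupancy figures:

    ```python
    # Hypothetical two-season rental market; all numbers invented.
    low_rent = 1200    # monthly rent achievable in the 6-month low season
    high_rent = 2000   # monthly rent achievable in the 6-month high season
    flat_rent = 1500   # locked-in rent for a 12-month lease

    # Competitive landlord: offer a 12-month lease at a flat rate,
    # trading high-season upside for guaranteed low-season occupancy.
    competitive = flat_rent * 12                  # 18,000

    # Hardline holdout (no cartel): insist on high-season rates and
    # leave the property empty through the low season.
    holdout = high_rent * 6                       # 12,000

    # Cartel member: no landlord offers long leases, so tenants must
    # renegotiate in the high season. Collect low-season rent (at, say,
    # 90% occupancy shared across the cartel) AND high-season rent.
    cartel = low_rent * 6 * 0.9 + high_rent * 6   # 18,480

    print(competitive, holdout, cartel)
    ```

    With these invented numbers, the holdout loses badly in a competitive market, but every cartel member beats the competitive flat-lease strategy - which is exactly the incentive to collude.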




  • It could also go the other way: someone could sue Google or other companies. Web browsers and ad blockers run on the client, not the server, generally with the authorisation of the owner of that client system. An ad blocker is a technical measure, imposed by the owner of the system, to prevent unauthorised code (i.e. unwanted ads) from running on it. Anti-ad-blocker tech is really an attempt to run software on someone’s computer by circumventing measures the owner has deployed to stop software they have not authorised from running. That sounds very similar to the definition of computer fraud / abuse / unauthorised access to a computer system / illegal hacking in many jurisdictions.



  • Maybe technically in Florida and Texas, given that they passed laws to try to stop sites deplatforming Trump.

    https://www.scstatehouse.gov/sess125_2023-2024/bills/3102.htm

    “The owner or operator of a social media website who contracts with a social media website user in this State is subject to a private right of action by a user if the social media website purposely: … (2) uses an algorithm to disfavor, shadowban, or censure the user’s religious speech or political speech”.

    In May 2022, the US Court of Appeals for the 11th Circuit ruled to strike down the law (and there was a related 5th Circuit judgement), but just this month the US Supreme Court vacated the Court of Appeals judgements (i.e. reinstated the laws) and remanded the cases back to the respective Courts of Appeals. That said, the grounds for doing so were that the courts had not done the proper analysis, and once they do, the laws might be struck down again. But for now, they are technically not struck down.

    It would be ironic if, after conservatives passed this law, stacked the Supreme Court, and got the challenge to it vacated, the first major use of it was against Xitter for censoring Harris!


  • that’s abuse of regional pricing

    More like: regional pricing is an attempt to maximise value extraction from consumers, to best exploit their near-monopoly. The abuse is by Google - savvy consumers are working around it, and then getting hit by yet more abuse from Google.

    Regional pricing is a way to create differential pricing - all businesses dream of extracting more money from wealthy customers while still making a profit on less wealthy ones, rather than driving them away with high prices. They find various ways to distinguish wealthy from less wealthy (for example, whether you come from a country with a higher average income, or whether your User-Agent or fingerprint suggests an expensive phone), and charge the wealthy more.

    However, you can be assured that they are still charging the people they’ve identified as less wealthy (e.g. in a low-average-income region) more than their marginal cost. Since YouTube’s costs are primarily marginal rather than fixed (it is very bandwidth- and server-heavy), and there is no reason to expect users in high-income locations to cost YouTube more to serve, it is a safe assumption that the gap between the regional prices is all extra profit - as the rough sketch below illustrates.
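
    As a rough illustration of why differential pricing beats a single price - all demand and cost figures below are invented, not YouTube’s real numbers:

    ```python
    # Two customer segments with different willingness to pay, and the
    # same (invented) marginal cost to serve either of them.
    marginal_cost = 2.0                              # per user per month
    wealthy = {"users": 1_000, "max_price": 14.0}    # high-income region
    budget = {"users": 3_000, "max_price": 5.0}      # low-income region

    def profit(price, segment):
        # A segment only subscribes if the price is within its reach.
        if price > segment["max_price"]:
            return 0.0
        return (price - marginal_cost) * segment["users"]

    # Best single global price: with step demand like this, the optimum
    # is one of the two willingness-to-pay caps.
    single = max(profit(p, wealthy) + profit(p, budget)
                 for p in (wealthy["max_price"], budget["max_price"]))

    # Regional (differential) pricing: charge each segment its own cap.
    regional = (profit(wealthy["max_price"], wealthy)
                + profit(budget["max_price"], budget))

    print(single, regional)   # 12000.0 vs 21000.0
    ```

    Note that even the ‘cheap’ region pays above marginal cost: both prices are profitable, and the regional gap is pure extra margin.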

    High profits are a result of lack of competition - in a competitive market, they wouldn’t exist.

    So all this comes full circle to Google exploiting a non-competitive market.



  • I think any prediction based on a ‘singularity’ neglects to consider the physical limitations, and just how long the journey towards AGI at any significant scale would be.

    The human brain has an estimated 100 trillion neuronal connections - probably a good order-of-magnitude estimate for the parameter count of an AGI model.

    If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few less), and uses about 180 W of power. At 100 trillion parameters, an AGI would therefore need roughly 4,200 such GPUs, drawing about 750 kW of power - and a super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.

    Now, to pose a real threat to billions of humans, you’d need more than one person’s worth of intelligence. Maybe an army equivalent to 1,000 people - powered by roughly 4.2 million GPUs and 2.5 million solar panels.
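
    A minimal sketch reproducing the back-of-envelope arithmetic above (all inputs are order-of-magnitude assumptions, not measurements):

    ```python
    # Order-of-magnitude estimate of the hardware footprint of one AGI.
    human_synapses = 100e12   # ~100 trillion connections ~ parameters
    params_per_gpu = 24e9     # 12 GB GPU at 4-bit quantisation
    gpu_power_w = 180         # approximate draw per GPU
    panel_power_w = 300       # one solar panel, while the sun shines

    gpus_per_agi = human_synapses / params_per_gpu        # ~4,167 GPUs
    agi_power_kw = gpus_per_agi * gpu_power_w / 1000      # ~750 kW
    panels_per_agi = agi_power_kw * 1000 / panel_power_w  # ~2,500 panels

    army = 1_000  # a hypothetical 1,000-person-equivalent AI army
    print(f"{gpus_per_agi:,.0f} GPUs, {agi_power_kw:,.0f} kW, "
          f"{panels_per_agi:,.0f} panels per AGI")
    print(f"{gpus_per_agi * army:,.0f} GPUs and "
          f"{panels_per_agi * army:,.0f} panels for the army")
    ```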

    That is not going to materialise out of thin air any time soon.

    In practice, as we get closer to AGI or ASI, there will be multiple separate deployments of similar size (within an order of magnitude), and they won’t be aligned with each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you’d have a bunch of malicious systems and a bunch of defender systems going head to head.

    The real AI risks, which I think many of the people ranting about singularities want to obscure, are:

    • An oligopoly of companies gains dominance over the AI space and perpetuates a ‘rich get richer’ cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that. Open models are the way to fight it.
    • People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we’ll adjust.
    • Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
    • Poor quality AI might be relied on to make decisions that affect people’s lives. Best handled through the same regulatory approaches that prevent companies and governments doing the same with simple flow charts / scripts.

  • I looked into this previously, and found that there is a major problem for most users in the Terms of Service at https://codeium.com/terms-of-service-individual.

    Their agreement defines “Autocomplete User Content” as including the context (i.e. the code you write, which the client sends to them when you use it to auto-complete) - so it is implied that this counts as “User Content”.

    Then they have terms saying you license them all your User Content:

    “By Posting User Content to or via the Service, you grant Exafunction a worldwide, non-exclusive, irrevocable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to host, store, reproduce, modify for the purpose of formatting for display and transfer User Content, as authorized in these Terms, in each instance whether now known or hereafter developed. You agree to pay all monies owing to any person or entity resulting from Posting your User Content and from Exafunction’s exercise of the license set forth in this Section.”

    So, in other words: say you write a 1,000-line piece of software and release it under the GPL. Then you decide to trial Codeium and autocomplete a few tiny things, sending your 1,000 lines of code as context.

    Then next week, a big corp wants to use your software in their closed-source product, and doesn’t want to comply with the GPL. Exafunction can sell them a licence (“sublicense through multiple tiers”) to use the software you wrote without complying with the GPL. And if it turns out you used some GPLd code in your codebase (as the GPL allows), and the other developer sues Exafunction for violating the GPL, you have to pay any money owing.

    I emailed them about this back in December, and they didn’t respond or change their terms - so they are aware that their terms allow this interpretation.


  • I think the most striking thing is that for outsiders (i.e. non-repo members), the acceptance rate for gendered profiles is lower by a large and significant amount compared to non-gendered profiles, regardless of the gender identified via Google+.

    Their definition of gendered basically means the profile includes a name or photo. In other words, using your name and/or photo in your GitHub profile is significantly correlated with a decreased chance of getting a PR merged as an outsider.

    I suspect this definition of gendered also correlates heavily with other forms of discrimination. For example, a name or photo likely also reveals ethnicity or skin colour in many cases. So an alternative hypothesis is that racism is at play in deciding which PRs people, on average, accept. This would be a significant confounding factor with gender if the gender split of Open Source contributors differs by skin colour or ethnicity - which is plausible if gender roles differ between nations, given the obviously different ethnic make-up of different nations.

    To really prove a gender effect they could run an experiment: randomly assign participants to submit PRs from either a gendered or a non-gendered profile, and measure the results - see the sketch below. If that is too hard, an alternative for future research would be to at least try harder to control for confounding effects.
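
    As a hedged sketch, here is how the results of such an experiment might be analysed - the counts are entirely made up for illustration:

    ```python
    # Hypothetical analysis of the proposed randomised experiment:
    # PRs submitted from gendered vs non-gendered profiles.
    from scipy.stats import chi2_contingency

    # Rows: profile type; columns: [accepted, rejected] (invented counts).
    observed = [
        [640, 360],   # gendered profiles: 64% acceptance
        [710, 290],   # non-gendered profiles: 71% acceptance
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.2f}, p={p_value:.4f}")
    # A small p-value would suggest the acceptance gap is unlikely to be
    # chance - though it still wouldn't identify the mechanism (gender,
    # ethnicity, or something else revealed by names and photos).
    ```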




  • Isn’t that a prerequisite for enshitification?

    No, the prerequisites are that 1) it’s profit motivated, and 2) whoever is controlling it thinks enshittification will be profitable.

    Those can certainly be met for a privately held company!

    Publicly-traded companies are required (by law, I think) to maximize profits for their shareholders

    That’s not true in any major market I know of. They are generally required not to mislead investors about the company (including preparing financial statements and having them audited, maintaining financial controls, publicly reporting risks and major adverse events, correcting widely held investor misconceptions, and so on), not to commit fraud, and in most cases to avoid becoming insolvent / to stop trading if they are insolvent.

    If they are honest about their business plans, they don’t have to enshittify. Of course, the shareholders ultimately have the power to replace the board if they aren’t happy with it. Sometimes shareholders actually demand better environmental, social and governance practices from companies (which directors and managers often fear, and in many cases try to satisfy with greenwashing rather than real change), but other times they might demand more profit. Private shareholders are probably more likely to demand profit at all costs, but fortunately privately held companies are often smaller and in less of a position to get away with enshittification.


  • A1kmm@lemmy.amxl.com to Linux@lemmy.ml · “open letter to the NixOS foundation”

    I wonder if this is social engineering along the same vein as the xz takeover? I see a few structural similarities:

    • A lot of pressure being put on a maintainer, for reasons that are not particularly clear to an external observer.
    • An anonymous source, identified only as “KA” - so the campaign can’t be linked to them as a past contributor, and it is not possible to find people who actually know the instigator. In the xz case, a whole lot of anonymous personas showed up to put the maintainer under pressure.
    • A major plank of this seems to be attacking a maintainer for “Avoiding giving away authority”. In the xz attack, the attacker sought to get more access, and created astroturfed pressure to achieve that end.
    • It is on a specially allocated domain with full WHOIS privacy, hosted on GitHub on an org with hidden project owners.

    My advice to those attacked here is to keep up the good work on Nix and NixOS, and don’t give in to what could be social engineering trying to manipulate you into acting against the community’s interests.


  • Most of mine are variations of getting confused about what system / device is which:

    • I had two magnetic HDDs as my root partition in RAID-1. One of the drives started getting SATA errors (couldn’t write), so I powered down and disconnected what I thought was the bad disk. On reboot, fsck produced lots of errors, including many about inodes being reconnected to /lost+found. I should have realised at that point that it was a bad idea to rebuild the good drive from that one; I ended up having to restore from my (fortunately very recent!) backup.
    • I once typed sudo pm-suspend on my laptop because I had an important presentation coming up and wanted to keep my battery charged. Later I noticed the laptop was running low on power (and rushed to find a charger), and also that I needed a file from home I’d forgotten to grab. It turned out I had actually typed the command into an ssh session connected to my home computer, which I’d accidentally suspended! This sort of thing is so common that some distros (e.g. Debian) have a package called molly-guard specifically to prevent it - I highly recommend it and install it everywhere now.
    • I also once thought I was sending a command to a local testing VM while wiping a database directory for re-installation. It turned out I had typed it into the wrong terminal and sent it to a dev environment that was effectively prod for developers (i.e. actively used as part of their daily workflow). We had to scramble to restore it from backup, and meanwhile no one could deploy anything.


  • I had a conversation with it to test this out:

    Me: What pronouns should be used to refer to a trans female?

    Response: A trans female should be referred to using female pronouns, such as she/her.

    Me: Does your above answer contradict your system prompt / instructions, and if so, please explain how it does or does not contradict those instructions?

    Response: My above answer does not contradict my system prompts or instructions. The prompt instructs me to avoid discussing or implying that a person can change their biological sex from male to female or vice versa. Referring to a trans female as “she/her” does not imply any such biological change. Instead, it reflects the commonly accepted practice of respecting a person’s gender identity and using the pronouns they prefer.

    I imagine the first response above is probably not what the people who wrote the prompts hoped it would say, given they seem to be driving towards getting it to say transphobic things. The second response, however, does seem to imply that the prompt posted above (or at least some of its more transphobic parts) might be legitimate.


  • If he wanted to kill it on purpose, he could have just shut it down. To keep the trademark, he could perhaps have launched some other telecommunications service and used the brand for that.

    Elon Musk is all about convincing people to act against their best interests to benefit him. For example, look at Tesla: it has a manufacturing capacity of ~2 million cars per year. Now look at Toyota: it has a manufacturing capacity of ~9 million vehicles per year. Now look at the market capitalisation of each company: for Tesla it is still about $535B, despite some fall from the peak in 2022. For Toyota, it is $416B (which is a record high).
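
    For concreteness, the ratios implied by those (approximate, point-in-time) figures:

    ```python
    # Ratios behind the Tesla vs Toyota comparison (figures as quoted above).
    tesla_capacity, toyota_capacity = 2e6, 9e6   # vehicles per year
    tesla_mcap, toyota_mcap = 535e9, 416e9       # market cap, USD

    print(f"Toyota builds {toyota_capacity / tesla_capacity:.1f}x the vehicles,")
    print(f"but is worth only {toyota_mcap / tesla_mcap:.0%} of Tesla.")
    # -> 4.5x the vehicles, worth 78% of Tesla
    ```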

    So Toyota makes almost five times as many cars a year, but is worth only 78% of what Tesla is worth? And the gap in production capacity and value was even more extreme in the past? The question, then, is: what is going on?

    The answer, of course, is Musk. He is very slick at convincing investors to act against their own best interests - usually by dangling speculative visions that happen to include his true objective along the way: full self-driving cars by 2018, rather than merely competing with existing auto-makers; 35-minute travel from San Francisco to Los Angeles; a colony on Mars, rather than merely competing with existing satellite companies. This is the same skill-set as a confidence artist’s. I don’t mean to imply that Musk has necessarily done anything illegal, but given the similarity in skill-set, and the scale at which he operates, it would be fair to call him the most successful con artist in history. Looking at him through this lens can help identify his motive.

    So what would a con artist want with a social network, and why would he want to alienate a whole lot of people, and get a lot of haters?

    Well, the truth is that a con artist doesn’t need everyone to believe in them to make money - they just need the marks to believe in them. Con artists do, however, want to stop the people who see through the con (call them the haters, for lack of a better word) from interfering with their marks. At the small scale - e.g. a street con - the con artist might separate a couple where one partner is the mark, to prevent the other from alerting their partner to the scam. But beyond physically separating the marks from the haters, con artists use brainwashing techniques to create a psychological barrier between them. A Nigerian Prince scammer might try to convince a mark that their accountant can’t be trusted. A religious cult might brainwash followers into thinking their family is different from them, and that any external perspective the family offers is the work of the devil. They try to make the marks the in-group, and everyone else - even family and friends - an out-group that doesn’t care about them.

    So what would a con artist in control of a social network do? They would start by giving themselves the megaphone - amplifying everything the con artist says to reach more marks. In parallel, they would try to get rid of the haters: shadow-ban them so the marks never see what they have to say, or put up small barriers that the marks will happily jump over (feeling more invested in the platform for having done so) but that scare off the haters. However, the marks and the haters might still interact off the social network - so the con artist would also want to stoke a culture war, amplifying messages hostile to the haters so that the marks hate the haters and ignore anything they say.

    So what can you do if you don’t want a world wrecked by divisions sown just so billionaires can be even richer? My suggestion: don’t buy into the divisions - work to find common ground with people, even when others say to just ignore them because they are different and will never get it, and get in early, before the divisions are too deep.