

“Social” isn’t part of the title. Meta is the company that acquired the site.
I also fail to see the ROI in buying a social media site for AI. There’s no advertising revenue to be made; at best you’re charging a subscription fee.


why would something have to be closed source in order to optionally provide secure boot? Couldn’t you provide the secure-boot-enabled binaries in addition to the source for everything except the boot keys?
This is also something I don’t fully understand. Unfortunately, it’s not easy to find the requirements for getting a bootloader signed by MS; it’s possible I’m mixing them up with the requirements for something else that involves an NDA, but they’re really not simple to find online.
It’s possible the latter is actually the case and it isn’t secure boot that requires it to be closed source. It’s also possible I’m entirely mistaken and they don’t need to make it closed source at all. I wish TrueNAS would give more details on why it needs to be closed source, whether it’s due to an NDA or something else.


You can use self-signed keys.
It’s basically like trusting your own TLS certificates on your own machine rather than going through a CA; realistically, though, businesses would rather use a CA.
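To make the TLS analogy concrete, here’s a minimal sketch of acting as your own authority with OpenSSL (file names are illustrative): you generate a self-signed certificate, and verification only succeeds because you explicitly chose to trust your own cert.

```shell
# Generate a self-signed certificate with no CA involved; -nodes skips
# key encryption and -subj avoids interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" -keyout key.pem -out cert.pem

# This only succeeds because we pass our own cert as the trust anchor:
openssl verify -CAfile cert.pem cert.pem
```

Secure boot self-signing is the same idea in spirit: you enroll your own key into the firmware, so the machine trusts you rather than Microsoft.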


Self sign doesn’t defeat the purpose
The whole point of signing is that the BIOS can verify that the bootloader is legitimate. For a local Arch install, it doesn’t matter because Arch doesn’t distribute signed bootloaders and the environment is wholly personal. TrueNAS sells products and services though, such as enterprise-level support. It isn’t just something used in home labs. Their customers may require things we do not, and secure boot support appears to be one of them.
Self-signing to work around the idiotic restrictions Microsoft imposes on getting it signed would be one way to do that, but then the software is essentially acting as its own authority that it is legitimate. Customers would realistically rather the bootloader’s signature be valid against the built-in key provided by MS, since that means MS is confirming its validity instead. Not exactly a name I would trust, but then I’m not a TrueNAS enterprise customer either.


This transition was necessary to meet new security requirements, including support for Secure Boot
Secure boot is dumb, but it explains why they’d need a repo to be closed source. Briefly: your bootloader has to be signed to work with secure boot at all, which leaves two options: self-sign (which defeats the purpose, though some Linux distros let you do this if you want), or follow all the requirements imposed by Microsoft. As far as I’m aware, one of those requirements is that the code be closed source.


why haven’t you bragged about using Arch yet?
Well Manjaro is Arch-based, but it feels like cheating to say that. Anyway, I used Manjaro, btw.


Hey even I use Linux daily.
Actually, I’m not really sure why “even I” should be shocking. I write code for a living. Surely I should be using Linux once in a while.
Anyway, RHEL is probably the only Linux distro I can think of that costs money and comes with support. The major cloud providers sometimes have their own Linux distros as well (looking at you, Amazon), and you could argue they’re selling Linux, but not as directly as RHEL does.


Red Hat.
The other distros? No idea.


It also affects subjects like atheism, as the various religious cultures generally do not want people contemplating the idea that there isn’t a god, especially not while they’re young; they want you long indoctrinated into belief before you can explore different ideas.
This reminds me of a Pakistani person. I don’t know them personally, but someone I know talks to them.
In their hometown, people recite verses from the Quran as part of their religious activities. There’s only one problem: the Quran they use is written in Arabic, but everyone there speaks Urdu. People don’t actually know what the passages say, just how to say them.
So this person once asked what the passages say: why do we read them in Arabic instead of Urdu, when people here don’t know Arabic?
Anyway, he got belted shortly after that.


It looks like this was briefly touched on in the article, but LLMs don’t learn shit.
If I tell you your use of a list is dumb and that using a set changes the code from O(n) to O(1) and cuts out 15 lines of code, you probably won’t use a list next time. You might even look into using a deque or a heap.
If your code was written by an LLM? You’ll “fix” it this time (by telling your LLM to do it), and then you’ll do it again next time.
I’m sorry, but in the latter case, not only are you handicapping yourself mentally, you’re actively making the project worse in the long term, and you’ve got me sending out resumes because, and I mean this in the politest way possible, go fuck yourself for wasting my time with that review.
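The list-vs-set point is easy to demonstrate; a quick sketch (sizes and names are illustrative):

```python
import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)

# Worst case for the list: the element we look up is at the very end,
# so every membership test scans all 100k entries. The set hashes
# straight to it.
needle = 99_999

list_time = timeit.timeit(lambda: needle in as_list, number=100)
set_time = timeit.timeit(lambda: needle in as_set, number=100)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

On a typical machine the set lookups win by several orders of magnitude, which is the whole O(n) vs. O(1) point.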


Right now it’s no big deal to any AI company, because more code means more training data for the AI. But will we get to the point where they’re happy enough with the code output that they turn around and claim they own it?
At least in the US:
The vast majority of commenters agreed that existing law is adequate in this area and that material generated wholly by AI is not copyrightable.
So it seems unlikely that they would be able to claim any ownership.
As for the rest of your comment (the parts around ownership): you always own the copyright for any copyrightable work you create, including code. When you post on a website, according to that site’s ToS, you’re licensing your comment/code/whatever to the website (you have to, for them to be able to publish your work at all).
Some websites (many, or most, depending on what you use) overlicense your work and use it for other purposes as well (like GitHub), but US judges have basically ruled that AI companies can pirate whatever works they want, without any attempt to license them, and still be fine, so the “overlicense” bit is more of a formality at this point anyway.
there should be a fork of dotnet.
Dotnet is maintained by the .NET Foundation and is entirely open source. There are thousands of forks and local clones of the repos under that organization. Rather than hoping someone else does this, it’d actually be a huge benefit to everyone for you to create a local clone of the repos and update them now and then, assuming you’re worried they might go down.
telemetry being totally removed
DOTNET_CLI_TELEMETRY_OPTOUT=1, though it’s lame that it’s opt-out rather than opt-in. At least the CLI gives a fat warning on first use (which hilariously spams CI output), but opt-out by default is really not great.
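For reference, opting out is just an environment variable; put it in your shell profile or CI environment to make it stick:

```shell
# Disables .NET CLI telemetry for this shell session and any child
# processes (such as `dotnet build`).
export DOTNET_CLI_TELEMETRY_OPTOUT=1
```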
an alternative to nuget.org
You can specify other package sources as well, so nothing technically stops someone from making their own alternative. That being said, you’d have to configure it for each project/solution that wants to use that registry.
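That per-project configuration is just a NuGet.config at the repo or solution root; a minimal sketch (the mirror URL and key are placeholders, not a real service):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- A hypothetical self-hosted mirror, tried alongside nuget.org -->
    <add key="my-mirror" value="https://nuget.example.com/v3/index.json" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```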
Setting such a thing up could be insurance in case they pull anything in the future, too.
The main thing I’d be worried about here is nuget.org itself getting pulled. As far as I can tell, it’s run by MS, not the foundation, and losing it would mean basically the entire package ecosystem gone all at once. Fortunately, it’s super easy to create private registries that mirror packages from nuget.org, and doing so is standard practice at many companies, so at the very least some of the registry could be recovered if that happened.
Between killing the telemetry and standing up an alternative registry, those would be the main goals I’d look for in a fork anyway.


Please cite one example of Microsoft ever giving a fuck about users.
There aren’t many examples, but one that comes to mind is the Xbox Adaptive Controller. It’s not cheap, but it’s also presumably low-volume, and it’s unbelievably configurable.
Outside of that, I’m out of ideas. In my experience, every good change comes in response to user backlash. I’ve since moved to Linux because I’m tired of dealing with what Windows has become.


The way it was presented with regard to search engines was that it was supposed to pull in data more up-to-date than the model’s training cutoff. It does do that, and on average it provides better results too.
But that’s just one domain, and “better” doesn’t mean “good” or “accurate”. In most domains, at least where I work, we’ve found that RAG overcomplicates things for little benefit, unfortunately.


The way the current systems are trained simply doesn’t allow for accepting and adopting new information continuously.
As further evidence of this, RAG was supposed to enable it. Instead, we’ve found that RAG is little more than an overused buzzword with limited applications, and it often results in hallucinations anyway.
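For anyone unfamiliar, the RAG pattern under discussion boils down to retrieve-then-prompt. A toy sketch, with naive keyword overlap standing in for the embedding search a real system would use (all names and documents here are made up):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved documents so the model can use fresh data."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The cache was updated in the 2024 release",
    "Bananas are yellow",
    "The 2024 release added secure boot support",
]
print(build_prompt("what changed in the 2024 release", docs))
```

The failure mode the comment describes shows up when retrieval surfaces something irrelevant: the model still happily weaves it into the answer.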


No idea who told you this, but MS employees use Teams exclusively.
As for it being terrible: it’s unfortunately hard to find a competitor that does better with the same feature set (video, screen sharing, text channels, SSO, tenants, etc.). Many get close (like Slack), but none have the whole package.


Since the bottom of an article is usually the least visible, I’ll paste this here to make it more visible:
“The Copilot Discord channel has recently been targeted by spammers attempting to disrupt and overwhelm the space with harmful content not related to Copilot. Initially, this spam consisted of walls of text, so we added temporary filters for select terms to slow this activity. We have since made the decision to temporarily lock down the server while we work to implement stronger safeguards to protect users from this harmful spam and help ensure the server remains a safe, usable space for the community,” a Microsoft spokesperson told Windows Latest.
Microsoft added that blocking terms such as “Microslop,” along with other phrases in the spam campaign, was not intended as a permanent policy but a short-term mitigation while the company manages to put additional protections in place.
Whether it’s true or not that the policy was temporary, I guess we’ll see.


HeliBoard is a privacy-conscious and customizable open-source keyboard, based on AOSP / OpenBoard. Does not use internet permission, and thus is 100% offline.


In some cases, it appears to be the opposite: CEOs want to do mass layoffs, so they blame AI rather than taking accountability themselves. The Amazon layoffs reek of this.


Is there a point to this? Back to the Future isn’t 2001: A Space Odyssey. It doesn’t have to predict everything.


Cars crash enough already for reasons spanning from shit driving to shit manufacturing. I don’t see the value in making them even more guaranteed to be lethal on failure, especially when innocent pedestrians and people’s roofs are downrange from these things.