

it looks like this only applies to react server components, and it doesn’t look like element uses react server components
but i only had a quick skim, so i could be wrong… personally i wouldn’t shut it down - not that i’m running a server myself


most things scale if you throw enough resources at them. we generally say that something doesn’t scale when the majority case doesn’t scale… it costs far fewer resources to scale with multiple repos than it does to scale a monorepo, thus a monorepo doesn’t scale. i’d argue even the google case proves that: they’ve already sunk so much into dev tooling to make it work… it might be beneficial to the culture (in that they like engineers to work across the entire google codebase), but it’s not a decision made because it scales: scale is an impediment
or fixing windows by only using WSL and reading the arch wiki
that’s a good and bad thing though…
it’s easy to reference code, so it leads to tight coupling
it’s easy to reference code, so let’s pull this out into a separately testable, well-documented, reusable library
my main reason for ever using a monorepo is to separate out a bunch of shared libraries into real libraries, and still be able to have eg HMR
google does a lot of things that just aren’t realistic for the large majority of cases
before kubernetes, you couldn’t just reference borg and say “well google does it” and call it a day
i’d say it’s less that it’s inadequate, and more that it’s complex
for a small team, build a monolith and don’t worry
for a medium team, you’ll want to split your code into discrete parts (libraries shared across different parts of your codebase, services with discrete test boundaries, etc)… but you still need coordination of changes across all those things, and team members will probably be touching every part of the codebase at some point
for large teams, you want to take those discrete parts and make them fairly independent, and able to be managed separately: different languages, different deployment patterns, different test frameworks, heck even different infrastructure
a monorepo is a shit version of real, robust tooling in many categories… but it’s quick to set up, and it gives you a path to easily change to better tooling when it’s needed
You should really not need to do a PR across multiple repos.
different ways of treating PRs… it’s a perfectly valid strategy to say “a PR implements a specific feature”, in which case you might work in a backend, a frontend, and a library… of course, those PRs aren’t intrinsically linked (though they do have dependencies between them… heck, i wouldn’t even say it’d be uncommon or wrong for the library to have schemas that require changes in both the frontend and backend)
if you implement something in eg the backend, and then get retasked with something else, or the feature gets dropped, then sure, it’s “working” still, but leaving unused code like that would be pretty bad… backend and frontend PRs tend to be fairly closely tied to each other
a monorepo does far more than i think you think it does… it’s a relatively low-infrastructure way of adding internal libraries shared across different parts of your codebase, adding external libraries without duplication (and ensuring versions are consistent, where required), coordinating changes, and plenty more
can these things be achieved with build systems and deployment tooling? absolutely… but if you’re just a small team, a monorepo could be the right call
of course, once the team grows in size it’s no longer the correct option… real tooling is probably going to be faster and better in every way… but a monorepo allows you to choose when to replace different parts of the process… it emulates an environment where everything is very separated
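as a concrete sketch of the “internal libraries” point (package names made up; pnpm’s workspace: protocol is just one way to wire it up):

```ts
// packages/shared-validation/src/index.ts (hypothetical layout)
// apps/web and apps/api both depend on this via pnpm's workspace protocol,
// eg "@acme/shared-validation": "workspace:*" in their package.json, so an
// internal change needs no publish/version-bump/install cycle
export function isValidEmail(input: string): boolean {
  // deliberately naive placeholder check
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input);
}
```

and because the apps import the library source straight out of the workspace, dev servers can HMR changes to it like any other file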
i’d say they’re pretty equivalent
a monorepo is far easier to develop a single-language, fairly monolithic (ie you need the whole application to develop any part) codebase in
(though as soon as you start adding multiple languages or it gets big enough that you need to work on parts without starting other parts of the application it starts to break down rather significantly)
but as soon as your app becomes less of a cohesive thing and more separated it becomes problematic… especially when it comes to deployments: a push to a repo doesn’t mean “deploy changes to everything” or “build everything” any more
i think the best solution (as with most things) is somewhere in the middle: perhaps several different repos, and a “monorepo” that’s mostly a bunch of subtrees or submodules… you can coordinate changes by committing to the monorepo (and changes are automatically duplicated out), or just work on individual parts (tricky with pnpm, since the workspace file would live in the monorepo)… but i’ve never really tried this: it’s just a thought i’ve had for a while


the zip file itself might also be generated on the fly (you can just tack random garbage into places in the zip format and it’ll be ignored by most readers - which is extremely quick to do), in which case the hash would change… the file itself is important in case it’s an exploit in the unzip program itself, but the contents of the file are important too
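a minimal node sketch of how cheap that is (filenames made up; a prefix is one easy place to put the garbage, since most zip readers locate the end-of-central-directory record by scanning backwards from the end of the file - it’s also how self-extracting zips work):

```ts
import { readFileSync, writeFileSync } from "node:fs";
import { randomBytes, createHash } from "node:crypto";

// a few random bytes in front of the archive change its hash, while most
// zip readers still open it just fine
const original = readFileSync("payload.zip");
const mutated = Buffer.concat([randomBytes(16), original]);
writeFileSync("payload-unique.zip", mutated);

const sha256 = (b: Buffer) => createHash("sha256").update(b).digest("hex");
console.log("original:", sha256(original));
console.log("mutated: ", sha256(mutated)); // different hash, same contents
```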


not entirely true. if the file is downloaded, windows does a bunch of “helpful” things with it… these are almost certainly benign (eg rendering thumbnails, getting metadata about certain file types) but almost anything is potentially exploitable (eg an overflow in thumbnail generation code could lead to code execution just from browsing a website and then opening your downloads folder in explorer)
drive-by attacks don’t just affect the browser
with that said, it’d be a huge deal if this were the reality of the situation… it’s highly unlikely, but zero days exist, and the possibility is always real
i say this because exactly this scenario has been exploited in the past: preview generation


new fabs are iffy… samsung chose not to scale up production because they’re betting that the AI bubble is just a bubble, and in that case any short-term expansion would be bad in the long term… building a DRAM fab takes years: let’s hope the bubble of AI enshittification doesn’t last that long


geopolitics is consistently hypocritical… especially when it comes to the US… we absolutely can, and should, be telling everyone to stop being imperialist, but in lieu of that, we can at least tell russia to cut the shit


or the argument holds water and also the US has consistently been in the wrong for the same reasons


i closed reader view and scrolled just to see, and wow: the POPUPS, and 50% of the page length being ads
WHAT
who uses the internet like this and finds it acceptable?!


buys you a little extra time to move to linux


the concept of someone working a 40hr week and not having money to relax, let alone pay their rent, is literally foreign to me
it’s wild that people can work 80hr weeks and still barely scrape by


australia kinda does it like that… our minimum wage is tied to CPI (which covers much more than just food: also entertainment, rent, transport), and afaik was originally based on living standards
“a wage that is fair and reasonable… sufficient to meet the normal needs of an average employee, regarded as a human being living in a civilised community.”
in fact, australia invented the concept of a “living wage” in 1907
so it should be exactly that today: enough for an average person to live a decent existence (including entertainment, food, housing, etc)


and starting with this model leaves room for a “steam machine pro” for people that want more, just like playstation has done
… and also perhaps a “steam machine lite” for people that just want a little bit of retro/2d gaming on their tv


meta and ctrl switched, because if there’s something apple did right, it’s using the thumb as the modifier key for copy/paste/etc instead of the pinkie finger, which is far FAR less able to deal with repetitive strain
but i also type programmers dvorak because i got pretty horrible wrist pain at one point so anything to stop me damaging my wrists :p
the vuln afaik is remote code execution via a mechanism that’s basically a transparent RPC to the server (think: you just write frontend code that calls something like a “getUsers” function, and it automatically retrieves and deserializes the results so you can render the UI without worrying about how that data got to the browser)
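roughly like this (a hand-wavy, next.js-flavoured sketch of that style of interface; getUsers and the data are made up):

```tsx
// actions.ts - the "use server" directive keeps this function on the server;
// the bundler exposes it to the browser as an auto-generated endpoint that
// serializes arguments and deserializes results transparently
"use server";

export async function getUsers(): Promise<{ id: number; name: string }[]> {
  // a real app would query a database here
  return [
    { id: 1, name: "ada" },
    { id: 2, name: "grace" },
  ];
}
```

```tsx
// UserList.tsx - a client component; calling getUsers() reads like a local
// function call, but the framework turns it into a network round-trip
"use client";
import { useEffect, useState } from "react";
import { getUsers } from "./actions";

export default function UserList() {
  const [users, setUsers] = useState<{ id: number; name: string }[]>([]);
  useEffect(() => {
    getUsers().then(setUsers);
  }, []);
  return (
    <ul>
      {users.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}
```

presumably the server side of that generated endpoint deserializing whatever the client sends it is the kind of place a bug like this lives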
i’m not a front end engineer, and haven’t used react server components, but i am a principal software engineer, i do react for personal projects, and have written react professionally
i can’t think of a way it’d be exploitable via purely client-side means
i THINK what they mean is that you can use some of the RSC stuff without the RPC-style interfaces, and in that case they say the server component is still vulnerable, but you still need react things running on your server
a huge majority of react code is client-side only, with server-side code written in other languages/frameworks, interfacing via something like REST or GraphQL (or even RPC of course)