I think that’s a better plan than physically printing keys. I’d also want to save the keys in another format somewhere - perhaps using a small script to export them into a safe store in the cloud, or onto a box I control.
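A minimal sketch of that export script, assuming a Windows host where manage-bde is on the PATH (the upload_to_store target is a hypothetical placeholder for whatever safe store you control):

```python
# Export a volume's BitLocker protector info (including the recovery
# password) and push it to an off-machine store. Sketch only.
import socket
import subprocess

def get_recovery_info(volume: str = "C:") -> str:
    """Return the BitLocker protector listing for a volume via manage-bde."""
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def upload_to_store(hostname: str, payload: str) -> None:
    # Placeholder: write to a cloud secret manager, an internal vault,
    # or that box you control - anything off this machine.
    raise NotImplementedError

if __name__ == "__main__":
    upload_to_store(socket.gethostname(), get_recovery_info())
```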
You need at least two copies in two different places - places that will not burn down/explode/flood/collapse/be locked down by the police at the same time.
An enterprise is going to be commissioning new computers or reformatting existing ones at least once per day. This means the BitLocker key list would need reprinting at least daily, in both places.
Given the above, it’s easy to see that this process will fail from time to time, in ways like accidentally leaking a document containing all of these keys.
I don’t think the anti-OOP collective is attacking polymorphism or overloading - both are important in functional programming too. And let’s add encapsulation and implementation hiding to that list.
The argument is that OOP encourages the wrong abstractions. Inheritance (as OOP models it) is quite rare among business entities. The other major example cited is that an algorithm written in the OOP style ends up distributing its code across the different classes, and therefore no single place shows the whole algorithm.
Instead of this, the functional programmer says, you should write the algorithm as a function (or several functions) in one place, so it’s the function that walks the object structure. The navigation is done using tools like apply or map rather than a loop in a method on the parent instance.
A key insight in this approach is that the way an algorithm walks the data structure is the responsibility of the algorithm rather than a responsibility that is shared across many classes and subclasses.
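A minimal sketch of that style in Python - the Order/LineItem shapes are hypothetical stand-ins for business entities, and the whole algorithm lives in one function:

```python
# The traversal belongs to the algorithm: one function walks the
# structure, using map/sum instead of a loop in a method on Order.
from dataclasses import dataclass

@dataclass
class LineItem:
    price: float
    quantity: int

@dataclass
class Order:
    items: list[LineItem]

def order_total(order: Order) -> float:
    """Compute the order total without any behaviour on the classes."""
    return sum(map(lambda item: item.price * item.quantity, order.items))

order = Order(items=[LineItem(9.99, 2), LineItem(1.50, 4)])
print(order_total(order))  # 25.98
```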
In general, I think this is a valid point when you are writing algorithms over the whole dataset. OOP does have some counterpoints: encapsulating behaviour on just that object, for example validating the object’s private members, or processing data for that object and its immediate children or peers.
This is exactly the answer.
I’d just expand on one thing: many systems have multiple apps that need to run at the same time. Each app has its own dependencies, sometimes requiring a specific version of a library.
In this situation, it’s very easy for one app to need v1 of MyCleverLibrary (and fail with v2) and another to need v2 (and fail with v1). And then at the next OS update, the distro moves to v2.5 and breaks everything.
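To sketch the clash in Python terms (the original problem is about OS-level shared libraries, and MyCleverLibrary is hypothetical): each app can fail fast when the wrong major version is installed, but one shared system-wide install can only ever satisfy one of them.

```python
# Hypothetical fail-fast version check an app might run at startup.
import importlib.metadata

def require_major(package: str, major: int) -> None:
    """Raise if the installed package's major version isn't the tested one."""
    installed = importlib.metadata.version(package)
    if int(installed.split(".")[0]) != major:
        raise RuntimeError(
            f"{package} {installed} is installed, but this app needs v{major}.x"
        )

require_major("MyCleverLibrary", 1)  # app A; app B would pass major=2
```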
In this situation, before containers, you would be stuck, or left with difficult workarounds such as per-app LD_LIBRARY_PATH settings that then break at the next update.
Using containers, each app has its own libraries at the correct and tested versions. These subtle interdependencies are eliminated and packages ‘just work’.
Not really. People shed skin and hair constantly, and the small particles float in the air and distribute themselves throughout the volume. And your bacteria are along for the ride. One of the functions of the protective suits, gloves and hairnets is to contain these particles and thus keep the air as clean as possible. When combined with laminar airflow, positive room pressure and other techniques, it keeps contamination down hugely.
My god, that brought back memories. The first commands when sitting down at a new terminal were always, always:
stty sane        # reset the terminal modes to sensible defaults
stty erase '^H'  # make the backspace key actually erase
It was well into the 2000s before Unix had usable defaults.
Typically you need about 1GB of graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B-parameter model. Ouch.
Edit: you can try quantizing it. This reduces the memory required per parameter to 4 bits, 2 bits or even 1 bit. As you reduce the precision, the quality of the model can suffer. So in the extreme case you might be able to run this in under 64GB of graphics RAM.
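The arithmetic, as a quick weights-only sketch (it ignores activations and framework overhead, so real usage will be somewhat higher):

```python
# Weights-only VRAM estimate: parameters * bits-per-parameter / 8 bytes each.
PARAMS = 405e9  # a 405B-parameter model

for bits in (8, 4, 2, 1):
    gib = PARAMS * bits / 8 / 2**30
    print(f"{bits}-bit weights: ~{gib:.0f} GiB")
```

At 1 bit per parameter that works out to roughly 47 GiB of weights, which is where the "under 64GB" figure comes from.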