Programmer, graduate student, and gamer. I’m also learning French and love any opportunity to practice :)

  • 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • You have to be explicit about which module you’re using at all times, even though 99% of the time only one could apply. When type class resolution is unique but complicated, there’s no mental overhead for the Haskell programmer, but assembling all the right modules is real overhead for the OCaml programmer. Type classes also let us write functions that are polymorphic under a class constraint; in OCaml you have to take a module argument explicitly to do this, and if you want to start composing such functions, it gets tedious extremely fast.
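
    For concreteness, a minimal Haskell sketch of what I mean (the function names are just for illustration):

    ```haskell
    -- Polymorphic under a class constraint: works for any type with a
    -- Num instance, and the compiler resolves the instance at each call
    -- site with zero syntactic overhead.
    sumSquares :: Num a => [a] -> a
    sumSquares xs = sum [x * x | x <- xs]

    -- Composing such functions needs no extra plumbing:
    biggerSum :: (Num a, Ord a) => [a] -> [a] -> a
    biggerSum xs ys = max (sumSquares xs) (sumSquares ys)
    ```

    The OCaml equivalent of sumSquares has to take a module argument, and biggerSum has to thread that module through every call.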

    And then even once you’re using a module, you can’t overload a function name. See: + vs. +. for integer and float addition. Basically, modules and type classes solve different problems. You can do some things with modules that you cannot ergonomically do with type classes; for example, create a bit-set representation of sets of integers and a balanced search tree for sets of other types, and expose both uniformly through the same module functor. But Haskell has other ways to achieve that same functionality and more.
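
    The usual Haskell encoding of that functor trick is an associated data family; a sketch (the class and constructor names here are mine):

    ```haskell
    {-# LANGUAGE TypeFamilies #-}
    import qualified Data.IntSet as IS
    import qualified Data.Set as S

    class SetOf a where
      data Set a
      empty  :: Set a
      insert :: a -> Set a -> Set a
      member :: a -> Set a -> Bool

    -- Ints get an int-specialized representation (Data.IntSet)...
    instance SetOf Int where
      newtype Set Int = IntSet IS.IntSet
      empty               = IntSet IS.empty
      insert x (IntSet s) = IntSet (IS.insert x s)
      member x (IntSet s) = IS.member x s

    -- ...while other ordered types get a balanced search tree, all
    -- behind the same uniform interface.
    instance SetOf Char where
      newtype Set Char = CharSet (S.Set Char)
      empty                = CharSet S.empty
      insert x (CharSet s) = CharSet (S.insert x s)
      member x (CharSet s) = S.member x s
    ```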

    OCaml’s type system cannot replicate what you can do with Haskell’s higher-kinded types, type families, or data kinds at all, and it supports only a fraction of Haskell’s GADTs.
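
    Two quick illustrations of things with no OCaml counterpart (the names are mine, not from any library):

    ```haskell
    {-# LANGUAGE TypeFamilies #-}

    -- Higher-kinded polymorphism: f ranges over type *constructors*
    -- ([], Maybe, IO, ...). OCaml can't abstract over a type constructor
    -- in its core language; you'd need a module functor per instantiation.
    pairUp :: Functor f => f a -> f (a, a)
    pairUp = fmap (\x -> (x, x))

    -- A closed type family: a compile-time function from types to types.
    type family Widened t where
      Widened Int   = Integer
      Widened Float = Double

    widen :: Int -> Widened Int
    widen = fromIntegral
    ```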


  • Largely reasonable?

    Haskell is not good for systems programming, which covers about 60-70% of that post. Laziness is lovely in theory, but many industry users of Haskell turn on StrictHaskell for all or most of their code, so I certainly agree with that part too.

    Their largest complaint about using Haskell for small non-systems programs seems to be the mental overhead induced by laziness. But for me, for small programs where performance isn’t a huge concern (think Advent of Code or a script for daily life), laziness reduces mental overhead. I think the author is just especially concerned with having a deep understanding of their programs’ performance because of their systems background. I worry about performance when it becomes relevant. Debugging Haskell performance issues is certainly harder than in strict languages, but still totally doable.
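
    For example, the kind of thing I mean (trial division, not efficient, but fine for a one-off script):

    ```haskell
    -- Laziness in the small: describe an infinite structure and pay only
    -- for the part you consume.
    primes :: [Int]
    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    main :: IO ()
    main = print (take 10 primes)  -- [2,3,5,7,11,13,17,19,23,29]
    ```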

    The lack of type classes or any other form of ergonomic overloading in OCaml is easily the single “feature” most responsible for the language never taking off.






  • AbelianGrape@beehaw.org to Programming@programming.dev · Code Smells Catalog

    “Monadic type” has something like three meanings depending on context, and it’s not clear which one you mean. One of them is common in math but not so common in programming, so probably not that one. But neither “parametric types with a single argument” nor “types that encode a category-theoretic monad” has the property you describe, as far as I know.

    I imagine you’re probably referring to the latter, since the optional monad exists. That’s very different from returning null. The inhabitants of Integer in Java, for example, are the boxed machine ints and null. The inhabitants of Optional[Integer] (it won’t let me use angle brackets here) are Optional.of(i) for each machine int i, Optional.empty(), and null.

    Optional.empty() is not null and should not be called a “Null object.” It’s also not of type Integer, so you’re not even allowed to return it unless the function type explicitly says so. Writing such function types is pretty uncommon in Java programs, but it’s more normal in Kotlin. In languages like Haskell, which don’t have null at all, this is idiomatic.
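
    Compare the Haskell idiom (safeDiv is just an example name):

    ```haskell
    -- Maybe Int is inhabited by Nothing and Just i for each i, and
    -- nothing else; there is no null hiding underneath.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- The type forces every caller to deal with the empty case:
    describe :: Int -> Int -> String
    describe x y = case safeDiv x y of
      Nothing -> "undefined"
      Just q  -> "quotient is " ++ show q
    ```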


  • I’ve only ever seen “one-time” in cryptography to refer to one-time pads (OTPs). They are literally uncrackable, because any given ciphertext is equally consistent with every possible plaintext of that length, but they achieve that by using a shared private key. The cipher becomes attackable if the key is re-used, hence the “one-time.”
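
    The scheme itself is tiny; a sketch in Haskell (assuming the key is truly random, at least as long as the message, and never reused):

    ```haskell
    import Data.Bits (xor)
    import Data.Word (Word8)

    -- Encryption and decryption are the same operation: XOR each byte
    -- with the corresponding key byte. Reuse is what breaks it, because
    -- xor-ing two ciphertexts made with the same key cancels the key,
    -- leaving the xor of the two plaintexts.
    otp :: [Word8] -> [Word8] -> [Word8]
    otp = zipWith xor
    ```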

    But that key has to be exchanged somehow, and that exchange can be attacked instead. Key exchange algorithms can’t necessarily transfer every possible OTP, which means eavesdropping on the exchange could make an OTP attackable. So the best option we know of that doesn’t require secret meetings to share OTPs* really is to use RSA encryption. Once we have efficient quantum-resistant schemes, they’ll be the best option we know.

    * and let’s be honest, secret meetings can be eavesdropped on as well.




  • I’m a computer scientist mainly, but with a heavy focus/interest in computer architecture. My plan at this point is to teach at a university - but it seems to me like that would be a good place to create completely open-standards technology from.^1 Specifically because if the point isn’t to make money, there’s no reason to create walled gardens.

    There’s certainly enough interest from people who want to be able to build their own systems. What would actually worry me isn’t the ability to make a new open standard or any of that. It’s that AMD64 is very hard to compete with in this space, because the processors are just faster, and there is so much x86 software that people who build PCs usually want access to.

    AMD64’s performance is the result of years and years of optimizations and patenting new hardware techniques, followed by aggressively litigating people trying to compete. ARM performance is catching up but ARM prefers licensing their core IP over making their own systems, making it harder for them to break into the PC space even if they want to.

    A new player would be in for a long, long time of unprofitable work just to compete with AMD64 - which most people are still happy with anyway.

    ^1 Some others and I are actually working on a new ISA and open soft processors for it. However, it’s aimed at an educational setting and is unlikely to ever be used beyond embedded devices, at most.



  • I’ve used it to fix regressions, most recently in a register allocator for a compiler. There’s pretty much no chance I would’ve found that particular bug otherwise; it was caused by an innocuous change (one of those “this shouldn’t matter” things) clashing badly with an incorrect assumption baked into a completely different part of the allocator.

    I had seen the same effect from an unrelated bug on a different program. When I added a new test and saw the same effect, I had a “didn’t I fix this already?” moment. When I saw that the previous fix was still there, I checked if an older version of the allocator exhibited the same bug on the new test, and it did not. Bisecting found the offending change relatively quickly and further conventional testing exposed the incorrect assumption.
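
    For anyone unfamiliar, the loop is the standard git bisect one (the ref and script names below are made up):

    ```
    git bisect start
    git bisect bad HEAD            # the new test fails here
    git bisect good last-release   # some ref where the test passes
    git bisect run ./regression.sh # exit 0 marks good, 1-124 marks bad
    git bisect reset
    ```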


  • Learning how to program in any language will make it easier to pick up any other language, because the main hurdle for a beginner is learning to think programmatically. Once you’re well past that wall, though, being an expert in one language mostly only helps you pick up languages that are similar. So if you knew C++, you could pick up the syntax and probably most of the semantics of the others very quickly, because they are similar in that regard. But you’d still probably struggle to actually program in C, because C is lower-level (has way fewer features) than C++.

    Technically speaking, C is very nearly a subset of C++ (a few corners of valid C are not valid C++), but even so, being a good C++ programmer doesn’t automatically make you a good C programmer.

    C# is similar to the other two in syntax as well, but it’s much more like Java than either of them.


  • If you want to make simpler games, you could start with Scratch or Stencyl. These tools aren’t really programming languages per se, but they let you build programs out of blocks that are much easier to visualize and play around with. There’s some research suggesting they’re good entry languages and some research suggesting they aren’t, so ymmv. I’ve used both, but I knew how to program already.

    For the record, you shouldn’t let “usually made with” drive your decisions. Java is still popular for some games: Slay the Spire, a very popular deck-building game, was written in Java, which is a decently popular choice if you want to support modding. But C++ and C# are more popular simply because that’s what you use with engines like Unreal and Unity, respectively.

    side note: C, C++, and C# are all different languages.





  • That makes Invidious’ README (which claims it uses no YouTube APIs at all) disingenuous at the very least.

    More likely, you need a lawyer. I read that TOS, and I think it applies to any YouTube API endpoint, internal or otherwise. Best of luck, because I agree with Invidious’ goals…

    Side note: a browser communicating with YouTube would be communicating with the public YouTube site, not with com.google.android.youtube.api or whatever. What I’m seeing is that Invidious tries to act like the YouTube service itself, which is very different from acting like a browser.

    Edit: I’ve spent about 5 minutes looking for EU case law about this but haven’t been able to find anything except un-cited references to an exception for “producing interoperable devices.” Do you have sources? In the United States, at least, “clean room reverse engineering” has a pretty specific definition that follows four steps:

    1. A (team of) engineers reverse-engineers an existing product, in this case, the YouTube internal API.
    2. Those engineers write a specification of the product’s (outwardly-visible) behavior.
    3. A lawyer reviews that specification to ensure that it does not contain anything infringing on any copyrights relevant to the product.
    4. A separate (team of) engineers re-implement the product according to the specification.

    I don’t think what you’re doing meets that definition. You achieved step 1, and possibly step 2, and then didn’t attempt the others. You reverse engineered something for the purpose of using it - but you haven’t actually reimplemented it, which is the “clean room” part of “clean room reverse engineering.” Re-implementing it would presumably require building your own server for actually hosting videos on Invidious instances.

    There’s quite a history of this term in the US, going back even before NEC v. Intel, when it was very much in the public eye. NEC had designed a microprocessor with the same instruction set as the popular Intel 8080 [same instruction set = interoperability]. Internally, both devices use “microcode” to drive their execution. In the analogy, that microcode is the “InnerTube” API. NEC’s V20 device was quite different from the 8080 and needed its own microcode. Intel claimed that NEC violated Intel’s copyright by basing NEC’s microcode on the 8080’s. As part of arguing the case, NEC rewrote its microcode from scratch following proper clean-room procedure, and the decision partly relies on this to find that NEC was in the clear. Had NEC simply injected the 8080 microcode into the V20 directly, the case would probably have gone very differently. It would also have been a very different case, because the V20 would have looked completely different.

    You didn’t re-implement InnerTube. You injected InnerTube into your own service. Had you re-implemented InnerTube as part of Invidious, Invidious would look completely different.

    Anyway, all that aside, even if what you’re doing did meet the conditions of clean-room reverse engineering, I don’t think it would fall under the (again, un-cited, so maybe we’re talking about different things) interoperability exception in the EU. You’re not producing a device/service that needs to be interoperable with other devices/services. You’re producing a service with an explicit goal of operating differently.

    To be clear, IANAL, but your reasoning seems shaky.


    It’s certainly possible to scrape data from interactions with a site directly, without using its API. This is even legal - there were no gymnastics in my response there. However, that decision has since been remanded, then re-affirmed, then challenged, and then LinkedIn obtained an injunction against HiQ, which the two of them are still fighting over. So it could get properly overturned.

    I definitely thought it seemed like it would be difficult to do this to offer a YouTube frontend, but plausible enough that I didn’t look into it. Thank you for this. I’m looking more closely now :)

    If they are using undocumented internal APIs, do YouTube’s API TOS apply to those? I checked the text of the TOS, and it seems to me like they should; the TOS say “The YouTube API services … made available by YouTube including ….” That seems broad enough to cover internal APIs as well, if their endpoints are accessible, but IANAL.

    Also, the open response to the C&D throws shade at the TOS for saying “The “YouTube API Services” means (i) the YouTube API services,” but it ignores that this is immediately followed by parenthetical examples and qualifiers. The TOS is defining the term so that it doesn’t have to repeat the qualifiers every time. Nothing weird about that. That’s uh… pretty bad-faith arguing, if I’m interpreting it correctly.

    Edit: assuming you refer to the same reverse engineering points that they made above… yeah.