• 39 Posts
  • 498 Comments
Joined 3 years ago
Cake day: June 11th, 2023

  • The only way out of this is regulation, which requires political activism.

    The EU made some good progress on that through GDPR and the newer digital laws regarding safety, disclosure, maintenance, and due diligence requirements. Enforcement with fines exists, but it is slow and arguably too sporadic.

    Political activism in this direction is thankless work and a lot of effort. I am reminded of someone who pushed for public institutions to move away from US big tech for many years. Now Trump, rather than their advocacy, is the reason for change, and their effort can surely feel pointless.

    I do occasionally report GDPR violations, etc. That can feel pointless as well. But it’s necessary, and the only way to prompt agencies to take action.



  • they asked me if I could develop some useful metrics for technical debt which could be surveyed relatively easily, ideally automatically

    This is where I would have said “no, that’s not possible”, or had a discussion about the risks: things you simply can’t cover with automated metrics lead to misdirection and possibly negative instead of positive consequences.

    The article then explores what technical debt is and notices that many things outside of technical debt also have significant impact you can’t ignore. I’m quite disappointed it never comes back to the metrics task. How did they finish it? Did they communicate and discuss all these broader concepts instead of implementing metrics?

    There are some metrics you can implement on code: test coverage, complexity by various measures, function body length, etc. But they only ever cover small aspects of technical debt. Consequently, they can’t be a foundation for (continuously) steering debt-payment efforts toward the most positive effects.
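    To illustrate how narrow such automated metrics are, here is a minimal sketch of one of them, function body length, using Python’s ast module (the sample source is made up):

```python
import ast

# Hypothetical sample source; the metric below only sees line counts,
# not design problems, coupling, or missing tests.
source = '''
def short():
    return 1

def long(xs):
    total = 0
    for x in xs:
        total += x
    return total
'''

tree = ast.parse(source)
# Function body length in lines, per function
# (requires Python 3.8+ for end_lineno).
lengths = {
    node.name: node.end_lineno - node.lineno + 1
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
}
print(lengths)  # {'short': 2, 'long': 5}
```

    Easy to compute and survey automatically, but it says nothing about whether the longer function is actually a debt item.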

    I know my projects and can make a list of issues, efforts, and impacts, and we can prioritize those. But I find the idea of (automated) metrics entirely inappropriate for observing or steering technical debt.


  • As a lead dev I have plenty of cases where I weigh effort against impact and risk and conclude “this is good enough for now”. Such cases are not poor management, by which I assume you mean something like “we have to ship more, faster, so take the shortest path”. Sometimes cutting corners is the correct and good decision, sometimes the only feasible one, as long as you’re aware of and weigh the risks and consequences.

    We, and specifically I, make plenty of improvements where possible and reasonable, in whatever code I visit, depending on how much effort it takes. But sometimes the effort is too large to be investable.

    For context, I’m working on a project that has been running for 20 years.




  • I would say doneness is about completeness within context, not immutability.

    The environment may change, but within context, it can still be considered done.

    It’s fine to say and consider software never done, because there are known and unknown unknowns and extrapolations and expectations. But I think calling something done has value too.

    It is a label of intention, of consideration, within the current context. If the environment changes and you want or need to use it, by all means update it. That doesn’t mean the done label assigned previously was wrong [in its context].


    We also say “I’m done” to mean our own departure, even when it’s not the product that is complete, but only our own tolerance.

    In the same way, if you shift focus, done may very well be done and not done at the same time. Done for someone in one environment, and not done for someone in another.

    More often than ‘done’ I see ‘feature complete’ or ‘in maintenance mode’ in project READMEs, which I think are better labels.



  • From the paper abstract:

    […] Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI.

    We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library.

    We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation – particularly in safety-critical domains.



  • Do good work, be interested and show interest, and be in a receptive environment.

    If your current environment is rife with power politics you can’t succeed in, and you want change, you’ll probably have to change environments.

    If you want impact, consider whether smaller companies and teams would be beneficial. You may be able to fulfill your desire for impact and control even without a formal lead role, or grow into one implicitly and more quickly in smaller, less formal and structured environments.

    You can also look for job offerings for those kinds of roles specifically. There’s no need to climb in-house when you can find more direct routes.


  • If the XML parser parses into an ordered representation (the XML information set), isn’t it then the deserializer’s choice how they map that to the programming language/type system they are deserializing to? So in a system with ordered arrays it would likely map to those?

    If XML can be written in an ordered way, and the parsed XML information set has ordered children for those, I still don’t see where order gets lost or is impossible [to guarantee] in XML.
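    As a small check of that reading, Python’s ElementTree (one common parser) does hand children to the consumer in document order; the element names here are made up:

```python
import xml.etree.ElementTree as ET

# Hypothetical order-significant document.
doc = "<steps><step>unpack</step><step>build</step><step>install</step></steps>"
root = ET.fromstring(doc)

# The parsed representation keeps the children in document order, so a
# deserializer targeting a language with ordered arrays/lists can map
# them directly.
steps = [step.text for step in root]
print(steps)  # ['unpack', 'build', 'install']
```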



  • while JSON is a generalized data structure with support for various data types supported by programming languages

    Honestly, I find it surprising that you say “support for various data types supported by programming languages”. Data types are particularly weak in JSON once you go beyond JavaScript: a single number type, no integer types, no date, no time, etc.
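    A quick sketch of that gap with Python’s standard json module (the field name is made up): dates have no JSON representation, so you end up encoding them as strings by convention:

```python
import json
from datetime import date

# JSON has no date/time type, so serializing a date fails outright.
try:
    json.dumps({"released": date(2023, 6, 11)})
except TypeError:
    pass  # "Object of type date is not JSON serializable"

# The usual workaround: encode as a string and agree on the format
# (here ISO 8601) out of band.
text = json.dumps({"released": date(2023, 6, 11).isoformat()})
print(text)  # {"released": "2023-06-11"}
```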

    Regarding use, I see, at least to some degree, JSON outside of use for network transfer. For example, used for configuration files.



  • Kissaki@programming.dev (OP) to Programming@programming.dev · The lost art of XML — mmagueta
    Making XML schemas work was often a hassle. You have a schema ID (usually a URL), and sometimes you can open or load the schema through that URL. Other times, it serves only as an identifier, and your tooling/IDE must support configurable mappings from schema IDs to local .xsd files.

    Every time it didn’t immediately work, you’d think: man, why don’t they just publish the schema under that public URL?



  • They can be used as alternatives. In MSBuild you can use attributes and sub-elements interchangeably, which, if you’re writing it, gives you a choice of preference. I typically prefer attributes for conciseness (vertical density), but switch to sub-elements once their length or number becomes a significant downside.
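    A sketch of what that looks like in an MSBuild project file (item and metadata names are illustrative; newer MSBuild versions accept item metadata as attributes):

```xml
<!-- Attribute form: concise, one line per item -->
<ItemGroup>
  <Compile Include="Util.cs" Link="Shared\Util.cs" />
</ItemGroup>

<!-- Equivalent sub-element form: easier to scan once metadata grows -->
<ItemGroup>
  <Compile Include="Util.cs">
    <Link>Shared\Util.cs</Link>
  </Compile>
</ItemGroup>
```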

    Of course, that’s more of a human-writing view. Your point about ambiguity in de-/serialization still stands, at least until the interface defines the expected behavior, either as a general mechanism one way or the other, or via a specific schema.


  • The readability and obviousness of XML cannot be overstated. JSON is simple and dense (within the limits of text). But look at JSON alone, and all you can do is hope for named fields. Beyond that, you depend on contextual knowledge of the specific structure and naming.

    Whenever I start editing JSON config files I have to be careful about trailing commas, matching opening and closing braces and brackets, placement, and field naming. The best you can do is ship a default-filled config file that already has the full structure.
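    For example, with Python’s strict json parser (the config key is made up), the trailing comma alone breaks the file:

```python
import json

# Strict JSON rejects trailing commas, a frequent hand-editing slip.
try:
    json.loads('{"debug": true,}')
except json.JSONDecodeError:
    pass  # the parser stops at the stray comma

# Without the trailing comma the same content parses fine.
parsed = json.loads('{"debug": true}')
print(parsed)  # {'debug': True}
```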

    While XML does not solve all of it, it certainly is more descriptive and more structured, easing many of those pain points.


    It’s interesting that web tech had XML in the early stages of AJAX, the dynamic web. But in the end, we sent JSON through XMLHttpRequest. JSON won.