• confusedpuppy@lemmy.dbzer0.com · 4 hours ago

    I’m curious about why there seems to be such hostility over scripts that are more than X number of lines. The number of lines considered the threshold for moving to a higher-level language is never the same from one person to the next, either.

    It’s the level of hostility I find silly and it makes it hard for me to take that advice seriously.

  • Caveman@lemmy.world · 11 hours ago

    I like using bash a lot for terminal automation, but as soon as anything goes beyond around 7-15 lines I reach for a scripting language like Python or JS. Bash is just really hard and counterintuitive.

  • MonkderVierte@lemmy.zip · 14 hours ago

    When to use what

    My advice is to optimize for read- and understand-ability.

    This means to use the || operator when the fallback/recovery step is short, such as printing an error or exiting the program right away.

    On the flip side, there are many cases where an if else statement is preferred due to the complexity of handling the error.

    Fully agree. Shell scripts quickly get ugly over 50 loc. Please avoid spaghetti code in shell scripts too. The usual

    if [ -n "$var" ]; then
        xyz "$var"
    fi
    

    is ok once or twice. But if you have tens of them,

    [ -n "$var" ] && xyz "$var"
    

    is more readable. Or drop the check entirely if xyz reports the error anyway.

    And please. do. functions. Especially for error handling, and also for repeated patterns. Take the example above: if it’s always xyz, then something like

    checkxyz() { [ -n "$1" ] && xyz "$1"; }
    
    checkxyz "$var1" && abc
    checkxyz "$var2" && 123
    checkxyz "$var3 || error "failed to get var3" 2
    

    is more readable.

    And sometimes a function is better for readability even if you use it only once. For example, from one of my bigger scripts (which I should have done in Python):

    full_path() {
      # Print an absolute path: strip a trailing slash, and prepend
      # $PWD if the argument is relative.
      case "$1" in
        /*)  printf "%s\n" "${1%/}";;
        *)   printf "%s\n" "$PWD/${1%/}";;
      esac
    }
    sanitize() {
      # Drop the extension, replace unsafe characters with spaces,
      # and squeeze repeated spaces.
      basename "${1%.*}" \
        |sed 's/[^A-Za-z0-9./_-]/ /g' \
        |tr -s " "
    }
    
    proj_dir="$(full_path "$proj_dir")"   # get full path
    proj_name="$(sanitize "$proj_dir")"   # get sane name
    

    Code as documentation basically.

    Right, about the last point: if your script grows past 200 loc despite being nicely formatted and all (if-else spaghetti needs more space too), consider moving on to a real programming language.
    Shell is really only glue, not much for processing. It quickly gets messy and hard to debug, no matter how good your debugging functions are.

  • cr1cket@sopuli.xyz · 18 hours ago

    Let me just drop my materials for a talk I’ve given about basically this topic: https://codeberg.org/flart/you_suck_at_shell_scripting/src/branch/main/you_suck.md

    Mainly because: The linked article is all nice and dandy, but it completely ignores the topic of double brackets and why they’re nice.
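
    For anyone who hasn’t run into them, a rough sketch of what double brackets buy you (this is a bashism, not POSIX):

    var="two words"

    # [ ] is an ordinary command, so the unquoted expansion word-splits
    # and breaks the test:
    #   [ -n $var ]    ->  bash: [: too many arguments

    # [[ ]] is shell syntax: no word splitting, plus pattern and regex matching
    [[ -n $var ]] && echo "set"
    [[ $var == two* ]] && echo "glob match"
    [[ $var =~ words$ ]] && echo "regex match"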

    And also, and this is my very strong opinion: if you end up thinking about exception handling (like the mentioned traps) in shell scripts, you should stop immediately and switch to a proper programming language.

    Shell scripts are great, I love them. But they have an area they’re good for, and a lot of areas where they aren’t.

    • MonkderVierte@lemmy.zip · 2 hours ago

      Do you need POSIX compatibility?

      • If not, use bash-isms without shame

      But call it a bash script then! Remember: #!/bin/sh can be run by all kinds of shells, so treat it as POSIX only. Bash gets #!/bin/bash.
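
      For illustration, a bashism that dies under a strict /bin/sh (dash is /bin/sh on Debian and Ubuntu):

      #!/bin/sh
      arr=(one two three)   # arrays are a bashism; dash aborts here with a syntax error
      echo "${arr[0]}"

      Swap the shebang for #!/bin/bash and the same lines run fine.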

  • Ephera@lemmy.ml · 1 day ago

    What I always find frustrating about that is that even a colleague with much more Bash experience than me will ask me what those options are if I slap a set -euo pipefail or similar in there.

    I guess, I could prepare a snippet like in the article with proper comments instead:

    set -e # exit on error
    set -u # error on expanding an unset variable
    set -o pipefail # a pipeline fails if any command in it fails, not just the last
    

    Maybe with the whole trapping thing, too.
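
    For reference, a minimal version of that trap might be:

    # on any failing command, report where it died and with what status
    trap 'echo "error: line $LINENO exited with status $?" >&2' ERR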

    But yeah, will have to remember to use that. Most Bash scripts start out as just quickly trying something out, so it’s easy to forget setting the proper options…

  • thingsiplay@lemmy.ml · 1 day ago

    As you’ll learn later in this blogpost, there are some footguns and caveats you’ll need to keep in mind when using -e.

    I am so glad this article is not following blind recommendations, as a lot of people usually do. It’s better to handle the error instead of closing the script at the command that caused it. I think the option -e should be avoided by default, unless there is a really good reason to use it.

    • thenextguy@lemmy.world · 1 day ago

      The point of using -e is that it forces you to handle the error, or even be aware that there is one.

        • Oinks@lemmy.blahaj.zone · 15 hours ago

          This is a great article. I just want to highlight this insane behavior in particular (slightly dramatized):

          set -e
          
          safeDelete() {
            false
          
            # Surely we don't reach this, right? Wrong: set -e is
            # suspended inside a function called from an if condition,
            # so the false above is silently ignored.
            echo "rm $@ goes brr..."
          }
          
          if safeDelete all of my files; then
              : # do more stuff
          fi
          

          Frankly, if you actually need robustness (which you don’t always), you should be using a real programming language with exceptions or result types or both (i.e. not C). UNIX processes are just not really up to the task.

      • thingsiplay@lemmy.ml · 1 day ago

        In my experience this option is too risky. Making simple changes to the script becomes impossible without rigorously proving and testing that it still works in all cases (depending on how complex the script and the task are). It has a bit of the energy of "well, just make no errors in C, then you write good code and it never fails".

        This option is good if the script MUST fail the moment any program it calls returns an error, which is usually not the case for most scripts. It’s also useful when debugging or developing, or if you purposefully enable and disable the option on the fly for sensitive segments of the script. I do not like this option as a default.
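
        That on-the-fly toggling can look like this (a sketch; the command names are placeholders):

        set -e
        critical_setup              # any failure up here should abort the script
        set +e
        grep -q "$pattern" "$log"   # exit status 1 (no match) is expected, not fatal
        found=$?
        set -e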

        • MonkderVierte@lemmy.zip · 11 hours ago

          This option is good if the script MUST fail the moment any program it calls returns an error

          I mean, it’s that or file mangling, because you didn’t catch an error from some unplanned use case.

        • Ephera@lemmy.ml · 1 day ago

          I don’t have the Bash experience to argue against that, but from general programming experience, I want things to crash as loudly as possible when anything unexpected happens. Otherwise, you might never spot it failing.

          Well, and never mind that it could genuinely break things if an intermediate step fails but the script keeps running.

          • thingsiplay@lemmy.ml · 1 day ago

            Bash and the command line are designed to keep working after an error, and I don’t want it to fail after one. It depends on the error, though, and how critical it is; this option makes no distinction. There are a lot of commands where a fail is part of normal execution. As I said before, this option can be helpful when developing, but I do not want it in production. Often “silent” fails are a good thing (but as said, it depends on the type). The entire language is designed to sometimes fail and keep working as intended.
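
            A classic case, with made-up file names: diff exits with status 1 when the files differ, which is information, not a malfunction.

            if diff -q old.txt new.txt >/dev/null; then
                echo "unchanged"
            else
                echo "changed"   # exit status 1, but nothing actually went wrong
            fi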

            You really can’t compare Bash to a normal programming language, because such a language is self-contained and developed as one piece, while Bash relies on random, unrelated applications. That’s why I do not like comparisons like that.

            Edit: I do not want to exit the script on random error codes, but maybe handle the error. With that option in place, I have to make sure an error never happens, which is not what I want.

            • Eager Eagle@lemmy.world · 22 hours ago

              Often “silent” fails are a good thing

              Silent fails have caused me to waste many hours of my time trying to figure out what the fuck was happening with a simple script. I’ve been using -e on nearly all bash code I’ve written for years - with the exception of sourced ones - and wouldn’t go back.

              If an unhandled error happened, I want my program to crash so I can evaluate whether I need to ignore it, or actually handle it.

            • Gobbel2000@programming.dev · 21 hours ago

              But you can just as well make an exception for allowed errors when -e is enabled, with something like command || true, or even a warning message.
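
              Concretely, something like this (the commands are placeholders):

              set -e
              rm -f "$tmpfile" || true                         # failure is fine, ignore it
              make check || echo "warning: checks failed" >&2  # or warn and keep going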

              I feel like allowing errors this way is, while it does occur, more unusual than stopping the script on an error, so it’s good to explicitly mark this case. Therefore -e is still a reasonable default in most cases.

        • Feyd@programming.dev · 1 day ago

          Ehhh, I don’t think I’ve used bash outside of random stuff on my machine in years, except in CI pipelines, and there, stopping and failing the pipeline the second anything goes wrong is exactly what I want.

          • thingsiplay@lemmy.ml · 1 day ago

            I do not want to think about every possible error that can happen. I do not want to study every program I call to look for any possible errors. Only errors that are important to my task.

            As I said, there are reasons to use this option when the script MUST fail on error. And it’s helpful when creating the script. I just don’t like the generalization to always enable this option.

  • FizzyOrange@programming.dev · 23 hours ago

    If you think you need this, you’re doing it wrong. Nobody should be writing bash scripts more than a few lines long. Use a more sane language. Deno is pretty nice for scripting.