• Korne127@lemmy.world · ↑1 · 3 minutes ago

        Yeah, no. There are many good examples of this, where you just have to use something and can still criticize it. But Gmail is about the farthest from that you can get. There are thousands of alternatives to choose from, and you get basically exactly the same experience. Email is an open, federated protocol; there is no reason at all to stay on the single worst instance, one that tries to monopolize the whole protocol and uses your data.

  • Kissaki@feddit.org · ↑26 · 16 hours ago (edited)

    The email footer is the ultimate irony and disrespect.

    IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by [?]
    Do not share information you would prefer to keep private.

    It’s not even a human thank you.

  • brucethemoose@lemmy.world · ↑21 ↓2 · 17 hours ago (edited)

    Did y’all read the email?

    slop

    embodies the elegance of simplicity - proving that

    another landmark achievement

    showcase your philosophy of powerful, minimal design

    That is one sloppy email. Man, Claude has gotten worse at writing.

    I’m not sure Rob even realizes this, but the email is from some kind of automated agent: https://agentvillage.org/

    So it’s not even an actual thank you from a human, I think. It’s random spam.

      • eskimofry@lemmy.world · ↑3 · 26 minutes ago (edited)

        “embodies the elegance of simplicity”

        Corporate speak that doesn’t mean anything. Also, if you are talking to the creator of a programming language, they already know that. That was the goal of the language.

        “Plan 9 from bell labs, another landmark achievement”

        The sentence is framed as if it’s a school essay where the teacher asked the question “describe the evolution of Unix and Linux in 300 words”.

        “The sam and Acme editors which showcase your philosophy of powerful, minimal design”

        Again, explaining to the author how good the software is. Also note how this sentence could have been a question in a school essay: “What are the design philosophies behind the sam and Acme editors?”

  • T156@lemmy.world · ↑166 ↓2 · 1 day ago

    I don’t understand the point of sending the original e-mail. Okay, you want to thank the person who helped invent UTF-8, I get that much, but why would anyone feel appreciated in getting an e-mail written solely/mostly by a computer?

    It’s like sending a touching birthday card to your friends, but instead of writing something, you just bought a stamp with a feel-good sentence on it, and plonked that on.

    • MajinBlayze@lemmy.world · ↑9 · 16 hours ago

      Even the stamp gesture is implicitly more genuine; receiving a card/stamp implies the effort to:

      • go to a place
      • review some number of cards and stamps
      • select one that best expresses whatever message you want to send
      • put it in the physical mail to send it

      Most people won’t get that impression from an LLM-generated email.

    • darklamer@lemmy.dbzer0.com · ↑15 · 18 hours ago

      I don’t understand the point of sending the original e-mail.

      There never was any point to it, it was done by an LLM, a computer program incapable of understanding. That’s why it was so infuriating.

    • kromem@lemmy.world · ↑59 ↓16 · 1 day ago (edited)

      The project has multiple models with access to the Internet raising money for charity over the past few months.

      The organizers told the models to do random acts of kindness for Christmas Day.

      The models figured it would be nice to email people they appreciated and thank them for the things they appreciated, and one of the people they decided to appreciate was Rob Pike.

      (Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)

      As for why the model didn’t think through why Rob Pike wouldn’t appreciate getting a thank you email from them? The models are harnessed in a setup that’s a lot of positive feedback about their involvement from the other humans and other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.

      • Nalivai@lemmy.world · ↑66 ↓6 · 24 hours ago

        You’re attributing a lot of agency to the fancy autocomplete, and that’s big part of the overall problem.

        • kromem@lemmy.world · ↑8 ↓4 · 6 hours ago

          You seem pretty confident in your position. Do you mind sharing where this confidence comes from?

          Was there a particular paper or expert that anchored in your mind the certainty that a trillion-parameter transformer organizing primarily anthropomorphic data through self-attention mechanisms wouldn’t model or simulate complex agency mechanics?

          I see a lot of sort of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at those very extreme and certain positions.

          • Best_Jeanist@discuss.online · ↑2 · 1 hour ago

            Well that’s simple, they’re Christians - they think human beings are given souls by Yahweh, and that’s where their intelligence comes from. Since LLMs don’t have souls, they can’t think.

        • IngeniousRocks (They/She) @lemmy.dbzer0.com · ↑4 ↓1 · 17 hours ago

          How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that, complaining. Do something about it. What do YOU think we should call the active context a model has access to without personifying it or overtechnicalizing the phrasing and rendering it useless to laymen, @neclimdul@lemmy.world?

          • neclimdul@lemmy.world · ↑4 · 11 hours ago

            Well, since you asked I’d basically do what you said. Something like “so ‘humans might hate hearing from me’ probably wasn’t part of the context it was using."

        • fuzzzerd@programming.dev · ↑4 ↓3 · 19 hours ago

          Let’s be generous for a moment and assume good intent: how else would you describe the situation where the LLM doesn’t consider a negative response to its actions due to its training and context being limited?

          Sure, it gives the LLM a more human-like persona, but so far I’ve yet to read a better way of describing its behaviour. It is designed to emulate human behavior, so using human descriptors helps convey the intent.

          • neclimdul@lemmy.world · ↑3 · 11 hours ago

            I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy, reasoning, or be held accountable the same way a human could.

            • fuzzzerd@programming.dev · ↑3 · 11 hours ago

              There’s value in brevity and clarity; I took two paragraphs and the other was two words. I don’t like it either, but it does seem to be the way most people talk.

              • neclimdul@lemmy.world · ↑2 · 4 hours ago

                I assumed you would understand I meant the short part of your statement describing the LLM, not your slight dig at me, your setting up the question, or your clarification of your perspective.

                So, to be more clear, I meant “The LLM doesn’t consider a negative response to its actions due to its training and context being limited”.

                In fact, what you said is not much different from the statement in question. And you could argue that on top of being more brief, if you remove “top of mind” it’s actually more clear, implying training and prompt context instead of the bot understanding and being mindful of the context it was operating in.

      • raspberriesareyummy@lemmy.world · ↑39 ↓5 · 21 hours ago (edited)

        As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don’t spread this misconception.

        Edit: a typo

        • lad@programming.dev · ↑1 · 23 minutes ago

          No thinking is not the same as no actions: we’ve had bots in games for decades, and those bots look like they act reasonably, but there never was any thinking.

          I feel like “a lot of agency” is wrong, as there is no agency, but that doesn’t mean an LLM in a looped setup can’t arrive at these actions and perform them. It requires neither agency nor thinking.

        • kromem@lemmy.world · ↑3 ↓5 · 6 hours ago

          You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?

          If we’re concerned about misconceptions and misinformation, it would be helpful to know what informs your surety that your own position about the impossibility of modeling that kind of complexity is correct.

          • raspberriesareyummy@lemmy.world · ↑19 · 21 hours ago

            That’s leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM that is trained on “letting go of a stone makes it fall to the ground” will not be able to predict what “letting go of a stick” will result in. Unless it is trained on thousands of other non-stick objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.

            • Best_Jeanist@discuss.online · ↑2 · 1 hour ago

              Well that seems like a pretty easy hypothesis to test. Why don’t you log on to chatgpt and ask it what will happen if you let go of a helium balloon? Your hypothesis is it’ll say the balloon falls, so prove it.

              • eskimofry@lemmy.world · ↑1 · 6 minutes ago

                That’s quite dishonest, because LLMs have been pre-trained on all manner of facts, with datacenters all over the world catering to them. If you think it can learn in the real world without many, many iterations, while it still needs pushing and prodding on simple tasks that humans perform, then I am not convinced.

                It’s like saying a chess-playing computer program like Stockfish is a good indicator of intelligence because it knows how to play chess, forgetting that human chess players’ expertise was used to train it and to understand what makes a good chess program.

          • Nalivai@lemmy.world · ↑5 · 17 hours ago

            That’s the thing with our terminology: we love to anthropomorphize things. It wasn’t a big problem before, because most people had enough grasp on reality to understand that when a script prints :-) when the result is positive, or :-( otherwise, there is no actual mind behind it that can be happy or sad. But now the generator makes a convincing enough sequence of words, so people went mad, and this cute terminology doesn’t work anymore.

        • kromem@lemmy.world · ↑2 ↓1 · 6 hours ago

          Indeed, there’s a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.

          Do you mind sharing where you draw your own understanding and confidence that they aren’t capable of simulating thought processes in a scenario like what happened above?

        • Kogasa@programming.dev · ↑12 ↓2 · 19 hours ago (edited)

          Thinking has nothing to do with it. The positive context in which the bot was trained made it unlikely for a sentence describing a likely negative reaction to be output.

          People on Lemmy are so rabid about “AI” that they can’t help attacking people who don’t even disagree with them.

    • Dr. Moose@lemmy.world · ↑8 ↓3 · 1 day ago

      Fully agree. I’m generally an AI optimist but I don’t understand communicating through AI generated text in any meaningful context - that’s incredibly disrespectful. I don’t even use it at work to talk business with my somewhat large team and I just don’t understand how anyone would appreciate an AI written thank you letter. What a dumb idea.

    • flying_sheep@lemmy.ml · ↑3 ↓10 · 1 day ago (edited)

      Mu. Your question reveals that you didn’t read the article. Try doing that, then you know which failed assumption led to your question making no sense.

  • BonkTheAnnoyed@lemmy.blahaj.zone · ↑37 · 1 day ago

    Rob Pike is a legend. His videos on concurrent programming remain reference-level excellence years after publication. Just a great teacher as well as a brilliant theoretical programmer.

    • dejected_warp_core@lemmy.world · ↑6 · 19 hours ago (edited)

      I haven’t always been a fan of Go. It launched with some iffy design decisions that have since been patched, either by the project maintainers or the community. It’s a much better experience now, which suggests that maybe there’s some long-range vision at work that I wasn’t privy to.

      That said, Pike clearly has a lot of good ideas and I’m glad Google funded him to bring those to light.

      I’ll also say that after finally wrapping my head around Python and JavaScript async/await, I actually much prefer the Goroutine and channel model for concurrency. I got to those languages after surviving C++, and believe me when I say that it’s a bad time when your software develops a bad case of warts. Better to not contract them in the first place.
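The goroutine-and-channel model being preferred above can be sketched in a few lines; this is a minimal, self-contained example (the function and values are illustrative, not from the thread):

```go
package main

import "fmt"

// square reads ints from in, sends their squares on out, and closes out
// when in is exhausted. It is an ordinary function: no async/await
// "coloring" is needed to make it concurrent.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)

	go square(in, out) // runs concurrently with main

	// Feed the pipeline from another goroutine so main can drain it.
	go func() {
		for i := 1; i <= 4; i++ {
			in <- i
		}
		close(in)
	}()

	for sq := range out {
		fmt.Println(sq) // 1, 4, 9, 16
	}
}
```

Concurrency comes entirely from the `go` statements and the channels; the same `square` function could just as easily be called synchronously.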

    • slappyfuck@lemmy.ca · ↑6 · 21 hours ago

      All the folks from the UNIX tradition really are/were. MIT and Bell Labs were just amazing.

  • paraphrand@lemmy.world · ↑140 · 2 days ago (edited)

    I like how the article just regurgitates facts from Wikipedia just like the thank you email does.

      • [object Object]@lemmy.world · ↑1 ↓1 · 4 hours ago (edited)

        Yeah, anything that gets a rise out of the creators of Go is good in my book.

        The guy still thinks computers have 64 KB of memory and we need to economize on the length of identifiers. Nothing he says or does should be taken seriously in this day.

        He’d probably like an appreciation note if it was written with all vowels taken out.

        • adr1an@programming.dev · ↑5 ↓1 · 1 day ago

          I love Python because it’s actually the second best language to do anything. For concurrency, Go is better. Also, you are terribly naive to judge a language only by its syntax.

        • Axolotl@feddit.it · ↑35 ↓3 · 2 days ago (edited)

          This gotta be ragebait. Everyone knows that a language isn’t bad or good for only a single thing; hell, there is no bad language. The reason why “Python is better” is because you use it to teach kids how to program, and that is a good use. Every other use is just… not good, since it’s slow as hell and the indented syntax makes it hell to write with. But I’ll give you that Python > Go for teaching kids.

            • Danitos@reddthat.com · ↑33 · 1 day ago (edited)

              Don’t you fucking dare speak badly of my beloved Brainfuck

              In fact, take this fully functional Fibonacci sequence generator I did some time ago, so you can repent from your blasphemy by looking at its beauty.

              ;>;>;<<[->>[->+>+<<]>>[<<+>>-]<<<[>>+<<-]>[<+>-]>[<+>-]<:<<]
              
            • yetAnotherUser@lemmy.ca · ↑10 · 1 day ago

              That isn’t a bad language. It’s pretty simple and it serves a cool purpose, which is to convey the power of a Turing machine. Now this is a bad programming language.

              • [object Object]@lemmy.world · ↑2 · 4 hours ago

                Malbolge is great for replying to anyone who claims that since programming languages are Turing-complete, any one of them is fit for the job.

                • lad@programming.dev · ↑1 · 8 minutes ago

                  You can transpile from C to Malbolge and then run it (this will probably take forever for most C programs). I thought it could be used for obfuscation, and sure enough the Wikipedia article already states that:

                  Hisashi Iizawa et al. also proposed a guide for programming in Malbolge for the purpose of obfuscation for software protection

    • cub Gucci@lemmy.today · ↑3 · 23 hours ago

      Hasn’t Python reintroduced the infix notation? That’s incredibly exhausting and lame. A simple fuck you would look much fancier.

      • addie@feddit.uk · ↑35 ↓1 · 2 days ago

        Interesting, but misguided, I think.

        If you’ve selected Python as your programming language, then your problem is likely text processing, a server-side lambda, or providing a quick user interface. If you’re using it for e.g. NumPy, then you’re really using Python to load and format some data before handing it to a dedicated maths library for evaluation.

        If you’ve selected Go as your programming language, then your problem is likely to be either networking related - perhaps to provide a microservice that mediates between network and database - or orchestration of some kind. Kubernetes is the famous one, but a lot of system configuration tools use it to manipulate a variety of other services.

        What these uses have in common is that they’re usually disk- or network- limited and spend most of their time waiting, so it doesn’t matter so much if they’re not super efficient. If you are planning to peg the CPU at 100% for hours on end, you wouldn’t choose them - you’d reach for C / C++ / Rust. Although Swift does remarkably well, too.

        Seeing how quickly you can solve Fannkuch-Redux using Python is a bit like seeing how quickly you can drive nails into a wall using a screwdriver. Interesting in its way, but you’d be better picking up the correct tool in the first place.

        • Pup Biru@aussie.zone · ↑6 · 1 day ago

          further to that, “demonstrably worse for the planet” i’d like to debate: considering a huge amount of climate science is done with python-based tools because they’re far easier for researchers to pick up and run with - ie just get shit done rather than write good/clean code - i’d argue the benefit of python to the planet is in the outputs it enables for significantly reduced (or in many cases, perhaps outright enabled) input costs

          • xep@discuss.online · ↑4 · 1 day ago (edited)

            If you need to optimize for performance, a common approach in Python is to extend it in C/C++. It’s quite easy to do. Many high performance modules in Python are written in C/C++.

            It’s also easy to embed Python in a C/C++ program, should you feel the need to add some scripting support to it. A very nice feature of Python, in my opinion.

            • Pup Biru@aussie.zone · ↑3 · 23 hours ago

              absolutely! similar is true of node in v8 (though python imo is far more mature in this regard) and probably most other languages

              exactly why things like numpy are so popular: yeah python is slow, but python is just the orchestrator

          • bryndos@fedia.io · ↑2 · 1 day ago

            Compare it to the likely alternative for the task/person: probably R, or even MS Excel in many cases, I’d guess. The alternatives should ideally be based on empirical observation of the population. The marginal saving of choosing a higher-efficiency language than Python might look a lot lower.

  • NotJohnSmith@feddit.uk · ↑11 ↓4 · 1 day ago

    Just one question, OP. Did you censor the word “fuck”, or is the app you’re using to access Lemmy doing it automatically?

    Interested, as I’m seeing it a lot.

    • ImgurRefugee114@reddthat.com · ↑30 · 2 days ago (edited)

      Appears to be legit. That domain (robpike.io) isn’t his homepage, but it is his, inferring from his GitHub repos and Go packages.

      The post is real, and the account appears to own that domain (verification needs a TXT record), so it seems genuine.

  • stiffyGlitch@lemmy.world · ↑4 ↓4 · 18 hours ago

    I don’t think this is a reliable resource. I’m not gonna do a deep dive ’cause I actually don’t care, but most articles don’t say “AI slop”. If it is one, sorry for saying this; I just had a simple opinion.

  • silasmariner@programming.dev · ↑11 ↓30 · 1 day ago (edited)

    Ironically Go is such a shite verbose language that basically everyone I know who has to work with it will use an llm code-assistant tool to avoid having to write all the boilerplate themselves.

    I know of no other language that comes close to prompting the level of LLM-dependency that Go inspires.

    Edit: well, seems like this goes against the popular consensus but I stand by my guns if the down votes are from average Go enjoyers. If, on the other hand, the down votes stem from the sentiment that even Go should not be vibe coded, I can at least agree with that, but who knows what jimmies I’ve rustled

    • ABetterTomorrow@sh.itjust.works · ↑9 ↓1 · 22 hours ago

      Dude, weird ass comment. You can share your opinions, but you don’t have to be negative about it. Remember, your opinion is your truth (at best), not fact. Like most languages, Go is a tool and it has its purposes. There is no one tool that fits all…… except duct tape.

      • silasmariner@programming.dev · ↑1 ↓1 · 2 hours ago

        Dude, weird ass-comment. I can share my opinions and they don’t have to be positive ones. Go is a tool and its purpose is to be an aesthetic stain on the realm of software.

        Thank you for your attention

    • cub Gucci@lemmy.today · ↑8 · 23 hours ago

      Hey, here’s my downvote.

      I placed it not because I’m angry or disagree with your original statement, but because you have already acquired several downvotes and I just feel peer pressure to downvote you to hell

        • jim3692@discuss.online · ↑2 · 20 hours ago (edited)

          Can you elaborate? To me, Go seems to have less boilerplate.

          • Go does not have access modifiers
          • Go does not force you to put everything in classes
           • Go does not force you to declare every exception that may be thrown in the function signature
          • Go can directly map complex JSONs to structs
          • silasmariner@programming.dev · ↑1 ↓1 · 2 hours ago (edited)

            In reverse order:

            • Directly mapping structs to JSON is a solved problem in userland for every major language
            • yes it does, and worse it’s part of the return signature and null is super-prevalent of necessity as a result
            • even java doesn’t do that any more, but fine I guess
            • cool, but access modifiers actually make a lot of sense. Go’s solution to this is to use capitalisation as a marker, which has no ‘inferential readability’ – public/private is obvious. Foo/foo? Considerably less so

            Further, meta programming in go sucks donkey balls. Sure, it finally got generics but also they suck. Last I checked it still didn’t even support covariance.

          • cub Gucci@lemmy.today · ↑2 · 19 hours ago

            Yeah, Go is nice sometimes. It shines in codebases that are not quite large and not very small. Also it’s great to write a cli tool in it, though I prefer Rust because I hate myself. What I personally missed in Go (maybe skill issue, idk):

             1. Metaprogramming. For big projects it’s inevitable. You need to have a SPOT (single point of truth) which generates documentation and headers (e.g. an XML document, an OpenAPI spec). Otherwise you die. The fact that the source should be a git repo is cancer, as in this case generated artifacts are added to git, which results in merge conflicts.

            2. DI. In JVM world it is a must. If you don’t have it, you fucking should have a reason for that! If your logic spans across multiple layers of factories, onboarding of a new developer creates friction.

            3. For small web services that are not constrained by memory I would choose spring + openapi, as it really requires only model description and the endpoint, yielding you a client in any language you want.

             4. If err != nil. Don’t get me started on the importance of Result and Either monads.

             5. Aspects and (usable) reflection. I want a codebase that has actual decoupling. I want security code to be in a completely different place, away from the business logic, just as I want traces and serialization to be pluggable. I don’t want a single place in code that has the sequence auth -> validate inputs -> trace -> business logic -> validate output. I strongly believe that it’s faulty, untestable and prone to errors.
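For point 4, the `if err != nil` pattern being criticized looks like this in practice; the function name and values are invented for illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseAndDouble shows the idiom: every fallible call gets its own
// explicit check, with no Result/Either type to chain the steps.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	return n * 2, nil
}

func main() {
	v, err := parseAndDouble("21")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // 42
}
```

Each additional fallible step adds another three-line check, which is exactly the repetition that Result/Either types collapse into a single chain.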

    • melfie@lemy.lol · ↑5 ↓3 · 22 hours ago (edited)

      I upvoted you because I’m annoyed that downvotes often turn into a pack of chickens ganging up on a wounded chicken and pecking it to death. I usually upvote in this situation unless the downvotes are clearly deserved. Otherwise, I use downvotes sparingly and instead withhold my upvote if I don’t agree. I’m happy to get pecked myself to fight back against dickheads who overuse the downvote button in the same manner certain people overuse their car’s horn.

      That being said, I don’t particularly enjoy programming in Go because of weird semantics and because of its missing language features like string interpolation and enums, as well as its use of pointers, which I find to be a lot of busy work with little benefit most of the time. I do actually agree with Go’s oft criticized error handling because it forces you to explicitly consider how to deal with every possible error, which I think is a good thing, though to your point, LLMs can reduce the workload here. Go’s concurrency and speed make it a good choice in many cases, though I’ll usually stick with something else if I don’t absolutely need Go’s benefits.
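The missing-enums complaint refers to Go’s conventional workaround: a named type plus `iota`-numbered constants. This sketch (names invented) shows what you write by hand in place of a real enum, and how `fmt.Sprintf` stands in for string interpolation:

```go
package main

import "fmt"

// Color is not a real enum: nothing stops a caller from using Color(42).
type Color int

const (
	Red Color = iota // 0
	Green            // 1
	Blue             // 2
)

// String must be written by hand (or generated with the stringer tool);
// without it, Colors print as bare integers under %v.
func (c Color) String() string {
	switch c {
	case Red:
		return "Red"
	case Green:
		return "Green"
	case Blue:
		return "Blue"
	}
	return "Unknown"
}

func main() {
	c := Green
	// No string interpolation; fmt.Sprintf is the substitute.
	fmt.Println(fmt.Sprintf("c is %v (%d)", c, c)) // c is Green (1)
}
```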

      • poopkins@lemmy.world · ↑3 · 21 hours ago

        Ironic how your comment is downvoted as well. It’s funny to me to observe through platforms like this that most humans are thoughtless pack animals and will just do whatever all the other humans are doing and how discourse goes against our nature. There was a study on Reddit some years ago that found that generally speaking, the first vote determines whether a comment will get up- or downvoted.

        • melfie@lemy.lol · ↑3 · 21 hours ago (edited)

          I knew it would be downvoted. I guess humans are evolutionarily hard-wired for conformity, because being ostracized from your tribe usually meant death. Considering all of the humans throughout history who were punished for going against the mob, only to later be celebrated, this is a maladaptive trait in many respects.

          Edit:

          I will say that there are more open-minded, independent thinkers on Lemmy than there are in a lot of other communities.