• 0 Posts
  • 25 Comments
Joined 7 months ago
Cake day: January 31st, 2024

  • This process is akin to how humans learn…

    I’m so fucking sick of people saying that. We have no fucking clue how humans LEARN, i.e. how we gather understanding, how cognition works, or what it even is. If anything, we can deduce that it probably isn’t very close to human memory/learning/cognition/sentience (or any other buzzword standing in for things we don’t understand yet), considering human memory is extremely lossy and tends to inject its own bias, as opposed to LLMs, which do neither and religiously follow patterns to a fault.

    It’s quite literally a text prediction machine that started its life as a translator (and it still does amazingly at that task); it just happens that general human language is a very powerful tool all on its own.
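    To make the “text prediction machine” point concrete, here’s a toy next-word predictor in Python. It’s a bigram count table, not a transformer, so it’s only a caricature of what LLMs actually do, but the task is the same: given the previous token, predict the next one.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which (a bigram table).
# Real LLMs replace this table with a neural network and condition on far
# more context, but the objective is identical: predict the next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — seen twice, vs "mat"/"fish" once each
```

    The model has no idea what a cat is; it only knows which word tends to come next.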

    I could go on and on, as I usually do on Lemmy when it comes to AI, but your argument is literally “a neural network is theoretically like the nervous system, therefore human”. I have no faith in getting through to you people.


  • Calling the reward system “hormones” doesn’t really change the fact that we have no clue where to even start. What is a good reward for general intelligence? Solving problems? That’s our current approach, and it has the issue that the AI doesn’t actually understand the problems and just ends up remembering question-answer pairs (patterns). We need to figure out what defines intelligence and “understanding” in an easily measurable way. That’s something people already knew almost a hundred years ago, when we came up with the idea of neural networks, and it’s why I say LLMs didn’t get us any closer to AGI.
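    The “remembering question-answer pairs” failure mode can be sketched in a few lines of Python. This is a deliberately dumb caricature, not an actual model: a lookup table answers perfectly on its training data and falls apart on any question it hasn’t seen verbatim.

```python
# A "model" that only memorizes: perfect on training data,
# useless on anything phrased even slightly differently.
training_pairs = {
    "2 + 2": "4",
    "capital of France": "Paris",
}

def memorizer(question):
    # No understanding, just recall; unseen questions get a made-up answer.
    return training_pairs.get(question, "confident nonsense")

print(memorizer("2 + 2"))     # "4"
print(memorizer("2 plus 2"))  # "confident nonsense"
```

    Real models interpolate between memorized patterns rather than requiring an exact match, but the point stands: matching patterns is not the same as understanding the question.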


  • In theory. Then comes the question of how exactly you’re going to teach/train it. I feel our current approach is too strict for proper intelligence to emerge, but what do I know. I honestly have no clue how such a model could be trained. I guess it would be similar to how people train actual brain cells? Though that field is very immature at the moment… The neat thing about the human brain is that it comes preconfigured for self-learning, though it also comes with its own bias about what to learn, due to its unique needs and desires.



  • You can think of the brain as a set of modules, but sensors and the ability to adhere to a predefined grammar aren’t what define AGI, if you ask me. We’re missing the most important module. AGI requires cognition, the ability to acquire knowledge and understanding. Such an ability would make large language models completely redundant, as it could just learn language, or even come up with one all on its own, like kids in isolation do, for example.

    What I was trying to point out is that “neural networks” don’t actually learn the way we do; using the word “learn” is a bit misleading, because it implies cognition. A neural network in the computer science sense is just a bunch of random operations in sequence. In goes a number, out goes a number. We then collect a bunch of input-output pairs, the dataset, and semi-randomly adjust those operations until they happen to roughly match the collection. The reasoning is done by the humans assembling the input-output pairs; that step is implicitly skipped for the AI. It doesn’t know why the pairs belong together, and it isn’t allowed to reason about why, because the second it spits out something else, that counts as an error and the whole process breaks down.

    That’s why LLMs hallucinate with perfect confidence, and why they’ll never gain cognition: the second you remove the human assembling the dataset, you’re quite literally left with nothing but semi-random numbers, and that’s why they degrade so fast when learning from themselves.
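    The “semi-randomly adjust the operations until they match the collection” description maps directly onto a random hill-climbing sketch. Real networks use gradient descent over billions of weights; this one-weight Python version just illustrates the same loop: tweak, check against the dataset, keep whatever scores better.

```python
import random

# Minimal caricature of fitting a function to input/output pairs.
# The dataset encodes y = 2*x; the "reasoning" (why each x maps to that y)
# lives only in whoever built these pairs, never in the model.
dataset = [(1, 2), (2, 4), (3, 6), (4, 8)]

def loss(w):
    # How badly the current "operations" miss the collected pairs.
    return sum((w * x - y) ** 2 for x, y in dataset)

random.seed(0)
w = random.uniform(-10, 10)  # start from a random operation
for _ in range(5000):
    candidate = w + random.uniform(-0.1, 0.1)  # semi-random tweak
    if loss(candidate) < loss(w):              # keep it if it fits better
        w = candidate

print(round(w, 2))  # ≈ 2.0 — pattern matched, nothing "understood"
```

    Nothing in the loop knows the dataset encodes “doubling”; it just keeps whichever number happens to fit.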

    This technology is very impressive and quite useful, and it demonstrates how powerful a tool language alone is, but it doesn’t get us any closer to AGI.



  • The 5-year-old baby LLM can’t learn shit and lacks the ability to understand new information. You’re assuming that we and LLMs “learn” in the same way. Our brains can reason and remember information, detect new patterns, and build on them. An LLM is quite literally incapable of learning a brand-new pattern, let alone reasoning about it and building on it. Until we have an AI that can accept new information without being told what is and isn’t important to remember, and how to work with that information, we’re not even a single step closer to AGI. Just because LLMs are impressive doesn’t mean they possess any cognition. The only way AIs “learn” is by countless people constantly telling them what is and isn’t important, or even correct. The second you remove that part, it stops working and turns to shit real quick. More “training” time isn’t going to solve the fact that without human input and human-defined limits, it can’t do a single thing. AI can’t learn from itself without human input either; there are countless studies showing how it degrades, and it degrades quickly: literally one generation down the line the output is absolute trash.
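    The degradation-when-training-on-itself claim can be illustrated with a toy experiment (a caricature of the model-collapse studies, not a reproduction of them): train a bigram model on some text, generate new text from it greedily, retrain on the generated text, and watch the vocabulary collapse after a single generation.

```python
from collections import Counter, defaultdict

# Toy demo of an AI "learning from itself": train a bigram model on text,
# generate new text from it greedily, retrain on that output, repeat.
def train(words):
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def generate(table, start, n=30):
    out = [start]
    for _ in range(n):
        options = table.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # always the likeliest word
    return out

text = "the cat sat on the mat and the dog sat on the rug".split()
print("human text:", len(set(text)), "distinct words")  # 8
for gen in range(3):
    text = generate(train(text), "the")
    print(f"generation {gen + 1}:", len(set(text)), "distinct words")  # 4, 4, 4
```

    One self-training generation halves the vocabulary, and the model then loops on the four words it kept. Real collapse is subtler (probabilistic sampling, huge corpora), but the mechanism is the same: rare patterns vanish first.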


  • Language models are literally incapable of reasoning beyond what is present in the dataset or the prompt. Try giving one a known riddle, changed so that it becomes trivial, for example: “With a boat, how can a man and a goat get across the river?” Despite it being a one-step solution, the model will still try to shove in the original answer, and often enough won’t even solve it. Best part: if you then ask it to explain its reasoning (not tell it what it did wrong, since that’s new information you’d be providing; ask it why it did what it did), it’ll completely shit itself, hallucinating more bullshit to justify the bullshit solution. There’s no evidence at all that they have any cognitive capacity.

    I even managed to break one through normal conversation: something happened in my life that was unusual enough to be missing from the dataset, and it was thus incomprehensible to the AI. It just wasn’t able to follow the events, no matter how many times I explained.



  • My company is very lenient about how I spend my time (as long as I’m somewhat in the office and get my work done, which I god damn do!) and it’s absolutely amazing. Often before a big release I run out of work, and since no one is tracking it, I can just work on optimizing/cleaning our code or fixing some UX issue. I mean, what are people gonna complain about? Me not doing the work I already completed? If they ever start tracking us, I’m jumping ship. Our new team pushes out the best code this company has ever had, and if that’s not enough, then nothing could be.

    I’m also confused about this whole “constant meetings” thing. At work I have an analyst who does the vast majority of client communication for me. From how people talk about work, I get the impression “analysts” aren’t a thing in other companies. My gf (also a developer) didn’t even know what an “analyst” could be. Like, seriously? I love that guy! Life would suck so much without him. The only meetings I attend are technical or educational in nature, plus our monthly team leader meetings, which only happen because he wants to make sure everything is OK with us.



  • GIMP’s layer system is definitely unique; sadly, it hasn’t much in common with the selection tool. In that sense, yes, it is unintuitive when migrating from other apps. I’d argue it’s not that complicated, as GIMP even highlights the buttons you should be pressing, like a mobile game, but it is a complete non sequitur, so back on topic…

    If you use “select all” in any program to cancel selections, I don’t know what to tell you. Like, OK, GIMP is the jankiest of them all if you do that, no contest, but the rest don’t behave correctly either, if your expectation is that everything will work just like it did before you selected anything. The flashing selection line around the whole page should be a pretty strong indicator that something is different.

    Honestly, many GUI programs (it doesn’t even have to be a raster art program; vector art like Illustrator, 3D modeling like Maya, some music programs, our custom spreadsheet stuff at work, even many file explorers), as far as I remember, all have the Ctrl-Shift-A shortcut, and all would behave quite differently if you used Ctrl-A expecting the same result. I’m genuinely at a loss as to where you’d get the idea to use Ctrl-A to cancel a selection. Like, I understand the intuition you proposed, but at what point do you just forget everything else you’ve ever done on your computer?


  • Inkscape is a vector art program; it is fundamentally different from any raster art program. Just download it and try to make just about anything with it: if you’ve never used a vector art program, you’ll be absolutely lost. Whereas if you know GIMP, Krita, or Photoshop, you at least have a basic understanding of the others.


  • LANIK2000@lemmy.world to linuxmemes@lemmy.world: “Ctrl + Shift + A”
    3 months ago

    I’m confused. I just tried the selection tool in GIMP and Krita on my PC, and Sketchbook on my tablet. They work the same way as far as I can tell: just select, draw in there, copy/paste, Ctrl-Shift-A to deselect. Moving is more convenient in Krita and Sketchbook, true, but that can’t be it, right? I’m at a loss.




  • LANIK2000@lemmy.world to linuxmemes@lemmy.world: “This is hilarious”
    3 months ago

    I hate how the US has sexualized every random word or sentence. Here I am telling my American friend how funny it is that Germans call a smartphone a “Handy”, like “haha, silly random word that makes sense though, haha :)”, but no, my American bud breaks down laughing, imagining Germans giving each other hand jobs.

    Also, the constant stopping mid-sentence to go “oh, I know what YOU’RE thinking, get your mind out of the gutter!”. No, I don’t, and now I have the privilege of trying to remember every single word that was just said and figuring out what in there could possibly be a penis. This form of “joke” never fails to annoy the shit out of me. Like, please, can we just continue, or do you really have to recite this copypasta while I stare at you like an absolute dunce?




  • You mean the 2 ProgramData folders? Although who the hell puts config stuff there? Anyway: the 2 official settings apps, the 3 AppData folders, and then the registry for every little thing Microsoft doesn’t want you to edit for whatever reason? And then the countless 3rd-party config apps for every device, aiming to make this process easier? Yeah, I totally don’t Google where to toggle stuff on Windows as step #1, noo… And W11 just has a slightly better 2nd official settings app, so sadly it’s not too different.

    Also, who the hell puts config stuff on Linux into /local or /share? It was always in ~/.config (personal) or /etc (system-wide) in my experience.
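    For reference, those paths come from the XDG Base Directory spec; a quick shell check shows the defaults (~/.local/share is for app *data*, not settings, which may be where the “/local or /share” confusion comes from):

```shell
# Per-user config: $XDG_CONFIG_HOME, defaulting to ~/.config
echo "${XDG_CONFIG_HOME:-$HOME/.config}"

# Per-user data (not settings): $XDG_DATA_HOME, defaulting to ~/.local/share
echo "${XDG_DATA_HOME:-$HOME/.local/share}"

# System-wide config lives in /etc
test -d /etc && echo "/etc exists"
```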