• 1 Post
  • 101 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • I wouldn’t pay for an AI subscription but I have no problem using my own PC for work on the condition that they give me a VM to remote into. Mainly because I like using my three big monitors and the shitty laptops my previous employers provided are either underpowered or locked down to the point where multi-monitor support is really poor.

    I do pay for tools that I use outside of work and if it’s something that helps me with my day job, I have no problem using it for that. That said, using AI to generate code is usually a waste of time. Unless it’s something really, really basic.


  • Just the other day, the Mixtral chatbot insisted that PostgreSQL v16 doesn’t exist.

    A few weeks ago, ChatGPT gave me a DAX measure for an Excel pivot table that used several DAX functions in ways they simply can't be used.

    The funny thing was, when I corrected it, it knew and could explain exactly why those functions couldn’t be used. But it wasn’t able to apply that information to generate a proper measure. In fact, I had to correct it for the same mistakes multiple times, and it never did get it quite right.

    Generative AI is very good at confidently spitting out inaccurate information in ways that make it sound like it knows what it’s talking about to the average person.

    Basically, AI is currently functioning at the same level as the average tech CEO.


  • I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.

    The problem is that generative AI can’t separate fact from fiction, even though it has enough information to do so. For instance, I’ll ask ChatGPT how to do something and it will very confidently spit out a wrong answer 9 times out of 10. If I tell it that the approach didn’t work, it will respond with “Sorry about that. You can’t do [x] with [y] because [z] reasons.” The reasons are often correct, but ChatGPT isn’t “intelligent” enough to ascertain, from data it already has, that an approach will fail before suggesting it.

    It will then proceed to suggest variations of the same failed approach several more times. Once in a while it eventually pivots to a workable suggestion.

    So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.