Developer, 11 year reddit refugee

Zetaphor

  • 2 Posts
  • 39 Comments
Joined 6 months ago
Cake day: March 12th, 2024






  • This is where ChatGPT and Codium.ai have been a godsend for me. Something that would have taken me anywhere from a few hours to over a day to iterate on is now reduced to anywhere from minutes to an hour. I don’t even always see it all the way through to completion, but just knowing that I can iterate on some version of it so quickly is often motivation enough to get started.

    If you’re paying for the Plus subscription, GPT-4 with Code Interpreter is absolutely OP. Did you know you can hand it a zip file as a way of giving it multiple files at once? (A quick sketch of that is below.)
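
    For anyone curious how that zip workflow can look in practice, here’s a minimal sketch. It assumes Node.js with the JSZip package, and the file names are placeholders of my own, not anything from the original comment.

```typescript
// Minimal sketch: bundle several source files into one zip so they can be
// handed to Code Interpreter as a single attachment.
// Assumes Node.js and the "jszip" package (npm install jszip).
import { promises as fs } from "fs";
import path from "path";
import JSZip from "jszip";

async function bundleFiles(files: string[], outPath: string): Promise<void> {
  const zip = new JSZip();

  for (const file of files) {
    const content = await fs.readFile(file);
    // Store each file under its basename so the archive stays flat.
    zip.file(path.basename(file), content);
  }

  const buffer = await zip.generateAsync({ type: "nodebuffer" });
  await fs.writeFile(outPath, buffer);
}

// Example (placeholder paths): zip up a few project files, then upload
// project.zip in the chat UI instead of pasting files one at a time.
bundleFiles(["src/main.ts", "src/utils.ts", "README.md"], "project.zip")
  .catch((err) => console.error(err));
```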



  • That entirely depends on the employer, but in my anecdotal experience that has been the case. Especially in more recent years versus the start of my career (nearly 20 years ago).

    The reality is that Computer Science is useful for building strong engineers over the long term, but it doesn’t at all prepare you for the reality of working in a team environment and contributing code to a living project. They don’t even teach you git as far as I’m aware.

    Contributing to open source demonstrates a lot of the real-world skills that are required in a workplace, beyond just having the comprehension and skill in the language/tool of choice you’re interviewing for.



  • Just a heads up, you replied multiple times to this. If the client you’re using doesn’t appear to submit immediately, that just means it’s not doing error handling properly and not disabling the submit button while the request is in flight. You’ve actually submitted once for each time you pressed the button. (A sketch of the fix is below.)
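
    To illustrate what a well-behaved client would do, here’s a generic sketch (not code from any actual Lemmy client; the element IDs and endpoint are made up): disable the submit button before the request goes out, and only re-enable it once the request settles.

```typescript
// Minimal sketch of duplicate-submit protection in a web client.
// The element IDs and the endpoint path here are illustrative placeholders.
const form = document.querySelector<HTMLFormElement>("#reply-form")!;
const submitButton = document.querySelector<HTMLButtonElement>("#reply-submit")!;

form.addEventListener("submit", async (event) => {
  event.preventDefault();

  // Disable the button while the request is in flight so extra clicks do nothing.
  submitButton.disabled = true;

  try {
    const response = await fetch("/api/comment", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ content: new FormData(form).get("content") }),
    });
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
  } catch (err) {
    // Surface the failure instead of silently letting the user mash the button.
    console.error(err);
  } finally {
    // Re-enable only after the request has settled, success or failure.
    submitButton.disabled = false;
  }
});
```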


  • Build an open source portfolio. Being able to show employers what I was capable of has been a massive benefit both then and now. You can say you know all of these things, but when they’re looking at hundreds of applications, one of the first things they do to reduce the pile is filter out people who don’t have some kind of online presence like GitHub. This allows them to see that you’re actively engaged with the field and, if they want to interview you, to look at your code quality and experience.

    A personal website that highlights your best work is also a good idea, as it helps to further distill the things you’re ultimately going to end up talking about in an interview. It doesn’t need to be anything fancy, just something that shows you’re competent. I wouldn’t expect the person interviewing you to actually hit view source and criticize your choice of frontend framework.



  • This is also just the reality of the job market, especially in this industry. Dev positions get hundreds if not thousands of applications which all vary widely in quality.

    I have 20 years of experience and a six-figure salary. The last time I went looking for work and was putting out applications, I sent out easily over 100 applications and only got 4 interviews. I’ve found it’s best to form a relationship with a competent recruiter and work with them anytime you’re back on the market. They’re incentivized to find you a decent position so that they can make their commission. Of course, finding one who is decent is almost as hard as the process of sending out applications, but once you do, it’s a relationship worth maintaining.


  • Zetaphor@zemmy.cc to Programmer Humor@lemmy.ml · Early disappointment · 1 year ago

    I’ve never been to college and my job title today is Software Architect, I’ve been doing this for nearly 20 years.

    It was extremely hard at first to get a job because everyone wanted a BA, but that was also 20 years ago. Once I had some experience and could clearly demonstrate my capabilities, they were more open to hiring me. The thing a degree shows is that you have some level of experience and commitment, but the reality is a BA in CompSci doesn’t actually prepare you for the reality of 99% of software development.

    I think most companies these days have come to realize this. Unless you’re trying to apply to one of the FANG corps (or whatever the acronym is now) you’ll be just fine if you have a decent portfolio and can demonstrate an understanding of the fundamentals.


  • I certainly experienced this at the start of my career. Everyone wanted me to have at least a bachelor’s degree despite the fact that I was able to run circles around fresh college graduates. It wasn’t until someone gave me a chance and I had real-world experience that people stopped asking me about my college education. In fact, later in my career, when people learn about the level of experience I have and that I’m entirely self-taught, it’s often seen as something positive. It’s a shitty catch-22.





  • Quoting this comment from the HN thread:

    On information and belief, the reason ChatGPT can accurately summarize a certain copyrighted book is because that book was copied by OpenAI and ingested by the underlying OpenAI Language Model (either GPT-3.5 or GPT-4) as part of its training data.

    While it strikes me as perfectly plausible that the Books2 dataset contains Silverman’s book, this quote from the complaint seems obviously false.

    First, even if the model never saw a single word of the book’s text during training, it could still learn to summarize it from reading other summaries which are publicly available, such as the book’s Wikipedia page.

    Second, it’s not even clear to me that a model which saw only the text of a book during training, but not any descriptions or summaries of it, would be particularly good at producing a summary.

    We can test this by asking for a summary of a book which is available through Project Gutenberg (which the complaint asserts is Books1 and therefore part of ChatGPT’s training data) but for which there is little discussion online. If the source of the ability to summarize is having the book itself during training, the model should be just as able to summarize the rare book as it is to summarize Silverman’s book.

    I chose “The Ruby of Kishmoor” at random. It was added to PG in 2003. ChatGPT with GPT-3.5 hallucinates a summary that doesn’t even identify the correct main characters. The GPT-4 model refuses to even try, saying it doesn’t know anything about the story and it isn’t part of its training data.

    If ChatGPT’s ability to summarize Silverman’s book comes from the book itself being part of the training data, why can it not do the same for other books?

    As the commenter points out, I could recreate this result using a smaller offline model and an excerpt from the Wikipedia page for the book (a rough sketch of that setup is below).
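
    For anyone who wants to try the same experiment locally, here’s a rough sketch. It assumes a small local model served behind an OpenAI-compatible chat endpoint (for example a llama.cpp or Ollama server); the endpoint URL, model name, and the pasted excerpt are placeholders of mine, not details from the HN comment.

```typescript
// Rough sketch: ask a small local model to summarize a book, first with no
// context, then with a short excerpt from the book's Wikipedia page pasted
// into the prompt. Assumes an OpenAI-compatible chat endpoint on localhost
// (e.g. a llama.cpp or Ollama server); URL and model name are placeholders.
const ENDPOINT = "http://localhost:8080/v1/chat/completions";
const MODEL = "local-7b-model"; // placeholder name

async function ask(prompt: string): Promise<string> {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: prompt }],
      temperature: 0,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

async function main() {
  // 1. No context: a model that has seen little discussion of the book is
  //    likely to hallucinate or refuse, as described above.
  console.log(await ask('Summarize the story "The Ruby of Kishmoor".'));

  // 2. With a pasted Wikipedia-style excerpt: the summary should improve
  //    noticeably, supporting the point that available summaries matter more
  //    than whether the raw book text was in the training data.
  const excerpt = "<paste a paragraph from the book's Wikipedia page here>";
  console.log(await ask(
    `Using only the following excerpt, summarize "The Ruby of Kishmoor":\n\n${excerpt}`
  ));
}

main().catch((err) => console.error(err));
```

    Comparing the two outputs side by side is the whole test: if the second prompt produces a coherent summary while the first does not, that mirrors the commenter’s point that summaries, not the book text itself, are doing the work.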