• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: August 11th, 2023

  • It baffles me that these types of jobs exist in the same area as mine. My company doesn’t care what hours I work as long as I get things done, has gone fully remote and is never going back, encourages people not to burn themselves out and to take time off, has actual unlimited PTO (i.e. nobody coming after me for using too much), etc. I always thought that was just the Silicon Valley mentality, but I keep seeing news of big tech companies doing all kinds of crazy backwards things and I don’t get it. All the perks I get are not because my company is run by angels; it’s because they understand we’re actually more productive that way.




  • Not sure why you’d remember the ones you rarely need. I just memorized the things I use. Remembering stuff you use is much easier than learning a programming language. I’ve been programming for over 30 years and I’ve been using vim as my only “IDE” for the last 14 years. It would take me significantly less time to teach someone vim than to teach them programming.





  • I rushed to just grab that code block from Wikipedia. But the selection of which characters are considered emoji is not arbitrary. The Unicode Consortium (their Unicode Emoji Standard and Research Working Group, to be exact) publishes those lists, along with guidelines on how they should be rendered. I believe the most recent version of the standard is Emoji 15.1.

    Edit: I realized I’m going off track here by just reacting to comments and forgetting my initial point. The difference I was initially alluding to is in selection criteria. The criteria for assigning a character a Unicode codepoint are very different from the criteria for creating a new emoji. Bitcoin has a unique symbol and there is a real need to use that symbol in written material. Having a Unicode character for it solves that problem, and indeed one was added. The Emoji working group has other selection criteria (which is why you have emoji for eggplant and flying money, and other things that are not otherwise characters). So the fact that a certain character exists, despite its very limited use, has no bearing on whether something else should have an emoji to represent it.
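    To make the distinction concrete, here’s a rough sketch (assuming a JS runtime with ES2018 Unicode property escapes; note that `\p{Emoji}` is only a coarse proxy for the published lists, since it also matches characters like ASCII digits):

    ```typescript
    // U+20BF (BITCOIN SIGN) got a codepoint because the symbol is needed in
    // written text, but Unicode's emoji data does not flag it as an emoji.
    console.log("\u20BF");                       // ₿ - a plain character
    console.log(/\p{Emoji}/u.test("\u20BF"));    // false - not on the emoji lists
    console.log(/\p{Emoji}/u.test("\u{1F346}")); // true  - eggplant, an emoji by design
    ```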


  • There’s no ambiguity. Emoji are characters in the emoticons code block (U+1F600…U+1F64F). Emoji are indeed a subset of characters, but anything outside that block is not an emoji.

    Edit: jumped the gun on that definition, I just took the code block from Wikipedia. But there is no ambiguity about which characters are emoji and which are not. The Unicode Consortium publishes lists of emoji and guidelines on how they should be rendered.
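    As a rough illustration of why that block alone can’t be the definition (a sketch, not an authoritative test):

    ```typescript
    // The Emoticons block (U+1F600…U+1F64F) is too narrow: U+1F680 (ROCKET)
    // is on the Consortium's emoji lists but sits outside that block.
    function inEmoticonsBlock(ch: string): boolean {
      const cp = ch.codePointAt(0) ?? 0;
      return cp >= 0x1f600 && cp <= 0x1f64f;
    }

    console.log(inEmoticonsBlock("\u{1F600}")); // true  - grinning face
    console.log(inEmoticonsBlock("\u{1F680}")); // false - rocket, still an emoji
    ```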




  • > That is totally a non-trivial problem, which requires a lot more conception before it can be solved.

    Most candidates don’t realize that. And when I say they split by a single space, I mean `split(' ')`. Not even `split(/\s+/)`.

    > Does “don’t” consist of one or two words? Should “www.google.com” be split into three parts? Etc.

    Yes, asking those questions is definitely what you should be doing when tackling a problem like this.

    > If I got that feature request in a ticket, I’d send it back to conception.

    If I got it, I’d work together with the product team to figure out what we want and what’s best for the users.

    > If you asked me this question in an interview, I’d ask if you wanted a programmer, a requirements analyst, or a linguist, and why you invite people to a job interview if you don’t even know what role you are hiring for.

    That would be useful too. Personality, attitude, and ability to work with others in a team are also factors we look at, so your answer would tell me to look elsewhere.

    But to answer that question: I’m definitely not looking for someone who just executes on very clear requirements; that’s a junior dev. It’s what you do when faced with ambiguity that matters. I don’t need a human ChatGPT.

    Also, I’m not looking for someone to perfectly solve that problem, because it doesn’t even have a single clear solution. It’s the process of arriving at a solution that matters. What questions do you ask? Which edge cases did you consider and which ones did you miss? How do you iterate on your solution and debug issues you run into along the way? And so on.
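    For instance, here’s an illustrative sketch of the gap between the naive split and one possible refinement (the regex is one answer among many, not “the” expected solution):

    ```typescript
    const text = `He said: "don't stop" -\nvisit www.google.com\ttoday.`;

    // Naive: only single spaces; newlines, tabs, and punctuation leak through.
    const naive = text.split(" ");

    // Slightly better: any whitespace run, but punctuation still sticks to words.
    const byWhitespace = text.split(/\s+/);

    // One possible answer after clarifying requirements: runs of letters with
    // internal apostrophes, so "don't" stays one word. Note it still breaks
    // "www.google.com" into three parts - exactly the kind of call to surface.
    const words = text.match(/[A-Za-z]+(?:'[A-Za-z]+)*/g);
    console.log(words);
    // [ 'He', 'said', "don't", 'stop', 'visit', 'www', 'google', 'com', 'today' ]
    ```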


  • I always feel bad when I try out a new coding problem in interviews, because I worry I’m going to offend candidates with such an easy problem (I interview mostly for senior positions). And I’m always shocked by how few are able to solve it. The current problem I use requires splitting a text into words as a first step. I show them the text; it’s the entire text of a book, not just some simple sentence. I don’t think I’ve had a single candidate do that correctly yet (most just split by a single space character even though they’ve seen it’s a whole book with newlines, punctuation, quotes, parentheses, etc.).


  • Deep learning did not shift any paradigm; it’s just more advanced programming. And gen AI is not intelligence; it’s just really well-trained ML. ChatGPT can generate text that looks true and relevant, and that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.
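    As a toy illustration of that “guess the most likely output” loop (nothing like a real transformer; the counts below are made up purely for the example):

    ```typescript
    // Toy "most likely next word" picker over hand-made bigram counts.
    const nextWordCounts: Record<string, Record<string, number>> = {
      looks: { true: 2, relevant: 3, convincing: 5 },
    };

    function mostLikelyNext(word: string): string {
      const candidates = Object.entries(nextWordCounts[word] ?? {});
      candidates.sort((a, b) => b[1] - a[1]); // highest count first
      return candidates[0]?.[0] ?? "<end>";
    }

    console.log(mostLikelyNext("looks")); // "convincing" - likely, not verified true
    ```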

    Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines

    What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing; that’s why they so often produce self-contradictory results. The algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess, or a new language, or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.


  • See the sources above, and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start for AGI. There’s a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive strides in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve; the two are completely separate avenues of research. Weak AI is still advanced algorithms, and you can’t get AGI with just code. We’ll need a completely new type of hardware for it.