• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • I completely agree that if there are tools that can allow a vehicle to “see” better than a human, it’s absurd not to implement them. Even if Musk could make a car exactly as good as a human driver, that’s a low bar. It isn’t good enough.

    As for humans: if you are operating a vehicle such that you could not avoid killing an unexpected person on the road, you are not operating it safely. At night this is known as “overdriving your headlights”: driving at a speed that prevents you from reacting appropriately by the time you can perceive a hazard.

    Imagine if it wasn’t a deer but a chunk of concrete that would kill you if struck at speed. Perhaps a boulder on a mountain pass. A vehicle that has broken down.

    Does Musk’s system operate safely? No. The fact that it was a deer is completely irrelevant.


  • I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I place ChatGPT in a league of its own.

    I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from Stack Overflow and it just worked” memes. Maybe I underestimate how many people in the industry work that fundamentally differently from how we do.

    Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.

    If you have like… Design or patterning questions, bring that to the team. We’ll run through it together with the benefits of having the contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.

    The additional time required to validate a ChatGPT-generated piece of code could instead have been invested in the dev doing it right, and fitting it properly within our context, the first time. The dev comes out smarter for it, and that investment pays out every moment forward.


  • As a Sr. Dev, I’m always floored by stories of people trying to integrate ChatGPT into their development workflow.

    It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.

    Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and whose work requires significantly greater scrutiny because it is by explicit design created to look correct?

    ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.