Code analysis firm sees no major benefits from AI dev tool when measuring key programming metrics, though others report incremental gains from coding copilots with emphasis on code review.
Some tools deserve blame. In this case, you’re supposed to use it to automate away certain things, but that automation isn’t really reliable. If it has to be babysat to the extent that I’d argue it does, then it deserves some blame for being a crappy tool.
If, for instance, the getter/setter generators or refactoring tools in IDEs routinely screwed up in the same ways, people would say the tools were broken and shouldn’t be used. I don’t get how this is different just because of “AI”.
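To make the analogy concrete, here’s a quick sketch (the class and field names are made up, not from the article) of the difference I mean: an IDE generator applies a fixed template every time, while the kind of screw-up I’m talking about compiles cleanly and looks plausible but silently does the wrong thing.

```java
// Hypothetical example; class and field names are invented for illustration.
public class Account {
    private long balanceCents;
    private String owner;

    // What a deterministic IDE generator emits: a fixed, predictable template.
    public long getBalanceCents() {
        return balanceCents;
    }

    public void setBalanceCents(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    // The kind of subtle screw-up I mean: compiles cleanly, looks plausible,
    // but silently returns the wrong field. A template-based generator can't
    // make this mistake; a probabilistic tool can.
    public String getOwner() {
        return String.valueOf(balanceCents); // should be: return owner;
    }
}
```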
Okay, but if the tool seems counterproductive for you, it’s pretty presumptuous to generalize that and assume it’s the same for everyone else too. I definitely do not have that experience.
It’s not about it being counterproductive. It’s about correctness. If a tool produces a million lines of compilable gibberish unrelated to what you’re trying to do, then from a pure lines-of-code perspective it’d be a productive tool. But software development is more complicated than writing the most lines.
Now, I’m not saying that AI tools produce pure compilable gibberish, but they don’t reliably produce correct code either. So they fall somewhere in the middle, and, much like “driver assistance” technologies that half-automate things but require constant supervision, it’s quite possible that the middle is the worst place for a tool to fall.
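To illustrate that middle ground with a made-up snippet (nothing from the article, just the kind of thing I mean): code that compiles, looks reasonable at a glance, and is still subtly wrong is exactly what forces the constant babysitting.

```java
// Hypothetical snippet: the dangerous middle ground between gibberish and correct.
public class Example {
    // Intended to sum the first n elements, and at a glance it looks like it does.
    public static int sumFirstN(int[] values, int n) {
        int total = 0;
        // Off-by-one: "<=" reads one element too many, and throws
        // ArrayIndexOutOfBoundsException when n == values.length.
        for (int i = 0; i <= n; i++) {
            total += values[i];
        }
        return total;
    }
}
```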
Everywhere around AI tools there are asterisks about them not always producing correct results. The developer using the tool is ultimately responsible for the output of their own commits, but the tool itself shares in the blame because of its unreliable nature.
Have you read the article? It’s a shared experience multiple people report, and the article even provides statistics.
You can bury your head in the sand all you want. Meanwhile, the arguments showing the tech is “flimsy af” will keep piling up.
cio.com (which I’ve totally heard of before) – the forefront of objective reality and definitely not rage-clickbait
Copilot produces useful and correct code for me 5 days a week. I’m sorry you don’t see the same benefits.