I’ve experimented a bit with ChatGPT, asking it to create some fairly simple code snippets to interact with a new API I was messing with, and it straight up confabulated methods for the API based on extant methods from similar APIs. It was all very convincing, but if there’s no way of knowing that it’s just making things up, it’s literally worse than useless.
Except that in code, you can write unit tests and have checks that it absolutely has to get precisely correct.
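For instance (a minimal sketch; ApiClient, get_widgets, list_widgets, and fetch_widgets are all hypothetical names standing in for a real client and a real LLM-generated snippet), a test that simply exercises the generated code surfaces a confabulated method the first time it runs:

```python
import unittest

# Hypothetical client: get_widgets() is the method that actually exists.
class ApiClient:
    def get_widgets(self):
        return ["a", "b"]

# Hypothetical LLM-generated snippet: list_widgets() does not exist on
# ApiClient, so calling it raises AttributeError.
def fetch_widgets(client):
    return client.list_widgets()

class TestFetchWidgets(unittest.TestCase):
    def test_fetch_widgets_calls_a_real_method(self):
        # Exercising the generated code catches a hallucinated method
        # at test time instead of in production.
        self.assertEqual(fetch_widgets(ApiClient()), ["a", "b"])

if __name__ == "__main__":
    unittest.main()
```

Running this fails with AttributeError: 'ApiClient' object has no attribute 'list_widgets', which is exactly the kind of hallucination the test exists to catch.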
If you have to write the code and the tests yourself… that’s just normal coding then.
Yeah but I don’t. That’s the whole damn point.