Maybe you should try to learn what value Torvalds sees in it, then, and be a little less knee-jerk about it.
Yeah, because it’s nuanced and people suck at nuance.
And his take is reasonable, I think. Use it for unimportant shit that would otherwise waste your time.
To me, it makes sense for things that are simple to review, have clear, binary acceptance criteria, and carry little to no meaningful attack surface or dangerous failure modes. If you are trying to make an AI develop a bulletproof filesystem, device driver, or network stack, you're a fucking maniac and should be pilloried in the town square. If you want to throw an AI-generated GitHub Actions build script at me, that's perfectly fine; once I've reviewed it thoroughly, it doesn't bother me one bit that it's AI-generated.
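To make that concrete, here is the kind of thing being talked about; a minimal sketch, assuming a plain Node project (the setup steps and test command are placeholders, not anything from this thread):

```yaml
# Minimal CI workflow: short enough to review line by line,
# and it either passes or it doesn't.
name: CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci    # install exact locked dependencies
      - run: npm test  # binary acceptance criterion: green or red
```

Everything here is reviewable in a minute, and the worst failure mode is a broken build, which is exactly the point.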
It’s an assistant, not a replacement.
A programmer who never uses AI is just as stupid as a programmer who never uses search engines.
Where in my comment did I mention my stance on LLM usage? All I said was that I knew people would be annoying about Torvalds using LLMs to generate code, and that there was an instance of that here.
Next time you need to go to the doctor, ask your AI instead, please.