I’m not a developer and I don’t work in a technology field anymore, though I used to. I know Linux sysadmin work, security, and a very small amount of Python.
ChatGPT has allowed me to “write” code that I use every day in my business. So, I’m not a developer, but it lets me do things I otherwise wouldn’t be able to.
My business is still too small to even consider hiring a developer, so ChatGPT is letting me grow it anyway.
I’m just writing this to point out that “devs” are not the only people using ChatGPT to write code.
ChatGPT and other LLMs are fantastic technical task assistants, but, and this is a big but, you need to treat their work the same way you’d treat work from a new intern: verify the output before you trust it.
It’s just some front-end code that lets other people access PDFs on my server, which need some level of protection and access control, along the lines of the sketch below.
So it’s been pretty easy to verify.
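To give a rough idea of what I mean, here’s a minimal sketch of the pattern, not my actual code: check that a user is logged in before serving a PDF. Flask, the PDF_DIR path, and the “user” session key are all placeholder assumptions for illustration.

```python
# Minimal sketch, not my actual code: gate PDF downloads behind a
# session check before serving the file. Flask, PDF_DIR, and the
# "user" session key are placeholder assumptions.
from pathlib import Path
from flask import Flask, abort, send_from_directory, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for session cookies

PDF_DIR = Path("/srv/protected-pdfs")  # hypothetical storage location

@app.route("/pdf/<path:filename>")
def serve_pdf(filename):
    # Refuse anyone who hasn't authenticated elsewhere in the app.
    if not session.get("user"):
        abort(403)
    # send_from_directory rejects paths that escape PDF_DIR
    # (e.g. ../../etc/passwd) — exactly the kind of detail worth
    # double-checking in generated code before trusting it.
    return send_from_directory(PDF_DIR, filename, mimetype="application/pdf")
```

With something this small, verifying the output mostly means reading each route and testing that an unauthenticated request actually gets rejected.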
I’m too paranoid about trusting it or even myself to write code that could have irreversible effects.
Thanks for the advice🙏