It worries me how many people prefer using AI over doing their own thinking. How much of your life will you "live" on autopilot? You hollow out your own soul little by little when you do things like that.
This is 100 times scarier, and more likely, than the "ChatGPT will become Skynet and nuke the world" and "AI will replace every job in 5 years" pipe dreams.
The even scarier thing is that there are people I know who are well educated, and in conversations with them I hear more and more about how they rely on ChatGPT for information about surgery, illness, and so on. As if ChatGPT came up with the information itself, rather than being a slicker interface over much the same data as Google Search. At least with Google Search you knew the source of the information.
I believe this speaks to something deeper about humans: only those with great discipline will be able to keep themselves from being sucked in and losing their valuable human capital. It doesn't seem to matter whether one is dumb or smart.
This is why you ask for cited sources and then check those sources.
There's a reason sources aren't cited by default in ChatGPT responses. You're missing the entire point, buddy.
> How much of your life will you "live" on autopilot?
If you start doing it in school, presumably the rest of your life, since you'll have no skills or ability to learn.
Here's what I tell people at work: It's OK to use AI, but you must say it's AI. If you post something and say "Here's what GPT-5 says" - great. Love the efficiency. If someone asks you to do something and you respond with clearly AI-generated crap masquerading as your own, you will be getting a piece of my mind.
> "Here's what GPT-5 says"
This drives me nuts. It's often wrong, but then I have to do the research to prove it before the conversation can get back on track.
I use AI mostly for writing docs and always make sure the documents have an "AI generated content" notice as the first thing readers see.
In the codebase itself I add in-line comments pointing to precisely where AI was used.
AI has also proven very useful for generating extensive in-line comments, as my employer is pushing hard for our Ops guys to learn IaC despite the vast majority having little to no development experience.
Contextual comments explaining the _what, why & how_ of loops/conditionals/etc. have (so far, anyway) proven quite successful.
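The convention described above might look something like this. This is a hypothetical sketch in Python rather than an actual IaC language (the thread doesn't specify the tooling), and the "AI-GENERATED" marker format and function name are my own invention, not a standard:

```python
# AI-GENERATED: this retry helper was drafted with an LLM and reviewed by a human.
def fetch_with_retry(fetch, max_attempts=3):
    """Call `fetch` until it succeeds or we run out of attempts."""
    last_error = None
    # WHAT: loop up to max_attempts times, calling fetch() each iteration.
    # WHY: transient failures (e.g. network blips) make a single call fragile.
    # HOW: on exception we remember the error and retry; a successful call
    #      returns immediately; if every attempt fails, re-raise the last error.
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception as err:
            last_error = err
    raise last_error
```

The point is that the what/why/how comments let someone with little development experience follow the control flow, while the marker at the top makes the AI provenance explicit.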
It's funny how people can't help themselves, despite realizing that just around the corner is "if all you are is an interface to chatgpt, what are we paying you for exactly?".