Are you using AI for Programming?
I’ve been wanting to discuss this topic for a while, but what prompted me to write this article was the recent MIT study, “Your Brain on ChatGPT.” I highly recommend giving it a read. The study has its limitations (it was conducted on a small sample of 54 students), but its findings resonate with my own observations.
I have witnessed firsthand a decline in the critical thinking abilities of my fellow graduates, coinciding with the rise of Large Language Models (LLMs). Ironically, the quality of the code they produce has, on the surface, improved. The contradiction resolves itself quickly: they often don’t understand the code they submit, because they didn’t write it; an AI did. And since an AI generated it, the syntax is usually flawless.
I’m not saying AI is inherently bad. It can be a powerful tool to optimize your workflow. The problem is that many people don’t know how to use it effectively. They see it as a shortcut to escape the learning process.
At its core, an LLM is a statistical model: the more data it has on a subject, the better its results will be; the less data, the worse. For example, LLMs tend to perform well on codebases written in TypeScript or JavaScript, for which public training data is abundant, but they can struggle with languages that have a smaller public footprint, such as Rust or Go.
The Illusion of Effortless Creation
Let’s consider a common scenario. A university assigns a relatively simple project: create a CRUD (Create, Read, Update, Delete) application. As I mentioned, AI is excellent at generating code for well-defined problems like this. So, students input the project requirements directly into an LLM. As expected, they receive functional code. If they encounter any issues running it, they can prompt the LLM again for the necessary instructions. After that, they simply submit the work.
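To make the scenario concrete, here is roughly the kind of thing an LLM hands back for such an assignment: a minimal in-memory CRUD sketch in TypeScript. The names and structure are my own illustration, not the output of any particular model.

```typescript
// A hypothetical "notes" store, the sort of CRUD code an LLM
// produces readily because the problem is so well defined.

interface Note {
  id: number;
  text: string;
}

class NoteStore {
  private notes = new Map<number, Note>();
  private nextId = 1;

  // Create: insert a new note and return it.
  create(text: string): Note {
    const note: Note = { id: this.nextId++, text };
    this.notes.set(note.id, note);
    return note;
  }

  // Read: look up a note by its id.
  read(id: number): Note | undefined {
    return this.notes.get(id);
  }

  // Update: replace the text of an existing note.
  update(id: number, text: string): boolean {
    const note = this.notes.get(id);
    if (!note) return false;
    note.text = text;
    return true;
  }

  // Delete: remove a note, reporting whether it existed.
  delete(id: number): boolean {
    return this.notes.delete(id);
  }
}

const store = new NoteStore();
const note = store.create("hello");
store.update(note.id, "hello again");
console.log(store.read(note.id)); // { id: 1, text: "hello again" }
```

It runs on the first try, and the assignment is done.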
Through this process, the students have learned nothing. The entire purpose of the assignment, which was to help them understand the subject matter, was defeated. Many students who use AI in this way lack a fundamental understanding of even a single programming language. They struggle with basic tasks, such as creating a file and writing data to it. When they get stuck, they conveniently rely on the LLM to think for them.
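To be clear about what I mean by a basic task, here is the whole of “create a file and write data to it” in TypeScript on Node.js, using nothing beyond the standard library:

```typescript
// Create a file and write data to it with Node's built-in fs module.
import { readFileSync, writeFileSync } from "node:fs";

writeFileSync("notes.txt", "Learn the fundamentals first.\n");
console.log(readFileSync("notes.txt", "utf8"));
```

A few lines, and yet this is exactly the kind of thing I have watched people reach for an LLM to do.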
This is a dangerous trend for our industry. We need people who can think outside the box and spend their time deeply contemplating problems. When we start outsourcing that critical thinking, things can go wrong very quickly. The MIT study I mentioned earlier supports this concern. It found that participants who used ChatGPT to write essays “consistently underperformed at neural, linguistic, and behavioral levels,” including decreased brain activity and a weaker sense of authorship. This suggests that relying on AI can impair memory and learning.
When Should We Use AI?
So, when is it appropriate to use AI without hindering our cognitive abilities? In my opinion, here are a few scenarios, specifically within our industry:
- Prototyping: When I have a clear understanding of what I want to build and need to create a quick prototype.
- Understanding Specific Topics: To get a quick overview of a particular concept, but with extreme caution, as LLMs can and do “hallucinate” or generate incorrect information.
- Assisting with Research: Gathering information and summarizing existing knowledge.
There are likely other valid uses, but these are the main ways I incorporate AI into my workflow.
What Can We Do to Stop the Decline?
The solution is simple: don’t do something without understanding it. If you’re given an assignment, don’t just blindly paste the requirements into an LLM. Take the time to understand what’s being asked of you. Research the necessary topics to gain a solid foundation. You could use AI in this learning process, but I don’t recommend it, precisely because of the risk of hallucinations: if the AI feeds you flawed information, you won’t yet have the knowledge to spot it.
Instead, ask questions in communities like Reddit and Stack Overflow. Examine how other people have addressed similar problems in open-source projects on GitHub. Try to understand their code. It will take time, but it will ultimately make you a better developer.
The discussions on platforms like Reddit and Stack Overflow, as well as the code in GitHub repositories, are the very data used to train these LLMs in the first place. So why settle for a vague understanding from an AI when you can go directly to the source and gain a much deeper and more accurate understanding?
As I suggested earlier, use AI for repetitive tasks that you clearly understand but that would be too time-consuming to perform manually. Please, don’t use it for things you don’t understand.
AI-based startups may not like this message, as it’s not great for their business model. However, I’m not saying you shouldn’t use AI. I’m saying don’t use it without understanding.
References
MIT Media Lab, “Your Brain on ChatGPT.” https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/