In the complex “will AI steal my job?” debate, software developers are among the workers most immediately at risk from powerful AI tools. The tech sector certainly seems to want to reduce the number of humans in those jobs: bold statements from the likes of Meta’s Mark Zuckerberg and Anthropic’s Dario Amodei back this up, since both say AI is already able to take over some code-writing roles. But a new blog post from a prominent coding expert strongly disputes their arguments, and supports some AI critics’ position that AI really can’t code.
Salvatore Sanfilippo, an Italian developer who created Redis (an in-memory database that bills itself as the “world’s fastest data platform” and is beloved by coders building real-time apps), published a blog post this week, provocatively titled “Human coders are still better than LLMs.” His title refers to the large language model systems that power AI chatbots like OpenAI’s ChatGPT and Anthropic’s Claude.
Sanfilippo said he’s “not anti-AI” and actually does “use LLMs routinely,” and explained some specific interactions he’d had with Google’s Gemini AI about writing code. These left him convinced that AIs are “incredibly behind human intelligence,” so he wanted to make a point about it. The billions invested in the technology and the potential upending of the workforce mean it’s “impossible to have balanced conversations” on the matter, he wrote.
Sanfilippo blogged that he was trying to “fix a complicated bug” in Redis’s systems. He made an attempt himself, then asked Gemini, “hey, what we can do here? Is there a super fast way” to implement his fix. Then, using detailed examples of the kind of software he was working with and the problem he was trying to solve, he blogged about the back-and-forth dialogue he had with Gemini as he tried to coax it toward an acceptable answer. After numerous interactions in which the AI couldn’t improve on his idea or really help much, he said he asked Gemini to analyze his last idea, and it was “finally happy.”
We can ignore the detailed code itself and just concentrate on Sanfilippo’s final paragraph. “All this to say: I just finished the analysis and stopped to write this blog post, I’m not sure if I’m going to use this system (but likely yes), but, the creativity of humans still have an edge, we are capable of really thinking out of the box, envisioning strange and imprecise solutions that can work better than others,” he wrote.
“This is something that is extremely hard for LLMs.” Gemini was useful, he admitted, to simply “verify” his bug-fix ideas, but it couldn’t outperform him and actually solve the problem itself.
This stance from an expert coder goes up against some other pro-AI statements. Zuckerberg has said he plans to fire mid-level coders from Meta to save money, employing AI instead. In March, Amodei hit the headlines when he boldly predicted that all code would be written by AIs inside a year.
Meanwhile, on the flip side, a February report from Microsoft warned that young coders coming out of college were already so reliant on AI that they failed to understand the hard computer science behind the systems they were working on, something that may trip them up if they encounter a complex issue like Sanfilippo’s bug.
Commenters discussing Sanfilippo’s blog post on the coding news site Hacker News broadly agreed with his argument. One commenter likened the issue to a popular meme about social media: “You know that saying that the best way to get an answer online is to post a wrong answer? That’s what LLMs do for me.” Another noted that AIs were useful because, even though they give pretty terrible coding advice, “[i]t still saves me time, because even 50 percent accuracy is still half that I don’t have to write myself.”
Lastly, another coder pointed out a very human benefit from using AI: “I have ADHD and starting is the hardest part for me. With an LLM it gets me from 0 to 20% (or more) and I can nail it for the rest. It’s way less stressful for me to start now.”
Why should you care about this? At first glance, it looks like a very inside-baseball discussion about specific coding issues.
You should care because your team members may be tempted to rely on AI to help them write code for your company, whether for cost or speed reasons or because they lack particular expertise. But you should be wary. AIs are known to be unreliable, and Sanfilippo’s argument, supported by other coders’ comments, points out that AI really isn’t capable of certain key coding tasks.
For now, at least, coders’ jobs may be safe… and if your team does use AI to code, they should double- and triple-check the AI’s advice before implementing it in your IT system. – Inc./Tribune News Service