Why I'm Not Scared of AI

AI is all the rage these days. Software devs are either scared for their livelihood or extreme skeptics. One camp believes AI will take our jobs within the decade. The other thinks AI is just a bunch of ifs and elses. Neither side is unreasonable, but I think there is a lot of nuance in this new era of technology.

For myself, I tend to lean into the skepticism a bit more. AI is cool, but for the real decisions, I rely on my noggin and documentation. I have let AI lead the way on a couple of projects, but I was never fully satisfied by the results. The biggest issue currently is the loss of context, and it seems to happen so fast. You'll be whipping up something really sweet, then the next prompt leads you in an off-base direction. Then you have to spend time reminding it of the context it should already know from a few minutes ago.

As of today, the newest advancement in context comes from ChatGPT. Full chat context is now available, allowing your chats to cross-reference each other. I have not tested it out yet. Which leads me to the next issue.

Models change a lot as well. You could be getting awesome outputs that are pushing you along at a nice rate. Then a new model comes out, or the current one gets updated, and all of a sudden it writes worse code. So you either go back to the old model, or you switch it up and try a new provider.

I have bounced around from ChatGPT to Claude to DeepSeek to Grok. I have several reasons for why I switched when I did, but ultimately it boiled down to curiosity. They say Claude is the best at writing code, but I disagree; maybe those people don't write Ruby. The best Ruby on Rails help I have gotten has come from Grok or ChatGPT. This, to me, just highlights another potential issue: having to "keep up with the Joneses" when it comes to LLMs.

I also find myself using LLMs for troubleshooting. There is one caveat with this: you have to be extra careful. LLMs will tell you to do some crazy stuff. I mean, they will literally throw the kitchen sink at an error message. The best thing to do is read some forums and discussions about the error first so you know what the problem likely is. You might even get the solution from that. If not, you'll be better prepared to sift through all the steps and solutions your AI spits out. I find myself constantly telling my LLMs to be concise. Even so, with troubleshooting, it will have you changing environment variables on your OS in a heartbeat. That isn't usually the solution XD. Just be careful pasting error messages and blindly following the steps AI claims will fix things.

On the other hand, I have had pleasant experiences with LLMs. One huge thing I can always rely on LLMs for is knowing awesome third-party plugins and libraries. Sometimes the best ones don't have the best SEO, so Google would never surface them from the ambiguous search queries I send it. AI understands what I need better than Google does, sometimes. This is a huge plus in my book. It has also helped me refactor code to be more concise. There are a lot of big wins with AI.

Vibe coding is the new buzzword. For me, vibe coding means throwing on a punk playlist, grabbing a Coke Zero, and letting the juices flow. What it means to most is using code editors like Cursor and letting the AI take the reins and build a project. Many new startups have taken this route for low overhead, speed, etc. The list goes on. However, we come back to the first issue: context. AI will write code and lose its own context. That leaves your app with vulnerabilities. Get ready for DDoS and buffer overflows.

AI is great, but it still needs a hand to hold and some guardrails. This is why I am not afraid for my job. Juniors shouldn't be either. When it comes to software development, LLMs are a horse you have to lead to water, and sometimes you can't make it drink.