Me, Myself and AI

How and why I’ve used AI as a programming tool.

I think of myself as a good programmer. I’m not tooting my own horn here – as a computer scientist, I am still a novice in many areas. But as a programmer, I think I can hold my own. And I think that’s because I have developed two qualities that all good programmers share.

The first quality of a good programmer is a passion for problem solving. They relish the process of understanding a problem, researching the prerequisite knowledge, and then implementing a solution. Failure just means another chance to solve it. And when the problem is finally solved, it provides a rush of dopamine that rivals sex and most hard drugs.

The last thing a good programmer wants is to be handed the answer to a problem without ever understanding the solution. To them, that is the very definition of failure.

But this is balanced by a second quality. Good programmers respect anything that helps them better understand a problem. A good programmer wants to possess every possible tool, understand those tools, and utilize them to derive (or even better, automate) a solution.

So how do these qualities fit in with this new generation of AI?

“Our intelligence is what makes us human, and AI is an extension of that quality.”

Yann LeCun

Between Google, Stack Exchange, and pirated academic textbooks, I can’t even begin to count how many times I’ve utilized the internet to solve a problem. Despite its many pitfalls, the internet is a marvelous thing – it is a massive collection of human intelligence. That intelligence is the foundation of tools like ChatGPT and Copilot, and when we use these tools, we’re tapping into that intelligence.

In the words of ChatGPT itself…

I am an extension of human knowledge in the sense that I have been trained on a diverse and extensive dataset of human language. This training includes a wide variety of texts from books, articles, websites, and other forms of written communication. The goal of my design is to process and generate language in a way that can assist users in accessing information, answering questions, and solving problems by leveraging the vast amount of knowledge encoded in my training data.

ChatGPT

Yet as exciting as it is to have access to this seemingly bottomless keg of knowledge, it comes with the same inherent risks that browsing the internet has always had. The information we gather from the internet can be incomplete. It can lack context and nuance. Sometimes it’s confidently yet completely wrong. At its worst, it can be willfully deceptive.

But isn’t that a valid description of our collective intelligence? Brilliant, but sometimes deeply flawed? If we’re willing to accept that balance of risk and reward when we use the internet, we have to be willing to do the same with AI.

For me, the rewards of using AI have been well worth this risk. When questions pop into my head about whether I can improve how I’ve structured a project, I’m just one prompt away from a comprehensive answer. If I need a quick answer on how to use something in my project’s toolchain (e.g., a GCloud CLI command or a Git command), I can get that information almost immediately instead of diving into documentation.

I’m comfortable using AI as a tool because I treat it the same way I treat anything I use online: I assume that it’s lying to me. If something it tells me doesn’t work or just seems off, I do more research. The use of AI, when approached with a good sense of judgement and a skeptical mindset, becomes less about outsourcing our intellect and more about expanding it.
