Will AI Replace Computer Programmers?

Some time ago, the Dutch news ran the headline "AI leads to declining turnover among translators: 'Rates are plummeting'". The article describes how professional translators are losing their jobs because of recent advancements in AI. Large language models (LLMs), such as OpenAI's ChatGPT, enable almost anyone to produce a decent translation.

Will something similar happen to computer programmers?






For a while I've been wanting to write about what ever more powerful AI could mean for the software industry, but I quickly found out I'm in over my head. AI has become horrendously complicated. So much so that nobody knows how the state-of-the-art models actually work, not even their creators [1].

The field is changing quickly, and right now nobody can say with certainty what is true and what is not. I think the best I can do is express hopes for the future.

My hope for the future of AI in coding is that these models become excellent coding assistants. Over time they will likely grow ever more helpful. Right now the models can already relieve the programmer of much repetitive work. For example, in my work I have been using ChatGPT to generate SQL tables, or to generate tests for a given class. Sometimes even giving ChatGPT complete interfaces and asking it to generate a new instance with certain specifications yields nice results. True, we still have to tweak the code here and there, but used correctly, these models can speed up repetitive programming work considerably. Soon most developers will probably be using some sort of AI assistant for coding.

This is not where these models will stop. Likely some future iteration will give us models that, given a clearly formatted prompt, can generate complete software packages with files and tests. Maybe many people who code now will, with the help of AI, take on more of a software architect's role. These modern architects will perhaps be able to instruct an AI, or even a group of AIs, to build complex systems under their guidance. Next-generation models suggest this capability may arrive sooner than one might think.

However, I believe it is not a good idea to leave AI models in charge of creating business-critical software without human supervision. One of the main reasons is the unpredictable nature of the models' output: there is currently no guarantee that giving a certain input to an LLM will deterministically produce a certain output. As the number of parameters in LLMs grows from billions to trillions to quadrillions, I think it unlikely that this unpredictability will change quickly. This is especially true for the closed-source models that currently dominate the landscape: only by poking some API are we able to guess what a model will return on a given day.

Another reason I believe it unwise to leave AI in charge of software projects without human supervision is that LLMs are currently very poorly understood. Yes, it is relatively straightforward nowadays to code your own LLM, given that all the necessary libraries are open source. But nobody can actually explain why or how an LLM makes certain generalizations; nobody can really explain why these models work at all.

There is a scientific field dedicated to understanding neural networks and LLMs called "mechanistic interpretability"; you can find an interesting video on it here [2]. It is remarkable that, with all the smart people around, there is now software out there that nobody understands, and millions of people use it every day. Let's hope the people in the mechanistic interpretability groups know what they are doing.

As the capabilities of AI in the software engineering space increase, there is going to be a need for constant benchmarking. Many businesses use these models as-is, without regularly checking whether a model's performance has changed or degraded over time. Monitoring the stability of these models is imperative for keeping a business stable, because, just like other software, they will fail in unpredictable ways.
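A daily check of this kind can be sketched in a few lines of Python. Everything here is hypothetical: `call_model` stands in for whatever model API a business actually uses, and is stubbed with canned answers so the sketch runs on its own.

```python
# Minimal sketch of a recurring LLM regression check. The call_model
# function is a hypothetical stand-in for a real model API; here it is
# stubbed with canned answers so the example is self-contained.

def call_model(prompt: str) -> str:
    """Stub for a real LLM API call (e.g. an HTTP request to a provider)."""
    canned = {
        "What is the capital of France?": "Paris",
        "What is 2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")

# Fixed benchmark: prompts paired with the answers we expect.
BENCHMARK = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
    ("Who wrote 2001: A Space Odyssey?", "Arthur C. Clarke"),
]

def run_benchmark(model) -> float:
    """Return the fraction of benchmark prompts the model answers correctly."""
    passed = sum(1 for prompt, expected in BENCHMARK
                 if expected.lower() in model(prompt).lower())
    return passed / len(BENCHMARK)

score = run_benchmark(call_model)

# Compare today's score against a previously recorded baseline and
# alert on degradation; the baseline value here is arbitrary.
baseline = 0.6
if score < baseline:
    print(f"Model degraded: {score:.0%} < baseline {baseline:.0%}")
```

Run on a schedule, a harness like this gives an early signal when a model's behavior drifts, instead of discovering the drift in production.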

As stated before, I believe AI has the potential to make a single coder much, much more efficient. But does that mean we will need fewer people who can code? The demand for new software is still increasing, and every year the number of developers grows. I don't see this trend ending very soon. More likely, AI is going to fuel a productivity boost in the software space, enabling programmers to solve ever larger and more complicated challenges.

Having said that, I do believe that a human + AI system will always outperform a human-only or AI-only system. Several studies have shown that this combination yields better results, and I'm optimistic about the possibilities of using AI to make working as a coder more efficient. I'm generally optimistic about this technology. I'm a fan of Arthur C. Clarke's work, and he was known as a technological optimist. Still, there are some lessons in his work we should take to heart before we let the HAL 9000s of the future take control of our spaceships.

References:
[1] Bommasani, R., et al. (2021). On the Opportunities and Risks of Foundation Models. arXiv:2108.07258. https://arxiv.org/abs/2108.07258
[2] Mechanistic Interpretability (video): https://www.youtube.com/watch?v=YpFaPKOeNME


Deep Learning Introduction Part 1: Input Data and Neural Network Architecture

This is part 1 of a blog series on deep learning. The series is intended as an introduction to deep learning, with an emphasis on the theory and the math that drive this technology. We will be using Python 3 and Pandas, and some familiarity with linear algebra (vectors, matrices) will help. This first part covers preprocessing example data and outlines what a basic architecture of a feedforward neural network could look like. I hope you enjoy this series!
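As a small preview of where the series is headed, the forward pass of a feedforward network can be sketched in a few lines. Note this uses NumPy rather than Pandas, and the layer sizes and random weights are arbitrary; it is only meant to illustrate the matrix-vector view that the linear algebra background supports.

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation: max(0, x)."""
    return np.maximum(0, x)

def forward(x, layers):
    """One forward pass: each layer is a (weights, bias) pair.

    Each layer computes relu(W @ a + b), where a is the previous
    layer's activation vector.
    """
    a = x
    for W, b in layers:
        a = relu(W @ a + b)
    return a

rng = np.random.default_rng(0)

# A tiny example network: 4 inputs -> 3 hidden units -> 2 outputs.
layers = [
    (rng.standard_normal((3, 4)), np.zeros(3)),
    (rng.standard_normal((2, 3)), np.zeros(2)),
]

y = forward(rng.standard_normal(4), layers)
print(y.shape)  # (2,)
```

Training (choosing the weights instead of drawing them at random) is where the real math lives, and that is what the later parts of the series build toward.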







I Love Vulfpeck

When venturing into the deep caverns of YouTube you may find a true gem. It was during one of these adventures that I happened to find the beautiful funk band that is Vulfpeck. YouTube's almighty machine had rightly decided I needed some more funk in my life, and Vulfpeck delivered.

Though I was a bit confused during the first few minutes of my first Vulfpeck video, I was hooked immediately afterwards. These guys have a lot going for them. One of the first things you'll notice is the solid musicianship every one of them has. Not to mention seeing members of the group switch instruments between videos: someone who lays down the drum beat in one video may play the piano, sing, or play guitar in another.

There's also something pretty satisfying in seeing how much fun they have as a group when playing. You can tell that much of what they do is partly improvised on the spot, and they seem to challenge each other during the recording. Instead of feeling like overly polished and well-rehearsed songs, their recordings manage to retain many of the qualities of a solid improvisation.


Welcome

Hello, my name is Arent van Korlaar. This website is a place where I share some of my love for music, as well as some free learning material on topics in Artificial Intelligence. Both are an important source of inspiration in my life, so I love writing about them.

The picture in the backdrop is a photo I shot of one of my favorite places in Holland: the dunes of Zoutelande.

If you are interested in setting up a blog like this yourself, check out this repository. I basically forked it and adapted some of its code for my own purposes.

My Github
My Linkedin

Arent



arentvankorlaar.nl 2019 | About