When to embrace AI tools
One prime example for agentic tools is a task that many C++ programmers have encountered in their careers: change, build, see build errors, fix, repeat. It is hardly the most rewarding part of C++ work, but sometimes it is necessary, particularly with larger libraries and complex build and dynamic library dependencies. I encounter this often when building with Unreal Engine, where getting an application to run involves many steps: producing an editor build, getting it to actually run in the editor, and subsequently verifying that the packaged application runs as well.
Most of the time I simply read the error output. Many issues have simple fixes; sometimes I need to adapt or rewrite a dependency. It can be tedious, especially if I have not picked the right build method. AI can handle this: a code-assist agent can build, check the output, apply a fix, and build again. It might take a while, and you burn the equivalent of a rainforest in the meantime, but it will get you there. This task is very well suited to AI assistance, as it is largely a trial-and-error process with a lot of repetition. The AI handles the repetition, and I can focus on the actual coding tasks.
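The loop such an agent runs is conceptually tiny. Here is a minimal sketch, assuming a CMake-style build command and a hypothetical propose_and_apply_fix helper standing in for whichever assistant is wired up; neither is any specific tool's API.

```python
import subprocess

BUILD_CMD = ["cmake", "--build", "build"]  # hypothetical build command
MAX_ATTEMPTS = 10

def propose_and_apply_fix(error_log: str) -> None:
    # Placeholder: in practice this would hand the compiler output to the
    # code-assist agent and apply whatever patch it proposes.
    print("Would ask the assistant about:", error_log[-500:])

for attempt in range(1, MAX_ATTEMPTS + 1):
    result = subprocess.run(BUILD_CMD, capture_output=True, text=True)
    if result.returncode == 0:
        print(f"Build succeeded after {attempt} attempt(s).")
        break
    # Feed the collected error output back into the assistant and retry.
    propose_and_apply_fix(result.stdout + result.stderr)
else:
    print("Giving up; the remaining errors need a human.")
```

The point is not the code itself but that the whole activity is a bounded retry loop, which is exactly the kind of repetition an agent can grind through unattended.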
What I use AI for
I tend to seek out specific tasks to employ AI in, particularly tasks that are tedious. Let me provide some examples from the past week:
- Fixing build errors in C++ code (as mentioned above)
- Combining the output of OpenStack with the Ansible files (sketched below)
- Writing and converting automation tasks, such as Bash->PowerShell
- Adding tedious build automation, such as CMake packaging logic when the components are only partially compatible
What these tasks have in common is that their scope is limited and the main workload is just typing things out. I particularly dislike them, and moreover, I would not feel comfortable handing such a task to a student, as it would feel like punishment.
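To give an impression of the OpenStack/Ansible glue mentioned above, here is a minimal sketch; the file names and the "Name"/"Status"/"Networks" fields are assumptions for illustration and vary by deployment and CLI version, so treat it as a starting point rather than a drop-in script.

```python
import json
from pathlib import Path

# Assumed input: a JSON dump of the server list, e.g. produced with
# "openstack server list -f json"; field names differ per deployment.
servers = json.loads(Path("servers.json").read_text())

# Build a flat Ansible INI inventory of the active machines.
lines = ["[active]"]
for srv in servers:
    if srv.get("Status") != "ACTIVE":
        continue
    # "Networks" often looks like "private=10.0.0.5"; keep the last token.
    address = str(srv.get("Networks", "")).split("=")[-1]
    lines.append(f"{srv['Name']} ansible_host={address}")

Path("inventory.ini").write_text("\n".join(lines) + "\n")
```

Nothing here is hard; it is exactly the kind of limited-scope typing exercise I am happy to delegate.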
When not to use AI
This is purely subjective, but a core concept AI tools rely on when producing text is paraphrasing. For text production this is actually a good thing, as otherwise the models would dump out training data all the time. Not that this does not happen: it has been shown repeatedly that training data can be extracted directly from models; Zhang et al. (2023) showcased directed prompts that could reliably extract training data. In most cases, however, paraphrasing sentences is one of the core mechanisms behind AI co-writing and AI-driven text generation.
Why this is an issue: paraphrasing is a great tool when the paraphrase stays on the same, let us call it, "level of detail" or "level of abstraction". However, AI tools lack the internal logic to know which level of abstraction or detail is currently in use, as moving to a coarser or finer statement is not necessarily a large change in wording. A typical case where this matters is co-writing, such as reacting to reviewer feedback, as opposed to letting AI paraphrase an abstract or similar, which by now it typically excels at.
A very simple example: did you notice that the news post description and the actual full article do not necessarily align? That is because the concise news post was generated using GPT-5, while the article is hand-written. The prompt was fairly simple: "Please insert a short news post hinting at posts/20251009-aiassistance.htm", which I know is not how you are supposed to write prompts. Yet even the basic task I gave it was only partially fulfilled, in a way where the news post slightly undermines the actual content of the article. It does state that this is an opinion, but it abstracts the content in a way that makes it seem like a tutorial. "This is a very subtle difference, but it is there." is what AI autocomplete wants me to write here. The difference is huge, actually, but the way I phrased the preceding sentences somehow hints at it being a very technical distinction, when the difference between "opinion" and "tutorial" would be apparent to anybody just hearing the two words.
Example: Adapt a text to react to feedback
Reviewer feedback is often quite general in terms of the proposed changes. Reviewers will very seldom tell you specifically which changes they want to see, and when they do, they will not do so uniformly. Granted, there are very good reviewers out there who go into quite a high level of detail, resulting in constructive and direct suggestions. However, as a reviewer myself, if language is also an issue, I will note it but not comment on it further if there are other issues that might lead to sections being partially rewritten anyway.
AI will take such instructions and turn them into action points. It might change the level of detail or the language, and, most glaringly, it might merge sentences into one in the interest of clearer or more concise language if such a comment was given. In early attempts to co-write with AI, I found that the output would uselessly merge sentences, changing the meaning of both. I then had to rewrite the whole text again, reacting to the reviewer myself.
Looking forward: What does a programmer do?
"Programmer" is also a formal technical role in many companies and public institutions, particularly in Germany. The role description of my own programmer job, which I held for several years at a university, was largely to assist with the creation of (rather specific) software in the context of a research project. Note that this was actual "programming" work, as opposed to "software engineering", since we had no system in place for development, milestone tracking, or delivery. Our software was research software aimed at producing data to answer research questions, not a software product you could hand to someone else and say "you can use it, everything is explained and really easy".
This also means that the individual tasks, the day-to-day operation of my job, were largely centered around algorithms: discovering an algorithm and building a piece of code with a defined expected input and output. For context, this was early synthetic data generation, back when the topic was not nearly as present in the computer science communities, as it is heavily domain specific. Sometimes I would implement algorithms from other papers, such as the feature tracking work of Silver et al. (1998).
This is also why this is such an important conversation to have. Does the above sound like something you would ask an AI agent to do? To me it sure does. Even in public institutions in Germany, where people obviously do not make much money compared to industry, it would still be cheaper to buy one of the better chat tools (or perhaps there are even free ones, see GitHub Education) and use it for tasks similar to the above. And I know: people will say that the student worker would simply be assigned more abstract tasks, seeing as they are now able to produce the above results much more quickly. But those tasks also require more educational background, and entry-level jobs were great for on-the-job training. If all my students are now just DevOps engineers, handling a collection of tasks that is not as easily automated, then they, and we as a collective, will start lacking software engineers who can build more parts of the pipeline.