Does it still matter?
Lately I have been using the AI tools available to us, especially in my day-to-day work. I have started to see some kind of productivity boost, but I am not sure how big the impact actually is.
Mostly I use Anthropic models (Sonnet, Haiku and Opus), and I use them through CLI tools. At my job I use them through the Copilot CLI, while for my personal stuff I use them through the Claude CLI. I have set up various MCP servers for my agents, which make the whole experience even smoother. After a lot of experimentation I have started to conclude that they cannot replace (good) human engineers, simply because they just predict things, focusing only on "how will it work?". They cannot really analyze and take into consideration security issues or business needs, and even if someone tries to force them to consider these things, they start giving textbook answers, like:
If you want to improve performance, think about caching or avoiding N+1 queries…
This does not really save time. Actually, I believe that using LLMs like this has the opposite effect.
On the other hand, when I use them while digging into something, or when I try to understand how something works, I feel that they excel in these areas. I always try to validate the output they give me, simply because they will never mention that they are not really sure about what they present as a well-known fact. Many times I run into cases where a model insists that something is a "big truth", while the "big truth" is actually a "big lie".
Anyway, my biggest concern about the use of AI is that anyone can create an article about anything in one minute, and I feel that this is really dangerous. I try really hard to keep my writing original, because I think that the way a human being writes is a kind of artistic expression, even in the context of a job like this, where we are trying to "communicate" with machines.
The good thing is that it is actually easy to tell that something was written by AI. The texts are always just too good to be true, and they do not have any "color" in their writing. All of them have an almost identical style, and most importantly, they never, under any circumstances, show any self-doubt about the information they present. Because an LLM simply predicts the most likely token for a given input, it cannot understand when it is not sure about something.
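To illustrate what I mean by "predicting the most likely token", here is a minimal sketch in Python. The tokens and scores are made up for illustration (a real model computes logits over its whole vocabulary in a forward pass), but the last step is the point: decoding always emits some token, and there is no separate channel for "I am not sure".

```python
import math

# Made-up logits for three candidate next tokens (purely illustrative;
# a real model produces these from a forward pass over its vocabulary).
logits = {"Paris": 4.1, "Lyon": 1.3, "Berlin": 0.2}

# Softmax turns the raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding picks the single most likely token. This step always
# produces *some* token -- even if the scores were pure noise, the
# output would still come out looking decisive.
best = max(probs, key=probs.get)
print(best, f"({probs[best]:.0%})")
```

Swap the numbers for garbage and the argmax still answers with total confidence; the self-doubt has nowhere to live.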
But I feel that this perfection of their written communication, and how easy it is nowadays to learn about anything, discourages us (actual humans) from sharing our knowledge. Many times I feel there is no point in sharing something I learned, because anyone who asks an LLM can get the exact same thing. What did I contribute? More training data for the model? Data that might actually have flaws in it. This is why I feel a bit dumb writing a post on this blog, because it feels a bit pointless…
The world changes, and technology evolves in ways we are not even capable of imagining…
If you had asked people 50 years ago what future awaited us, everybody would have talked about flying cars. Fifty years have gone by, but cars are still on the ground. So what really changed? What really changed is how we share information.
Our cars never took to the air (yet), but our knowledge and our communication systems did. The internet is the flying car we got, the one nobody ever promised us.
This is exactly how I think AI will change our lives in the future. I know that I cannot predict how things will evolve, but I am really curious to see the change, because there are specific scientific fields that AI will revolutionize, like medical prognosis, personalized medicine, and the understanding of diseases and of specific parts/organs of the human body. All of them will still require humans to be there.
The conclusion is that we still need to share our knowledge. This is a very important moment, where we need to secure the "openness" of the internet. If all the data on the internet is generated by AI, then no one will care about it. One very horrific part of the story is that AI is closed-source, and the only thing that drives it is money. It can and will be used for dark purposes, like spreading lies or steering people's attention elsewhere, because the ones who own it are the worst kind of people.
Again, I feel that some of these things are going to change soon, more or less because, from a sustainability perspective, the way these companies try to win this AI race is full of bad choices. The earth cannot handle this energy drain just so we can create "erotica" content through LLMs…
PS: I felt that I needed to write this jumble, and I want to write more free-thought texts like this in the future…