Generative AI: A Big Regression for Humanity

January 15, 2026

Topics: technology, AI, rant

Last modified: January 28, 2026

It is the year 2026 and, unless you've been living under a rock for the past three years, you've probably heard of this artificial intelligence craze. I want to make it clear that I'm talking specifically about generative artificial intelligence, which is a whole different can of worms from many other things also considered artificial intelligence. In this article, I'd like to share my opinion on this maddening subject and try to communicate why I think generative AI is a regression for humanity.

What Is Generative Artificial Intelligence

Artificial intelligence in the most general sense is a computer program that mimics natural or human intelligence. Generative artificial intelligence is a specific subset that mimics

  • human conversation (text),
  • human art (images, music and videos) and even
  • real life pictures and videos.

Generative AI, as the name suggests, generates such content from a single prompt: content that is not made by any human. A chess playing program, for example, is not a generative AI and is not the subject of this article.

Problem 1: How Is Generative Artificial Intelligence Created

Generative AI was made possible not so much by recent advances in computer hardware as by the huge data collection, aggregation and processing that has been happening in the world for a very long time. Large language models, which are the basis of all of these technologies, are trained on an enormous amount of human text, be it from books, articles, stories, poetry, music, messages, letters, individual chat conversations, group chat conversations and so on. With all of this information encoded in their neural network, which is merely a very large array of numbers, they can predict a response to some input.
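To make the "very large array of numbers" idea a bit more concrete, here is a toy sketch of my own in Python (not any real LLM): a "model" that merely counts which word follows which in a tiny made-up corpus, then predicts the next word from those learned statistics. Real LLMs encode vastly more patterns in billions of weights, but the principle of predicting a response from patterns absorbed during training is the same.

```python
import random

# A toy "language model": a table of next-word statistics counted from
# a tiny training corpus. Real LLMs encode billions of such patterns in
# their weight arrays, but the idea is the same: predict what comes next
# based on what was seen during training.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word according to the learned frequencies."""
    followers = counts[word]
    total = sum(followers.values())
    words = list(followers)
    weights = [followers[w] / total for w in words]
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # "cat", "mat" or "fish", weighted by frequency
```

Note that the model never produces anything it hasn't absorbed from its training data, which is exactly the property the copyright argument below hinges on.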

The problem with this seems to me to be very overlooked: most of that data is copyright protected and shouldn't be used as training material. Nowadays, anything you create and post on the internet, on a web page or on some media platform, is stolen (copied) by web crawlers or greedy corporations and included in a training set for LLMs or other generative AIs. Even this article will eventually be picked up by some crawler and used without my permission for training LLMs, for other people to use and for some company to make a lot of money. The same goes for many copyrighted books, articles, poems and songs, which somehow "happen" to land in the hands of corporations. No author gets compensated with any money; sometimes they are not even asked for their consent, and other times they have no choice because of the lack of alternatives. For example, everything you post on YouTube may be used for training generative AIs and you have no choice but to accept!

This is completely unethical. Corporations are in a race to gather as much data as possible, data they have no right to, in order to create something that will make them money. And all of those poor authors whose works are in the training data receive nothing!

Some people say that humans already do what LLMs do: they take inspiration from different places in order to create something new. But I say this is false. When a person creates a piece of art, they do indeed take inspiration from much already existing art, but not in the same way LLMs do. That person looks at a painting or reads a poem one day and forgets almost everything about it. They are left with only an impression and a vague idea. Then, together with their own life experience and other factors, they create their piece of art. Thus they didn't violate anyone's copyright. Generative AIs, on the other hand, when given a painting or a poem as training data, consume all of that information in its entirety and encode its patterns in their neural network during the training phase. That is very different from what humans do and that, in my opinion, violates every author's copyright.

I recently watched a movie called The Wind Rises, in which the protagonist feels bad for working on developing aircraft, which he knows will be used for destruction, i.e. for World War II. He comforts himself by thinking that developing aircraft is beautiful, that he does it purely out of passion, and by knowing that aircraft can also be used for good (transporting people and resources). I think generative AI is in a similar situation, but with a few caveats. While LLMs might have some use cases, other generative AIs, like the ones generating art and realistic pictures and videos, simply have no good use case, and today they are used either for evil or for stupidity. I'm going to talk more about this in a later section.

Problem 2: Large Language Models in the Information Technology Industry

IT is the domain I myself work in. Since I know how some things work here, I think I have something to say.

Nowadays, more and more companies let LLMs write their computer code, code that ends up in production, on people's computers, phones, IoT devices, on servers, routers, cars, airplanes, rockets and so on. This is often, though not necessarily always, absolutely disastrous. The fact that programmers no longer write the code themselves is a complete abdication of responsibility. Indeed, LLMs write code much faster, but that code is often worse than code carefully thought out and written by hand, and it is often incorrect or partially incorrect. Programmers no longer think through the code, they don't check it well enough, and some don't read it at all! Then they say it looks good enough and push it into production. No wonder so many programs and applications these days have so many bugs!

Another issue that is incredibly overlooked, in my opinion, is that code, once written, must be maintained, and LLMs make that impossible. Most of the time, code is in fact read and rewritten, and only rarely written for the first time. If no human programmer writes a piece of code, then that code has no author and no real maintainer! Then practically no one on Earth knows how and why that piece of code works, no one has the ability to fix it if it's broken, to explain it if it's misunderstood, or to improve it if it's bad. That code can end up in your car's systems and cause you to crash. The bottom line is that programmers must comprehend their own code, and LLMs erode that comprehension.

Of course, not all software is critical. It doesn't matter much if your TODO application has a minor bug. But come on, we are programmers and computer scientists! We should have more respect for ourselves and treat writing and developing programs and systems as a real science and a real responsibility. If a piece of code is only 90% correct, then it is 100% buggy. It is just buggy. It has a bug. If we have to be responsible for our code and we have to maintain it, then we should have written it ourselves from the start!

Since an LLM tries to predict a response from your input, it's inherently unreliable. Neural networks only ever predict things; they never say with certainty that something is true after a careful analytical or mechanical computation. This is tied to the other problems: LLMs are sometimes wrong, or slightly wrong, which is a big issue in many domains.
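The contrast can be shown with a small made-up example in Python: a deterministic computation that is mechanically derived and always right, next to a toy stand-in for a neural network that merely samples an answer from an invented probability distribution. The distribution and its numbers are mine, purely for illustration; the point is that a sampled prediction can be very likely right yet is never a certainty.

```python
import random

# Deterministic computation: the answer is mechanically derived, always correct.
def compute(a: int, b: int) -> int:
    return a + b

# Probabilistic "prediction": a toy stand-in for a neural network, which
# only assigns probabilities to answers. This distribution is invented
# purely for illustration.
answer_distribution = {"4": 0.95, "5": 0.03, "3": 0.02}

def predict() -> str:
    """Sample an answer to "2 + 2?" from the learned distribution."""
    answers = list(answer_distribution)
    weights = list(answer_distribution.values())
    return random.choices(answers, weights=weights)[0]

print(compute(2, 2))                      # always 4
samples = [predict() for _ in range(1000)]
print(samples.count("4") / 1000)          # usually around 0.95, never guaranteed
```

A model like this is right most of the time, which is precisely what makes the occasional wrong answer so easy to miss.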

Problem 3: Large Language Models in Education

Since ChatGPT launched in late 2022, students have been using it (and other LLMs) to do their homework and assignments. And by that I mean they let LLMs do the whole homework, and the learning, for them. Except for the last part: no LLM can learn for them. They have to make the effort in order to learn. They have to write and complete the assignment themselves in order to learn, or else it's pointless. This is a very big issue, because generations Alpha, Beta and later will grow up and become adults, except they'll become adults stupider than ever, having learned nothing in their early lives. One of them could be your future doctor, manager, politician or lawyer!

This problem is exacerbated by the fact that many teachers aren't doing their job properly either. Teachers shouldn't completely give up on teaching. And what about the normal student-teacher interaction that existed for thousands of years before LLMs?! Should students really be allowed to use LLMs as a learning tool in school and eliminate the interaction with human teachers? If things continue like this, then humanity will die of stupidity long, long before any "super intelligent evil AI" destroys the world! What I mean is that humanity is much more threatened by its own greed than by some intelligent, out-of-control AI.

In general, advances in technology have helped us humans live better lives, be more productive and so on. Washing machines freed us to focus on more important things, like learning or spending time with people. Pocket calculators helped us avoid long and tedious calculations in contexts where it's more important to get the research right. But LLMs are becoming a tipping point in technological advancement. People, instead of using them for increased productivity, use them out of laziness! It's no longer about a better life, but about a life where I don't need to do any work and I don't need to THINK at all. Let's use technology to improve our lives, not to eliminate thinking and human interaction.

Problem 4: Generative Artificial Intelligence Is Pushed by Corporations for Money

I don't believe anyone wanted generative AI to be everywhere. If you have any idea how corporations work, then you should already know they push AI everywhere for one single reason: money. They really don't care about people. They don't care about the consequences of AI generated code or of students not learning anything. Corporations want generative AI to be the last technology standing, and everyone on Earth to use it every day in some way or form, because that will bring them money and power. Governments may well have a role in all of this madness too. Otherwise they wouldn't encourage the use of LLMs in schools, and they would have made laws to stop corporations from stealing other people's works. Corporations want a monopoly.

Problem 5: Using Generative Artificial Intelligence for Art Is Unethical

Let's now focus our discussion on art and on AIs that generate music and illustrations at the click of a button. How fair is it, really, for real artists to spend hours or even days on a drawing or a piece of music, while people who pretend to be artists make similar content in a few seconds with a simple prompt? I've heard people argue that generated art is still copyrightable, because someone has written the prompt, someone has done "the work". But they miss the fact that it's 100% work compared to 0.001% work. If I tell my friend to draw me something, does that make me the author of the drawing? No! Even though I "prompted" them to do the drawing, they actually did the work. And that makes me neither an author nor an artist. People want to believe they can be artists, developers, writers or composers by prompting an AI! That doesn't sound right. I'm sick of searching the internet for some information only to find 90% AI generated articles and images.

Any piece of art has meaning and value because a human being with a unique combination of thoughts, feelings and experiences created it. Computer generated, mass produced art is not real art, even though corporations want it to be. Let's not forget that computer generated art generally looks and sounds very bad. Humans are different from animals because, besides thought, reason and emotions, they appreciate art! Let's not downgrade ourselves to animals, alright?

Problem 6: Computer Generated Real Life Pictures and Videos Have No Use

AI generated pictures and videos that try to mimic real life, deepfakes, are pure nonsense. They simply have no real use case. They don't help anyone. In reality they are only used to trick people for political reasons, i.e. to spread misinformation, or for rather stupid entertainment. How can anyone say "Hey, I'd like to watch some AI generated video that doesn't make any sense"?! How is brainrot entertainment? It only contributes to the regression of human intelligence! How can anyone say "I don't care if I watch a real video from YouTube channel X or an AI generated fake video of X"?! AI generated videos in particular, deepfakes, are incredibly dangerous.

Problem 7: Generative Artificial Intelligence Consumes a Lot of Power

Believe it or not, powerful computers consume lots of energy. Now how about hundreds of these powerful computers, called servers, running constantly and at full power? Servers are equipped with very large and loud fans because they need proper cooling, as they run non-stop at full load. And they run not only CPUs, but also very power hungry GPUs. All of that just so some people can ask ChatGPT what they should eat for dinner, or ask it to write that fake and stilted application letter for them.

Is All Generative Artificial Intelligence Bad?

I don't want to say all generative AI is bad and has no use. I think in some very niche cases LLMs might have some utility, but only if used judiciously. Programmers might use LLMs for some one-off, unimportant scripts, AFTER checking them thoroughly. Or they might use them in a domain in which they already have knowledge but only want a little assistance. One rule of thumb I'd offer is to never ask an LLM about something you can't validate yourself.

Art, video and audio generating AIs, on the other hand, I'd say are 99.8% bad. Maybe someone can find that 0.2% use case.

Conclusion

It's pretty clear to me that the world doesn't really need generative AI. The world would be better off without it. The world has survived and thrived for thousands of years without it. It doesn't make our lives easier; instead it destroys art and eliminates the need for thinking, some of the very things that make us human. Why invest so much money in artificial intelligence when you could invest it in natural, human intelligence? So I don't think it's worth developing generative AI and pushing it into every corner of the world.

Relevant illustrative YouTube video: https://www.youtube.com/watch?v=rNo5fs1iDrs
