About Artificial Intelligence — Yuri Robyshev
AI in the form we have today differs somewhat from how it was envisioned in the IT community some 30 years ago and earlier.
What was assumed back then, in the now distant 20th century?
There were business software systems. Naturally, data were fed into these programs, the programs processed the data and produced structured output according to predefined criteria. In the past they did this more slowly; now they do it faster. There used to be less data; now there is more. And why were these data needed? To make decisions. Managers at any level would look at the data and decide whether to change something (or change nothing). But logically, those decisions could be made not by a human but by a computer, especially since a computer is far better at analyzing and correlating data and at finding relationships and patterns. This was seen as one of the main goals of AI.
And what is AI doing now? Processing video, audio, and text.
The achievements in these areas are impressive, but is this really what we all need?
And even in these areas, things are far from smooth.
Will the total adoption of AI lead to the complete erosion of the value of content? Generative models reduce the marginal cost of content production and remove barriers to entry.
What determines the value of a product or service?
First and foremost, demand, which is generally limited and does not always depend on the purchasing power of potential customers.
Second, the scarcity of supply, which is determined by high barriers to entry and limited availability.
How does this work in practice? Suppose there is a group of talented graphic designers who studied for a long time and have, on average, more than 7–8 years of experience with specialized image creation and editing software (that is, solid professionals, of whom there are not many on the market), and whose maximum combined output is X.
The market balances supply and demand, with minor fluctuations in demand around X.
Then AI appears, capable of doing the work of professional graphic designers literally orders of magnitude faster. As a result, the original group of professionals can now produce not X, but 40X of supply, while the market does not need more than X. This causes prices to collapse by tens of times and leads to large-scale layoffs of graphic designers.
Moreover, complex graphic editors are replaced by easy-to-use AI auto-generators with extensive auto-editing capabilities. This lowers the barrier to entry in this market segment, flooding the market with amateurs who overwhelm it with AI-generated content, expanding supply by hundreds of times more. In the end, the original X turns into 1,000X or 10,000X, completely destroying the value of the final product.
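To make the arithmetic of this scenario concrete, here is a minimal Python sketch under an assumed unit-elastic demand curve; the baseline price and the elasticity are illustrative choices, and only the supply multiples come from the scenario above.

```python
# Toy model: market-clearing price when supply expands against near-fixed demand.
# Assumes an isoelastic demand curve P = P0 * (Q/X)^(-1/e); the baseline price P0
# and the elasticity e are illustrative assumptions, NOT figures from the text.
# Only the supply multiples (X, 40X, 1,000X, 10,000X) come from the scenario above.

P0 = 100.0        # baseline price at the old equilibrium volume X (arbitrary units)
ELASTICITY = 1.0  # unit-elastic demand (assumption)

def clearing_price(supply_multiple: float, e: float = ELASTICITY) -> float:
    """Price at which the market absorbs `supply_multiple` times the old volume X."""
    return P0 * supply_multiple ** (-1.0 / e)

for m in (1, 40, 1_000, 10_000):
    print(f"supply = {m:>6}X  ->  price ~ {clearing_price(m):8.2f}")
# supply =      1X  ->  price ~   100.00
# supply =     40X  ->  price ~     2.50   (a collapse by tens of times)
# supply =  10000X  ->  price ~     0.01   (value effectively destroyed)
```

Under these assumptions a 40-fold supply expansion cuts the clearing price fortyfold, matching the "tens of times" collapse described above, and at 10,000X the price is indistinguishable from zero.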
Everything that AI is capable of generating will cost nothing (graphics, video, audio, text).
Any job related to collecting, processing, interpreting, and reproducing information is under threat.
Medical professionals (primary consultations and the processing of medical documentation), tutors, coaches and consultants, accountants and auditors, copywriters and content marketers, secretaries, stenographers, editors, translators and linguists, lawyers, financial advisors and analysts, consulting specialists, and, of course, programmers.
This does not kill professions outright, but it radically reduces demand in the low- and mid-skilled labor segments, which in the long run will also destroy the upper tier of professional evolution, since professionals emerge from low- and mid-skilled workers.
Given the pace of AI development, deployment, and scale of application, displacement will likely outrun the retraining of workers into other professions.
AI does not create jobs (former secretaries, translators, or engineers are not going to become LLM core engineers, model training specialists, data center deployment engineers, or power cable installers). Instead, it dramatically transforms the employment structure with a pronounced displacement effect.
The better, more reliable, and more stable AI becomes, the worse the macroeconomic effect will be—manifesting in rising unemployment, falling incomes, and declining demand.
AI is already vacuuming up capital, diverting venture and corporate investment away from other technological areas. The valuation of AI companies is based on exponential expectations of future growth. However, the erosion of content value and the displacement of workers undermine purchasing power and demand for many AI products. Eventually, the expectations bubble will burst, and enormous amounts of capital will vanish.
Some analysts already believe that the share of U.S. market capitalization driven by expectations of primary, secondary, and tertiary effects from potential AI applications amounts to $35 trillion (!).
Yes, LLMs are an astonishing invention that makes it possible to “revive” the dead (with highly realistic voice simulation, facial animation, and movement), create remarkable deepfakes—opening fantastic opportunities for fraud—generate endless streams of content indistinguishable from human nonsense on social media, write sentimental greeting cards for friends you don’t want to talk to, and, of course, help schoolchildren and students cheat on homework at a fundamentally new and unprecedented technological level.
LLMs are capable of generating millions of recipes for inedible dishes, composing talentless poetry at an industrial scale, and convincingly proving on the internet that the Earth is flat, using thousands of flawlessly generated pseudo-scientific articles.
The main driving force of the new era has become concentrated faith in miracles multiplied by a panicked fear of missing out on participation in collective madness.
If the market values at one trillion dollars a company that burns billions generating cat pictures, then so be it.
Apparently, this is the new normal being imposed from above—a world where hallucinations are officially recognized as a more valuable asset than anything even remotely resembling reality.
I like the definition: AI is a hallucination.
If we look back to 2015–2016, the capitalization of the top 10 Big Tech companies was around $2 trillion (now it is $24 trillion). Yet by that time, an industrial revolution on the internet had already taken place (the emergence of search, the development of web technologies and everything related to them), mobile communications had been fully deployed, the mobile device industry (smartphones and related gadgets) and the mobile app ecosystem had been created, and social networks and their derivatives had emerged, along with AR/VR technologies, 3D printing, Big Data, the Internet of Things (IoT), streaming services, and the hardware and software configuration we are accustomed to today.
If the market valued all of that at $2 trillion ten years ago, then the $22 trillion added since is, in effect, the market's valuation of chatbots that generate nothing but losses.
At the same time, there is absolutely no logical connection between AI development and monetization, especially at comparable scales, despite the undeniable usefulness of AI in specific tasks.
Capital expenditures by Amazon, Google, Microsoft, Meta, and Oracle amounted to $97.3 billion in Q2 2025, a significant portion of which is directed toward expanding AI infrastructure, according to the companies’ own reporting.
To understand the scale, one must look at the dynamics. In Q1 2025—$77.8 billion; Q4 2024—$76.3 billion; Q3 2024—$61.2 billion; Q2 2024—$55.7 billion; Q1 2024—$46 billion.
For all of 2024—$239.1 billion; in 2023—$154.3 billion; in 2022—$158.1 billion; in 2021—$131 billion; in 2020—$97 billion; and in 2019—$71 billion. From 2017 to 2022, a significant portion of investments went into cloud infrastructure.
Before cloud investments: in 2016—$32.1 billion; in 2015—$25.2 billion; in 2014—$23.7 billion.
Over the last 12 months, actual investments amounted to $312.6 billion, but given company intentions, planned investments for 2025 may exceed $350 billion.
Taking into account price changes in the semiconductor industry and the growth in company scale, the cloud effect is estimated at $70 billion, while the AI hype effect amounts to $230 billion per year.
Accumulated capital expenditures since January 2023 have already reached nearly $570 billion, with another $165–185 billion expected in H2 2025—almost $750 billion in investments over three years.
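These sums can be reproduced directly from the quarterly and annual figures quoted above; a minimal Python cross-check, in which every number is one the text itself cites:

```python
# Cross-check of the capex sums quoted above; every figure ($B) is from the text.
fy2023, fy2024 = 154.3, 239.1
q3_24, q4_24, q1_25, q2_25 = 61.2, 76.3, 77.8, 97.3

trailing_12m = q3_24 + q4_24 + q1_25 + q2_25     # Q3 2024 through Q2 2025
since_jan_2023 = fy2023 + fy2024 + q1_25 + q2_25

print(f"trailing 12 months: ${trailing_12m:.1f}B")    # 312.6, as stated
print(f"since January 2023: ${since_jan_2023:.1f}B")  # 568.5, i.e. nearly $570B
```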
To this must be added electricity and utility costs, R&D expenses for absurdly expensive AI specialists whose average annual salaries exceed $1 million, marketing, and other costs.
In total, well over $1 trillion over three years, with an exponentially growing spending trajectory. Today, nearly $500 billion per year must be poured in just to sustain the AI market.
Meanwhile, the combined annual revenue of the largest AI providers is only $32–35 billion over 12 months. This is direct AI revenue, not profit.
Over the past three years, not a single company has presented a monetization concept, a roadmap, projections, or even a vision of the future in an AI-driven world.
All discussions about return on investment boil down to claims that AI improves the user experience of existing products, increases customer retention through recommendation algorithms, and boosts ad conversion through more accurate targeting. But this is manipulation—it has nothing to do with LLMs and belongs to a completely different area of AI that does not require hundreds of billions of dollars in annual investment.
The industry flagship, OpenAI, has only $13 billion in revenue with an audience of 700 million users and a paid-subscriber conversion rate below 5%. It has already hit a growth ceiling: chatbots are not needed by billions of people; there simply are not that many interested users (the addressable audience is smaller than that of social networks).
Even if they reach 1 billion users with a 6–7% paid conversion rate, that would be around $26 billion in revenue, plus perhaps another $15 billion from corporate API clients. That is the ceiling, given competition and free Chinese alternatives.
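The ceiling arithmetic can be made explicit. The sketch below backs an average revenue per paying user out of the stated figures; the roughly $400-per-year figure is an inference from those numbers, not something the text states:

```python
# Backing the ceiling arithmetic out of the figures quoted above.
# The ~$400/year revenue per paying user is an inference, not a stated number.

users_now, conversion_now, revenue_now = 700e6, 0.05, 13e9   # from the text
implied_arpu = revenue_now / (users_now * conversion_now)    # ~$371 per payer/year
print(f"implied revenue per paying user: ${implied_arpu:,.0f}/year")

users_max, conversion_max, assumed_arpu = 1e9, 0.065, 400.0  # ceiling scenario
consumer_ceiling = users_max * conversion_max * assumed_arpu
print(f"consumer ceiling: ${consumer_ceiling / 1e9:.0f}B")   # ~$26B, as stated
# plus perhaps ~$15B from corporate API clients -> roughly $41B in total
```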
There are many reports of fantastic AI successes in science and engineering.
First and foremost, it should be noted that it is not AI that has the potential for scientific and technological progress, but scientists and engineers who use AI as a fairly effective tool in certain areas.
It is like saying that Excel created a sophisticated financial model or AutoCAD designed an engine component. That is why I am skeptical of news claiming that “AI discovered a new law” or “made a breakthrough invention.”
On its own (in isolation), AI is useless, as it currently lacks motivation, goal-setting, and the necessary synthesis of cognitive functions for creative or scientific breakthroughs.
LLMs as a tool (with sufficient stability and reliability) in the hands of professional scientists—yes, they are useful and can accelerate progress. But this acceleration may be offset by the degradation of human capital, neutralizing AI’s positive contribution.
In the future, AI may analyze all existing scientific papers, find non-obvious correlations, propose hypotheses based on existing data, and automate routine calculations, effectively summarizing and comparing unstructured data sets. Potentially, AI helps scientists cope with the growing complexity of their fields.
However, AI’s use in science will likely lead to an explosive growth in secondary research papers, as AI can easily generate new combinations of old ideas. This will look like accelerated progress, but the overwhelming majority of such works will be secondary or devoid of practical or scientific value.
Modern AI models are brilliant interpolators; they recombine what already exists in their training data. They find the most probable paths between points in the multidimensional space of knowledge on which they were trained.
Fundamental breakthroughs in key scientific and technological areas require a different combination of cognitive skills.
AI efficiently solves the problem of processing massive amounts of accumulated scientific information, but by its nature it is incapable of the nonlinear, intuitive leaps required for paradigm shifts. Instead, it creates an imitation of progress—an explosive growth in secondary publications.
Moreover, LLMs cannot create a hypothesis that does not exist in the data. Breakthroughs in fundamental science often require new data that can only be obtained through physical experiments.
LLMs are excellent for digital tasks but are limited by data, stability, and physical constraints. In the digital environment, where limitations are defined only by computational power and algorithms, AI acts as a catalyst for rapid growth, significantly accelerating code writing and optimization, digital content creation, data analysis, and virtual system modeling.
Progress in the physical world is constrained not by computing speed, but by fundamental laws of nature, material properties, chemical reaction rates, and the complexity of biological systems. These processes do not accelerate exponentially with increased computational power.
The main long-term barrier is the translation of a digital model into a physical object. AI can model millions of potential protein structures in an hour, but their physical synthesis and laboratory testing (not to mention clinical trials) still take years. Physical experimentation and industrial scaling—not idea generation—are the main bottlenecks of progress.
And regarding degradation: there is a risk of cognitive atrophy due to excessive dependence on AI systems.
Unfortunately, the human brain always follows the path of least resistance.
Neural connections responsible for specific skills (critical analysis, systemic cause-and-effect analysis, planning ability, establishing dynamic hierarchical relationships, operating with high-level abstractions, and many others) are strengthened through regular use.
If these functions are delegated to AI, the corresponding neural connections weaken. The brain essentially optimizes its resources by shutting down unused cognitive functions.
It is impossible to skip evolutionary stages; more complex skills cannot develop without the development and reinforcement of basic skills.
AI can provide virtually unlimited access to knowledge, but it deprives humans of the need to go through the trial-and-error process that forms deep, intuitive procedural knowledge and broad cognitive skill development.
Complex cognitive skills require not just analysis of individual data points, but their synthesis into a coherent whole.
AI systems often provide ready-made answers or solutions, hiding the process of analysis and synthesis. The user receives the result without understanding how it was achieved, which undermines the ability to see interconnections, assess systems holistically, and predict chains of effects from decisions.
What distinguishes an expert from a novice is the ability to spot an error almost immediately—before the final product is even assembled—by identifying the point of distortion and taking corrective action. A novice, by contrast, accepts any output on faith, regardless of correctness or reliability.
There are scenarios where AI is useful: when it takes over routine and computationally intensive operations (initial data collection, information sorting, comparison, summarization, and correlation of large data sets).
This frees up resources for higher-order tasks: goal setting, strategic planning, creative synthesis, and decision-making under uncertainty—provided that humans have well-developed primary skills in data collection, processing, comparison, and generalization.
However, as AI evolves, humanity will delegate not only routine tasks but also the processes of analysis, decision-making, and idea generation to AI, which inevitably leads to the atrophy not only of basic information-handling skills but also of creative, algorithmic, systemic, and higher-order skills related to a fundamental understanding of reality.
The maturation of an expert is not the accumulation of information, but a sequential ascent up an evolutionary ladder, where the development of cognitive skills follows a nonlinear trajectory.
This means that AI disrupts mental and intellectual maturation at the early and intermediate stages, freezing humanity at a very shallow level of cognitive evolution.
Virtually all of humanity is at risk of degradation, but people born after 2010 are in particular danger, since they do not know a world without AI and will, without any doubt, be much less intelligent than older generations—and the next generation will be even less intelligent than the current one.