Daron Acemoglu is an American economist of Turkish-Armenian descent, born in 1967 in Istanbul. He is a professor of economics at the Massachusetts Institute of Technology (MIT) and one of the leading contemporary researchers in the field of institutional and political economy and economic development. His most famous books include Why Nations Fail (2012) and Power and Progress (2023). In 2024, Acemoglu was awarded the Nobel Prize in Economics, together with Simon Johnson and James Robinson, for their research on how institutions are formed and how they determine the prosperity of nations.
Last week, Acemoglu gave a public lecture in Sofia on the topic of "Who Will Own the Future" as part of the Green Transition Forum organized by the information portal dir.bg, after which he answered questions from the moderators and the audience. Capital Weekly published an abridged version of some of the questions and answers.
What is something that everyone believes will definitely occur in the future but that you believe is actually wrong?
It depends on the time frame, but I think one thing that many people believe is that the economy will look radically different in 10 years because of AI. That many professions will disappear. I don't think so. It's true that the economy will have evolved in 10 years and that AI will be used much more widely. But most of the professions that exist today will continue to exist. There will be journalists, there will be radiologists, there will be teachers, there will be electricians, administrators, and assistants. The AI revolution, which is indeed a revolution, will unfold much more slowly and gradually than most people imagine.
You have described two different visions of AI - one in which it completely automates human work, and another in which it complements human capabilities. How can societies that will be affected by this technology resist the current direction (the first model)?
There are three main aspects here. What do we mean when we talk about this more humane approach that benefits workers? The key, in my opinion, is to use AI to expand the capabilities of diverse groups of workers with different skills to perform more complex tasks. The example I gave of a novice electrician who, with the help of AI, is now operating at the level of an expert, is indicative. Ultimately, the goal is to move from simpler to more complex and new tasks. I think the dream is that this should apply not only to journalists, scientists, and artists, but also to physical labor. Is this dream completely unrealistic? No - although it is not at the forefront, there are already companies developing such technologies, and much more than we realize is technologically possible.
Second, who has a say in this? I will tell you who obviously does not have a say: about 6 billion people outside the US, China, and the EU. For the vast majority of humanity, AI is and will remain something that "happens to them." But I don't see a way out of this. The only way out would be a much more cooperative international environment in which voices from different parts of the world are heard. But obviously, that's not the case. Even in the best possible world, this would still be a major challenge. So if we go in an alternative direction, we should not expect those 6 billion people to be able to contribute to it.
Who, then, can? Ultimately, innovators and the democratic process - the meeting of these two forces. In my opinion, we in the West have overused regulations in recent decades. But there are areas where regulation is really necessary, and AI is one of them. But regulations alone are not enough. If you impose regulations on a dynamic industry that completely rejects or does not accept them, it simply will not work. I believe that the technology sector itself needs to change its focus and values. When a head of state asked me about this, I replied: the problem with the future of AI will be solved tomorrow if, tomorrow morning, 60-70% of all researchers, entrepreneurs, and engineers in the field of AI say: "We want to prioritize AI that benefits working people, that benefits humanity." It's that simple. But I don't know how we would get there. Economic incentives are very important, but ideological frameworks are just as important.
What do you think about Yuval Noah Harari's theory that a time will come when AI will view humans the way Homo sapiens viewed Neanderthals, as a species of little value?
First, there is serious disagreement among scientists, entrepreneurs, and experts about when we will reach so-called AGI (Artificial General Intelligence), i.e., artificial intelligence capable of performing tasks in all areas as well as the best humans. Once this is achieved, it is assumed that machines will be able to program and improve themselves - a process that could lead to the so-called "singularity" or superintelligence (ASI, Artificial Superintelligence). Then comes the question of what its goals and capabilities will be, and how it will treat humans.
Personally, I think that achieving AGI will prove to be much more difficult than is commonly believed. Although many industry leaders disagree with me, I do not expect AGI within the next 10-20 years, and perhaps much later. And if we do achieve it, the transition beyond it, to ASI, will be even more difficult. And even then, it is unclear what the capabilities of such a system would be. So, yes, science fiction is interesting. Consume it, but don't worry too much.
Are tech giants becoming more powerful than governments? Can they still be controlled, and are we losing some of our freedom given the information they have about us and how they use it? Could this process become irreversible?
My answer is: yes and no. Yes, we should be concerned, but no, we are not there yet. Undoubtedly, we are already at a stage where these companies are huge in size and power. Google, Apple, and Amazon are about 100 times larger than Standard Oil was when the US authorities decided to break it up. These are truly giant companies, and they don't just dominate a particular business niche, they dominate and control information.
Similar to the different visions of digital technologies and AI, there is another division if we go back in time: centralized versus decentralized information. Soviet planning is all about centralized information. Hayek (Austrian economist Friedrich Hayek, known for his defense of liberal democracy and the free market) is a supporter of decentralization. The initial enthusiasm for computers was driven by the desire for decentralization. Many of the technological pioneers were inspired by the idea of destroying companies like IBM because they perceived them as overly centralized monopolies of information that stifled the revolutionary potential of IT.
Today, however, we are once again in a world of highly centralized information. But there is a glimmer of hope that technology is not yet advanced enough to effectively use this centralized information. Content moderation, for example, perhaps the ultimate form of centralized control, is proving extremely difficult. No company is even close to adequate content moderation. It is simply too complex. But this may change over time. The development of AI could make the processing and use of centralized information much more efficient, and then we would really have something to worry about.
And one more thing. It is true that Google is bigger than the average European country. But if the European Union adopts regulation, Google must comply with it, otherwise it will lose access to the European market. This means that European lawmakers still have serious power. But here's the catch. It's always a mistake to try to regulate a technology in which you are not a leader. Europe has fallen so far behind in digital technology and AI that, in my opinion, there is no way it can impose effective or adequate regulation. This is perhaps a separate issue, but Europe really needs to wake up and take joint leadership in digital technology.
Do you think there is something fundamentally different about social media that threatens democracy in a way never seen before? What are the consequences of this?
The answer has two layers. First, yes, misinformation is a real problem and, in my opinion, with the advent of AI, this problem will deepen. Ultimately, as AI becomes more powerful, it will also become a more effective tool for manipulation. And manipulation in general, of which disinformation is just one form, is toxic to democracy, to civil society, and to our ability to have an informed public debate. But I also believe that if this were the only problem, people would adapt. In fact, we are already adapting: more and more people can recognize fake videos, doctored images, and other scams. It's not perfect, but our ability to adapt is impressive.
However, I am increasingly convinced, although the evidence is not yet conclusive, that the greater challenge of social media is not disinformation, but that it is destroying our real social networks, those through which civil society is built. People create communities when they interact on different levels: physically, through conversations, debates, working together, participating in clubs, schools, and friendships. All of this requires time, effort, and commitment. But social media is replacing these real social connections with virtual ones, and people are not getting what is vital not only for their mental well-being but also for their development as citizens. That is the big challenge.
Many economists and businesspeople predict a global economic crisis, which was first postponed due to COVID-19 and then due to the current wars. How do you think a digital currency like Bitcoin will perform in such a situation?
We need to distinguish between three things: digital assets, digital currency, and cryptocurrency.
Anything we can do in analog form, we can also do and replicate digitally. So I see no reason why well-regulated and well-designed digital assets should not be part of our system, and I think they will be. Central banks should have digital currencies.
Whether these currencies should completely replace the alternatives is a separate debate. But in my opinion, digital currencies should be part of the portfolio. However, this will not fundamentally change international balances. Digital assets may change some aspects - for example, making countries more interdependent in terms of risks, or isolating certain risks more successfully. But I don't see a single real problem that cryptocurrencies solve; I don't see why cryptocurrencies should exist at all.
Everyone is surprised, even shocked, by the tariffs that the United States imposed a few months ago on many countries, including in Europe. How do you explain this behavior, and how will this economic war affect the European Union and the US, respectively?
First, to understand Trump's policy, we must ask ourselves whether his international policy is a goal in itself or is subordinate to his domestic goals. I am convinced it is the latter. There is no consistent international policy. Trump has a consistent and potentially dangerous domestic policy, a domestic agenda. It is linked to the establishment of an executive presidency, meaning a presidency that is much less constrained by the courts, the legislature, agencies, and civil society. And in this process, he also wants to concentrate power in his own hands, in the hands of his family, and in the hands of his close associates. I believe that many of his international policies are a consequence of this. He wants to reward his electorate in visible ways - sometimes real, but often illusory - for example, attempts to bring manufacturing back to the country, which is an important part of his promises to his voters. But even more, tariff policy is a very important part of this domestic program. Let's be clear: there is absolutely no economic theory that even comes close to justifying Trump's tariff policy. There are economic theories that say tariffs are good or bad. But if there are to be tariffs, they must be at reasonable levels and predictable, so that there is no uncertainty and no constant back-and-forth changes that make it difficult for businesses to plan, invest in the medium term, build supply chains, and so on.
Why would anyone introduce highly arbitrary tariffs that differ between countries, so that a company operating in Vietnam has an advantage while another in China is at a disadvantage, and that are so uncertain - tariffs can rise, fall, be negotiated, or change one way or another every month? Why would you do that? If your goal is to concentrate power in your own hands, this is a great way to do it. Because you already have enormous power over Bulgaria, over Romania, over the EU, over Vietnam. And that is very important. This way, you can get concessions on issues that interest you. And you can also look like a powerful statesman. But more importantly, you have enormous power over domestic companies, because with one decision you can destroy their supply chains and their entire business. This is the context in which I understand Trump's tariff policy and why it is moving in this dangerous direction. But because it has no economic logic, it will not be long-lasting. However, while it is in force and uncertainty remains, it will cause enormous damage.