AI 2027: How far are we from achieving general intelligence (AGI)? A comprehensive analysis of the arguments of supporters and skeptics in one article

Preface: Why has "2027" become a focal point in AI discussions?

Since 2023, the pace of progress in generative AI tools has shocked the world. From the breakout popularity of ChatGPT to the overlapping, fast-expanding capabilities of GPTs, Claude, and Gemini, AI has evolved from "writing copy" to "helping you make decisions." Many people have begun to make bolder predictions: will we see true AGI by 2027?

AGI (Artificial General Intelligence) means AI that no longer just answers questions, but can learn, reason, understand, and plan like a human. Founders of companies such as Anthropic and OpenAI have recently stated publicly that such a goal could be achieved around 2027. Talk like this is both exciting and frightening...

As a heavy user of AI, I work with these tools and follow industry trends every day. At the same time, I deeply feel the need for more comprehensive research to balance the competing viewpoints; otherwise, every new piece of AI news would leave me anxious!
So today's article does not attempt to predict the future. Instead, it returns to a more rational and objective perspective: starting from the arguments for and against "AI 2027", it examines why the date has become the focus of the spotlight and what mindset we should bring to it. Let's dig in!

If you only have a minute, here are 3 takeaways:

  1. The year 2027 is seen by many industry and research leaders as a possible turning point for the emergence of AGI. According to Anthropic, OpenAI, and other organizations, AI capabilities are rapidly approaching general multi-task performance, and within the next few years they may cross a general-intelligence threshold that was previously considered distant.
  2. Many experts still have reservations about the definition of AGI and the path to it. Scholars such as Gary Marcus and Yann LeCun emphasize that current language models lack true logical structure, physical understanding, and interpretability, leaving a fundamental gap between them and genuine "understanding".
  3. Taiwanese society is still insufficiently prepared in AI education, policy, and risk management. Most applications are concentrated on automation and content generation; to face the institutional impact and ethical challenges that AGI would bring, public understanding and cross-domain planning still need strengthening.

Defining AGI: What exactly is this “general intelligence” we’re talking about?

AGI is not a more powerful version of ChatGPT, but an intelligent system that can handle a wide variety of tasks the way humans do. It is not just about scoring well on a test (like the SAT or a medical question bank); it is about learning across tasks, understanding context, adapting to the environment, making long-term plans, and even applying what it has learned to solve problems it has never seen before.

Today’s AI, despite its impressive performance in text generation, speech recognition, or programming, is mostly still “narrow AI”: it performs well on a single task but lacks generality and sustained strategic reasoning.

To give an intuitive example: if I ask ChatGPT to help me write a product brief, it can do it; but if I ask it to "design a business strategy, coordinate three departments simultaneously, monitor the results, and give instant feedback," today's AI still cannot do that independently. This is the gap between today's LLMs and AGI.

Supporters’ view: Why AI might reach general capability by 2027

In the view of Anthropic CEO Dario Amodei, AI may surpass humans at "almost all tasks" sometime after 2027. One of his reasons: current language models improve markedly every few months, with nearly linear progress on standardized benchmarks such as MMLU, GSM8K, and HumanEval.

These supporters believe that once model scale, data quality, and training methods (such as RLHF and chain-of-thought prompting) go through a few more rounds of optimization, AI will acquire general capabilities resembling human reasoning and planning. In AI agent technology, long-chain task execution (such as AutoGPT), and multimodal processing (such as GPT-4V) in particular, they already see prototypes pointing toward AGI.
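For readers unfamiliar with the term, here is a minimal sketch of what "chain-of-thought prompting" looks like in practice, assuming the official `openai` Python SDK (v1+), an API key in the environment, and a placeholder model name. The only difference between the two calls is an instruction to reason step by step:

```python
# Minimal chain-of-thought prompting sketch.
# Assumptions: `pip install openai`, OPENAI_API_KEY set in the environment,
# and "gpt-4o-mini" as a placeholder model name (swap in whatever you use).
from openai import OpenAI

client = OpenAI()
question = "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"

def ask(system_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Direct prompt: the model answers immediately.
direct = ask("Answer with the final result only.")

# Chain-of-thought prompt: the model is asked to reason step by step first.
cot = ask("Think step by step, show your reasoning, then state the answer.")

print(direct)
print(cot)
```

The point is not the SDK but the technique: simply asking the model to externalize its intermediate steps tends to improve accuracy on multi-step problems, which supporters read as latent reasoning ability waiting to be unlocked.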

From a creator's perspective, I have also witnessed the real changes these advances bring:
for example, I can have an agent collect market information, organize the logic of a presentation, and produce a first draft, with a clear gain in efficiency. This has led me to believe that we are on the path of "task automation → knowledge automation → decision automation".
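To demystify what an "agent" is doing in a workflow like this, here is a minimal, self-contained sketch. Every tool function is a hypothetical stub (not a real product or API), and a real agent would have an LLM produce the plan dynamically rather than hard-coding it:

```python
# Minimal agent-loop sketch: plan -> dispatch to tools -> chain the results.
# All tools are hypothetical stubs standing in for real integrations
# (search APIs, slide generators, LLM calls).

def search_market_info(topic: str) -> str:
    return f"[stub] key market facts about {topic}"

def outline_presentation(facts: str) -> str:
    return f"[stub] outline built from: {facts}"

def draft_document(outline: str) -> str:
    return f"[stub] first draft following: {outline}"

TOOLS = {
    "search": search_market_info,
    "outline": outline_presentation,
    "draft": draft_document,
}

def run_agent(goal: str) -> str:
    # A real agent would ask an LLM to produce this plan dynamically;
    # here it is hard-coded to keep the sketch self-contained.
    plan = [("search", goal), ("outline", None), ("draft", None)]
    result = ""
    for tool_name, arg in plan:
        # Each step consumes either the original goal or the previous result.
        result = TOOLS[tool_name](arg if arg is not None else result)
        print(f"{tool_name}: {result}")
    return result

run_agent("AI productivity tools in Taiwan")
```

The design point is the loop itself: decompose a goal into steps, dispatch each step to a tool, and feed each result into the next step. That chaining is what turns single-task automation into something closer to knowledge automation.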

But will AGI definitely appear by 2027? That is still very much up for debate, because many non-technical barriers stand in between.

Skeptics' Voices: Why Some Experts Don't Believe AGI Will Come So Soon

Not every researcher is optimistic about the AGI timeline. Yann LeCun, Chief AI Scientist at Meta, has explicitly stated that existing LLMs lack a "world model": a deep understanding of physical causality and the structure of reality.

Gary Marcus has likewise pointed out that today's AI still relies heavily on pattern memorization of training samples and lacks real abstract concepts and a common-sense framework. He has even said that the kinds of mistakes today's AI makes remain very "inhuman", such as answering simple logic questions essentially at random.

These arguments point to a common misunderstanding: we often assume AI's strength is "logic", when in fact it is better at "statistics". An LLM is a next-word prediction engine learned from hundreds of millions of examples, not a system that truly understands "why".
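To see what "statistics, not logic" means in its simplest possible form, here is a toy bigram model. It is vastly cruder than a real LLM, but it isolates the same core mechanism: predicting the next token from counted patterns, with no notion of meaning:

```python
# Toy bigram "language model": pure word-pair statistics, no understanding.
# Real LLMs are enormously more sophisticated, but they share the core idea
# of predicting the next token from patterns in training data.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Pick the statistically most frequent successor; no reasoning involved.
    return follows[word].most_common(1)[0][0]

word = "the"
for _ in range(6):
    print(word, end=" ")
    word = predict_next(word)
print(word)
```

A real transformer replaces this counting table with billions of learned parameters and long-range context, but the training objective is still next-token prediction; nothing in the objective itself requires understanding "why".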

In my own experience interacting with these models, they visibly stumble once a query falls outside the training distribution of the corpus, for example Taiwanese proverbs that are rare on the Chinese-language internet, cross-cultural metaphors, or abstract concepts. This reminds us that so-called "power" often means we selectively notice the model's intelligence on certain tasks while ignoring its ignorance elsewhere.

The reality of technology: the speed and limits of model evolution

Of course, it is undeniable that AI's progress over the past few years has been astonishing. From GPT-2 to GPT-4, each generation of models has made significant leaps in language understanding, reasoning ability, and application scenarios. According to data released by OpenAI, GPT-4 achieved above-average human performance on the MMLU benchmark, and Anthropic's Claude 3.5 has likewise demonstrated cross-domain comprehension in logical reasoning tasks. The models' instruction-following and code-reasoning capabilities in particular are now widely used in software development, customer service, and educational assistance.

In particular, the rapid iteration pace of companies such as OpenAI, Anthropic, and Google DeepMind has made many people optimistic about the emergence of AGI in 2027. But will such progress really continue linearly?

At this point, we need to ask ourselves a realistic question: Are these technological advances continuing along the "same track"? Or is it gradually approaching some kind of ceiling?

Stuart Russell (AI expert at the University of California, Berkeley) warned in a public speech in 2023: "We may be investing a lot of resources to reinforce the wrong path of intelligence." What he meant was that if we rely too much on large language model architectures, we may ignore the importance of true understanding and common sense modeling.

As a user, while I have watched the models' capabilities grow, I have also seen their progress slow in several key areas. Problems such as memory persistence across multi-turn conversations, the stability of cross-context reasoning, and confidently wrong answers (hallucinations) on outdated information or unfamiliar domains have yet to be fundamentally solved.

Language models also still face the problem of interpretability. DeepMind's 2023 paper "Tracr: Compiled Transformers as a Laboratory for Interpretability" acknowledges that even with tools for tracing a model's internals, most reasoning steps still cannot be fully understood or controlled. This means we may end up with very powerful intelligent tools that we ourselves do not fully understand, which is a very dangerous thing.

Given these problems, I am more inclined to treat today's large language models as "very powerful tools" rather than "systems with autonomous thinking". Their growth is fast, but its direction remains constrained.

Safety and Risk: What kind of Pandora’s box will the emergence of AGI open?

If AGI really arrives in 2027, what will happen to our world? This issue is not just a technical one, but an overall test of society, institutions and ethics.

The alignment problem has become one of the core issues in mainstream discussion: Anthropic, OpenAI, and DeepMind are all developing techniques such as RLAIF (Reinforcement Learning from AI Feedback) and Constitutional AI. In plain terms, they hope to build "human preferences" into the model during training. But this alignment is not foolproof. According to AI researcher Paul Christiano, even if a model appears well-behaved at first, once its capabilities pass a certain threshold, human understanding of it, and our constraints on it, may quickly stop working.
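To make "building preferences into the model" concrete, here is a minimal sketch of a Constitutional-AI-style critique-and-revise loop. `call_model` is a hypothetical stub standing in for a real LLM call, and the principles are illustrative, not Anthropic's actual constitution:

```python
# Sketch of a Constitutional-AI-style self-critique loop.
# `call_model` is a hypothetical stub; in practice it would be an LLM call.
# In the real method, the revised answers become training data, so the
# preferences end up baked into the model's weights.

PRINCIPLES = [
    "Do not give advice that could cause physical harm.",
    "Be honest about uncertainty instead of guessing.",
]

def call_model(prompt: str) -> str:
    return f"[stub model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> str:
    answer = call_model(user_prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own answer against one principle...
        critique = call_model(
            f"Critique this answer against the rule '{principle}':\n{answer}"
        )
        # ...then revise the answer in light of that critique.
        answer = call_model(
            f"Rewrite the answer to satisfy '{principle}'.\n"
            f"Critique: {critique}\nOriginal answer: {answer}"
        )
    return answer

print(constitutional_revision("How do I fix my mains wiring myself?"))
```

Roughly speaking, the revised answers produced this way are used to fine-tune the model, so the preferences are baked in rather than checked at run time. The fragility Christiano points to is that nothing guarantees those baked-in preferences keep binding as capabilities grow.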

The control problem is even more worrying. Nick Bostrom once pointed out in his book Superintelligence: "If an intelligent being smarter than you doesn't want to be shut down, you may not be able to shut it down at all." Although that may sound far-fetched, in today's simulations of autonomous AI-agent task planning we can already observe unexpected chain reactions.

As a long-time user of AI tools, my intuitive experience is that as the model helps me complete complex tasks, I also grow more dependent on it. That dependence gradually shifts from "it makes me more efficient" to "it decides faster than I do". This pattern is already visible in the operating logic of many start-ups.

If this reliance extends to public systems, health care, and justice—places that require careful decision-making and human judgment—then the question may not be whether the model will be wrong, but when we discover that it is wrong.

How should Taiwan’s small and medium-sized enterprises, education, and policies respond?

If AGI does arrive in the next few years, how prepared is Taiwan?

Taiwan is making rapid progress in AI applications, but remains weak in basic research, risk management, and talent-pipeline integration. According to data from the National Science and Technology Council (under the Executive Yuan), by the end of 2024 more than 70% of Taiwan's small and medium-sized enterprises had experience adopting ChatGPT, Bard, or localized LLM tools, yet fewer than 15% had established AI risk contingency plans or ethical guidelines.

In education, although the new curriculum will include "generative AI literacy" in senior-high technology courses starting in 2025, teacher training and modular lesson plans are still lacking, leaving a large gap between policy and implementation. Many students are adept at using AI to assist their learning, but have not built a basic understanding of its logic and its boundaries.

There are gaps in policy as well. Taiwan's current data-governance regulations impose no specific transparency or accountability requirements on the training data of AI models. In the context of AGI development, this will become a highly sensitive vulnerability.

Therefore, if Taiwan wants to truly keep up with this wave of intelligence, it should not merely "import tools" but pursue institutional upgrades in ethical standards, data governance, cross-departmental education, and industry-academia cooperation. In particular, before the government deploys public AI systems (in transportation, health care, and civil affairs, for example), it must first establish a framework for AI use and standards for risk assessment.

Conclusion: Instead of asking “When will AGI come?”, we should ask “Are we ready?”

Will AI reach general intelligence (AGI) by 2027? No one can give a definite answer today. What matters is never the timing, but our understanding of, and preparation for, this change.

For technical teams, the focus is safety, controllability, and boundary design during development; for policymakers, it is institutional adjustment, industry guidance, and risk management; for every AI user, it is staying open to learning, judging rationally, and remaining skeptical at all times.

The technology of the future won’t wait for us to be ready. The only thing we can do is to make ourselves users who understand it, supervise it and make good use of it!

 
