Is AI good at "pretending to be smart"? The true intelligence of AI like ChatGPT and what the future holds
Hello, I'm John, and I'll be explaining AI technology in an easy-to-understand way. Recently, "talkative AI" such as ChatGPT has been evolving at a tremendous pace. I'm sure many people are amazed that they can converse just like humans. But are these AIs really "thinking" when they speak? Maybe they're just pretending to be very smart.
Today, I'll explain the secret behind AI's intelligence, as well as new ideas for making AI even smarter, in a way that's easy to understand even for those without specialized knowledge!
What is the chattering AI "LLM" really good at?
First of all, AI like ChatGPT is called an LLM (Large Language Model): AI that is good at reading lots of text and predicting which words are likely to come next. For example, if you input "It's a nice day today, so..." it will return a natural continuation like "let's go on a picnic!" or "the laundry should dry well."
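The "predict the next word" idea can be sketched with a toy frequency table. This is only an illustration: real LLMs use neural networks with billions of parameters, and the counts below are made up.

```python
# Toy sketch of next-word prediction, the core trick behind LLMs.
# Real models learn probabilities with neural networks; here we just
# use a hand-made frequency table (all counts are invented).
next_word_counts = {
    "nice day today, so": {"let's": 7, "the": 3},
    "so let's": {"go": 8, "have": 2},
}

def predict_next(context: str) -> str:
    """Return the word that most often followed this context in 'training'."""
    candidates = next_word_counts.get(context, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

print(predict_next("nice day today, so"))  # → let's
```

Everything an LLM says is, at heart, a far more sophisticated version of this lookup.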
Recently, there have been news reports that LLMs have finally acquired the ability to think (reason), such as OpenAI's "o series" and Chinese company DeepSeek's "R1." However, some experts argue that this is not "thinking" in the true sense of the word; it is merely advanced text prediction with some handy features attached.
Moreover, the race to develop AI is fierce, with cheaper and more powerful models appearing one after another. Some are so impressive you'll think, "Wow, they can be made that cheap!?" The AI industry is in a state of all-out war. But price isn't the only thing that matters. If development keeps racing ahead without resolving the fundamental problems LLMs face, we may be headed for a slightly worrying future.
Is an LLM alone enough? What are the "weaknesses" of AI?
No matter how much LLMs have evolved, some problems remain unsolved. The most famous of these is "hallucination": the phenomenon where AI tells plausible lies. For example, if you ask it about a historical figure, it may talk as if that person said something they never actually said. Troubling, isn't it?
Other weaknesses of LLMs include:
- Information tends to be out of date: An LLM only has information up to the point it was trained. It may therefore be unable to answer questions about recent news or events, and retraining it takes time and money.
- It's hard to understand "why did you answer that?": The basis for an LLM's answers can be difficult for humans to trace. Even if you ask, "Why did you arrive at this answer?", it may not be able to give a clear explanation.
For example, imagine this scene.
- Financial fraud checks: If you ask an LLM, "Does this transaction look suspicious?" it might answer, "Yes, it resembles a pattern of past fraud." But the LLM doesn't really understand the complex relationships between accounts or the hidden flow of fraudulent transactions. It just gives you a "plausible" answer based on past data.
- Deciding on drug combinations: Let's say you ask an LLM for its opinion on a new drug combination. If it says, "This combination increased efficacy by 30%," you might be tempted to take its word for it. But if it overlooked a serious side effect, or if the two drugs were never even tested together in clinical trials, you could be in big trouble.
- Responding to cyber attacks: Let's say a company's security officer consults an LLM: "There was unauthorized access to our network. What should we do?" The LLM may come up with a plausible countermeasure plan, but it won't necessarily match the company's system configuration, the latest threat intelligence, or the rules that must be followed. Blindly believing the AI's advice could actually make the situation more dangerous.
- Predicting future risks to the company: Let's say you ask an LLM, "What is the biggest economic risk to my company next year?" It might give an answer based on data from past economic crises, but it doesn't have real-time economic trends, new legislation, or industry-specific risks. It has no up-to-date information from inside your company, so its answer may be nothing more than a "good guess."
Even if the LLM gives you a confident answer like this, you need to carefully judge whether it is really correct and appropriate for the situation. Especially in situations that involve people's lives or important company information, it can be a bit scary to rely solely on the LLM.
Enter the savior: Knowledge Graph. What is it?
You may be thinking, "So, are LLMs no good after all?" But that's not the case! There is a powerful helper that can make up for the weaknesses of LLMs and make them smarter: the "knowledge graph."
Knowledge graphs may sound difficult, but in simple terms, a knowledge graph is a record that organizes and clearly captures the "connections" between pieces of information. For example, facts like "John writes an AI blog" and "AI blogs are for beginners" are connected with lines to create a diagram. A family tree can also be considered a type of knowledge graph that records the relationships between people.
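A knowledge graph can be sketched in a few lines of code as a list of (subject, relation, object) facts, often called triples. The facts below just mirror the article's example:

```python
# A minimal knowledge graph as (subject, relation, object) triples.
triples = [
    ("John", "writes", "AI blog"),
    ("AI blog", "is for", "beginners"),
]

def objects_of(subject: str, relation: str) -> list:
    """Find everything connected to `subject` via `relation`."""
    return [o for (s, r, o) in triples if s == subject and r == relation]

print(objects_of("John", "writes"))  # → ['AI blog']
```

Because each fact is an explicit, named connection, a program can follow the lines exactly, rather than guessing from word statistics the way an LLM does.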
While LLMs are good at manipulating "words," knowledge graphs are good at grasping the "relationships" and "structures" between things. By combining the two, AI can become smarter and more trustworthy!
What's great about LLMs and knowledge graphs teaming up?
One of the technologies that combines LLMs and knowledge graphs is "RAG (Retrieval-Augmented Generation)." Before the LLM gives an answer, RAG first finds relevant, accurate information from the knowledge graph and uses it as reference material to generate the answer. It's like the LLM answering a question while looking at a clever "cheat sheet" (the knowledge graph).
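The RAG flow above can be sketched roughly as "retrieve facts first, then hand them to the model as context." Everything here is illustrative: `ExampleCorp` and its facts are made up, the retrieval is deliberately naive, and `call_llm` is a placeholder where a real model API would go.

```python
# Rough sketch of the RAG flow: retrieve facts from a knowledge graph,
# then pass them to the LLM as grounding context.
knowledge_graph = {
    ("ExampleCorp", "founded_in"): "2010",       # hypothetical facts
    ("ExampleCorp", "headquarters"): "Tokyo",
}

def retrieve(question: str) -> list:
    """Naive retrieval: return facts whose subject appears in the question."""
    facts = []
    for (subject, relation), obj in knowledge_graph.items():
        if subject.lower() in question.lower():
            facts.append(f"{subject} {relation.replace('_', ' ')} {obj}")
    return facts

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"(model answer grounded in: {prompt})"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(rag_answer("When was ExampleCorp founded?"))
```

Real systems replace the string-matching step with graph queries or vector search, but the shape is the same: the "cheat sheet" is assembled before the model ever speaks.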
This combination has the following advantages:
- More accurate and reliable answers: The LLM's vague knowledge is combined with the knowledge graph's fact-based information, reducing hallucinations and producing more accurate answers.
- Easy to keep up with the latest information: When new information comes out, you can simply update the knowledge graph instead of retraining the entire LLM, making it easier to keep answers up to date.
- The "why?" becomes easier to understand: If you know which information in the knowledge graph the LLM used as a reference, it becomes easier to trace why it arrived at a particular answer.
- Company confidential information is also safe: Since knowledge graphs can be managed within a closed network (a company's closed information space, internal network), there is less risk of important information leaking to the outside.
For example, take the question, "How many servers are in our company's AWS account?" (AWS is a cloud service provided by Amazon). With an LLM alone, you might only get an abstract reply like "you would have to count them." But if the LLM is linked to a knowledge graph backed by a database (a box that organizes and stores information) recording the company's server configuration, it can answer with the exact number: "There are XX servers."
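The difference is easy to see in code: instead of the model guessing, the number comes from counting entries in a structured record. The inventory below is hypothetical; in practice it might come from a configuration database or a cloud provider's API.

```python
# Answering "how many servers?" from structured data instead of guessing.
# The inventory below is a made-up stand-in for a real configuration database.
server_inventory = [
    {"name": "web-1", "role": "web"},
    {"name": "web-2", "role": "web"},
    {"name": "db-1", "role": "database"},
]

def count_servers(inventory: list) -> int:
    """Count servers directly from the structured record."""
    return len(inventory)

print(count_servers(server_inventory))  # → 3
```

The LLM's job shifts from inventing a number to phrasing a number that was looked up exactly.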
Summary: The future of AI lies in "smart combinations"!
Although recent advances in AI have been remarkable, reaching "AI that can do anything like a human" (AGI: Artificial General Intelligence) with LLMs alone still looks like a difficult road. A more realistic path is to combine the strengths of LLMs with those of other technologies, such as knowledge graphs, so that each makes up for the other's weaknesses. That is what will lead to AI that is genuinely useful in our lives and work.
Just as a carpenter uses different tools such as a saw, hammer, and plane, we may be entering an era in which AI will be able to "cleverly combine and use" a variety of technologies.
I am personally very excited to see how AI will evolve in the future and how it will change our lives. In particular, I am excited that by combining different AI technologies with different "specialties" such as LLM and knowledge graphs, we may be able to find answers to difficult problems that have not been solved until now. It seems that the possibilities of AI are still expanding!
This article is based on the following original articles and is summarized from the author's perspective:
LLMs aren't enough for real-world, real-time projects