Is AI good at "pretending to be smart"? The true intelligence of AI like ChatGPT and what the future holds
Hello, I'm John, and I explain AI technology in an easy-to-understand way. Conversational AI such as ChatGPT is evolving at an incredible pace. Many people are probably amazed that it can converse just like a human. But are these AIs actually "thinking" when they speak? Perhaps they are just "pretending" to be incredibly intelligent.
Today, I'd like to introduce the secrets behind the "intelligence" of AI, as well as some new ideas for making AI even smarter. I'll explain everything in a way that's easy to follow even without specialized knowledge!
What is the chatty AI called an "LLM" really good at?
First of all, AI like ChatGPT is called an LLM (Large Language Model): AI that is good at reading vast amounts of text and predicting which words are likely to come next. For example, if you input "It's a nice day today, so..." it will return a natural continuation like "Let's go on a picnic!" or "The laundry will dry well."
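To get an intuition for "predicting the next word," here is a deliberately tiny sketch that counts which word follows which in a toy corpus. Real LLMs use huge neural networks, not simple counts like this, so treat it purely as an illustration of the idea.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text an LLM is trained on.
corpus = (
    "it's a nice day today so let's go on a picnic . "
    "it's a nice day today so the laundry will dry well . "
    "it's a nice day today so let's go on a picnic ."
).split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after the given word."""
    return following[word].most_common(1)[0][0]

print(predict_next("so"))  # "let's" follows "so" twice, "the" only once
```

Even this toy model "sounds natural" within its tiny corpus, which hints at why scaled-up prediction can feel so intelligent without any real understanding.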
Recently, there have been news reports that LLMs have finally acquired the ability to think (reason), such as OpenAI's "o series" and DeepSeek's "R1" from China. However, some experts say this is not "thinking" in the true sense; it is merely advanced text prediction with some handy features attached.
Moreover, the race to develop AI is fierce, with cheaper and more powerful models appearing one after another. Some models are so impressive you'll think, "Wow, they can be made that cheaply!?" The AI industry is practically at war. But price isn't the only thing that matters. If the fundamental problems LLMs face remain unresolved as things move forward, we may be facing a slightly worrying future.
Is an LLM alone enough? What are AI's "weaknesses"?
No matter how much LLMs have evolved, some problems remain unsolved. The most famous of these is "hallucination." This is the phenomenon where an AI tells a plausible-sounding lie. For example, if you ask about a historical figure, it might describe them as having said things they never actually said. That's troubling, isn't it?
Other weaknesses of LLMs include:
- Information tends to be out of date: An LLM only has information up to the point it was trained, so it may not be able to answer questions about recent news or events. And retraining it takes time and money.
- It's hard to understand "Why did you answer that?": The basis for an LLM's answers can be difficult for humans to understand. Even if you ask, "Why did you arrive at this answer?", it may not be able to give a clear explanation.
For example, imagine this scene.
- Financial fraud checks: If you ask an LLM, "Does this transaction look suspicious?" it might answer, "Yes, it resembles a pattern of past fraud." But the LLM doesn't really understand the complex relationships between accounts or the hidden flow of fraudulent transactions. It just gives you a "plausible" answer based on past data.
- Deciding on drug combinations: Suppose you ask an LLM for its opinion on a new drug combination. If it says, "This combination increased efficacy by 30%," you might be tempted to take its word for it. But if it overlooked a serious side effect, or if the two drugs were never even tested together in clinical trials, you could be in big trouble.
- Responding to cyber attacks: Suppose a company's security officer asks an LLM, "There was unauthorized access to the network. What should I do?" The LLM may come up with a plausible solution, but it won't necessarily match the company's system configuration, the latest threat information, or the rules that must be followed. Simply trusting the AI's advice could actually make the situation more dangerous.
- Predicting future risks to the company: Suppose you ask an LLM, "What will be the biggest economic risks for our company next year?" The LLM might provide an answer based on data from past economic crises, but it has no real-time economic trends, new laws, or industry-specific risks. Without up-to-date information from within the company, its answer may be nothing more than a "good guess."
Even when the LLM gives a confident answer like this, you need to judge carefully whether it is really correct and appropriate for the situation. Especially in situations involving people's lives or important company information, relying solely on an LLM can be a bit scary.
Enter the savior: Knowledge Graph. What is it?
You may be thinking, "So, are LLMs no good after all?" But that's not the case! There is a powerful helper that can make up for an LLM's weaknesses and make it smarter: the "Knowledge Graph."
Knowledge graphs may sound difficult, but in simple terms, a knowledge graph is a record that organizes and clearly stores the "connections" between pieces of information. For example, it's like connecting statements such as "John writes an AI blog" and "The AI blog is for beginners" with lines to create a diagram. A family tree can also be considered a type of knowledge graph, recording the relationships between people.
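The "diagram of connections" above can be sketched as a list of (subject, relation, object) facts, often called triples. This is a minimal toy, and the names ("John", "AI blog") are just the illustrative ones from the example; real knowledge graphs live in dedicated graph databases.

```python
# A tiny knowledge graph as (subject, relation, object) triples.
triples = [
    ("John", "writes", "AI blog"),
    ("AI blog", "is_for", "beginners"),
    ("John", "interested_in", "knowledge graphs"),
]

def related(entity):
    """Return every fact whose subject or object is the given entity."""
    return [t for t in triples if entity in (t[0], t[2])]

print(related("AI blog"))
# [('John', 'writes', 'AI blog'), ('AI blog', 'is_for', 'beginners')]
```

The key point is that each fact is stored explicitly, so you can follow the lines from one entity to another instead of relying on a statistical guess.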
While LLMs are good at manipulating "words," knowledge graphs are good at grasping the "relationships" and "structures" between things. By combining the two, AI can become smarter and more trustworthy!
What's great about LLMs and knowledge graphs teaming up?
One of the technologies that combines LLMs and knowledge graphs is "RAG (Retrieval-Augmented Generation)." This is a mechanism in which, before the LLM gives an answer, it first retrieves relevant and accurate information from the knowledge graph and uses that information as a reference to generate the answer. It's like the LLM answering a question while looking at a clever "cheat sheet" (the knowledge graph).
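Here is a minimal sketch of the RAG idea: retrieve facts related to the question, then paste them into the prompt that would be sent to the LLM. The keyword matching and the sample facts are simplifications of my own; production RAG systems use proper graph queries or semantic search, and the final step would call an actual LLM API.

```python
# Toy knowledge graph of facts the LLM can consult before answering.
knowledge_graph = [
    ("server-01", "runs_in", "AWS"),
    ("server-02", "runs_in", "AWS"),
    ("AI blog", "written_by", "John"),
]

def retrieve(question):
    """Naive retrieval: keep triples sharing a word with the question."""
    words = set(question.lower().split())
    return [t for t in knowledge_graph
            if any(part.lower() in words for part in t)]

def build_prompt(question):
    """Assemble the 'cheat sheet' plus the question for the LLM."""
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in retrieve(question))
    return f"Use these facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("How many servers run in AWS"))
```

Because the retrieved facts are placed directly in the prompt, the LLM answers with grounded data in front of it rather than from memory alone, which is what reduces hallucination.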
This combination has the following advantages:
- More accurate and reliable answers: The vague knowledge of LLM is combined with the "fact-based information" of the Knowledge Graph, reducing hallucination and providing more accurate answers.
- Easy to keep up with the latest information: When new information comes out, instead of retraining the entire LLM, you can simply update the knowledge graph, making it easier to keep the LLM's answers up to date.
- The "why?" becomes easier to understand: If you know what information in the Knowledge Graph LLM used as a reference, it will be easier to trace the reasoning behind why it arrived at a particular answer.
- Company confidential information stays safe: Since a knowledge graph can be managed within a closed network (an internal network sealed off from the outside), there is less risk of important information leaking.
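The second advantage above, staying current without retraining, can be shown in a few lines: adding knowledge is just appending a fact. The "NewModel-X" entry is a made-up placeholder purely for illustration.

```python
# Keeping a knowledge graph current: adding a fact is a one-line update,
# with no expensive model retraining involved.
facts = [("GPT-4", "released_in", "2023")]

def answer(entity):
    """Look up facts about an entity; admit ignorance if none exist."""
    matches = [f for f in facts if f[0] == entity]
    return matches or "I don't know"

print(answer("NewModel-X"))  # "I don't know" — the graph hasn't heard of it
facts.append(("NewModel-X", "released_in", "2025"))  # hypothetical new fact
print(answer("NewModel-X"))  # now answered from the updated graph
```

Note the honest "I don't know" when nothing matches: a graph lookup either finds a recorded fact or fails cleanly, whereas an LLM alone might confabulate an answer.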
For example, suppose you ask, "How many servers are running in our company's AWS (Amazon's cloud service) account?" With only an LLM, you might get an abstract answer about "counting the number of servers." However, if you connect it to a knowledge graph in which the company's server configuration is recorded in a database (a place to organize and store information), the AI can refer to it and answer with the exact number, such as "There are XX servers."
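A toy version of that AWS example: when each server is recorded as a fact, the count is an exact lookup rather than a guess. The inventory entries here are invented for illustration.

```python
# Each asset is recorded explicitly, so counting is exact, not estimated.
inventory = [
    ("server-01", "hosted_on", "AWS"),
    ("server-02", "hosted_on", "AWS"),
    ("server-03", "hosted_on", "AWS"),
    ("printer-01", "located_in", "office"),
]

aws_servers = [s for s, relation, place in inventory
               if relation == "hosted_on" and place == "AWS"]
print(f"There are {len(aws_servers)} servers on AWS.")
# There are 3 servers on AWS.
```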
Summary: The future of AI lies in "smart combinations"!
The recent evolution of AI has been remarkable, but with LLMs alone, the road to "AI that can do anything like a human" (AGI: Artificial General Intelligence) still seems a bit long. Rather, it is more realistic to skillfully combine the strengths of LLMs with those of other technologies, such as knowledge graphs, complementing each other's weaknesses to create AI that is useful in our daily lives and work.
Just as a carpenter uses different tools such as a saw, hammer, and plane, we may be entering an era in which AI will be able to "cleverly combine and use" a variety of technologies.
I am personally very excited to see how AI will evolve in the future and how it will change our lives. In particular, I am excited that by combining different AI technologies with different "specialties" such as LLM and knowledge graphs, we may be able to find answers to difficult problems that have not been solved until now. It seems that the possibilities of AI are still expanding!
This article is based on the following original article and is summarized from the author's perspective:
LLMs aren't enough for real-world, real-time projects
