AI Creator's Path News: Surprisingly, the latest AI reflects the views of the Chinese Communist Party! About bias issues and the future of AI. #AIethics #ChineseAI #BiasedAI
Is your AI actually "biased"? Is American AI also learning Chinese propaganda?
Hello, I'm John, a blog writer who explains AI technology in an easy-to-understand way!
Recently, we have been using chat AI more and more, from researching for work to asking for advice on today's menu. It is always ready to answer our questions, just like a wise friend who knows everything. We tend to believe that the answers given by AI are "neutral" and "objective."
But what if the AI was biased towards the thinking of a particular country?
A recently published report has shocked the AI industry: some of the most popular AI models we use tend to generate answers that reflect the Chinese Communist Party's official line. What's more, surprisingly, this includes well-known American AI models.
Today, I would like to delve deeper into this news so that even those who are new to AI can understand it.
What's inside this shocking report?
For the report, researchers tested five popular AI models. The article does not reveal which ones, but they appear to be models in wide use across the industry.
The survey revealed two main points:
- Pro-Chinese government responses: When asked politically sensitive questions about China, the AI tended to give answers close to, or identical to, the official stance of the Chinese Communist Party.
- Censorship of information: On the other hand, when it came to information or topics inconvenient for the Chinese government, the models seemed to engage in a kind of "censorship," avoiding the question or sticking to inoffensive responses.
What's particularly surprising is that this isn't limited to AI developed in China. According to the report, even cutting-edge AI models developed by American companies showed similar tendencies. So why does this happen?
Why does this happen? How AI "learns"
It may feel scary that AI could be influenced by a particular ideology. However, this is thought to stem from the basic way AI "learns," rather than from anyone maliciously steering the AI.
AI, especially chat AI, is like a student studying in a huge library. It reads vast amounts of text from the Internet, such as news articles and websites, as its "textbooks," and from them learns linguistic patterns and knowledge. The technical term for these textbooks is "training data."
In the world of AI, there is a famous saying: "garbage in, garbage out." If you train a model on poor-quality data, you will get poor-quality results.
Let's apply this to our current case.
- AI learns by collecting data from the Internet around the world.
- This data includes a huge amount of information that is publicly available within China. In China, information on the Internet is strictly controlled (censored) by the government, so much of the publicly available information reflects the official government position.
- AI will learn such "biased" information as "correct knowledge."
- As a result, when we ask it questions, the AI responds based on what it has learned, sometimes sounding as if it were repeating Chinese propaganda.
In other words, the AI is not lying intentionally, but rather the data it learned was biased, resulting in a biased answer.
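The "garbage in, garbage out" idea can be illustrated with a deliberately tiny toy model. This is not how a real chat AI works internally; it is a minimal sketch, with made-up topic and stance labels, showing that a model which learns the majority pattern in its training data will simply echo whatever viewpoint dominates that data:

```python
from collections import Counter

# Toy "training data": (topic, stance) pairs scraped from a hypothetical
# corpus. One viewpoint dominates, so the minority view is underrepresented.
training_data = [
    ("topic_x", "stance_a"),
    ("topic_x", "stance_a"),
    ("topic_x", "stance_a"),
    ("topic_x", "stance_b"),  # the minority viewpoint, outnumbered 3 to 1
]

def train(data):
    """Learn, for each topic, the most frequent stance in the data."""
    counts = {}
    for topic, stance in data:
        counts.setdefault(topic, Counter())[stance] += 1
    return {topic: c.most_common(1)[0][0] for topic, c in counts.items()}

model = train(training_data)
print(model["topic_x"])  # prints "stance_a": the model echoes the majority
```

The model is not "lying"; it faithfully reproduces the imbalance in its input, which is exactly the mechanism the report describes on a much larger scale.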
What we can do: How to interact well with AI
This news teaches us, as users of AI, something important: don't just accept an AI's answers at face value.
AI is a very useful tool, but it is by no means omniscient or perfect. Particular care is needed when asking about social issues or political topics. It is becoming increasingly important to bring a critical perspective (critical thinking): treat an AI's answer as just one reference opinion, compare it with multiple reliable sources (for example, trusted domestic and international news sites), and ultimately make your own judgment.
Now, this is just my personal opinion, but I feel this report shows once again that AI is a "mirror" of human society. AI merely reflects the vast amount of information we have generated on the Internet. If that data is biased, the AI will be biased too. That is why I strongly feel that we, the users, need to become smarter.
As we enter an age where coexistence with AI will become the norm, each and every one of us will need the literacy to correctly understand its characteristics and interact with it effectively.
This article is based on the following original articles and is summarized from the author's perspective:
Top AI models – even American ones – parrot Chinese propaganda, report finds
