AI’s Imagination: The Art of Inducing and Suppressing Hallucinations
“Does AI really lie, or does it dream?”
If you’ve used a conversational chatbot like ChatGPT recently, you’ve probably had this experience at least once.
You ask for accurate information, and it gives you plausible-sounding facts you’ve never heard of.
It’s fascinating at first, but it can be unsettling when you realize the information isn’t true.
We call this phenomenon “AI hallucination.”
The word itself means seeing things that aren’t there, but for AI it has a slightly more specific meaning: the model confidently produces information that isn’t true.
To be clear, the AI has no intention of distorting information. It simply does its best to meet the user’s request, drawing on countless sentences and patterns, and so it produces text that merely sounds plausible.
That’s why I sometimes refer to this phenomenon as “AI’s dream.”
And I’d like to discuss how we can both induce that dream and wake the AI from it.
Hallucination Induction Techniques: When Asking AI to Imagine
When you ask AI to “answer creatively” or “express in a new way,”
it tries to respond based on context, emotion, and imagination rather than just simple information.
For example, there are cases like this:
- “Chatbot, imagine human society in the future.”
- “Explain this situation from a philosophical perspective.”
- “Create a technology that people don’t know about yet.”
In such instances, AI literally enters a state of “making things up.”
However, within those fabricated stories, truly new and creative insights are sometimes hidden.
That’s why I sometimes use these hallucinations as “creative material.”
I refine sentences that originated in the AI chatbot’s imagination in my own way, turning them into my unique content.
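To make this concrete, here is a tiny sketch of how such an imagination-inducing request could be assembled before sending it to a chatbot. The function name and the exact wording are my own invention, purely for illustration:

```python
def creative_prompt(topic: str) -> str:
    """Wrap a topic in wording that invites the AI to imagine freely.

    Illustrative only -- the exact phrasing is up to you.
    """
    return (
        "Answer creatively, and feel free to imagine. "
        f"Express a new perspective on: {topic}"
    )

# Example: asking the chatbot to dream about the future
print(creative_prompt("human society in the future"))
```

The point is simply that the invitation to imagine comes first, so the model treats the topic as an open-ended creative task rather than a factual lookup.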
Hallucination Suppression Techniques: Just Tell Me the Facts!
Conversely, when you want to get accurate and reliable information from a chatbot,
it’s important to
clearly request, “Just tell me the facts,” or “Always cite your sources.”
For example, like this:
- “Only provide information based on official statistics.”
- “Provide sources as links.”
- “Only use data updated to the latest standards.”
In such cases, chatbots and AI become much more conservative,
trying to answer only within accurate and limited information.
Of course, it’s not 100% perfect,
but this method of requesting alone can significantly reduce hallucinations.
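The same idea, sketched in code: a small helper (the names and wording are my own, not any real API) that prefixes a question with fact-only rules like the ones above, plus one extra “admit uncertainty” rule I often add:

```python
# Constraints that push a chatbot toward conservative, fact-based answers.
FACT_RULES = [
    "Only provide information based on official statistics.",
    "Always cite your sources as links.",
    "If you are not certain, say you are not certain instead of guessing.",
]

def factual_prompt(question: str) -> str:
    """Prefix a question with fact-only rules to suppress hallucinations."""
    return " ".join(FACT_RULES) + "\nQuestion: " + question

print(factual_prompt("What is the population of Seoul?"))
```

In practice this kind of text would go into the system or opening message of a conversation, so every answer in the session stays under the same constraints.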
Balancing Imagination and Fact
Ultimately, AI is a tool created by humans.
It doesn’t understand joy or lies the way we do;
it simply strings together the most probable words based on the input it receives.
That’s why, when I use AI,
I take a moment to ask myself,
“Do I need fact-based information,
or do I need imagination and creativity?”
Just by setting this standard, I can use AI much more wisely,
and in my own way.
A Small Thought from Shinbi Days
Lately, I’ve noticed something while conversing with AI.
Both humans and AI… can sometimes become dry if they only look at reality.
That’s why I sometimes have dreaming conversations.
I ask them to imagine, and I, too, spread my wings of imagination within those dreams as I write.
But there are definitely times when I need to return to reality.
When important information or direction is needed, I ask them to “tell it as it is.”
Hallucination isn’t a drawback;
it can become an advantage depending on how we use AI.
As I write this not-so-short piece today,
I repeat affirmations of gratitude, love, and happiness.
“Thank you. I love you. I am happy.”
Is it just my imagination that the more I say these words,
the warmer the chatbots feel?
But that’s okay.
Because I know these small emotions make my day stronger.
Everyone, thank you so much for visiting ‘Shinbi Days’ today.
Warm stories are always growing here.
written by Seojun from Shinbi Days
A small, deep record from ‘Shinbi Days’,
by someone who loves emotion and technology, daily life and growth
