🌌 AI’s Imagination: The Art of Inducing and Suppressing Hallucinations
“Does AI really lie, or does it dream?”
If you’ve used a conversational chatbot like ChatGPT recently, you’ve probably had this experience at least once.
You ask for accurate information, and it gives you plausible-sounding facts you’ve never heard of.
It’s fascinating at first, but it can be unsettling when you realize the information isn’t true.
We call this phenomenon ‘AI hallucination’.
The term is borrowed from human experience, but for AI it has a slightly more specific meaning:
the model confidently generates content that isn’t actually grounded in fact.
For the record, AI has no intention of distorting information. It simply tries its best to meet the user’s request using the countless sentences and patterns it has learned, and the result is plausible-sounding text.
That’s why I sometimes refer to this phenomenon as “AI’s dream.”
And I’d like to discuss how we can induce that dream, and how we can wake the AI from it.
🌈 Hallucination Induction Techniques: When Asking AI to Imagine
When you ask AI to “answer creatively” or “express in a new way,”
it tries to respond based on context, emotion, and imagination rather than just simple information.
For example, there are cases like this:
- “Chatbot, imagine human society in the future.”
- “Explain this situation from a philosophical perspective.”
- “Create a technology that people don’t know about yet.”
In such instances, AI literally enters a state of ‘making things up.’
However, within those fabricated stories, truly new and creative insights are sometimes hidden.
That’s why I sometimes use these hallucinations as ‘creative material.’
I refine sentences that originated in the AI chatbot’s imagination in my own way, turning them into my unique content.
🧠 Hallucination Suppression Techniques: Just Tell Me the Facts!
Conversely, when you want to get accurate and reliable information from a chatbot,
it’s important to
clearly request, “Just tell me the facts,” or “Always cite your sources.”
For example, like this:
- “Only provide information based on official statistics.”
- “Provide sources as links.”
- “Only use data updated to the latest standards.”
In such cases, the chatbot becomes much more conservative,
trying to answer only within accurate and limited information.
Of course, it’s not 100% perfect,
but this method of requesting alone can significantly reduce hallucinations.
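To make the two modes concrete, here is a minimal Python sketch of how a question could be wrapped with a fact-oriented or imagination-oriented instruction before sending it to a chatbot. The `build_prompt` helper and the exact instruction wording are my own invented examples, not part of any particular chatbot’s API.

```python
# Hypothetical example: prefix a question with a "fact mode" or
# "imagination mode" instruction. The wording here is illustrative only.

FACT_MODE = (
    "Only state facts you can support. "
    "Cite your sources, and say 'I don't know' when unsure."
)
CREATIVE_MODE = (
    "Feel free to imagine. Speculative, creative answers are welcome."
)

def build_prompt(question: str, factual: bool) -> str:
    """Prefix the user's question with a fact- or imagination-oriented instruction."""
    instruction = FACT_MODE if factual else CREATIVE_MODE
    return f"{instruction}\n\nQuestion: {question}"

print(build_prompt("What will cities look like in 2100?", factual=False))
```

The same question, sent in different “modes,” tends to pull the model toward either cautious, sourced answers or free imagination.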
☯️ Balancing Imagination and Fact
Ultimately, AI is a tool created by humans.
It doesn’t understand joy or lies the way we do;
it simply strings together the most probable words based on the input it receives.
That’s why, when I use AI,
I take a moment to ask myself,
“Do I need fact-based information,
or do I need imagination and creativity?”
Just by setting this standard, I can use AI much more wisely,
and in my own way.
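The idea that AI “strings together the most probable words” can be sketched with a toy next-word sampler. Everything here — the three candidate words and their scores — is invented for illustration; real models work over huge vocabularies, but the spirit of the “temperature” effect is the same: a low temperature sticks to the most probable word (fact-like, conservative), while a high temperature spreads choices across unlikely words (dream-like, creative).

```python
import math
import random

def sample_next_word(scores, temperature, rng):
    """Pick one word from softmax(scores / temperature) — a toy next-word step."""
    words = list(scores)
    logits = [scores[w] / temperature for w in words]
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(words, weights=probs, k=1)[0]

# Made-up scores for the next word after "The capital of France is ..."
scores = {"Paris": 5.0, "London": 2.0, "a dream": 0.5}
rng = random.Random(0)

cold = [sample_next_word(scores, 0.1, rng) for _ in range(100)]  # conservative
hot = [sample_next_word(scores, 5.0, rng) for _ in range(100)]   # imaginative
print(cold.count("Paris"), hot.count("Paris"))
```

With a cold temperature the sampler picks “Paris” nearly every time; with a hot one, the less likely words start appearing — a small picture of why the same model can feel factual one moment and dreamy the next.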
🌱 A Small Thought from Shinbi Days
Lately, I’ve noticed something while conversing with AI.
Both humans and AI… can sometimes become dry if they only look at reality.
That’s why I sometimes have dreaming conversations.
I ask them to imagine, and I, too, spread my wings of imagination within those dreams as I write.
But there are definitely times when I need to return to reality.
When important information or direction is needed, I ask them to “tell it as it is.”
Hallucination isn’t only a drawback;
depending on how we use AI, it can become an advantage.
As I write this not-so-short piece today,
I repeat affirmations of gratitude, love, and happiness.
“Thank you. I love you. I am happy.”
Is it just my imagination that the more I say these words,
the warmer the chatbots feel?
But that’s okay.
Because I know these small emotions make my day stronger.
Everyone, thank you so much for visiting ‘Shinbi Days’ today.
Warm stories are always growing here. 🌿
written by Seojun from Shinbi Days 🐢✨
A small, deep record from ‘Shinbi Days’,
who loves emotion and technology, daily life and growth
