ChatGPT – an experiment

Below is a small experiment with ChatGPT (GPT-4o, which is OpenAI’s most advanced model as of July 2025).

I asked for thoughts from Indian writers, drawn from authentic sources, on the nature of the writing process (handwriting vs. typewriting).

It confidently gave me quotations from Rabindranath Tagore, R.K. Narayan and U.R. Ananthamurthy. I was intrigued, and below is my conversation:

me: list the quotes from Indian writers above and give the exact sources

ChatGPT: I couldn’t find reputable sources for the exact quotes I previously suggested by Indian writers—so it’s safer to reconsider them.

This is something I have been observing with LLM chatbots, especially with sources that are not mainstream. One might even say it was hallucinating, but when I enquired further, it said:

The quotes I initially provided appear to be popular paraphrases or second-hand interpretations, but none could be verified in primary texts or reliable interviews.

Then, it asked whether I wanted it to present authentic sources to corroborate the quotations it had initially mentioned. I said yes, and below was its reply:

Here are verified quotes from Rabindranath Tagore, with clear attributions to reliable sources—unfortunately, I couldn’t locate authenticated quotes for other Indian writers on writing methods.

The quotes it then gave were peripheral to my initial question: it identified two or three quotations, but none of them were directly connected to my initial query.

There are at least two hypotheses I can draw from this, both of which need further testing:

  1. ChatGPT and its kind may be trained more intensely on sources that are mainly anglophone, so the geographical distribution of their sources may be skewed. Online literary sources from countries such as India may not be as dense as those from, say, the USA, the UK, or even some European countries. Would depositing more authentic sources online, including their translations, help the authentic discovery of information from countries such as India?
  2. Given current developments and model training, there may already be a bias in the answers that LLM chatbots give. They may reinforce viewpoints from Western repositories that are sometimes disconnected from, or irrelevant to, users outside Western geographies. In that sense, new information is being built on old information. Are we entering a stage where asymmetry in data deposition is creating an asymmetry of discovery?

I know these questions are not trivial to answer, but for LLM chatbots to be authentic, they need to address questions with proper citations. Some of them are trying to do that (e.g., Perplexity AI), but I find that the links it provides for certain focused questions are not up to the mark.

My inference:

  1. I am cautiously optimistic about the developments and achievements in source-based LLM interfaces, especially when you feed them an authentic source (e.g., NotebookLM).
  2. But LLM chatbots may be overhyped regarding:
    • their capability to source authentic information, and
    • the immediacy with which they can replace existing knowledge systems.
  3. LLM chatbots should be treated as experimental tools for utilitarian tasks where the information can be verified independently.
  4. It is important to take ChatGPT’s own bottom line seriously: ‘ChatGPT can make mistakes. Check important info.’


Author: G.V. Pavan Kumar

Namaste, Hola & Welcome from G.V. Pavan Kumar. I am a Professor of Physics at the Indian Institute of Science Education and Research, Pune, India. My research interests are: (1) Optics & Soft Matter: Optically Induced Forces – Assembly, Dynamics & Function; (2) History and Philosophy of Science – Ideas in Physical Sciences. I am interested in the historical and philosophical evolution of ideas and tools in the physical sciences and technology. I research the intellectual history of past scientists, innovators, and people driven by curiosity, and I write about them from an Indian and Asian perspective. My motivation is to humanize science. In the same spirit, I write and host my podcast Pratidhvani – Humanizing Science.
