A quantum survey – 3 thoughts

One of the joys of studying quantum mechanics, at any stage of a career, is the awareness that there is still scope for new interpretations and deeper understanding. This has not changed for several decades. A recent survey reinforces the thought.

There are at least 3 interesting points that I infer from the situation:

1) The interpretation of reality at the quantum scale is probabilistic. This has served us well in experiments and has led to the founding of quantum technologies. We are in a situation in the history of science where the philosophical foundations are uncertain, but the technological implications are profound.

2) Having more data is always good, but for a new leap of thought, we may have to pay attention to new connections among the data. Can AI play a role in this?

3) There is more room for exploration in the foundations of quantum physics. Philosophy of physics has a role to play in this exploration. Physics students and researchers with an (analytical) philosophical inclination have an opportunity to contribute. This requires a grounding in the mathematics and experiments of quantum physics. I see it as a great opportunity for someone entering the field.

Conclusion: Good time to explore the foundations of physics*

*subject to support from society

ChatGPT – an experiment

Below is a small experiment with ChatGPT (GPT-4o, which is OpenAI’s most advanced model as of July 2025).

I asked for thoughts (from authentic sources) from Indian writers on the nature of the writing process (handwriting vs typewriting).

It confidently gave me quotations from Rabindranath Tagore, R.K. Narayan, and U.R. Ananthamurthy. I was intrigued, and below is the conversation:

me: list the quotes from Indian writers above and give the exact sources

ChatGPT: I couldn’t find reputable sources for the exact quotes I previously suggested by Indian writers—so it’s safer to reconsider them.

This is something I have been observing with LLM chatbots, especially with sources that are not mainstream. One may even say it was hallucinating; after I enquired further, it said:

The quotes I initially provided appear to be popular paraphrases or second-hand interpretations, but none could be verified in primary texts or reliable interviews.

Then it asked whether I wanted it to present authentic sources to corroborate the initially mentioned ones. I said yes, and below was its reply:

Here are verified quotes from Rabindranath Tagore, with clear attributions to reliable sources—unfortunately, I couldn’t locate authenticated quotes for other Indian writers on writing methods

The quotes it then gave were peripheral: it identified two or three quotations, but none of them were directly connected to my initial query.

There are at least 2 hypotheses that I can draw from this, both of which need further testing:

  1. ChatGPT and its kind may be trained more intensely on sources that are mainly anglophone, so their geographical distribution of sources may be skewed. Online literary sources from countries such as India may not be as dense as those from, say, the USA, the UK, or even some European countries. Will depositing more authentic sources online, including their translations, help the authentic discovery of information from countries such as India?
  2. With the current developments in model training, there may already be a bias in the answers that LLM chatbots give. They may reinforce viewpoints from Western repositories that are sometimes disconnected from, or irrelevant to, users outside Western geographies. In that sense, new information is being built on old information. Are we entering a stage where asymmetry in data deposition is creating an asymmetry of discovery?

I know these questions are not trivial to answer, but for LLM chatbots to be authentic, they need to address questions with proper citations. Some of them are trying to do that (e.g., Perplexity AI), but I find that the links they provide for certain focused questions are not up to the mark.

My inference:

  1. I am cautiously optimistic about the developments and achievements in source-based LLM interfaces, especially when you feed them an authentic source (e.g., NotebookLM).
  2. But LLM chatbots may be overhyped with regard to:
    • their capability to source authentic information, and
    • the immediacy with which they can replace existing knowledge systems.
  3. LLM chatbots should be treated as experimental tools for utilitarian tasks where the information can be verified independently.
  4. It is important to take the bottom line of ChatGPT seriously: ‘ChatGPT can make mistakes. Check important info.’

FOLLOW THE MONEY – A useful model

Our world is a place where complex ideas are superimposed on people with ever-changing attention. Complex ideas are complex because they depend on multiple parameters: if something changes in the world, that change can usually be traced to multiple causes.

Unlike a carefully designed physics experiment, human life and behavior contain too many ‘hidden variables’, especially when people act collectively. In such a situation, it is pertinent to search for models to understand the complex world. Models, by definition, capture the essence of a problem and do not represent the complete system. They are like maps: zoomed out, but very useful if you know their limitations. I keep searching for mental models that help me understand the complex world in which I live and interact.

Among the many models, one that I use extensively is the follow-the-money model. It helps explain complex processes in a world where one does not have complete information about a problem.

Take, for example, the incentives for choosing a research project, a task that, as scientists, we face often. In choosing a project, researchers have to factor in the likelihood of the research being funded before it starts. This is critical for research that depends on infrastructure, such as the experimental sciences, including physics, chemistry, and biology. Inherently, as researchers, we tend to pick a topic at the interface of personal interest, competence, relevance, and financial viability.

Viability is an important element because sustained funding plays a critical role in our ability to address all the contours of a research project. Thus, as scientists, we need to follow the money and ask how our research can be adapted to the financial incentives that a society creates. A case in point is AI: many people are aware of its potential, and hence there is support from society and an incentive for researchers to utilize.

It is important for the public to be aware of this aspect of research: the financial incentive to execute a project plays a role in the choice of the project itself. A downstream effect of this incentive is the opportunity to employ more people, which means large funding projects and programs attract more researchers. More people in a research area generate more data, and more data, hopefully, result in more knowledge in that area. This shows how financial incentives propel a research area; in that sense, the ‘follow the money’ model correlates directly with researchers flocking towards an area.

The downside of this way of functioning is that it skews people towards certain areas of research at the cost of other areas that may not find financial support from society. This topic is generally not discussed in science classes, especially at the undergraduate and research levels, but I think we should discuss this asymmetry with students, as their futures depend on the financial support they can garner.

Broadening the scope further, the ‘follow the money’ model is useful for understanding why certain global trends rise or fall. Contemporary examples are the wars in Ukraine and Gaza. At first sight, these wars appear to be based on ideologies, but a closer look reveals that they cannot be fought without financial support. Tracing the money that runs a war reveals patterns in geopolitics that are otherwise not easy to grasp.

Ideologies have the power to act as vehicles of human change, but these vehicles cannot be propelled without the metaphorical fuel – money. The ‘follow-the-money’ model can reveal implicit motivations and show how ideologies can be used as Trojan horses to gain financial superiority, either through captured resources or by showcasing the ability to capture them. Following the money is also a powerful and useful model for understanding much cultural, sociological, and political evolution, even in complex countries such as India and its South Asian neighbours. I leave that as an intellectual assignment for people who want to explore it 😊. You will be surprised how effective it can be in explaining many complex issues, provided you know the limitations of the model.

As I mentioned earlier, a model is like a map: it is limited by its resolution, dimensions, and viewpoint, but it is useful for navigating a complex world.

Writing in the age of AI

A contemporary question of interest: How can artificial intelligence (AI) influence writing?

Writing has two aspects – 1) a writer processing information and communicating it to an audience; 2) a reader processing the author’s information.

The first part has an element of personal touch, just like any art or craft (for example, pottery). One writes (or creates a pot) partly because it gives pleasure and helps one understand something in the process. There is a gain of knowledge in writing. This pleasure and wisdom cannot be replaced by an external agency like AI, because external tools like AI are assistants of thought, not replacements for thought. In that sense, no external tool can replace an amateur activity, because the activity is done for the sake of the process itself. Writing as a tool of self-reflection cannot be replaced by something external.

So, where is the threat? It is professional writing that is under partial threat from AI. Wherever the end product is more important than the process of writing, AI can gain prominence, provided it is accurate. The threat is still partial because a professional writer can create questions and combinations that arise out of individual experience. Those lived experiences are derived from “life”, and AI cannot be a substitute for such internal experience.

Writing, like many human endeavors, is both internal and external. The former makes us human, and that is hard to replace. After all, the A in AI stands for artificial.