3 Questions for the AI age

Three questions to ponder in the age of artificial intelligence (AI):

  1. What will you do with the time you gain from the utility of AI?
  2. How will you utilize your individual short-term and long-term memory?
  3. What is your definition of intelligence in humans?

I think these are worth pondering.

ChatGPT – an experiment

Below is a small experiment with ChatGPT (GPT-4o, one of OpenAI’s advanced models as of July 2025).

I asked it for thoughts (from authentic sources) by Indian writers on the nature of the writing process (handwriting vs. typewriting).

It confidently gave me quotations from Rabindranath Tagore, R.K. Narayan, and U.R. Ananthamurthy. I was intrigued, and below is my conversation:

Me: List the quotes from Indian writers above and give the exact sources.

ChatGPT: I couldn’t find reputable sources for the exact quotes I previously suggested by Indian writers—so it’s safer to reconsider them.

This is something I have been observing with LLM chatbots, especially when it comes to sources that may not be mainstream. One may even say it was hallucinating, but after enquiring further, it said:

The quotes I initially provided appear to be popular paraphrases or second-hand interpretations, but none could be verified in primary texts or reliable interviews.

Then it asked whether I wanted it to present authentic sources to corroborate the ones it had initially mentioned. I said yes, and below was its reply:

Here are verified quotes from Rabindranath Tagore, with clear attributions to reliable sources—unfortunately, I couldn’t locate authenticated quotes for other Indian writers on writing methods

The quotes it gave were peripheral to my initial question: it identified two or three quotations, but none were directly connected to my query.

There are at least two hypotheses I can draw from this, both of which need further testing:

  1. ChatGPT and its kind may be trained predominantly on anglophone sources, so the geographical distribution of their training data may be skewed. Online literary sources from countries such as India may not be as dense as those from, say, the USA, the UK, or even some European countries. Will depositing more authentic sources online, including their translations, help the authentic discovery of information from countries such as India?
  2. With the current developments and model training, there may already be a bias in the answers that LLM chatbots give. They may reinforce viewpoints from Western repositories that are sometimes disconnected from, or irrelevant to, users outside Western geographies. In that sense, new information is being built on old information. Are we entering a stage where asymmetry in data deposition is creating an asymmetry of discovery?

I know these questions are not trivial to answer, but for LLM chatbots to be authentic, they need to address questions with proper citations. I know some of them are trying to do that (e.g., Perplexity AI), but I find that the links it provides for certain focused questions are not up to the mark.

My inference:

  1. I am cautiously optimistic about the developments and achievements in source-based LLM interfaces, especially when you feed them an authentic source (e.g., NotebookLM).
  2. But LLM chatbots may be overhyped regarding:
    • their capability to source authentic information, and
    • the immediacy with which they can replace existing knowledge systems.
  3. LLM chatbots should be treated as experimental tools for utilitarian tasks where the information can be verified independently.
  4. It is important to take ChatGPT’s own bottom line seriously: ‘ChatGPT can make mistakes. Check important info.’

Willow in comparison – Google quantum chip

In scientific research, comparative analysis is an excellent way to objectively quantify two measurable entities. The recent Google quantum chip (named Willow) does this effectively by comparing its capability with that of today’s fastest supercomputers. The comparison note on Google’s blog is worth reading.

In scientific analysis, such comparison teaches us three things:

a) how a scientific boundary is claimed to be pushed;

b) how a benchmark problem is used to make the comparison; and

c) what the current state of the art in that research area is.

Some further observations on the work:

  • The theme of the Nature paper reporting this breakthrough is mainly error correction. Technically, it shows how error tolerance is measured for a quantum device. The device is based on superconducting circuits, which were tested first on a 72-qubit processor and then on a 105-qubit processor.
  • Interestingly, as the authors mention in the paper, the origins of the errors are not well understood.
  • The paper is quite technical, and, to my limited understanding as an outsider, it makes a good case for the claim. The introduction and the outlook sections are well written and give technical information that a general scientific audience can appreciate.
  • There is more to come! It looks like Google has further plans to expand on this work, and it will be interesting to see in which direction they take this capability. The Google blog shows a roadmap and states their ambition as follows: “The next challenge for the field is to demonstrate a first “useful, beyond-classical” computation on today’s quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal.”
  • In the past 12 months or so, there has been a lot of buzz around AI tools (thanks to GPTs, Nobels and perplexities :-), which lie mainly in the realm of software. This breakthrough in the realm of ‘hardware’ reminds us that the physical world is still important!
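The error-suppression idea behind the paper can be made concrete with a toy calculation. In the surface code, the logical error rate per cycle is expected to fall exponentially with the code distance d, roughly as ε_d ≈ A / Λ^((d+1)/2). The sketch below is my own illustration, not code or numbers from the paper; the values of A and Lam are assumptions chosen only to show the shape of the scaling:

```python
# Toy illustration of surface-code error suppression (my own sketch;
# A and Lam are assumed values, not the paper's measured numbers).
# The expectation is eps_d ~ A / Lam ** ((d + 1) / 2): each increase
# of the code distance d by 2 divides the logical error rate by Lam.

def logical_error_rate(d, A=0.01, Lam=2.0):
    """Expected logical error rate per cycle at odd code distance d."""
    return A / Lam ** ((d + 1) / 2)

for d in (3, 5, 7):
    print(f"distance {d}: logical error rate {logical_error_rate(d):.6f}")
```

Moving from a 72-qubit to a 105-qubit processor allows a larger code distance, which, by this relation, should suppress the logical error rate, provided the physical error rate stays below the threshold.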

More to learn and explore… interesting times ahead.

FOLLOW THE MONEY – A useful model

Our world is a place where complex ideas are superimposed on people with ever-changing attention. Complex ideas are complex because they depend on multiple parameters: if something changes in the world, that change can have multiple causes.

Unlike a carefully designed physics experiment, there are too many ‘hidden variables’ in human life and behavior, especially when people act collectively. In such a situation, it is pertinent to search for models to understand the complex world. Models, by definition, capture the essence of a problem and do not represent the complete system. They are like zoomed-out maps: very useful if you know their limitations. I keep searching for mental models that help me understand the complex world in which I live and interact.

Among the many models, one that I use extensively is the follow-the-money model. It explains some complex processes in a world where one does not have complete information about a problem.

Take, for example, the incentives for choosing a research project. This is a task that, as scientists, we perform very often. In choosing a project to work on, researchers have to factor in the possibility of that research being funded before the project starts. This is critical for scientific research that depends on infrastructure, such as the experimental sciences, including physics, chemistry, and biology. Inherently, as researchers, we tend to pick a topic at the interface of personal interest, competence, relevance, and financial viability.

Viability is an important element because sustained funding plays a critical role in our ability to address all the contours of a research project. Thus, as scientists, we need to follow the money and ask ourselves how our research can be adapted to the financial incentives that society creates. A case in point is a research area such as AI: many people are aware of its potential, and hence there is support from society and an opportunity to utilize the available incentives.

It is important for the public to be aware of this aspect of research, where the financial incentive to execute a project plays a role in the choice of the project itself. A downstream effect of this incentive is the opportunity to employ more people: large funding projects and programs attract more researchers. More people in a research area generate more data, and more data, hopefully, results in more knowledge in the chosen area. This shows how financial incentives play a critical role in propelling a research area. In that sense, the ‘follow the money’ model directly explains why researchers flock towards certain research areas.

The downside of this way of functioning is that it skews people towards certain areas of research at the cost of other areas that may not find financial support from society. This topic is generally not discussed in science classes, especially at the undergraduate and research levels, but I think we should discuss this asymmetry with students, as their futures depend on the financial support they can garner.

Broadening the scope further, the ‘follow the money’ model is useful for understanding why certain global trends rise or fall. A contemporary example is the wars in Ukraine and Gaza. At first sight, these wars appear to be based on ideologies, but a closer look reveals that they cannot be fought without financial support. Recognizing the money that underpins a war reveals patterns in geopolitics that are otherwise not easy to grasp.

Ideologies have the power to act as vehicles of human change, but these vehicles cannot be propelled without the metaphorical fuel: money. The ‘follow-the-money’ model can reveal implicit motivations and show how ideologies can be used as Trojan horses to gain financial superiority, either through captured resources or by showcasing the ability to capture those resources. Following the money is also a powerful and useful model for understanding much cultural, sociological, and political evolution, even in complex countries like India and its South Asian neighbours. I leave it as an intellectual assignment for people who want to explore it 😊. You will be surprised how effective it can be in explaining many complex issues, provided we know the limitations of the model.

As I mentioned earlier, a model is like a map: limited by its resolution, dimensions, and viewpoint, but useful for navigating a complex world.

Writing in the age of AI

A contemporary question of interest: How can artificial intelligence (AI) influence writing?

Writing has two sides: 1) a writer processing information and communicating it to an audience; and 2) a reader processing the author’s information.

The first has an element of personal touch, just like any art or craft (for example, pottery). One writes (or makes a pot) partly because it gives some pleasure and helps one to understand something in the process. There is a gain of knowledge in writing. This pleasure and wisdom gained through writing cannot be replaced by an external agency like AI, because external tools like AI are assistants of thought, not internal replacements for thought. In that sense, no external tool can replace an amateur activity, because the activity is done for the sake of the process. Writing as a tool of self-reflection cannot be replaced by something external.

So, where is the threat? It is professional writing that is under partial threat from AI. Wherever the end product is more important than the process of writing, AI can gain prominence, provided it is accurate. It is still only a partial threat, because a professional writer can create questions and combinations that arise out of individual experiences. Those lived experiences are derived from “life”, and AI cannot be a substitute for such an internal experience.

Writing, like many human endeavors, is both internal and external. The former makes us human, and that is hard to replace. After all, the A in AI stands for artificial.