Toys, Geim and Gupta

Recently I came across an editorial in Nature Physics, titled “Physics is our playground”, which emphasized the important role playfulness has played in some of the major inventions and discoveries in physics.

A particular example of this is the discovery of graphene, and how it has evolved into one of the most important topics in condensed matter science. Nowadays graphene sheets are used as ‘Lego’ blocks to build higher-order structures, and the so-called ‘van der Waals’ heterostructures are among the most exciting applications of 2D materials. What started as a playful project in the lab has now turned into an important part of emerging technologies.

Two important inferences can be drawn from this playful attitude towards doing science:

First, making modular elements and stacking them creatively can lead to the emergence of new structures and functions. Anyone who has used Lego blocks can immediately relate to this.

Second, toys are powerful research and teaching aids. Note that I emphasized research as well as teaching here. This is because toy models are ubiquitous in research: they help us create a modular version of a problem in which unnecessary details are discarded and only the essential parts are retained. This way of thinking has been extremely powerful in science and technology (for example, see ball-and-stick models in chemistry and scale models of mega-constructions in civil engineering).

When it comes to toys and education, there is no better example than the remarkable Arvind Gupta (see his TED talk). His philosophy of using toys as thinking aids is very inspiring. Being in Pune, I have had a few opportunities to attend his talks and interact with him (as part of an event at the science activity center at IISER-Pune), and I found his approach both refreshing and implementable. Importantly, it also showed me how creativity can emerge from constraints. To re-emphasize this, let me quote an APS News article on Andre Geim:

“Geim has said that his predominant research strategy is to use whatever research facilities are available to him and try to do something new with the equipment at hand. He calls this his “Lego doctrine”: “You have all these different pieces and you have to build something based strictly on the pieces you’ve got.””

Now this is an effective research strategy for experiments in India!

Chomsky et al. on ChatGPT

Chomsky et al. have made some very interesting linguistic and philosophical points on ChatGPT and its AI variants (see NYT link).

To quote:

“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”

The philosophical and ethical viewpoints expressed in this article are indeed noteworthy. What is perhaps even more important is the linguistic viewpoint, which connects language with the human thought process, and that is what makes this article especially interesting and unique.

My own take on ChatGPT has been ambivalent: I do see tremendous potential in it, but also some obvious faults. A couple of months ago, I tried to play around with it, especially in the context of some obvious questions I had on optical forces, and the answers I got were far from satisfactory. At that time, I assumed that the algorithm had some work to do, and that it was probably in the process of learning and getting better. The situation has not changed for the better, and I still see some major flaws. Chomsky’s article highlighted linguistic aspects that I had not come across in any other arguments against artificial intelligence-based answer generators, and there is some more food for thought here.

This is indeed an exciting time for machine learning-based approaches to training an artificial thought process, but the question remains whether that process of thought can somehow emulate the capabilities of a human mind.

As humans, a part of us wants to see this achievement, and a part of us does not want it to happen. Can an artificial intelligence system have such a dilemma?