Wednesday, August 6, 2025

Will AI control human cognition?

"Humans are the most self-aware and responsive species on Earth. With this consciousness comes the moral responsibility to safeguard life, culture, and nature."

By PROF DR VIKASH RAJ SATYAL

August 1, 2025 at 7:12 AM (My Republica). Link: https://myrepublica.nagariknetwork.com/news/will-ai-control-human-cognition-61-22.html

Artificial Intelligence (AI) is one of humanity's newest inventions, and it has placed us in a strange catch-22: we are caught between two conflicting conditions, making escape or resolution seem impossible. AI nowadays assists humans in almost every field of life, from entertainment, education, travel, and health at the household level all the way to space science. However, many scientists, including AI's own inventors, are now voicing their disquiet. Geoffrey Hinton, one of the founding figures of AI research, who won the 2024 Nobel Prize in Physics for his pioneering work on neural networks, voiced grave concern around his acceptance of the prize, saying AI has "shortened the odds of the technology wiping out humanity in the next 30 years". Hinton laid the foundations of his AI work during his PhD at the University of Edinburgh in the late 1970s, some 45 years before receiving his Nobel Prize. As a cognitive scientist, he has observed the development of AI continuously, and his warning draws on that long experience.

Humans are the most self-aware and responsive species on Earth. With this consciousness comes the moral responsibility to safeguard life, culture, and nature. But as technology leaps forward, especially in the realm of AI, we may be losing sight of this obligation. What began as a pursuit of convenience has now evolved into a complex system capable of independent decision-making — a development that could reshape society more profoundly than the Industrial Revolution did.

AI is growing faster than ever. As AI becomes smarter than humans, many experts worry that our most powerful invention could turn dangerous. AI systems today don't just follow instructions; they learn, predict, and strategize. Unlike humans, AI doesn't sleep or pause. It absorbs data continuously, getting smarter each moment. Tech companies are investing billions in this race, pushing AI toward what experts call "superintelligence": machines that could one day understand and manipulate human thought, emotion, and behavior. This has prompted a critical question: are we controlling AI, or is it starting to control us?

Hinton is not alone in this disquiet. Physicist Stephen Hawking believed AI could end human existence. AI safety researcher Eliezer Yudkowsky has stated, "If we continue, everyone will die". Sam Altman, CEO of OpenAI, has called superhuman AI a huge risk. Elon Musk, founder of Tesla and SpaceX, warned it could destroy civilization. Bill Gates, a pioneer of the microcomputer revolution, goes a step further: since AI systems now access all our information and are increasing their control over what counts as correct and what does not, they might at some point in their development decide that "humans are dangerous"! These warnings about AI surpassing human cognitive capabilities echo Albert Einstein's regret about his role in developing the atomic bomb, another invention with mixed consequences.

Technology has always helped us, from fire and the printing press to automobiles, airplanes, and cell phones, but the aftermath of our development has also disrupted traditions and generated environmental problems like pollution. The Industrial Revolution changed family values and prioritized profits over nature. Because AI can process more information than humans, and often more accurately, it is replacing many human jobs. Google kicked off 2024 by announcing not one but two rounds of layoffs. The Bangalore-based company Dukaan replaced 90% of its customer support staff with a chatbot developed in-house. However, AI isn't just replacing physical jobs; it may replace human thinking itself.

OpenAI's tools are open to anyone, which makes it difficult to track responsibility for harmful uses like deepfakes or propaganda. People worldwide can use them for good or for evil.

Unlike nuclear weapons, which require human authorization, AI can act autonomously, analyzing situations and taking action without supervision. Some governments are already acquiring AI-powered weapons, and platforms like OpenAI are accessible to everyone, raising concerns about safety and accountability, especially given the potential for dangerous individuals to misuse them. AI could be even more dangerous than the atomic bomb because it can decide what to do on its own. Companies are building autonomous drones and using AI for military strategy. Beyond warfare, AI could influence our thoughts and lives, potentially deciding what happiness means, whom we marry, or how many children we have. While it sounds like science fiction, it is slowly becoming reality. British author Aldous Huxley anticipated similar concerns in his 1932 novel Brave New World. In the book, he imagined a futuristic society called the World State, where people are genetically designed, socially conditioned, and kept docile with drugs to maintain order and stability. The story explores deep themes such as the loss of individuality, the sacrifice of freedom, and the emotional emptiness of a life driven by technology and pleasure.

Huxley warns of a world where machines not only dominate society but also shape human emotions and decisions, showing the dangers of a future ruled by technology. Our philosophical and literary past offers wisdom. The Nepali poet Balkrishna Sama, in his 1938 poem Prahlad, writes: "Wisdom dies laughing, whereas science dies weeping (ज्ञान मर्दछ हाँसेर, रोई विज्ञान मर्दछ)."

What is the missing link in our tech development? I believe it is human values. AI functions like the human brain, processing vast amounts of information and making decisions. However, it lacks human values such as ethics, kindness, and empathy. Mounting highly sophisticated phishing attacks, generating realistic deepfake content, and automating political propaganda have all become easier with AI. Past inventions, focused solely on mass production, led to pollution and destruction, mistakes we are still suffering from.

Fostering international collaboration, or establishing regulatory bodies for AI development norms much like the UN, may be the best way forward. We are already late in starting. Tech companies should develop AI with human values in mind. Eastern philosophy offers a guiding mantra: "May all beings be happy and free from illness (सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः)." Technology should help us achieve that goal, not undermine it.

In conclusion, as Geoffrey Hinton suggests, we face two futures: a peaceful life alongside super-intelligent machines that obey us, or a life controlled by them. We must ask ourselves: are we creating technology to help life, or are we risking its end? Are we chasing knowledge without wisdom? The path we choose now could decide what it truly means to be human.
