
AI wars begin

By Farrukh Khan Pitafi

When you are obsessing over petty domestic disputes and the mundane minutiae of existence, you forget to pay attention to the world-changing events unfolding around you. Somewhere between the early nineties and now, our important neighbours wooed lady luck and managed to change their economic trajectory. We kept fighting.

Our neighbours became space-faring nations, and we kept fighting. And now comes the AI revolution. Between 2015 and now, companies were created, and others reorganised, to research artificial intelligence. We were busy fighting among ourselves and toppling governments. Now these companies wow us with remarkable innovations every other day, and where are we? None the wiser.
In the past few months alone, the world of AI has endured some remarkable tumult. And we are in no shape to benefit from any such crisis. I will use the analogy of the medicine industry to explain the situation. When the patent on an expensive brand-name medicine expires after twenty years, generic manufacturers have a field day. The same medicine is manufactured and sold at a fraction of the brand-name price. But to benefit from such a windfall, a country has to have a noteworthy industrial base and attention to detail. Many great opportunities came and went, and we did not pay attention.
Now to the world of AI. Two recent developments had the potential to alter the course of this critical industry. One is a two-month-old development, but it is still relevant because the company linked to it is once again in the news for altogether different reasons. The first is the attempted boardroom coup at OpenAI. On November 17, the then board of OpenAI sacked Sam Altman, the company’s CEO. OpenAI owns and operates ChatGPT, the chatbot credited with starting the AI spring. The organisation is something of a hybrid.

The main OpenAI Inc is a non-profit organisation, but it has a for-profit subsidiary called OpenAI LLC, in which Microsoft currently holds a 49 per cent stake. The main reasons behind the firing are still shrouded in speculation. At the time, it was stated that Altman had not been consistently candid with the board. Given the nature of the technology this organisation works on, that could mean anything. Hence we saw an avalanche of half-digested gossip pieces.
Perhaps a look at the company’s charter will be useful here. It states: “OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” This is indeed a noble pursuit for a company which, despite a purported valuation of 100 billion dollars, reportedly does not make a profit.
The sacking of Altman started a chain reaction. First, Microsoft stepped in to offer jobs to Altman and his displaced colleagues. Once Altman showed his readiness to join, we learned that Microsoft would set up a new advanced AI research team headed by him. An open invitation to join that team was extended to all OpenAI employees. Meanwhile, the board was struggling to find Altman’s replacement in the face of a company-wide staff revolt. Staff members wrote a letter asking the board to step down, or else they would quit and join the newly announced Microsoft team.

With them would go the AI expertise and perhaps even the compute. For a heartbeat, you could feel the timelines altering as a company with a history of being disciplined for monopolistic behaviour seemed to be gaining total control of the world’s most precious and sought-after technology. But then, mercifully, the board relented, deciding to restore Altman and then step down. OpenAI has since been restored to its pre-crisis shape. But just when you thought it was out of the woods, lightning struck again.
In a new twist, The New York Times has filed a lawsuit against OpenAI and Microsoft, claiming that the two companies, which share the GPT-4 model, trained their language models on its proprietary content without permission or compensation. For the uninitiated: the artificial neural networks behind today’s AI are loosely modelled on the human brain, and they differ from ordinary computers because, instead of following explicit, command-based programming, they are trained on huge datasets from which they learn patterns. Usually, when these datasets of text and audiovisual material are fed to an AI, it synthesises the information, and the output takes a different shape from the input. So the developers of large models routinely scrape large swathes of the internet and repurpose them as training data.
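To make that idea a little more concrete, here is a deliberately toy sketch in Python, nothing like GPT-4’s actual architecture or training pipeline, just a tiny word-level model built from a scrap of invented text using the standard library. Notice that it contains no hand-written rules about language; everything it can produce is echoed from the text it was ‘trained’ on, which is exactly why the provenance of training data sits at the heart of the dispute.

# Toy illustration only (not OpenAI's method): a tiny word-level bigram model.
# Its entire behaviour comes from the training text, not from explicit rules.
import random
from collections import defaultdict

corpus = (
    "the court will hear the case today "
    "the company will train the model on public text "
    "the model will answer questions about the case"
)

# 'Training': record which word tends to follow which in the corpus.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    # Generate text by repeatedly sampling a plausible next word.
    out = [start]
    for _ in range(length):
        candidates = transitions.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # the output echoes phrases from the training corpus

Scale that idea up by many orders of magnitude, swap the simple word counts for billions of learned parameters, and you have the rough intuition for why what goes into the training set matters so much.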

But The New York Times lawsuit goes a step further. It claims that the newspaper’s articles were given particular weight in training as prime examples of quality information and reporting, with the result that ChatGPT often regurgitates their text, at times verbatim, in its answers. The NYT goes on to show how much time, resources and effort went into some of these stories, and it demands compensation as well as the destruction of any model trained on its copyrighted material. If that last demand were met, it would bring an end to ChatGPT. Of course, that is unlikely to happen, but still, this is what an existential threat looks like.
To OpenAI, this lawsuit came as a surprise because it was still negotiating content licensing terms with the Times. So what changed?

We live in an age when even the best among us are worried about their future and relevance. Could it be that the NYT, which is indeed at the top of the quality-content pyramid, waited for its moment to take out an existential threat? But taking out OpenAI, or even Microsoft, does not solve that problem. There are plenty of big fish in the pond. For instance, Google, which just upgraded its Bard chatbot with the infusion of Gemini, its self-professed advanced general-purpose AI, decided a while ago not to use copyright-protected content as training data, and it consequently remains in the field posing the same challenges. Likewise, Elon Musk, who never loses an opportunity to scandalise AI, regularly kicks OpenAI, signs letters asking it to pause development and then launches his own AI model, will not disappear either.

When somebody’s premature demise benefits another, who is likely to be suspected? Collusion? Collaboration? Strange bedfellows? Perhaps. But one thing is for sure: the age of AI wars has begun, and it is going to be as ruthless as the mercantilist-colonial wars. Since we have no time to follow such developments amid our Lilliputian wars, are we preparing to become a data harvester’s colony?

The writer is an Islamabad-based TV journalist and policy commentator. Follow his WhatsApp channel ‘Farrukh K Pitafi’ for the latest updates
