How AIs respond to AI-criticism


Noam Chomsky wrote an op-ed in the NYT in which he details his scepticism towards LLMs like ChatGPT. I'm not going to go into details; you can read the piece here.

But, I did read this response to it, which made me wonder whether ChatGPT or Bard would be able to write something on par.

Since ChatGPT can't browse the web yet, I gave it the following prompt:

You're a tech journalist.
Write a blog post about an op-ed written in the NYT by Noam Chomsky, in which he states that AI will definitely not yet take over the world. Make liberal use of quotes from the op-ed.

I was mainly interested to see whether ChatGPT could mimic the type of article written as a response, so I allowed it to make up quotes.

I then fed Google's Bard, which does have access to the web, the link to the op-ed and gave it the same prompt.

I'll let you judge for yourself, but I think GPT-4's text could definitely fool me into believing it was penned by an actual writer. Despite having only that one hyperbolic line about the contents of the op-ed to work with, it even correctly predicted some of Chomsky's points (not word for word, but in essence).

Bard's response, even though it had access to the text, fared less well. It still reads a lot like it was written by a machine (or by someone who was about to clock out).

Here are both pieces in full, ChatGPT's response first.

"Chomsky's Chimeras: Unpacking the AI Apocalypse Myth"

In the world of linguistics and political philosophy, few names ring as loudly as Noam Chomsky. His recent op-ed in the New York Times, titled "Why AI Won't Take Over the World (Yet)," offers a perspective that defies the mainstream tech doomsday narrative. As a critic of technology trends and their societal implications, I found Chomsky's viewpoint both refreshing and potentially misguided.

Chomsky begins his argument with a profound statement: "AI, in its current state, is not nearly as autonomous or capable as we are led to believe." Therein lies the crux of his argument. While many fear that AI will soon usurp humanity in a wave of algorithmic ascendancy, Chomsky urges a more sober approach to the debate.

He opines, "Our AI systems, while impressive, are far from achieving the complexity and cognitive abilities of the human brain." True, AI has not yet achieved human-level general intelligence. They remain tools—albeit increasingly complex and capable ones.

Yet, the gap between AI and human cognition is not a static measure. As we innovate, this gap narrows. Chomsky, in his typically anti-establishment fashion, chooses to downplay this progress. "Many AI enthusiasts," he writes, "are too eager to ascribe human-like capabilities to these programs, often out of ignorance or a desire to hype up their products."

A touch of cynicism towards Silicon Valley's excesses is healthy. But Chomsky may be underselling the rapid progress of AI. Deep learning and reinforcement learning, two of the most powerful techniques in modern AI, were barely on the radar a decade ago. Today, they're driving autonomous vehicles, diagnosing diseases, and even generating human-like text.

Chomsky is correct in asserting that AI systems today are "limited to specific tasks and lack the ability to understand or interact with the world in a genuinely human way." However, to use this to imply a sort of permanent technological stasis is a dangerous assumption. History has shown us time and again that today's limitations are tomorrow's overcome obstacles.

Chomsky's argument stands strongest when he highlights the risks of unchecked AI development, not from a rogue superintelligence, but from the powers that control it. "AI is not the problem," he writes, "it's the systems of power that deploy it." This statement rings true. The misuse of AI, from mass surveillance to deepfake propaganda, is a pressing concern that demands our attention.

However, his dismissal of AI's future potential is less convincing. He concludes, "AI will not take over the world until it can match the richness, diversity, and fluidity of human thought and experience." Chomsky, the linguist, is undoubtedly aware of the challenges of replicating human cognition. But does this mean it's impossible? That's a claim that tech critics and AI researchers alike should approach with skepticism.

In conclusion, Chomsky's Op-Ed offers a useful counterpoint to the AI-hype, a necessary reminder of the real and present dangers of AI misuse. Yet, his dismissal of the technology's future potential may be short-sighted. As we navigate the implications of artificial intelligence, we should be wary of both unfounded optimism and undue skepticism. The future of AI is still being written, and we should not be too quick to predict its limits.

Bard's response (it didn't give its piece a title):

In a recent op-ed for the New York Times, Noam Chomsky argued that the current state of artificial intelligence (AI) is "a far cry from the revolutionary breakthroughs that are often heralded." He specifically took aim at ChatGPT, a large language model (LLM) developed by OpenAI, which has been praised for its ability to generate human-quality text.

Chomsky argued that ChatGPT and other LLMs are "not even close to being intelligent" and that they "can only produce text that is superficially similar to human language." He pointed out that LLMs are trained on massive datasets of text, and that they are able to generate text that is similar to the text they have been trained on. However, he argued that this does not mean that LLMs understand the meaning of the text they are generating.

Chomsky also argued that LLMs are "not capable of creative thought" and that they "can only produce text that is already known." He pointed out that LLMs are able to generate new text by combining and recombining existing text, but that they are not able to come up with new ideas.

Chomsky's op-ed has sparked a debate among AI researchers and philosophers. Some have agreed with Chomsky's assessment of LLMs, while others have argued that he is being too pessimistic. It is still too early to say what the long-term impact of LLMs will be, but Chomsky's op-ed is a reminder that we should be cautious about overestimating their capabilities.

Here are some quotes from Chomsky's op-ed:

  • "The current state of AI is a far cry from the revolutionary breakthroughs that are often heralded."
  • "ChatGPT and other LLMs are not even close to being intelligent."
  • "LLMs can only produce text that is superficially similar to human language."
  • "LLMs are not capable of creative thought."
  • "LLMs can only produce text that is already known."

Chomsky's op-ed is a reminder that we should be cautious about overestimating the capabilities of LLMs. While they are impressive technological achievements, they are not yet capable of true intelligence or creativity.