In recent months, the news about Artificial Intelligence has been so disturbing that it sounds like a warning that the wolf is coming to devour the lambs. Is this technology the wolf?
The emergence of Artificial Intelligence systems such as ChatGPT, DALL-E and other generative AI (GenAI) tools has prompted hundreds of artists and content creators, as well as many service providers, to warn of the danger these tools pose when used on a massive scale.
Within a few years, they anticipate the loss of thousands of jobs, and even that their next competitor in creative work will be a system capable of producing original content on its own.
For example, the next bestseller in bookstores may not be a book written by Mario Vargas Llosa, but one generated by a system conceived by technology experts at a company in San Francisco, California or Beijing, China.
Or that the opening of an art gallery will not include a painting by Dalí, Picasso, Van Gogh, Rembrandt or Goya, but the work painted by an Artificial Intelligence system.
There will be creations and services that end up collapsing because software will do the job flawlessly and, moreover, save the employer the cost of several hired workers. The future of a media outlet could end up in the hands of a system whose AIs tell the news according to the ideological, political or religious orientation with which it was programmed.
A technological tool could even produce school papers and essays, as well as the theses needed to obtain a professional degree, and do so so flawlessly that there would be no way to prove plagiarism, or that the text was written by software and not a human.
A few days ago, on Capitol Hill, the voice of Sam Altman, CEO of OpenAI, boomed in the ears of senators who listened avidly to this millennial, one of the most precocious creators of AI systems. Such is his talent that today, at 38 years of age, he is not only a millionaire but has been recognised several times by Forbes.
In 2015, together with Elon Musk, Jessica Livingston and Peter Thiel, he founded OpenAI, a technological research company that would go on to create ChatGPT.
It was this chatbot, launched in 2022, that became one of the key developments hinting at the seemingly infinite capacity of AI programmes; a potential that, like everything else, including science itself, has its bright and dark sides.
In front of a Senate committee, Altman explained the virtues of ChatGPT by pointing out that it is a "prototype chatbot" specialised in dialogue, with a broad language capacity to hold one or hundreds of conversations with human beings and thus interact with them.
Only half a year after its launch, the service already had more than a million users, and its creators expect that expansion to continue with a view to charging for it. So far it has been banned in countries such as Iran, China and Russia and in several parts of Africa, and the European Union (EU) has yet to debate the impact of this technology.
Altman, who knows a thing or two about technology and its potential, called on US lawmakers to craft pioneering regulation of artificial intelligence in the United States.
"I think, if this technology goes wrong, it can go pretty wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening," the technologist said.
According to The New York Times, in Altman's first testimony before Congress the entrepreneur implored lawmakers to regulate AI; committee members, however, showed only a budding understanding of the technology and seemed unfazed by the potential damage AI could do to the job market, or by its use on social media to generate content that ends up manipulating the masses. It is no longer just about fake news.
A student in Chicago asked ChatGPT to write him a poem for his literature homework and the result was astonishing because of the creative capacity of this artificial intelligence system.
In the realm of AI mastery and knowledge, an intense war is being waged between the United States and China. There is a frantic race, as there once was between the United States and Russia in the space field, to see which country would be the first to develop the technology to put a space capsule into Earth orbit. On 4 October 1957, the then USSR launched Sputnik I, which reached Earth orbit, and on 3 November of that year it put the dog Laika aboard Sputnik 2, making her the first living being to travel into space.
The same fierce competition is playing out with AI: if ChatGPT was born in the United States, in Beijing the Chinese giant Baidu presented Ernie, with much the same characteristics as its American counterpart.
While Chinese President Xi Jinping has not publicly expressed any fears about the impact of AI in his country, in the case of the United States, President Joe Biden recently told technology entrepreneurs that AI "has enormous potential and danger" and did not rule out pushing for regulation.
Senator Chris Coons, a Democrat from Delaware, noted in that appearance that the Chinese are creating AI that "reinforces the core values of the Chinese Communist Party and the Chinese system" and warned that indoctrination is imminent.
To be sure, this pioneering regulation is for now only a pipe dream, because the criticism that Altman and other technologists level at lawmakers is that they fail to understand what AI is and confuse it with the mere use of social media.
"The technology that my company is implementing may destroy some jobs, but it also creates new jobs because it is also an opportunity," Altman told lawmakers on Capitol Hill.
Also speaking at the hearing were Christina Montgomery, IBM's chief privacy and trust officer, and Gary Marcus, a professor and frequent critic of artificial intelligence technology. Marcus and Altman proposed the creation of an agency that would issue licenses for the development of large-scale AI models, along with safety rules and a series of tests that newly created AI systems and models would have to pass.
"We believe the benefits of the tools we have deployed so far far outweigh the risks, but ensuring their safety is vital to our work," Altman said.
Richard Blumenthal, a Democratic senator from Connecticut and chairman of the Senate panel, acknowledged at the meeting that Congress has failed to keep up with regulations and oversight in the area of new technologies.
The Washington Post revealed that subcommittee members suggested an independent agency to oversee AI, as well as rules forcing companies to disclose how their models work and the data sets they use. They also suggested antitrust rules to prevent companies like Microsoft and Google from dominating the market.
Not so dangerous
Will AI eventually displace humans? Will it overtake them intellectually? Does it pose public or even existential threats? Weighing in on this controversy, Noam Chomsky, one of the most brilliant intellectuals of our time, published an article in The New York Times entitled "The False Promise of ChatGPT", questioning the flaws in the language mechanism used by this artificial intelligence system.
The professor emeritus of the Massachusetts Institute of Technology (MIT) said that ChatGPT is first and foremost "high-tech plagiarism" that will tend to prevent learning.
More than a few school tutors, university principals and teachers warn that the first casualty of ChatGPT will be education and, with it, learners, who will lose literacy, reading comprehension, synthesis, writing, research and storytelling skills. Writing, after all, is a way of fixing language.
Robert Zaretsky of the University of Houston believes that the college essay died years ago. He argues: "It's a mug's game in which a student sends me an electronic file that, when opened, spills out a jumble of words that the sender proposes to be a finished paper. Most technological disruptions leave both positive and negative effects in their wake; if the college essay is truly unsalvageable, perhaps ChatGPT will eventually succeed in replacing it with something more interesting".
Asked about ChatGPT by the host of the YouTube channel EduKitchen, Chomsky remarked: "For years various programmes have helped teachers detect plagiarised essays... now it will be more difficult, because it's easier to plagiarise. But that's the only contribution to education I can think of."
Chomsky wrote that New York Times article in collaboration with Ian Roberts, a professor of linguistics at Cambridge University, and Jeffrey Watumull, a philosopher who is also head of AI at a technology company.
The three question the hype around products like ChatGPT, aimed both at investors suddenly excited about the possibility of autonomous robotic work and at others worried about an impending robot apocalypse.
According to the authors of the op-ed: "The human mind is not, like ChatGPT and its ilk, a statistical engine heavy on pattern matching, cramming hundreds of terabytes of data and extrapolating the most likely conversational answer or the most probable answer to a scientific question. By contrast, the human mind is a surprisingly efficient and even elegant system that operates on small amounts of information; it does not seek to infer gross correlations between data points, but to create explanations".
At 94, Chomsky retains one of the brightest and most lucid minds. For the Jewish-American linguist and philosopher, these programmes cannot explain the rules of English syntax, which makes their predictions invariably superficial and dubious.
The issue is language, and the impact these AI systems will have on it, on human acquisition of knowledge, and on the functions of the brain.