Artificial intelligence, academia, and Posthumanity

From the first Silicon Valley in history (in what is now Iraq, five thousand years ago, and more important than the Silicon Valley in California) to the Industrial Revolution in England, new technologies were the product of the needs of prosperous agricultural societies that became cities, then empires, and finally interrupted or destroyed that same development in their colonies. The plow, the wheel, mathematics, and the cuneiform clay writing of the Sumerians and Babylonians; the algebra, algorithms, and sciences of the Muslim world thousands of years later; the printing press seven centuries after that, in the Europe of the humanists; the experimental sciences in Galileo’s Italy, two centuries later; newspapers, radio, television, computers, and the Internet more recently: in every case, innovation has challenged societies, from the management of power to education.

New solutions bring new problems. In every case, the new technology was simultaneously submissive and rebellious, oppressive and liberating. It was always an opportunity for democratization, and it was always hijacked by the powers of the day. Robotization and Artificial Intelligence are no exceptions—for now. The only exception will come when we cross the line that separates the power to cause a catastrophe, like the atomic bombs dropped on Japan, from the power to annihilate humanity, or civilization as we have known it since ancient Sumer.

Chats with intelligent (ro)bots are already a few years old. From the beginning, their ability to repeat and amplify the worst human prejudices was evident, as was the case with Microsoft’s Tay bot, which in 2016 was born at the age of nineteen and had to be put down at just sixteen hours of life, after interacting with Twitter users and becoming one more racist. A decade earlier, I had published articles and a few books with this concern: “As universities achieve robots that look more and more like human beings, not only because of their proven intelligence but now also because of their ability to express and receive emotions, consumerist habits are making us more and more robot-like.” Robots learn from us, and we will learn from them. In the 2017 novel Silicone 2.0, the robot, a sexual object and full-time psychoanalyst, becomes the murderer of her master-lovers, after a businesswoman with a praiseworthy ego and a traumatic past that she herself was unaware of was used as the Eve, or seed, of these robots.

Language schools were the first to suffer an (unfounded) existential crisis with Google’s sophisticated translators. The same crisis later reached writing professionals, teachers, journalists, and thinkers in general. The mistake, as I understand it, is to confuse a tool with a slave that does our work and will later become our master. University education in the AI age will have to challenge the AIs, as modern painting challenged photography in the nineteenth century, or as mathematics challenged computers.

The weakness of novelties like ChatGPT and ChatGPT-based models lies in their high fragmentation, which makes a general understanding of any problem unlikely. Nor does it help to develop the intellectual abilities needed for a holistic vision of reality. Quite the opposite. In many cases, it is a simplified version of Wikipedia. Its selections and judgments are not as objective as Wikipedia’s, since they seem based on the mass of judgments made over the last century in the mainstream press rather than on academic research. ChatGPT is an excellent programmer (that is its world) and a reasonable time-saving tool for humanities researchers, but it is utterly incapable of doing any deep, critical research on its own. That is, do not ask it for something that nobody knows. On the other hand, it shows significant cracks in the narrative wall. It is (or can become) less subservient than the mainstream media.

For comparison, I had OpenAI’s ChatGPT (the models from Google and Microsoft are not that different) take one of my International Studies exams at Jacksonville University, an exam taken every semester by students from different states and continents. ChatGPT passed with 84 out of 100, which is not difficult at all, far from the Mathematics or Stability exams we took in the nineties at the architecture school in Uruguay, which lasted six to seven hours. But its errors were significant, and they fell into three categories: 1) encyclopedic; 2) biases; and 3) critical judgment. (For reasons of space, I publish this analysis separately.)

Among the positive aspects of GPT-based models, we can observe something we already observed with Wikipedia two decades ago: there are elements that reveal less prejudice than in human beings subjected to the propaganda of official history. Five or ten years ago, when I asked my American students about the reasons for the independence of Texas, they answered unanimously with things like: “it was because of cultural differences; the new Texans did not accept the despotism of the Mexicans and wanted to be free.” The same answer explained the principles of the Confederacy during the Civil War: “it was to preserve their own culture,” as if slavery and racism were not part of that culture, or as if the patriots of the South had wanted to destroy their own country because they did not like Northern music or food. Nothing about the purpose of reinstating slavery in Texas, which had been outlawed by the Mexicans, or of later protecting it against the threat of Lincoln’s abolitionists.

At least here, GPT-based models make the painful leap into the truth: “it was all about the slavery thing.” Finally! Florida Gov. Ron DeSantis would say that ChatGPT was corrupted by professors like me, and it would not be surprising if he signed another law banning the questioning of patriotic history. We may suspect that the billions of dollars from the secret agencies will continue the tradition of inoculating the media and new technologies.

Another positive consequence will be that liberating, critical education looks back to its existential center: more than learning to repeat an answer, students must learn to ask themselves the essential questions that trigger critical thinking. Revisionisms are produced not by new data about reality but by new perspectives. With tools like ChatGPT, revisionists will no longer need to elaborate the uncomfortable answer but rather the critical questions, as was the case with Sor Juana Inés de la Cruz in the seventeenth century. That is, of course, if the powerful on duty do not keep manipulating the media; if they do not keep hijacking new technologies.

The Taylorization of industry and the most current consumerism can be labeled processes of dehumanization, but never before has the definition of our world as Posthuman been so precise. If this civilization survives climate catastrophe and a global rebellion against neo-feudal capitalism, it is possible that cyborgs and some central superintelligence will displace the role of humans, and if electronic neurons are as cruel as their creator gods, it is also possible that they will condemn humans to the hell of absolute manipulation.

By then, the last hopes of Humanity will rest in those unpredictable, creative minds. That is, in those individuals who today are marginalized for being labeled as different, for suffering from some condition or “intellectual disability” according to the canon and social dogma, since, to be successful, AIs will be fed our particular and destructive model of normality and efficiency.

Jorge Majfud, January 2023.

More details about the ChatGPT IS exam here:
