Technological singularity and transhumanism

Artificial Intelligence

According to the scientific literature, it is generally accepted that there are three levels of artificial intelligence (AI): narrow or weak AI, general or strong AI, and artificial superintelligence.

1. Weak Artificial Intelligence

Narrow or weak artificial intelligence (ANI) is the only type of artificial intelligence we have achieved to date. Weak AI is goal-oriented, designed to perform specific tasks such as:

  • Facial recognition
  • Speech recognition and voice assistants
  • Self-driving cars
  • Purchase suggestions

These are statistical models that make predictions or suggestions in the specific contexts for which they have been trained from an initial data set, and they are very efficient at completing the particular task for which they are programmed.
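As a minimal illustration of this idea, here is a hypothetical sketch of a narrow, task-specific statistical model (the function names and data are invented for illustration, not taken from any real product): a nearest-centroid classifier that learns from an initial data set and can only ever perform the single task it was trained for.

```python
# Toy illustration of a narrow ("weak") AI: a statistical model that
# learns from an initial data set and can only make predictions for
# the one specific task it was trained on. All data here is invented.

def train_centroids(samples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Invented "purchase suggestion" data: (feature vector, label) pairs.
training_data = [
    ([1.0, 0.9], "electronics"), ([0.9, 1.1], "electronics"),
    ([5.0, 4.8], "groceries"),   ([5.2, 5.1], "groceries"),
]
model = train_centroids(training_data)
print(predict(model, [1.1, 1.0]))   # → electronics
```

The point of the sketch is what the model *cannot* do: it makes statistical predictions only within the narrow context defined by its training data, which is exactly what distinguishes ANI from the general intelligence discussed below.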

Even so, ANI has seen major advances in recent years thanks to deep learning and neural networks. For example, AI systems are used in medicine to diagnose cancer and other diseases. ANI has also become increasingly ubiquitous in citizens' daily lives, driven by the popularity of voice assistants in the home and by smartphone features such as facial recognition in photo applications.

At the business level, the intelligent use of data is already a reality: the growing use of sensors in factories facilitates tasks such as predictive maintenance, while accumulated user data makes it possible to generate completely personalized products and services.

2. Strong Artificial Intelligence

The next level is strong artificial intelligence, also known as artificial general intelligence (AGI): the intelligence of a machine that equals or exceeds human intelligence, that is, one that can successfully perform any intellectual task a human being can. So far, strong artificial intelligence remains an aspiration; it is hypothetical, despite the great advances in the field and the steady improvement of machine learning models.

3. Artificial Superintelligence

Finally, Artificial Superintelligence (ASI) refers to an intelligence far above the most gifted human minds. It is related to what is known as “technological singularity”.

What is technological singularity?

The technological singularity hypothesis predicts that, in the future, technology will produce machines that surpass human intelligence, marking a turning point in human history. This will lead to what Oxford philosopher Nick Bostrom calls an “intelligence explosion”: machines will continually improve themselves. Each new generation, being smarter than the one before, will be able to improve its own intelligence, giving rise to yet another, even smarter generation, and so on. The technological singularity will cause unimaginable social changes that no human can understand or predict.
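The compounding nature of this “intelligence explosion” can be caricatured in a few lines of code. This is purely an invented toy model (the 10% improvement rate and the capability thresholds are arbitrary assumptions, not claims about real AI): each generation builds a successor slightly smarter than itself, so capability grows exponentially.

```python
# Toy model of a recursive "intelligence explosion": each generation
# builds the next, and because each improvement is a fraction of the
# builder's *current* capability, growth compounds. Numbers are arbitrary.

def generations_until(threshold, start=1.0, improvement=0.10):
    """Count the generations needed for capability to exceed `threshold`,
    assuming each generation is `improvement` (e.g. 10%) smarter than
    the one that built it."""
    capability, generation = start, 0
    while capability < threshold:
        capability *= 1.0 + improvement   # compounding self-improvement
        generation += 1
    return generation

# With 10% compounding improvement, the first 1000x jump takes 73 generations...
print(generations_until(1000))                               # → 73
# ...but the next 1000x jump (to 1,000,000x) takes only 72 more:
print(generations_until(1_000_000) - generations_until(1000))  # → 72
```

The takeaway of the toy model is that under compounding self-improvement, each successive order-of-magnitude jump arrives no slower than the last, which is why the process is described as an explosion rather than steady progress.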


Experts differ in their predictions as to when it will happen. Researcher Gary Marcus states that:

“Virtually everyone in the AI field believes that machines will one day outperform humans and, on some level, the only real difference between enthusiasts and skeptics is a time frame.”

At the 2012 Singularity Summit, Stuart Armstrong conducted a study of expert predictions and found a wide range of predicted dates, with a mean value of 2040. In Armstrong's own words, from 2012:

“It’s not exactly formal, but my current 80% estimate is somewhere from five to 100 years.”

Preparing for technological singularity

What will happen after the singularity is unknown to the human mind. A sufficiently powerful AI would be able to make changes on a global scale that we cannot predict, for reasons we would not be able to discern. That does not necessarily mean a dystopian future for humans, or that we are going to become extinct. Nor should we assume the opposite extreme, in which machines do all the work while humans live on a permanent vacation. Simply put, we do not know what will happen, because it is beyond our intelligence.

Just because we don't know what's going to happen doesn't mean we shouldn't prepare ourselves. Regulatory efforts that currently address issues such as code transparency, the logic behind AI decisions, training data bias, or the risks of AI use in warfare are undoubtedly necessary and will be very important at the first level of AI (ANI). However, experts have reasonable doubts about their applicability once we reach the technological singularity: how could we enforce norms on intelligences so far above our own?

Augmented Humanity or Transhumanism

One of the most interesting approaches is based on enhancing the human being (augmented humanity, or transhumanism) so as not to lose the intelligence race against machines.

For historian Yuval Noah Harari, the most likely path of evolution is precisely that human beings evolve alongside machines. In his essay “Homo Deus” he writes:

“Homo sapiens will not be exterminated by a robot uprising. Homo sapiens are more likely to improve themselves step-by-step, and to join robots and computers in the process, until our descendants look back and realize that they are no longer the kind of animal that wrote the Bible”.

In fact, several such projects already exist. For example, Elon Musk's Neuralink: its vision is to develop brain-computer interfaces (BCIs) that connect humans and computers so that we can survive the coming era of AI. Neuralink's mission is clear: “If you can’t beat them, join them.” In other words, our only chance of keeping up when superintelligent AI arrives is to merge with it.


[Image: Neuralink BCI device]

Conclusions

On a personal level, all we can do is prepare ourselves mentally and stay open to a change that, according to experts, could happen within our lifetime.

We can end with Harari’s words:

“The Neanderthals didn't have to worry about the Nasdaq because they were protected from it by a shield of tens of thousands of years. However… it is likely that the attempt to improve Homo sapiens will change the world and make it unrecognizable as early as this century.” He continues: “In retrospect, many believe that the fall of the pharaohs was a positive event. Perhaps the collapse of humanism is also beneficial. People often fear change because they fear the unknown. But the one great constant of history is that everything changes.”

At SEIDOR Opentrends we help companies on their journey to digital transformation. If you need advice on adopting Artificial Intelligence, contact us!

Martí Fàbrega

Martí is a Digital Transformation Consultant and Senior Business Development Manager at SEIDOR Opentrends. His aim is to transform technology into business value for his clients, putting the greatest possible focus on innovation.