Bare Blogs

Is AI Causing Existential Risk?


Humans are the noblest of all creations, but they are not invincible. Natural calamities, such as tsunamis, earthquakes, floods, storms, droughts, and wildfires, expose human vulnerability. Likewise, outbreaks such as the plague, the Black Death, smallpox, influenza, HIV, and COVID-19 have devastated humanity. While those catastrophes are not of human making, other threats are manmade. The technology that humans create poses an unparalleled risk to their existence, one that could drive us extinct or cause adversity severe enough to permanently limit our ability to grow and thrive. The invention of the atomic bomb, for example, introduced a grave risk to our existence, and since then humans have only added to their own dangers. It is difficult to estimate the exact probability of existential catastrophe; rough estimates put the risk from nuclear war and from climate change at approximately 0.1% each, while the chance of a pandemic causing a catastrophe of the same scale is an alarming 3%. One might assume such tragedies could never make humans extinct, yet these estimates are not small, and humans may well invent new technologies that increase the likelihood of existential risk further.

AI is the most notable technology humans have created since the second half of the 20th century. Most AI to date is narrow, built for a specific purpose, such as playing video games, recognizing faces or voices, or performing particular creative tasks. With the rise of generative AI, however, AI is penetrating everyday life and threatening humans' critical thinking, social lives, and employment. Experts differ on when general AI might emerge, though some surveys suggest it could be developed within this century. Unlike narrow AI, which performs only the task it was built for, general AI is terrifying because it could adapt to perform any task, outperforming humans.

There are also guesses about what general AI could look like. It might resemble a Sentinel or the Source from The Matrix. But the central question is what happens when humans share the earth with a machine that can substitute for them. General AI could help humans achieve their goals and fix their problems; it could also pose a risk to our existence. To lower this risk, scientists need to align AI values with human values. This is among the most daunting philosophical and engineering problems, and it will require extensive and thoughtful work. Yet even success leads to another problem. If general AI is to develop profound respect for humans, want to solve all of humanity's problems, and remain aligned with human values, scientists would need to make its values rigid. If such machines were then given the power to manage the earth, their rigid beliefs could dominate, locking humanity into a single ideology extremely resistant to change. There is great uncertainty about general AI, and it is very difficult to estimate the existential risk it poses; new risks may even overshadow the ones we already face. Humans must be mindful that the decisions they make today shape their tomorrow.
