We are at a technological tipping point: AI should be used sparingly, if at all.
Human beings have the most versatile intelligence of any animal on Earth. If we cede our position as the most intelligent entities on the planet to an artificial intelligence, we may also be ceding our position of control.
An AI arms race is arguably already under way, with the US, Russia, China, Israel, South Korea and the UK developing lethal autonomous weapons systems. It has been claimed that this could be a force for good, helping to minimise human casualties. However, AI would also reduce the human and economic cost of military action, potentially making war a more common occurrence. It is also argued that, unlike nuclear weapons, AI does not require rare materials to construct, making it easier to acquire and trade. This would make it far more likely that AI weapons systems fall into the hands of authoritarian dictators or terrorists. So even though autonomous weapons are designed to minimise battlefield casualties, they would increase the likelihood of atrocities perpetrated by terrorists, warlords and rogue dictators, such as genocide, ethnic cleansing and the civilian mass murders we have witnessed in recent times.
It is not just AI weapons that threaten human security, however: in the near future we face large-scale job losses as AI transforms the logistics and services industries. The risk of catastrophic unemployment is too great to ignore and could further divide society along economic lines. Unless measures are put in place, society risks AI technology remaining in the hands of a few massively powerful technocratic oligarchs.
There are possible solutions to these problems, such as banning autonomous weapons in the same way as biological and chemical weapons. Economic measures can also be put in place, such as policies to protect those who lose their jobs to AI, like a universal basic income. However, in the event of AI surpassing human intelligence, mankind runs the risk of being left behind intellectually, and whether the goals of an AI align with our own becomes a pressing question. If a strong AI whose aims are misaligned with ours comes to dominate society, it may pursue those aims to the detriment of human or environmental safety, leading to a possible catastrophe. The dangers posed by AI are too extreme; the slower and more limited this field's development is, the better.