Technology
Masaki Shibutani
Apr 23 · Last update 4 mo. ago.
How do you think AI and technology should be utilized for society?
Viewpoints
The real issue is goal alignment

The worries are less about outright disaster (or that worry is a misconception); the real issue is “goal alignment”. At the most basic level, an AI may understand what it is supposed to be doing but have no understanding of the wider consequences. If a complex AI system is tasked with a decision that carries great risk, or accidentally creates an unforeseen risk to the detriment of the environment, society or individuals, then we run a real risk of disaster.

A classic example is an AI system tasked with making a simple product, such as paperclips. If this AI were given a clear goal but insufficiently specific guidelines, and then given free rein to maximise output and efficiency, disaster could easily follow. This is not simply a case of programming failure, however: the AI may step into unforeseen territory, and the real world is arguably far more nuanced than a simulated one. In the paperclip example, the simple task of paperclip production could lead the AI to consume all available natural resources in making said product.
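The failure mode in the paperclip thought experiment can be sketched as a toy optimisation loop. This is only an illustration, not anyone's actual system; all names and numbers here are invented. The point is that the unconstrained optimiser and the constrained one run the same greedy loop, and only the explicitly encoded constraint stops the first from draining the shared resource pool.

```python
def run_factory(resources, steps, resource_floor=None):
    """Greedily convert resources into paperclips, one unit per step.

    If resource_floor is set, the agent's objective includes the rule
    'never draw resources below the floor'; otherwise the only goal it
    knows about is output, so nothing tells it to stop.
    """
    clips = 0
    for _ in range(steps):
        if resources == 0:
            break  # nothing left for anyone
        if resource_floor is not None and resources <= resource_floor:
            break  # the constrained agent respects the wider world
        resources -= 1
        clips += 1
    return clips, resources

# The naive maximiser consumes everything it can reach.
print(run_factory(resources=100, steps=1000))                    # (100, 0)

# The same loop with the side constraint made explicit stops early.
print(run_factory(resources=100, steps=1000, resource_floor=40))  # (60, 40)
```

The alignment problem is that, in the real world, we cannot enumerate every `resource_floor`-style constraint in advance; whatever we leave out of the objective, the optimiser is free to sacrifice.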

The utilisation of this technology, especially a general or superintelligent AI, should progress, but with extreme caution. The ultimate question is one of ceding control to something more intelligent than ourselves, and at some point losing our position at the top of the hierarchy of intellect. That could be a dangerous place to inhabit, but if we proceed slowly and carefully we could reap the benefits while sidestepping the risks.

To transform society and solve the problems we have failed to solve...

AI has many exciting prospects, from advancing society technologically to solving deep-rooted global problems. As AI develops rapidly, we are already witnessing the proliferation of AI-aided internet searching and facial recognition, and the prospect of driverless transportation moves forward daily. We soon face the prospect of narrow AI, which performs complicated but narrowly defined tasks, being integrated into daily life to the point where it will affect lifestyles, the economy and society as a whole.

AI and technology should be used to move towards a jobless society, one in which the majority of jobs are eventually carried out by AI more effectively and at lower cost. Communities would be free to enjoy leisure or artistic pursuits, and any work still necessary could be stripped of the menial tasks that define many current occupations. Human endeavour could then turn to large-scale progress, and people would have more time to work on resolving the global problems that continue to plague societies, such as war, disease, poverty and famine.

The most exciting prospect, however, is the development of highly advanced superintelligent AI, or ASI. Once the threshold of human intelligence is surpassed by an ASI, we will no longer be limited by the problem-solving ability of the human brain. Larger moral and social issues could then be resolved far more easily if such an intellect were tasked with them. Humanity should therefore strive to develop and push the boundaries of AI, as it holds the potential to massively accelerate human progress.

digitaltrends.com/cool-tech/ai-for-social-good-2018

AI will inevitably lead to widespread job loss; policy needs to better protect the unemployed

AI will inevitably lead to a massive amount of job loss in the immediate term. Although a new range of job opportunities may arise from AI in the long term, this initial problem needs to be addressed before it becomes a disaster. The unemployment welfare systems of many countries already fall short of adequately addressing unemployment, and with AI looming over this area, concrete solutions need to be put in place. AI technology should see restricted use until job losses can be accounted for.

We are at a technological tipping point, AI should be used sparingly, if at all.

Human beings have the most versatile intelligence of all animals on Earth. If we give up our position as the most intelligent species on the planet to an artificial intelligence, we may also be giving up our position of control.

An AI arms race is arguably already under way, with the US, Russia, China, Israel, South Korea and the UK developing lethal autonomous weapons systems. It has been claimed that this could be a force for good, helping to minimise human casualties. However, AI solutions would also lower the human and economic cost of military action, possibly making war an even more common occurrence. It is also argued that, unlike nuclear weapons, AI does not require rare materials to construct, making it easier to acquire and trade. This could lead to AI weapons systems falling into the hands of authoritarian dictators or terrorists. So even though autonomous weapons are designed to minimise battlefield casualties, they would increase the likelihood of atrocities perpetrated by terrorists, warlords and rogue dictators, such as genocide, ethnic cleansing and the civilian mass murders we have witnessed in recent times.

It is not just AI weapons that threaten human security, however. In the near future we face large-scale job losses as AI transforms the logistics and service industries. The risk of catastrophic unemployment is too great to ignore and could further divide society along economic lines. Unless measures are put in place, society risks AI technology remaining in the hands of a few massively powerful technocratic oligarchs.

There are possible solutions to these problems, such as banning autonomous weapons in a similar way to biological and chemical weapons. Economic measures could also be put in place, such as policies to protect those who lose their jobs to AI developments, like a universal basic income. However, in the event of AI surpassing human intelligence, mankind runs the risk of being left behind intellectually, and the alignment of AI's goals with our own comes into question. If a strong AI whose aims are misaligned with ours overtakes society, it may pursue that misaligned goal to the detriment of human or environmental safety, leading to possible catastrophe. The dangers posed by AI are too extreme, and the slower and more limited this field's development is, the better.

futureoflife.org/open-letter-autonomous-weapons
