The case against AI Doomerism

By John, 11 July, 2024

I found this interesting analysis from Halvar Flake (@HalvarFlake@mastodon.social) on Mastodon on why an AI superintelligence isn't likely to emerge any time soon, if at all.

The argument is made from the standpoint of physical and information-theoretic constraints.

A lot of the narratives seem to assume that a superintelligence will somehow free itself from constraints like "cost of compute", "cost of storing information", "cost of acquiring information" etc. - but if I assume that, I am assuming an omniscient being with infinite calculation powers and deterministic, computable physics, and then I can build a hardcore version of Maxwell's Demon that incinerates half of the earth by playing extremely clever billiards with all the atoms in the atmosphere. No diamondoid bacteria (whatever that was supposed to mean) necessary.
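The information-cost point can be made quantitative. Landauer's principle puts a floor of k_B·T·ln 2 joules on erasing one bit, so even a thermodynamically perfect demon pays real energy to keep its model of the atmosphere current. The back-of-the-envelope sketch below is my own illustration, not anything from the original post; the molecule count (~1e44) follows from the atmosphere's mass, while the bits-per-molecule figure is an arbitrary assumption.

```python
# Back-of-the-envelope: the Landauer cost for a Maxwell's-demon-style
# superintelligence to update a full state model of Earth's atmosphere.
# All figures marked "assumed" are rough illustrations, not measurements.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # ambient temperature, K (assumed)

# Minimum energy to erase one bit at temperature T (Landauer's principle)
landauer_j_per_bit = K_B * T * math.log(2)

# Atmosphere: ~5.1e18 kg of air at ~29 g/mol gives roughly 1e44 molecules
ATMOSPHERE_MOLECULES = 1e44
BITS_PER_MOLECULE = 200  # assumed: position + velocity at useful precision

total_bits = ATMOSPHERE_MOLECULES * BITS_PER_MOLECULE

# Overwriting the model once costs at least this much energy, even at the
# thermodynamic minimum, before any actual "clever billiards" computation:
min_energy_j = total_bits * landauer_j_per_bit

print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.2e} J/bit")
print(f"Energy per model update:   {min_energy_j:.2e} J")
print("For scale, annual world energy use is roughly 6e20 J.")
```

On these assumptions a single state update lands around 6e25 J, roughly 100,000 years of current world energy production, which is the flavor of constraint the post is pointing at: omniscience about the atmosphere is not free even in principle.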

I maintain that the biggest risk of AI is that powerful people will use it to build tools that harm less powerful people.
