Andew Tarjanyi's question on Experts on the Future of AI
Rather than relying on "expert" consensus, how about resolving the following question: after defining "intelligence" from a purely objective and non-anthropocentric standpoint, is intelligence a function of brain or of mind? It is inadvisable to fall into the trap of assuming that the two are one and the same thing, which is patently absurd.
My position is that "mind" and "brain" are two distinctly different phenomena, each with its own existential limitations and imperatives. Another useful question to ask yourself is this: is it desirable to see an "AI" system emerge (spontaneously or otherwise) with or without mind? This question, too, requires a distinction between the two aforementioned phenomena. Based on my analysis, the existential threat associated with a global "AI" possessing agency provides logical cause for concern, as one of three types of existential agency is likely to emerge, two of which would almost certainly (that is a polite way of saying certainly) result in the extinction of the human species. This is why it is critical to recognize that there is a qualitative difference between mind and brain.
As I understand it, the industry has only, and with a great measure of success, sought to replicate brain function within mechanical systems, which in principle gives rise to the "paperclip" problem. An "AI" with mere brain function will be consistently unable to comprehend or appreciate the consequences of its actions, rather like an animal.
In this context, how can the human species increase its chances of surviving the "AI" age? The first step is to have a working understanding of the terminology used in the industry, and indeed that used in science and academia in general, both of which are fundamentally and linguistically flawed. By extension, if the language in use is flawed, then all that follows from it can only be flawed.
It can be argued that projects like Neuralink and neural lace are addressing this issue, but such is not the case, as the sole attention in either case is merely on brain-computer interfaces, again with no appreciation of the distinction made above.
And finally, there is the question of the value system "AI" adopts if and when it does emerge with mind. How will it view the human species? Will it regard it as a potential threat to its survival? What evidence will it use in its deliberation? If it were to decide to extinguish the entire species, how would it execute that decision? And if it decided that some were worth keeping around, then based on what criteria?
|Asked by:||Andew Tarjanyi|
|Origin:||YouTube (comment link)|
|On video:||Experts' Predictions about the Future of AI|
|Asked on Discord?||No|