BigMike wrote: ↑Sat Aug 27, 2022 7:33 am
Harry Baird wrote: ↑Sat Aug 27, 2022 12:37 am
This is epiphenomenalism, the view which I've already pointed out is analytically defeated in the article
Exit Epiphenomenalism: The Demolition of a Refuge by Titus Rivas & Hein van Dongen. Of course, as hq points out, disproofs such as this will simply be ignored by the fools of physicalism, who are only interested in evidence which supports their view.
You just don't get it, do you?
Consciousness cannot push atoms around. End of discussion.
Please note that I have not really followed this discussion in depth (I find this sort of argumentation really tedious because, as it seems to me, it hinges not so much on intellectual positions as on psychological stances), but I am interested to hear your views on the following comment.
I gather that you don't place any stock in the 'ghost in the machine' argument about *consciousness*. So if I understand right, you see the machine (the biological organism) as the entirety of the mechanism that produces all human creations. An animal, in comparison, is simply a lesser version of the human being. It would then be possible for some animal consciousness, if it continued to evolve and develop, to reach (perhaps at some point) a human level.
Is this a correct paraphrase?
It occurred to me when I pondered this to ask you whether it is possible, or probable (?), in your view that so-called AI intelligence could, or will, at some point develop a 'consciousness' similar to the one we humans possess. It would seem that the argument most reasonable to you is that, yes, this could happen. Because in such a situation a 'mind' with awareness and, I suppose, volition, being a product of materiality (a super-advanced physical computer), would be in truth just what we are.
After all, we seem to be imbuing computer systems with a facsimile of our own programming and then giving them the tools to self-develop and self-correct -- effectively to learn.
So it does seem logical (according to the views you hold) that, as in the science-fiction scenarios, these AI mechanisms could, or might, become consciously volitional. (If that happened, how would we describe that volition?)
It would be a curious turn of events if what humans created through conception of material possibility (advanced computer technology and programming) then superseded human beings. I wonder if you also view it like this? And if at some point these computer-awarenesses could somehow also handle the construction and fabrication of their own substrate (the components that make up a computing machine), they could very well become, eventually, just as aware as we believe that we are, but also capable of building all the systems needed to maintain their own (what is the word?) bodies.