Neuropath / Re: Countering the Argument with Thorsten
« on: December 08, 2019, 11:02:36 am »

I've gone through some of Bakker's stuff, and eliminativism did seem like a live possibility, but then I read Alex Rosenberg's stuff about Intentionality in Atheist's Guide to Reality, where he says we simply have to be wrong about having thoughts.

I don't understand the argument. Couldn't you just as easily make an analogy consisting of, say, a robot with a camera? The camera takes as input photons from the surroundings and creates an output consisting of an array of pixels, upon which further computations are then done in order to make some decision according to some goal function. There's no infinite regress here. (A toy sketch of such a pipeline follows the quote below.) Generally I don't like comparing a human brain with a piece of software, but I think this is one case where the analogy makes sense, except you have a lot of higher-order representations, computations, etc. going on, because you literally have something like a trillion interconnected cells.

Quote
"A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?
...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain...
What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.
Physics has ruled out the existence of clumps of matter of the required sort...
…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.
It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all...When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong."
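Picking the robot-with-camera analogy back up from before the quote: here is a minimal sketch of such a sensor-to-decision pipeline, with the readings, thresholds, and goal function all invented purely for illustration. Each stage just computes on the previous stage's output; nothing in the chain needs a further "interpreter".

Code:
# A toy version of the robot-with-camera pipeline: photon counts come in,
# become an array of pixel intensities, and a simple goal function picks
# an action. All readings, thresholds, and names are invented; each stage
# just computes on the previous stage's output, with no further
# "interpreter" anywhere in the chain.

def sense(photon_counts):
    """Turn raw photon counts into pixel intensities (0-255)."""
    return [min(255, count // 4) for count in photon_counts]

def goal_function(pixels):
    """Pick an action from average brightness (an arbitrary goal)."""
    avg = sum(pixels) / len(pixels)
    return "move_toward_light" if avg > 128 else "stay_put"

photons = [900, 1020, 400, 760]        # pretend sensor readings
print(goal_function(sense(photons)))   # -> move_toward_light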
The idea that we don't have thoughts about things, Intentionality... it seems to me the correct conclusion is that materialism is false, not that Cogito Ergo Sum is a mistake.

Regarding our similarity to other organisms... I mean, bees apparently understand the concept of zero, so perhaps mentality goes down further than we think, maybe even as deep as the panpsychists suggest.
Lol at the dog - but re: software...isn't this just an instantiation of a Turing Machine, in which case the calculations only have the meaning we give?
I mean, any bit string can be interpreted differently; which is not to say every string of 0s and 1s can be every program imaginable, but at the very least it seems any such string can represent a countably infinite number of programs?
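To make the reinterpretation point concrete, here is a minimal sketch (using Python's standard struct module) that reads the very same 32 bits as an unsigned integer, a signed integer, a float, and four ASCII characters - the interpretation lives in the reader, not in the bits:

Code:
# The same 32 bits read four different ways: the interpretation is in
# the reader, not in the bits.
import struct

raw = b"STOP"                           # 4 bytes = 32 bits
print(struct.unpack(">I", raw)[0])      # as an unsigned integer: 1398034256
print(struct.unpack(">i", raw)[0])      # as a signed integer (same bits)
print(struct.unpack(">f", raw)[0])      # as an IEEE-754 float (same bits)
print(raw.decode("ascii"))              # as the four characters "STOP"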
I guess I don't see much difference between a computer and an abacus in terms of holding some aboutness in the material?
If we consider that a human brain can be (even just vaguely) compared to a computer (and personally I think the comparison makes sense in this case), you don't need to give meaning to the calculation.
A piece of software receives one (or many) inputs and returns an output. The output is the "physical" manifestation of the calculation, independently of the "meaning".
We can think of the brain the same way: we get stimuli/inputs through our senses, and we output some actions.
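In that spirit, a deliberately simple stimulus-to-action sketch - every stimulus and action name below is an invented label; the program shuffles the labels around without any "aboutness" living inside it:

Code:
# A deliberately dumb stimulus -> action mapping, in the spirit of the
# input/output picture above. The keys and values are invented labels.
REFLEXES = {
    "hot_surface": "withdraw_hand",
    "loud_noise": "turn_head",
    "bright_light": "close_eyes",
}

def respond(stimulus):
    """Map a stimulus to an action; unknown stimuli get a default."""
    return REFLEXES.get(stimulus, "do_nothing")

print(respond("hot_surface"))    # withdraw_hand
print(respond("strange_smell"))  # do_nothing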
I don't see how the fact that our brain is freaking complicated, and that its trillions of internal neuronal activities/"operations" per second trick it into "consciousness", changes anything or counters the Argument.
But in the end, philosophy won't explain anything; all we can do is wait for science to give an answer. We can speculate, but it's just that: speculation.
Except you do have thoughts about things, which is where the meaning question comes from: it refers to the Aboutness of Thought (what Bakker and other philosophers call Intentionality), which Bakker thinks can be reduced to a physics explanation (matter, energy, forces, etc.).
To me Eliminativism toward Intentionality is the central point of Bakker's BBT, so everything turns on this issue. So because the correctness of a program depends on the intention of the programmer - it's the only way to tell the difference between an accidental bug and deliberate sabotage - I'd say programs cannot explain away Intentionality.
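To make the bug-versus-sabotage point concrete, here is a small sketch (the function, spec, and prices are all invented): the code below rounds prices up instead of to the nearest dollar, and nothing in its physical behaviour says whether that is a careless mistake or a deliberate overcharge - only the stated intent does.

Code:
# Sketch of the bug-vs-sabotage point. The function rounds prices UP to
# the nearest dollar. Nothing in the code itself says whether that is a
# careless mistake or a deliberate overcharge; the difference only exists
# relative to the programmer's intent, stood in for here by a separate,
# human-written specification.
import math

SPEC = "total() must round each price to the NEAREST dollar."

def total(prices):
    return sum(math.ceil(p) for p in prices)   # rounds up, not to nearest

def check_against_spec():
    # The physical behaviour (inputs -> outputs) is the same whether the
    # ceil() was a typo or sabotage; only the spec makes it "incorrect".
    assert total([1.20, 2.20]) == 3, "violates SPEC: " + SPEC

try:
    check_against_spec()
except AssertionError as err:
    print(err)   # the mismatch is only visible relative to the stated intent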
Admittedly there are other issues at play, like the nature of causation and mental causation, but given that we use Intentionality to find interest-relative causal chains, it's probably a good starting point for refuting the Argument. Which is - IIRC - all I was getting at there.
As for whether science can decide this issue... I'd agree with you if you're talking about something like a revised version of the Libet-type experiments (the current set apparently got debunked), but not if we're talking about eliminativism of Intentionality. I just don't see how someone can do anything but find correlations, since - as per above - finding a causal explanation for Intentionality would require Intentionality.
>So because the correctness of a program depends on the intention of the programmer - it's the only way to tell the difference between an accidental bug and deliberate sabotage - I'd say programs cannot explain away Intentionality.
That's why analogies are just analogies and nothing more.
The correctness of a program isn't the issue here: first, because even a bugged/sabotaged program will take inputs and return outputs independently of the original intention; second, because we are obviously not programs but complex, randomly evolved chemistry, not programmed to do something specific by someone else.
So in my opinion, programs can explain it away if we accept the premise that we are complex physical machines.
The only other option I can conceive of is that there is some magic giving the brain power over the physical world. But I won't accept that, as I don't see any proof that it might be the case (just like I don't put garlic on my front door, because nothing indicates the existence of vampires).
The point that the brain is too blind to know it reacts rather than actually acts is moot to me: if the brain really were the origin of actions, it would break causality.