Recent Posts

Pages: 1 [2] 3 4 ... 10
11
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 09, 2019, 09:00:36 pm »
Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is that this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.

From Rosenberg's latest book:
Quote
What makes the neurons in the hippocampus and the medial entorhinal cortex of the rat into grid cells and place cells—cells for location and direction? Why do they have that function, given that structurally they are pretty much like many other neurons throughout both the rat and the human brain?
From as early in evolution as the emergence of single-cell creatures, there was selection for any mechanism that just happened to produce environmentally appropriate behavior, such as being in the right place at the right time. In single-cell creatures, there are “organelles” that “detect” gradients in various chemicals or environmental factors (sugars, salts, heat, cold, even magnetic fields). “Detection” here simply means that, as these gradients strengthen or weaken, the organelles change shape in ways that cause their respective cells to move toward or away from the chemicals or factors as the result of some quite simple chemical reactions. Cells with organelles that happened to drive them toward sugars or away from salts survived and reproduced, carrying along these adaptive organelles. The cells whose organelles didn’t respond this way didn’t survive. Random variations in the organelles of other cells that just happened to convey benefits or advantages or to meet those cells’ survival or reproductive needs were selected for.
The primitive organelles’ detection of sugars or salts consisted in nothing more than certain protein molecules inside them changing shape or direction of motion in a chemical response to the presence of salt or sugar molecules. If enough of these protein molecules did this, the shape of the whole cell, its direction, or both would change, too. If cells contained organelles with iron atoms in them, the motion of the organelles and the cells themselves would change as soon as the cells entered a magnetic field. If this behavior enhanced the survival of the cells, the organelles responsible for the behavior would be called “magnetic field detectors.” There’d be nothing particularly “detecting” about these organelles, however, or the cells they were part of. The organelles and cells would just change shape or direction in the presence of a magnetic field in accordance with the laws of physics and chemistry.
The iterative process of evolution that Darwin discovered led from those cells all the way to the ones we now identify as place and grid cells in the rat’s brain. The ancestors to these cells—the earliest place and grid cells in mammals—just happened to be wired to the rest of the rat’s ancestors’ neurology, in ways that just happened to produce increasingly adaptive responses to the rat’s ancestors’ location and direction. In other mammals, these same types of cells happened to be wired to the rest of the neurology in a different way, one that moved the evolution of the animal in a less-adaptive direction. Mammals wired up in less-adaptive ways lost out in the struggle for survival. Iteration (repetition) of this process produced descendants with neurons that cause behavior that is beautifully appropriate to the rat’s immediate environment. So beautifully appropriate, that causing the behavior is their function.
The function of a bit of anatomy is fixed by the particular adaptation that natural selection shaped it to deliver. The process is one in which purpose, goal, end, or aim has no role. The process is a purely “mechanical” one in which there are endlessly repeated rounds of random or blind variation followed by a passive process of environmental filtration (usually by means of competition to leave more offspring). The variation is blind to need, benefit, or advantage; it’s the result of a perpetual throwing of the dice in mixing genes during sex and mutation in the genetic code that shapes the bits of anatomy. The purely causal process that produces functions reveals how Darwin’s theory of natural selection banishes purpose even as it produces the appearance of purpose; the environmental appropriateness of traits with functions tempts us to confer purpose on them.
What makes a particular neuron a grid cell or a place cell? There’s nothing especially “place-like” or “grid-like” about these cells. They’re no different from cells elsewhere in the brain. The same goes for the neural circuits in which they are combined. What makes them grid cells and place cells are the inputs and outputs that natural selection linked them to. It is one that over millions of years wired up generations of neurons in their location in ways that resulted in ever more appropriate responses for given sensory inputs from the rat’s location and direction.
Evolutionary biology identifies the function of the grid and place cells in the species Rattus rattus by tracing the ways in which environments shaped cells in the hippocampus and entorhinal cortex of mammalian nervous systems to respond appropriately (for the organism) to location and direction. Their having that function consists in their being shaped by a particular Darwinian evolutionary process.
But what were the “developmental” details of how these cells were wired up to do this job in each individual rat’s brain? After all, rats aren’t born with all their grid and place cells in place (Manns and Eichenbaum, 2006). So how do they get “tuned” up to carry continually updated environmentally appropriate information about exactly where the rat is and which way the rat needs to go for food or to avoid cats? Well, this is also a matter of variation and selection by operant conditioning in the rat brain, one in which there is no room for according these cells “purpose” (except as a figure of speech, like the words “design problem” and “selection” that are used as matters of convenience in biology even though there is no design and no active process of selection in operation at all).
Like everything else in the newborn rat’s anatomy, neurons are produced in a sequence and quantity determined by the somatic genes in the rat fetus. Once they multiply, the neurons in the hippocampus and the entorhinal cortex, and many other neurons in the rat’s brain as well, make and unmake synaptic connections with each other. Synaptic connections that lead to behavior rewarded by the environment, such as finding the mother’s teat, are repeated and thus strengthened physically (by the process Eric Kandel discovered; Kandel, 2000). Among the connections made, many are then unmade because they lead to behaviors that are not rewarded by feedback processes that strengthen the synaptic connections physically. Some are even “punished” by processes that interrupt them. In the infant rat, the place cells make contact with the grid cells by just such a process in the first three weeks of life, enabling the rat’s brain to respond so appropriately to its environment that these cells are now called “place” and “grid” cells (O’Keefe and Dostrovsky, 1979). Just as in the evolution of grid and place cells over millions of years, so also in their development in the brain of a rat pup, there is no room whatever for purpose. It’s all blind variation, random chance, and the passive filtering of natural selection.
These details about how the place cells and the grid cells got their functions are important here for two reasons. First, they reflect the way that natural selection drives any role for a theory of mind completely out of the domain of biology, completing what Newton started for the domain of physics and chemistry. They show how the appearance of design by some all-powerful intelligence is produced mindlessly by purely mechanical processes (Dennett, 1995). And they make manifest that the next stage in the research program that began with Newton is the banishment of the theory of mind from its last bastion—the domain of human psychology.
Second, these details help answer a natural question to which there is a tempting but deeply mistaken answer. If the grid cells and the place cells function to locate the rat’s position and direction of travel, why don’t they contain or represent its location and direction? If they did, wouldn’t that provide the very basis for reconciling the theory of mind with neuroscience after all? This line of reasoning is so natural that it serves in part to explain the temptation to accord content to the brain in just the way that makes the theory of mind hard to shake. By now, however, it’s easy to see why this reasoning is mistaken. For one thing, if the function of the place and grid cells really makes them representations of direction and location, then every organ, tissue, and structure of an organism with a function would have the same claim on representing facts about the world.
Consider the long neck of the giraffe, whose function is to reach the tasty leaves high up in the trees that shorter herbivores can’t reach, or the white coat of the polar bear whose function is to camouflage the bear from its keen-eyed seal prey in the arctic whiteness. Each has a function because both are the result of the same process of random or blind variation and natural selection that evolved the grid cells in the rat. Does the giraffe’s neck being long represent the fact that the leaves it lets the giraffe reach are particularly tasty? Is the coat of the polar bear about the whiteness of its arctic environment or about the keen eyesight of the seals on which the bear preys? Is there something about the way the giraffe’s neck is arranged that says, “There are tasty leaves high up in the trees that shorter herbivores can’t reach”? Is there something about the white coat of the polar bear that expresses the fact that it well camouflages the bear from its natural prey, seals? Of course not.
But even though they don’t represent anything, the long neck of the giraffe and the white coat of the polar bear are signs: the long neck is a sign that there are tasty leaves high in the trees on the savanna, and the white coat is a sign that the bear needs to camouflage itself from its prey in the whiteness of the arctic, the way clouds are signs that it may rain. But for the neck and coat to also be symbols, to represent, to have the sort of content the theory of mind requires, there’d have to be someone or something to interpret them as meaning tasty leaves or a snowy environment. Think back to why red octagon street signs are symbols of the need to stop—symbols we interpret as such—and not merely signs of that need.
The sign versus symbol distinction is tricky enough to have eluded most neuroscientists. The firing of a grid cell is a good sign of where the rat is. It allows the neuroscientist to make a map of the rat’s space, plot where it is and where it’s heading. John O’Keefe called this a “cognitive map,” following Edward Tolman (1948). The “map,” however, is the neuroscientist’s representation. The rat isn’t carrying a map around with it, to consult about where it is and where it’s heading. Almost all neuroscientists use the word “representation,” which in more general usage means “interpreted symbol,” in this careless way—to describe what is actually only a reliable sign. (See Moser et al., 2014 for a nice example.) The mistake is usually harmless since neuroscientists aren’t misled into searching for some other part of the brain that interprets the neural circuit firing and turns it into a representation. In fact, most neuroscientists have implicitly redefined “representation” to refer to any neural state that is systematically affected by changes in sensory input and results in environmentally appropriate output, in effect, cutting the term “representation” free from the theory of mind, roughly the way evolutionary biologists have redefined “design problem” to cut it free from the same theory.
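The "detection without representation" point in the quoted passage can be made concrete with a toy simulation (my own sketch in Python, not Rosenberg's; the gradient function and the update rule are invented for illustration):

```python
# Toy sketch of "detection" as mere mechanism: a "cell" that climbs a
# sugar gradient using only a local comparison rule. There is no map,
# goal, or representation here -- just a state and an update rule.

def sugar(x):
    """Sugar concentration at position x (peaks at x = 10)."""
    return -(x - 10.0) ** 2

def step(x, eps=0.1):
    """Drift toward whichever neighboring position has more sugar."""
    return x + eps if sugar(x + eps) > sugar(x - eps) else x - eps

x = 0.0
for _ in range(200):
    x = step(x)

# The "cell" ends up hovering near the sugar peak, yet nothing in the
# rule is "about" sugar -- it is just a comparison plus a displacement.
print(x)
```

Calling the rule a "sugar detector" is exactly the convenient figure of speech the passage describes: the label reflects the behavior's environmental appropriateness, not any content inside the mechanism.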

I too think Rosenberg and Bakker are "right" even if they aren't necessarily 100% correct.  But, far be it from me to think I fully understand Rosenberg's point.  I think you'd find the book interesting though, Sci.

Great quote - I disagree w/ Alex Rosenberg but I think more than most he correctly identifies the reality of the Physicalist position, the acceptance that matter (and thus the brain if it's matter) cannot be about anything.

Curious - why do you think the eliminativist position is correct? I could easily throw out free will if there were enough evidence, but the idea that we don't have thoughts is a step beyond my boggle threshold.
12
Neuropath / Re: Countering the Argument with Thorsten
« Last post by H on December 09, 2019, 01:48:02 pm »
Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is that this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.


I too think Rosenberg and Bakker are "right" even if they aren't necessarily 100% correct.  But, far be it from me to think I fully understand Rosenberg's point.  I think you'd find the book interesting though, Sci.
13
General Misc. / Re: Quotes
« Last post by sciborg2 on December 09, 2019, 11:04:14 am »
"We have come from God, and inevitably the myths woven by us, though they contain error, will also reflect a splintered fragment of the true light, the eternal truth that is with God. Indeed only by myth-making, only by becoming 'sub-creator' and inventing stories, can Man aspire to the state of perfection that he knew before the Fall.

Our myths may be misguided, but they steer however shakily towards the true harbour, while materialistic 'progress' leads only to a yawning abyss and the Iron Crown of the power of evil."
-JRR Tolkien
14
General Misc. / Re: What are you watching?
« Last post by sciborg2 on December 09, 2019, 01:55:08 am »
Watching Nightflyers - not as bad as the criticisms suggest (IMO ofc), but it does seem this could easily have been a movie or a 3-4 episode special rather than 10 episodes.

15
General Misc. / Re: What are you watching?
« Last post by TaoHorror on December 08, 2019, 11:11:04 pm »
The Watchmen TV show has also been quite good.  At first I was figuring I wouldn't care, since these were new characters, but in actuality the show is well done and quite interesting in its own right, plus there seem to be tons of nods to the movie and the comic books.  I'd recommend it, but only if you've already seen the movie first.

+1 ... I do my best to go into something with no expectations, but fail frequently ( reminds me of an enlightenment precept to be without thought ... I digress ) and thought I would be let down. Not at all - this thing has legs. "Weird comic-booky" is the best I can explain it, but it stays true to the comic/movie atmospheric contrast of pop music/imagery and violence - some is kick-you-in-the-gut kind of stuff; y'all will dig it, being fans of PON. The lead, Sister Night, is awesome - always liked the actor playing her, but she shines in this thing, and it tickles me when she goes "WHAT THE FUCK?!?!" - the indignation of it comes through. Kind of a spoiler, as the part I'm about to talk about is in the current episode 7 ( show ain't over yet ), but nothing earth-shattering - I'll put it in spoilers anyways for any who might care. So far so good; I love the show, and it has a mystery going on that on its surface looks simple, but underneath it's getting really cool.


I'm purposely not reading the Joker discussion as I've not seen it - my oldest son screwed me over, we were supposed to go see it together and he ditched me to go see it with his friends - so now looking like I'll be waiting for it on demand, sigh ... looks good.
16
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 08, 2019, 10:29:34 pm »
I've gone through some of Bakker's stuff, and eliminativism did seem like a live possibility but then I read Alex Rosenberg's stuff about Intentionality in Atheist's Guide to Reality where he says we simply have to be wrong about having thoughts:

Quote
"A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain...

What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.

Physics has ruled out the existence of clumps of matter of the required sort...

…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all...When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong."

The idea that we don't have thoughts about things, Intentionality... it seems to me the correct conclusion is that materialism is false, not that Cogito Ergo Sum is a mistake.
I don't understand the argument. Couldn't you just as easily make an analogy consisting of, say, a robot with a camera? The camera takes as input photons from the surroundings and creates an output consisting of an array of pixels or something, upon which further computations are then done in order to make some decision according to some goal function. There's no infinite regress here. Generally I don't like comparing a human brain with a piece of software, but I think this is one case where the analogy makes sense, except you have a lot of higher-order representations, computations, etc. going on because you literally have like a trillion interconnected cells.
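The robot analogy above can be sketched in a few lines (my own toy code; `sense`, `detect`, and `act` are hypothetical names): each stage is just a function from input to output, so no stage needs a further "interpreter" of its result and no regress starts.

```python
# Toy version of the robot-with-a-camera analogy: sensing, feature
# extraction, and decision are each just functions from input to output.

def sense(scene):
    """Camera: turn a scene into an array of pixel brightness values."""
    return list(scene)

def detect(pixels, threshold=0.5):
    """Feature stage: is there a bright region anywhere in the image?"""
    return any(p > threshold for p in pixels)

def act(bright_spot_seen):
    """Decision stage: pick an action according to a fixed goal function."""
    return "turn_toward_light" if bright_spot_seen else "keep_searching"

scene = [0.1, 0.2, 0.9, 0.3]          # one bright patch
action = act(detect(sense(scene)))
print(action)                          # prints "turn_toward_light"
```

Whether this composition of functions amounts to Aboutness, or merely to a reliable sign in Rosenberg's sense, is of course the very point in dispute.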

Regarding our similarity to other organisms...I mean bees apparently understand the concept of Zero so perhaps mentality goes down further than we think, maybe even as deep as the panpsychics suggest.  ;)


Lol at the dog - but re: software... isn't this just an instantiation of a Turing Machine, in which case the calculations only have the meaning we give them?

I mean, any bit string can be interpreted differently - which is not to say every string of 0s and 1s can be every program imaginable, but at the very least it seems any such string can represent a countably infinite number of programs?

I guess I don't see much difference between a computer and an abacus in terms of holding some aboutness in the material?
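The interpretation-relativity point above can be shown concretely (my own illustration in Python; the byte values are arbitrary): the same bits "mean" different, incompatible things depending on which decoder is applied to them.

```python
import struct

# The same four bytes under three incompatible interpretations: the bits
# themselves do not fix the "meaning"; the decoder applied to them does.

raw = b"ABCD"                            # bytes 0x41 0x42 0x43 0x44

as_text  = raw.decode("ascii")           # a 4-character string
as_uint  = struct.unpack("<I", raw)[0]   # a little-endian 32-bit integer
as_float = struct.unpack("<f", raw)[0]   # an IEEE-754 single-precision float

print(as_text)    # prints "ABCD"
print(as_uint)    # prints 1145258561
print(as_float)   # roughly 781.04
```

On this view the computer is indeed like the abacus: the physical state is one thing, and which of the countless possible readings it "carries" is supplied from outside.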

If we consider that a human brain can be (even just vaguely) associated with a computer (and personally I think the association makes sense in this case), you don't need to give meaning to the calculation.

A piece of software receives one (or many) inputs and returns an output. The output is the "physical" manifestation of the calculation, independently of the "meaning".
We can think of the brain the same way: we have stimuli/inputs through our senses, and we output some actions.

I don't see how the fact that our brain is freaking complicated, and that its trillions of internal neuronal activities/"operations" per second trick it into "consciousness", changes anything or counters the Argument.

But in the end, philosophy won't explain anything, all we can do is wait for science to give an answer. We can speculate but it's just that, speculation.

Except you do have thoughts about things, which is where the meaning question comes from, as it refers to the Aboutness of Thought (what Bakker & other philosophers call Intentionality), which Bakker thinks can be reduced to a physics explanation (matter, energy, forces, etc).

To me Eliminativism toward Intentionality is the central point of Bakker's BBT, so everything turns on this issue. So because the correctness of a program depends on the intention of the programmer - it's the only way to tell the difference between an accidental bug and deliberate sabotage - I'd say programs cannot explain away Intentionality.

Admittedly there are other issues at play, like the nature of causation and mental causation, but given that we use Intentionality to find interest-relative causal chains it's probably a good starting point to refute the Argument. Which is - IIRC - all I was getting at there.

As for whether science can decide this issue...I'd agree with you if you're talking about something like a revised version of the Libet-type experiments (the current set apparently got debunked), but not if we're talking about eliminativism of Intentionality. I just don't see how someone can do anything but find correlations since - as per above - finding a causal explanation for Intentionality would require Intentionality.

>So because the correctness of a program depends on the intention of the programmer - it's the only way to tell the difference between an accidental bug and deliberate sabotage - I'd say programs cannot explain away Intentionality.

That's why analogies are just analogies and nothing more.
The correctness of a program isn't the issue here: first, because even a bugged/sabotaged program will take inputs and return outputs independently of the original intention; and second, because we are obviously not programs but complex, randomly evolved chemistry, not programmed to do something specific by someone else.

So in my opinion, programs can explain it away if we accept the premise that we are complex physical machines.
The only other option I can conceive is that there is some magic giving the brain power over the physical world. But I won't accept that, as I don't see any proof that it might be the case (just like I don't put garlic on my front door, because nothing indicates the existence of vampires).

The brain being too blind to know it reacts instead of actually acting is moot to me, as if it were actually the origin of actions it would break causality.

But the "garlic on the door" is assuming that some Holy Grail programs (namely the ones in our brain) have self-awareness and determinate thoughts and others are just atoms in the void?

There will be input and output into minds, but minds also have thoughts referencing aspects of reality - the "Aboutness" aspect of our thinking philosophers call Intentionality.

This all goes back to the Alex Rosenberg quote - how can one clump of matter (neurons) be about another clump of matter (Paris) when matter does not have intrinsic representation? It's less my issue with programs lacking meaning than this question.

Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.
17
Neuropath / Re: Countering the Argument with Thorsten
« Last post by Jabberwock03 on December 08, 2019, 11:02:36 am »
I've gone through some of Bakker's stuff, and eliminativism did seem like a live possibility, but then I read Alex Rosenberg's stuff about Intentionality in The Atheist's Guide to Reality, where he says we simply have to be wrong about having thoughts:

Quote
"A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain...

What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.

Physics has ruled out the existence of clumps of matter of the required sort...

…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all...When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong."

The idea that we don't have thoughts about things, Intentionality...it seems to me the correct conclusion is that materialism is false, not that Cogito Ergo Sum is a mistake.
18
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 08, 2019, 05:43:54 am »
19
Philosophy & Science / Re: Multiverse Theories Are Bad for Science
« Last post by themerchant on December 07, 2019, 11:14:51 am »
All these theories and no experiments. Science is about falsifiable results; if you can't even experiment on your theories, it's philosophy, not science. All imo of course.

Just for clarity, people agree on this, yeah? One isn't "doing science" if all they are doing is proposing un-falsifiable claims, right? There is some division between theoretical vs. experimental of course, but is there a point where theory is so far from being verifiable that it ceases to be scientific?

I'm paraphrasing a Feynman point in a lecture about science.

https://www.youtube.com/watch?v=LIxvQMhttq4

It's also basically what I was taught and also what I think, although I might just be indoctrinated to think that. Who knows.
20
The Unholy Consult / Re: [TUC Spoilers] Esmenet the Angelic Ciphrang
« Last post by mostly.harmless on December 07, 2019, 12:45:14 am »
Lol

