Recent Posts

Pages: [1] 2 3 ... 10
1
Short Stories & Others / Re: The Lollipop Factory
« Last post by Wilshire on February 18, 2020, 06:43:25 pm »
2020 Update: From Three Pound Brain
Quote
What are the odds that I would finish writing a near-future viral thriller (The Lollipop Factory) just as 2019-nCoV was becoming entrenched in Wuhan?

So he at least wrote it. Let's hope he uses "2019-nCoV" hysteria as motivation to publish it.
2
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 11, 2019, 10:14:25 pm »
But I think you are right, I find myself being something of both a bad Hegelian and a bad Physicalist.  I'm also (something of) a Monist, probably.  I think "mentality" isn't an innate state of matter, in the same way that "wetness" isn't.  It's a product of structure and relation.  Matter can deliver what we call "mind" in the same sort of way that matter can deliver a house or a pile of rubble, just depending on how it's structured in relation to us.

Wetness as in the feeling of something being wet, or liquid as a matter-state?

So some structures are special, giving rise to feels/thoughts/etc., and other structures are not? Or do all structures have something-it-is-like-to-be them?
3
Neuropath / Re: Countering the Argument with Thorsten
« Last post by H on December 11, 2019, 03:41:47 pm »
When you say lapsing into a real Phenom. Stance, do you mean a position where there is something mental or at least proto-mental (whatever that means) in Nature?

I guess I'm curious because you seem to be trying to reconcile Hegel's Idealism with a more Physicalist position, or am I reading you wrong there?
I don't know about that.  I think it would depend on what "in nature" means.  Because, of course, to me, humans are in (and of) nature.  But I think what I mean, more so, is that I am taking for granted, or making the assumption, that what self-consciousness seems to be, experientially or phenomenologically, it somewhat actually is (or that it is at all).  And also that what plain consciousness (i.e. non-self-consciousness) is, actually is what it seems (to us) to be (and not be).

But I think you are right, I find myself being something of both a bad Hegelian and a bad Physicalist.  I'm also (something of) a Monist, probably.  I think "mentality" isn't an innate state of matter, in the same way that "wetness" isn't.  It's a product of structure and relation.  Matter can deliver what we call "mind" in the same sort of way that matter can deliver a house or a pile of rubble, just depending on how it's structured in relation to us.
4
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 11, 2019, 12:30:50 am »
So before self-consciousness is Consciousness? Marcus Arvan has an argument like that; he told me he was going to publish a paper on how to reconcile his Computationalism with his (functional at least) Dualism...It's interesting as he doesn't necessarily claim there are insensate structures, that even video game characters might suffer...I explained to him how a game character doesn't even have independent "parts" necessarily, sharing aspects in Unity for example...but he was (AFAICTell) undeterred....

In an under-thought-out way, maybe?  I think (maybe? some?) animals are conscious, but what exactly that "non-self" consciousness is, well, that is hard to say.  But I think I am lapsing into a real Phenomenological stance though.  Or, so it seems to me I might be.

But I think when most people talk about consciousness, they really mean human consciousness specifically, which is self-consciousness, I think.  Because Dasein (Human Being, self-conscious Being) is different (presumably) from, say, Ape Being or Octopus Being, as far as I can tell at least.

When you say lapsing into a real Phenom. Stance, do you mean a position where there is something mental or at least proto-mental (whatever that means) in Nature?

I guess I'm curious because you seem to be trying to reconcile Hegel's Idealism with a more Physicalist position, or am I reading you wrong there?
5
Neuropath / Re: Countering the Argument with Thorsten
« Last post by H on December 10, 2019, 11:29:06 pm »
So before self-consciousness is Consciousness? Marcus Arvan has an argument like that; he told me he was going to publish a paper on how to reconcile his Computationalism with his (functional at least) Dualism...It's interesting as he doesn't necessarily claim there are insensate structures, that even video game characters might suffer...I explained to him how a game character doesn't even have independent "parts" necessarily, sharing aspects in Unity for example...but he was (AFAICTell) undeterred....

In an under-thought-out way, maybe?  I think (maybe? some?) animals are conscious, but what exactly that "non-self" consciousness is, well, that is hard to say.  But I think I am lapsing into a real Phenomenological stance though.  Or, so it seems to me I might be.

But I think when most people talk about consciousness, they really mean human consciousness specifically, which is self-consciousness, I think.  Because Dasein (Human Being, self-conscious Being) is different (presumably) from, say, Ape Being or Octopus Being, as far as I can tell at least.
6
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 10, 2019, 11:21:21 pm »
So the structure inherently contains the meaning - a sort of Platonism - when there's recursion?

I would agree there seems to be a dismissal of structure's importance among the more spiritual crowd that tries to compare the brain to a radio-controlled car*, but it also seems to me structures don't have determinate meaning - at least from the outside. So the idea here would be that a structure - or at least a structure + process - brings about mental characteristics on the inside...right?

*Though Einstein did write the introduction to Upton Sinclair's book on telepathy Mental Radio... ??? :-X :-X

Hmm, I'm not sure, obviously I have not thought this out thoroughly.  But I don't think structure has to equal meaning.  I could imagine structures that are meaningless though, right?  It's just that what we call "meaning" is generally found through relations, structure being one that comes up often?

But I think I agree, structure, relation and process, that is what mind is.  Where is the recursion?  Well, because we are structured in relation to ourselves in such a way that the process of Being, that is, Dasein (human Being), takes itself into account in what it is to Be (or what it is in Becoming).  So self-consciousness is recursively considering itself, that is, its relation to itself (in addition to other things), in its Being.

Does that make sense?  I don't even know, but off the top of my head it seems to, maybe.  Or maybe it's a word-salad.

So before self-consciousness is Consciousness? Marcus Arvan has an argument like that; he told me he was going to publish a paper on how to reconcile his Computationalism with his (functional at least) Dualism...It's interesting as he doesn't necessarily claim there are insensate structures, that even video game characters might suffer...I explained to him how a game character doesn't even have independent "parts" necessarily, sharing aspects in Unity for example...but he was (AFAICTell) undeterred....
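
To make the "no independent parts" point a bit more concrete, here is a minimal sketch in plain Python (not Unity's actual API; the class names are made up purely for illustration): two on-screen "characters" can reference the very same underlying asset data, so neither has a wholly independent bundle of parts.

Code:
# Toy sketch (not Unity's API): two "characters" that share the same
# underlying asset data rather than owning independent copies of it.

class SharedMesh:
    """One block of geometry data, loaded once and referenced by many."""
    def __init__(self, name, vertices):
        self.name = name
        self.vertices = vertices  # shared, not duplicated per character

class Character:
    """A 'character' is little more than a transform plus references
    to shared assets; it has no wholly independent 'parts'."""
    def __init__(self, label, mesh, position):
        self.label = label
        self.mesh = mesh          # reference to the shared data
        self.position = position

goblin_mesh = SharedMesh("goblin", vertices=[(0, 0), (1, 0), (0, 1)])

a = Character("goblin_A", goblin_mesh, position=(10, 2))
b = Character("goblin_B", goblin_mesh, position=(42, 7))

# Both characters literally point at the same object in memory:
print(a.mesh is b.mesh)  # True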
7
Neuropath / Re: Countering the Argument with Thorsten
« Last post by H on December 10, 2019, 11:14:38 pm »
So the structure inherently contains the meaning - a sort of Platonism - when there's recursion?

I would agree there seems to be a dismissal of structure's importance among the more spiritual crowd that tries to compare the brain to a radio-controlled car*, but it also seems to me structures don't have determinate meaning - at least from the outside. So the idea here would be that a structure - or at least a structure + process - brings about mental characteristics on the inside...right?

*Though Einstein did write the introduction to Upton Sinclair's book on telepathy Mental Radio... ??? :-X :-X

Hmm, I'm not sure, obviously I have not thought this out thoroughly.  But I don't think structure has to equal meaning.  I could imagine structures that are meaningless though, right?  It's just that what we call "meaning" is generally found through relations, structure being one that comes up often?

But I think I agree, structure, relation and process, that is what mind is.  Where is the recursion?  Well, because we are structured in relation to ourselves in such a way that the process of Being, that is, Dasein (human Being), takes itself into account in what it is to Be (or what it is in Becoming).  So self-consciousness is recursively considering itself, that is, its relation to itself (in addition to other things), in its Being.

Does that make sense?  I don't even know, but off the top of my head it seems to, maybe.  Or maybe it's a word-salad.
8
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 10, 2019, 10:56:46 pm »
So, to say that "mind" is only neuronal activity sort of seems, to me, to be akin to saying the house and the pile of rubble are the same thing.  Except, of course, they aren't, because the structure and relation are keys to what makes a "whole" of its constituent parts.

So the structure inherently contains the meaning - a sort of Platonism - when there's recursion?

I would agree there seems to be a dismissal of structure's importance among the more spiritual crowd that tries to compare the brain to a radio-controlled car*, but it also seems to me structures don't have determinate meaning - at least from the outside. So the idea here would be that a structure - or at least a structure + process - brings about mental characteristics on the inside...right?

*Though Einstein did write the introduction to Upton Sinclair's book on telepathy Mental Radio... ??? :-X :-X
9
Neuropath / Re: Countering the Argument with Thorsten
« Last post by H on December 10, 2019, 03:05:12 pm »
Great quote - I disagree w/ Alex Rosenberg but I think more than most he correctly identifies the reality of the Physicalist position, the acceptance that matter (and thus the brain if it's matter) cannot be about anything.

Curious - why do you think the eliminativist position is correct? I could easily throw out free will if there was enough evidence, but the idea we don't have thoughts is a step beyond my boggle threshold.
Well, I don't know.  I do think that "aboutness" is likely false.  Past that, I do think that "conditioning" of some sort is likely most of what "mind" is/does.  Does this mean that anything and everything we attribute to mind is wholly incorrect?  I don't know, but I don't think it must be that.

Maybe I could say my idea is that the phenomenon of "mind" is neither quite what we think it is (with aboutness and the like), nor just "bare" neuronal firing.  To me, maybe this is because consciousness (and more importantly self-consciousness) is both recursive and relational.

Recursive in the sense that being self-aware means you are in a sort of feedback loop, one that allows you to "see" yourself and (to some degree or other) modify or influence your own thoughts/behaviors.  Relational, because I don't think any "thought," whatever that might be, or not be, "stands alone."  In the same way that the letters C-A-T have nothing, in themselves, to do with a four-legged animal, no thought, in-itself, has any "stand alone" thing like meaning outside its relations.  Maybe, in a sort of mereology, each part is a part, but an individual part does not inform us of the whole, being only a part of it.
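
Just to illustrate the "feedback loop" part in the thinnest mechanical sense (a toy sketch in plain Python, and obviously not a model of consciousness): a process that keeps a record of its own outputs and uses that record to adjust its next step is, in a very weak sense, "seeing" and modifying itself.

Code:
# Toy sketch of a self-monitoring feedback loop: the system's next move
# depends on an inspection of its own past behavior. Purely illustrative.

def run(steps=10):
    history = []          # the system's record of its own outputs
    value = 1.0
    for _ in range(steps):
        # "Reflection": look back at what I have been doing...
        recent_avg = sum(history[-3:]) / len(history[-3:]) if history else 0.0
        # ...and adjust my own behavior in light of it.
        if recent_avg > 5.0:
            value *= 0.5   # rein myself in
        else:
            value *= 1.5   # carry on, a little more boldly
        history.append(value)
    return history

print(run())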

So, to say that some neuronal "part" is the whole would be like saying that hydrogen atoms, one in a cat and one in a star, explain both cats and stars for us.  Or, let us pretend you own a house.  Then there is an earthquake and the house collapses.  We could be an "imaginative physicalist" (I guess) and say your house before and after are the exact same thing.  Still the same number of atoms, still the same atomic, molecular and chemical composition.  In other words, a summary physical survey says both things (the house before, the pile of rubble now) are the "same."  Except, of course, no one would say that, because it is easy to see that the two are not the same at all.  One had a definite, "meaningful" structure and relation, whereas the other is structured and related (in a manner of speaking) only by reference to what it was before and to its collapse.

So, to say that "mind" is only neuronal activity sort of seems, to me, to be akin to saying the house and the pile of rubble are the same thing.  Except, of course, they aren't, because the structure and relation are keys to what makes a "whole" of its constituent parts.
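
One toy way to put the house/rubble point in code (just an analogy in plain Python, nothing more): a survey that only counts the parts cannot tell the two apart, while a structural comparison can.

Code:
# Toy analogy: the "house" and the "rubble" contain exactly the same parts,
# but only the structural comparison distinguishes them.
from collections import Counter

house  = {"roof": ["beam", "beam", "tile"], "walls": ["brick", "brick", "brick"]}
rubble = ["beam", "tile", "brick", "brick", "beam", "brick"]

def parts(thing):
    """Flatten a structure into a bag of constituent parts."""
    if isinstance(thing, dict):
        return [p for v in thing.values() for p in parts(v)]
    if isinstance(thing, list):
        return [p for v in thing for p in parts(v)]
    return [thing]

# A "summary physical survey": same parts, same counts.
print(Counter(parts(house)) == Counter(parts(rubble)))  # True

# A structural comparison: obviously not the same thing at all.
print(house == rubble)                                  # False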

Of course, I am not smart or credentialed, so maybe that is a whole line of crap.
10
Neuropath / Re: Countering the Argument with Thorsten
« Last post by sciborg2 on December 09, 2019, 09:00:36 pm »
Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is that this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.

From Rosenberg's latest book:
Quote
What makes the neurons in the hippocampus and the medial entorhinal cortex of the rat into grid cells and place cells—cells for location and direction? Why do they have that function, given that structurally they are pretty much like many other neurons throughout both the rat and the human brain?
From as early in evolution as the emergence of single-cell creatures, there was selection for any mechanism that just happened to produce environmentally appropriate behavior, such as being in the right place at the right time. In single-cell creatures, there are “organelles” that “detect” gradients in various chemicals or environmental factors (sugars, salts, heat, cold, even magnetic fields). “Detection” here simply means that, as these gradients strengthen or weaken, the organelles change shape in ways that cause their respective cells to move toward or away from the chemicals or factors as the result of some quite simple chemical reactions. Cells with organelles that happened to drive them toward sugars or away from salts survived and reproduced, carrying along these adaptive organelles. The cells whose organelles didn’t respond this way didn’t survive. Random variations in the organelles of other cells that just happened to convey benefits or advantages or to meet those cells’ survival or reproductive needs were selected for.
The primitive organelles’ detection of sugars or salts consisted in nothing more than certain protein molecules inside them changing shape or direction of motion in a chemical response to the presence of salt or sugar molecules. If enough of these protein molecules did this, the shape of the whole cell, its direction, or both would change, too. If cells contained organelles with iron atoms in them, the motion of the organelles and the cells themselves would change as soon as the cells entered a magnetic field. If this behavior enhanced the survival of the cells, the organelles responsible for the behavior would be called “magnetic field detectors.” There’d be nothing particularly “detecting” about these organelles, however, or the cells they were part of. The organelles and cells would just change shape or direction in the presence of a magnetic field in accordance with the laws of physics and chemistry.
The iterative process of evolution that Darwin discovered led from those cells all the way to the ones we now identify as place and grid cells in the rat’s brain. The ancestors to these cells—the earliest place and grid cells in mammals—just happened to be wired to the rest of the rat’s ancestors’ neurology, in ways that just happened to produce increasingly adaptive responses to the rat’s ancestors’ location and direction. In other mammals, these same types of cells happened to be wired to the rest of the neurology in a different way, one that moved the evolution of the animal in a less-adaptive direction. Mammals wired up in less-adaptive ways lost out in the struggle for survival. Iteration (repetition) of this process produced descendants with neurons that cause behavior that is beautifully appropriate to the rat’s immediate environment. So beautifully appropriate, that causing the behavior is their function.
The function of a bit of anatomy is fixed by the particular adaptation that natural selection shaped it to deliver. The process is one in which purpose, goal, end, or aim has no role. The process is a purely “mechanical” one in which there are endlessly repeated rounds of random or blind variation followed by a passive process of environmental filtration (usually by means of competition to leave more offspring). The variation is blind to need, benefit, or advantage; it’s the result of a perpetual throwing of the dice in mixing genes during sex and mutation in the genetic code that shapes the bits of anatomy. The purely causal process that produces functions reveals how Darwin’s theory of natural selection banishes purpose even as it produces the appearance of purpose; the environmental appropriateness of traits with functions tempts us to confer purpose on them.
What makes a particular neuron a grid cell or a place cell? There’s nothing especially “place-like” or “grid-like” about these cells. They’re no different from cells elsewhere in the brain. The same goes for the neural circuits in which they are combined. What makes them grid cells and place cells are the inputs and outputs that natural selection linked them to. It is one that over millions of years wired up generations of neurons in their location in ways that resulted in ever more appropriate responses for given sensory inputs from the rat’s location and direction.
Evolutionary biology identifies the function of the grid and place cells in the species Rattus rattus by tracing the ways in which environments shaped cells in the hippocampus and entorhinal cortex of mammalian nervous systems to respond appropriately (for the organism) to location and direction. Their having that function consists in their being shaped by a particular Darwinian evolutionary process.
But what were the “developmental” details of how these cells were wired up to do this job in each individual rat’s brain? After all, rats aren’t born with all their grid and place cells in place (Manns and Eichenbaum, 2006). So how do they get “tuned” up to carry continually updated environmentally appropriate information about exactly where the rat is and which way the rat needs to go for food or to avoid cats? Well, this is also a matter of variation and selection by operant conditioning in the rat brain, one in which there is no room for according these cells “purpose” (except as a figure of speech, like the words “design problem” and “selection” that are used as matters of convenience in biology even though there is no design and no active process of selection in operation at all).
Like everything else in the newborn rat’s anatomy, neurons are produced in a sequence and quantity determined by the somatic genes in the rat fetus. Once they multiply, the neurons in the hippocampus and the entorhinal cortex, and many other neurons in the rat’s brain as well, make and unmake synaptic connections with each other. Synaptic connections that lead to behavior rewarded by the environment, such as finding the mother’s teat, are repeated and thus strengthened physically (by the process Eric Kandel discovered; Kandel, 2000). Among the connections made, many are then unmade because they lead to behaviors that are not rewarded by feedback processes that strengthen the synaptic connections physically. Some are even “punished” by processes that interrupt them. In the infant rat, the place cells make contact with the grid cells by just such a process in the first three weeks of life, enabling the rat’s brain to respond so appropriately to its environment that these cells are now called “place” and “grid” cells (O’Keefe and Dostrovsky, 1979). Just as in the evolution of grid and place cells over millions of years, so also in their development in the brain of a rat pup, there is no room whatever for purpose. It’s all blind variation, random chance, and the passive filtering of natural selection.
These details about how the place cells and the grid cells got their functions are important here for two reasons. First, they reflect the way that natural selection drives any role for a theory of mind completely out of the domain of biology, completing what Newton started for the domain of physics and chemistry. They show how the appearance of design by some all-powerful intelligence is produced mindlessly by purely mechanical processes (Dennett, 1995). And they make manifest that the next stage in the research program that began with Newton is the banishment of the theory of mind from its last bastion—the domain of human psychology.
Second, these details help answer a natural question to which there is a tempting but deeply mistaken answer. If the grid cells and the place cells function to locate the rat’s position and direction of travel, why don’t they contain or represent its location and direction? If they did, wouldn’t that provide the very basis for reconciling the theory of mind with neuroscience after all? This line of reasoning is so natural that it serves in part to explain the temptation to accord content to the brain in just the way that makes the theory of mind hard to shake. By now, however, it’s easy to see why this reasoning is mistaken. For one thing, if the function of the place and grid cells really makes them representations of direction and location, then every organ, tissue, and structure of an organism with a function would have the same claim on representing facts about the world.
Consider the long neck of the giraffe, whose function is to reach the tasty leaves high up in the trees that shorter herbivores can’t reach, or the white coat of the polar bear whose function is to camouflage the bear from its keen-eyed seal prey in the arctic whiteness. Each has a function because both are the result of the same process of random or blind variation and natural selection that evolved the grid cells in the rat. Does the giraffe’s neck being long represent the fact that the leaves it lets the giraffe reach are particularly tasty? Is the coat of the polar bear about the whiteness of its arctic environment or about the keen eyesight of the seals on which the bear preys? Is there something about the way the giraffe’s neck is arranged that says, “There are tasty leaves high up in the trees that shorter herbivores can’t reach”? Is there something about the white coat of the polar bear that expresses the fact that it well camouflages the bear from its natural prey, seals? Of course not.
But even though they don’t represent anything, the long neck of the giraffe and the white coat of the polar bear are signs: the long neck is a sign that there are tasty leaves high in the trees on the savanna, and the white coat is a sign that the bear needs to camouflage itself from its prey in the whiteness of the arctic, the way clouds are signs that it may rain. But for the neck and coat to also be symbols, to represent, to have the sort of content the theory of mind requires, there’d have to be someone or something to interpret them as meaning tasty leaves or a snowy environment. Think back to why red octagon street signs are symbols of the need to stop—symbols we interpret as such—and not merely signs of that need.
The sign versus symbol distinction is tricky enough to have eluded most neuroscientists. The firing of a grid cell is a good sign of where the rat is. It allows the neuroscientist to make a map of the rat’s space, plot where it is and where it’s heading. John O’Keefe called this a “cognitive map,” following Edward Tolman (1948). The “map,” however, is the neuroscientist’s representation. The rat isn’t carrying a map around with it, to consult about where it is and where it’s heading. Almost all neuroscientists use the word “representation,” which in more general usage means “interpreted symbol,” in this careless way—to describe what is actually only a reliable sign. (See Moser et al., 2014 for a nice example.) The mistake is usually harmless since neuroscientists aren’t misled into searching for some other part of the brain that interprets the neural circuit firing and turns it into a representation. In fact, most neuroscientists have implicitly redefined “representation” to refer to any neural state that is systematically affected by changes in sensory input and results in environmentally appropriate output, in effect, cutting the term “representation” free from the theory of mind, roughly the way evolutionary biologists have redefined “design problem” to cut it free from the same theory.

I too think Rosenberg and Bakker are "right" even if they aren't 100% correct necessarily.  But, far be it from me to think I fully understand Rosenberg's point.  I think you'd find the book interesting though, Sci.

Great quote - I disagree w/ Alex Rosenberg but I think more than most he correctly identifies the reality of the Physicalist position, the acceptance that matter (and thus the brain if it's matter) cannot be about anything.

Curious - why do you think the eliminativist position is correct? I could easily throw out free will if there was enough evidence, but the idea we don't have thoughts is a step beyond my boggle threshold.