
Countering the Argument with Thorsten


What Came Before:

--- Quote from: Thorsten on May 15, 2013, 07:02:13 am ---The applicability of logic is a very deep one. How do you decide on something being true? Where do the logical deduction rules come from? Why do we think a principle like Occam's razor ('If there are several competing explanations, take the one requiring the least amount of assumptions.') is good for establishing truth?
--- End quote ---

Well, we humans seem to use a combination of statistical descriptions (math) and logic (linguistic justifications), with varying degrees of validity, to convey holistic packets of information - truth seems to satisfy making the most sense out of a(ll) given occurrence of phenomena. Data and concise, valid communication are ideal, as is the heuristic strategy (Occam's razor) of positing and falsifying fewer, rather than more, assumptions to support your actual hypothesis. We're talking averages across averages, right?

--- Quote from: Thorsten on May 15, 2013, 07:02:13 am ---Occam's razor is in my view a good principle because in my experience it turns out to be true most of the time. The whole set of scientific principles applied to physics is justified, because it actually works out for me - I can calculate a phenomenon, and my experience tells me that I successfully predicted what I will experience.

(As a side note, modern physics is all about what you experience and not at all about what things really *are* - all Quantum Field Theory is concerned with are 'observables', and it is very clear that we don't have a clue what nature is, only how it behaves when we look at it).
--- End quote ---

+1. Perhaps, you've some underlying disdain for psychological research? I know it's a prevalent feeling among the sciences.

--- Quote from: Thorsten on May 15, 2013, 07:02:13 am ---Or, to be slightly mean: Imagine one of the experimental papers supposedly disproving the notion of free will is sent to a journal. The referee recommends not to publish the paper. I am prepared to bet a lot of money that the researchers do not think 'Well, the referee has no free will, he is determined by circumstances to come to this decision to decline publication, so there's nothing we can do.' I am very sure what they will do is make an appeal to the free decision-making ability of the referee to change this decision based on new arguments. Because science requires the ability to decide between a true proposition and a false one. If we could not make that decision because we'd be compelled by circumstance to believe something, science wouldn't work conceptually. So that's why researchers disproving free will don't act in any way as if their research were actually true.
--- End quote ---

Again, I think you and I have the opportunity to really hash out some of BBH's finer points. I think you've done yourself an initial disservice by stopping at Bakker's positions. Perhaps we can discover better justifications than simply invoking "science!"

The above examples (this and the one from your last post) seem to presuppose that scientists (or people) are changed in some profound manner by neurally representing linguistic statements in the first place.

As a segue, I'd like to add that learning to practice a certain set of scientific or academic rituals doesn't seem to change our brains in the drastic ways we were freestyling about (a 2-dim representation of a 4-dim structure, Neil's experience of the cognitive and behavioral expression of Neil-It).

Now these strike me as instances of learning (but my mind is usually there, regardless).

There are certain instances of theorized pervasive and immediate learning. Things happen, and people can absorb new behaviors or cognitive expressions in truly impressive time periods. We also fall into something of a quagmire regarding social interpretation and how, as academics, we work to adopt and embody complex and particular modes of expressing information. Yet the public (or the world of Neuropath) is irrevocably changed by the influence of Neil's (the NSA neuroscientists') research.

This also makes me think of the Graduates (Neuropaths) and how they exist among a sea of different neural expressions of the same matter (Neil-It, Psychopaths, Autistics, and us... Normies?), but I'm beginning to find I'm not sure where you stand on some specific matters (at bottom).

--- Quote from: Thorsten on May 15, 2013, 07:02:13 am ---I've written a longer text about 'belief in evidence' in a discussion of Dawkins' The God Delusion in case you're interested - it's the second part.

--- End quote ---

Will absolutely read it at some point.

--- Quote from: Thorsten on May 15, 2013, 07:02:13 am ---
--- Quote ---You've treated this like a logic problem - my first thoughts are a couple of strange examples from neurological studies. (...) While these might support your Reductionism metaphor, I don't think it helps the Applicability of Logic.
--- End quote ---

1. I know these cases do exist, but what can we really deduce from them?

I have no doubt that there is a deep connection between mind and body at some level. A very simple example is the experience that when I drink alcohol, my mental experience changes.

2. Yet, on the next day, my mind reverts back to how it was.

3. But then there seems to be something like mind-internal experience - I might have a crucial insight, or come to a major decision in my life. And on the next day, my mind does not revert back but remains changed from that point on.

So could this not suggest a hardware/software model, in which in the first case I temporarily change the hardware, and as a result the software runs differently but reverts back to its normal operation once the hardware operates normally, whereas the second case represents a change in the software which isn't easily revertible?

I would assume that if someone overnight severs the neural connections to my leg and attaches the same nerve bundles to my arms, then my intention to move my leg will lead to some motion of my arms. In a similar way, I would assume that wrong wiring of the senses can lead to all sorts of weird perceptions - like pleasure where pain would be expected. Such rewirings would be, unlike in the case of alcohol, more permanent hardware damage, with little chance of the software resuming normal operation.

4. Yet, in many cases the mind seems to be able to work around damage. I vaguely remember an experiment in which people were asked to wear mirror glasses which showed the world upside-down, and while this was initially very confusing, their mind learned to undo the effect, and after a few days they saw the world normally again - and then inverted once they took the glasses off, until again after a few days the perception adjusted to normal.

The point seems to be that being able to prove that changes to the body/brain change the mental experience isn't the same thing as proving that there is no software equivalent and that the hardware is all there is to the problem.
--- End quote ---

I might have picked out specific parts but I shall bold the striking moments instead.

1. It's a good question and one I had to return to. They certainly ply my imagination with endless scenarios. Specifically, what does it mean for the Blind Brain Hypothesis? Well, Neil-It's argument seemed to be that there was a most advantageous cortical representation. There is a fantastic quote from LTG which captures it succinctly. But more importantly, what we experience doesn't have to even remotely correspond to modes of thought now... Who knows how someone meaningfully partaking in changes in cognitive experience would rationalize their new experiences, or if they would at all (which is why Buddhism and Nihilism seem to make so many appearances at TPB).

2. For whatever reasons, it seems the brain prefers neural homeostasis to instances of imbalance. Even in the arbitrarily destructive studies where the neural junctions for eyesight were destroyed and the auditory neural tracts rewired to the visual cortex, the visual cortex developed the same physical cortical representation or architecture as would normally develop in the auditory cortex. Neuroscientists have pursued discoveries like this towards two primary hypotheses: that the brain can universally represent information for which it has a sensory appendage, and that it does so by being plastic (capable of dynamically changing, recycling existing cortical structures, or even growing in number and connections, through density and pruning). Regardless, neuronal homeostasis...?

3. There are plenty of studies and ongoing research into the cognitive phenomenon of insight and subsequent rapid changes in cortical representation. There are also, as I mentioned, studies on specific neurotransmitters, developmental periods, etc., in which rapid change in brain structure happens and is retained (often, in cases showcasing top-down development - though I read Jorge in my mind, as I'm sure he'd argue that this evidence still favors something in the brain doing something in the brain).

4. Three or so weeks of disorientation and puking and then, apparently like a snap of the fingers, the world is (not-righted) right. And perception happens as ordinarily experienced; then take them off, and it's three weeks again - it's funny, as I'm not sure anyone has pursued further the suggestion that recursive neural structures are bypassing the normal retinal flip. Classic studies. I've wanted to do it a number of times myself but have never had the time :(. You can likewise do smaller experiential plays on threshold, like wearing a blackout blindfold past ninety minutes towards increased auditory sensation (during imaging there is a corresponding sudden activation in the visual cortex).

--- Quote from: Thorsten on May 15, 2013, 07:02:13 am ---Thanks - I appreciate that. I am reading the White-Luck Warrior at the moment, so I wanted to read up on the details of my Metaphysics (and partially languages) - so then I started looking again for where the discussions are.

--- End quote ---

Cheers. I have to run, though I'll give this a quick read-over.

However, continued from the middle - you don't think there is evidence to suggest that our experience as "I," as cognitive agents, is illusory, insomuch as colours and perceptions would not exist as they do without corresponding physiological changes (we're attracted to the redness of an apple because that attraction has been beneficial to our survival)?

If examples like this hold weight, then don't our interpretations (and the behaviors that follow) of these objects and manifestations of percepts arise because the environment matters to us - where it matters for sustenance, we experience it?

If we cognitively use the same heuristics and biases to interpret our brains as our environment, then not only are we always playing catch-up neurally, needing more to represent what came before; we're also only seeing (experiencing, cogitating, conscious of) those extremes in threshold that "I" or "We" are capable of. On that note, there are documented averages in sensory and perceptive thresholds, where a distinct divide emerges between what we hear, see, feel, taste, or smell and what activation the brain shows in response to things that "I" don't notice.

Lol. Well, I am well and truly late.

Hope that's food for thought, Thorsten. Again, I'm not even sure I disagree with your initial argumentation - I simply thought of many of these things by extension that weren't satisfactorily referenced for my liking.

Apologies for the spelling errors and general moments of failure throughout my writing. Can't stay for a reread.

Callan S.:

--- Quote ---While the mind thinks it executes a plan to realize some future goal, according to the Argument, the underlying reality is that the past state is just computed forward; the seeming future goal towards which the mind proceeds is simply an illusion - in reality it is the past that determines what will happen, not the future vision.
--- End quote ---
I'm not sure I understand this understanding of 'the argument'?

If I think the ball is under the third cup and my goal is to lift it up, that the ball is under another cup (or no cup at all) is just how reality plays out - the 'future vision' isn't all that relevant. It's merely a guess.

What Came Before:
Some glaring face-palms.

But specifically, 2: that study of questionable surgery was performed on ferrets (the auditory cortex responds incrementally across the cortical matter to pitch and volume, which occurred spontaneously in the visual cortex post-op).


--- Quote ---Well, we humans seem to use a combination of statistical descriptions (math) and logic (linguistic justifications), with varying degrees of validity, to convey holistic packets of information - truth seems to satisfy making the most sense out of a(ll) given occurrence of phenomena. Data and concise, valid communication are ideal, as is the heuristic strategy (Occam's razor) of positing and falsifying fewer, rather than more, assumptions to support your actual hypothesis. We're talking averages across averages, right?

--- End quote ---

Well, the catch is 'making the most sense' - how do you define that?

Within a formal system (science), making sense is defined in terms of deviation from the data, given a hypothesis - so within the formal system I can decide what makes sense and what doesn't. However, that doesn't tell you whether the system itself makes sense.

Adopt a different formal system - comparison to scripture. Then the definition of 'making sense' becomes 'is sufficiently close to something that's described in the scripture'.

If you're thinking within science, looking into scripture for answers makes no sense. If you are thinking within scripture, doing a statistical analysis makes no sense.

What you need to argue is that science is the best among all possible formal systems to apply to some particular problem (the mind, in this case). Because clearly there are problems to which science doesn't apply. 'Is the Mona Lisa great art?' is not a question you could address with a measurement or a statistical analysis.
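The 'deviation from the data' criterion can be sketched in a few lines of code. This is my own minimal illustration (the toy data, hypotheses, and function names are invented for the example, not taken from the discussion):

```python
# Within the formal system of science, "making sense" can be operationalized
# as deviation from the data given a hypothesis, e.g. a sum of squared
# residuals. (Toy data and hypotheses invented for illustration.)

data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]  # (x, observed y) pairs

def sum_sq_residuals(hypothesis, data):
    """Total squared deviation of the observations from a hypothesis y = f(x)."""
    return sum((y - hypothesis(x)) ** 2 for x, y in data)

hypothesis_a = lambda x: 2 * x + 1  # "y grows linearly with x"
hypothesis_b = lambda x: 3.0        # "y is constant"

# Inside this formal system, the hypothesis with the smaller deviation
# "makes more sense"; but nothing inside the system justifies the
# criterion itself, which is the point being made above.
print(sum_sq_residuals(hypothesis_a, data))  # small deviation
print(sum_sq_residuals(hypothesis_b, data))  # much larger deviation
```

A scripture-based formal system would simply substitute a different distance measure, and nothing in the code could say which measure to prefer.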

--- Quote ---Perhaps, you've some underlying disdain for psychological research?

--- End quote ---

Perhaps... I want to be careful; there is lots of research, and I don't want to make blanket statements. But I guess most of the time, when some result is popular and often cited by writers and I look into it, I just think it misses the point. Some examples:

1) Pattern recognition

The usual story goes that the human mind has a strong tendency to see patterns in randomness (this can be connected to the evolutionary cost of running from what is not a tiger, as compared with not running from what is a tiger). An often-cited setup involves a researcher generating strings of random numbers on a computer and asking the probands to see if they could find a pattern - the vast majority could. The interpretation given is that humans are obviously quite good at spotting a pattern where there is none.

Except the probands were factually correct.

First, computers don't generate random numbers, they generate pseudorandom numbers, and given enough of these, a vast intellect could unravel the algorithm and predict the next number in the sequence. So people were quite correct in assuming the existence of a pattern, as it was really there.

Second, mathematically, for any finite sequence it is impossible to determine whether it is truly random or not. Most people would complete 2, 4, 6, 8, 10, ... with 12, assuming the rule is 'add 2'; but the rule may be 'the next number has to be larger', in which case 11 would be valid; the rule may be 'must be even', in which case 2 would be valid; the rule may be 'count by adding 2 up to 10, then go negative', in which case -2 would be the next answer... There's nothing logical in preferring one rule over the others; it's just down to habit - we like counting.

So mathematically the probands were doing something very reasonable - they were trying to find the rule for a sequence which could well have a rule. What the experiment actually shows is that the mind tries to spot a pattern where there could be a pattern, if prompted to do so by the question (if the question had been 'are these good random numbers?', the answer would possibly have been different...).
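The pseudorandomness point can be made concrete with a sketch (an invented example, not the actual study's generator): a linear congruential generator of the kind C libraries traditionally use produces output that looks patternless, yet every number is fully determined by its predecessor.

```python
# A linear congruential generator. The constants are classic glibc-style
# values; this is an illustrative sketch, not the generator from the study.
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """Return n pseudorandom numbers; each is a deterministic function
    of its predecessor, so the whole sequence follows from the seed."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

sequence = lcg(seed=42, n=5)        # looks patternless to a human...
next_value = lcg(seed=42, n=6)[-1]  # ...yet the 6th number is fully determined
```

So a proband who insists there is a pattern in such output is, strictly speaking, right; at best the experiment shows how readily we look for one when asked.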

2) Happiness research

One often-quoted study claims that children do not increase the happiness in a family. What was done is that the researchers selected samples of women with and without children in similar social positions, phoned them several times, and interviewed them about their current happiness, estimated on a scale from 0 to 10. The claimed result was that children do not lead to a happier life.

I think this one gets hit by the applicability of logic - here, the inference rule that if you have a proposition A and not-A, and you can show not-A to be true, then A must be false.

I myself am capable of experiencing a mental state with contradictory emotions. I can, for instance, feel desperate and angry at myself, and at the same time feel deep satisfaction with the way my life is going. Or I can be madly angry at my children for breaking something and love them at the same time. Describing my emotional state at any given time would require about two pages of written text using a symbolic language (I couldn't do it in English for lack of words, but I could, for instance, do it using astrological symbols - I think Jung was the first to realize how useful symbolic expressions are in this context).

If asked to report my surface emotion, this would be a hopelessly inadequate picture, and any inference drawn from it about my happiness would be completely wrong. Even on a day where the kids annoy me to the point of screaming, I am still aware that seeing them grow up is a source of deep satisfaction for me. So in this case, proving not-A doesn't imply the falseness of A. If that is so for me, why would it be different for others? I think the study simply doesn't ask within the right framework, and so it obtains a meaningless answer.

3) Evolutionary psychology

One celebrated result here concerns jealousy over sexual vs. emotional infidelity - the results of questions posed to probands are that women feel more disturbed by the thought of their partner forming an emotional attachment to another woman than by his just having sex, whereas men feel the other way round. Supposedly this proves that evolution strikes and we see a stone-age hunter-gatherer setup at work - men are worried they'd have to care for offspring not carrying their genes, whereas women are concerned with losing a nutrition-providing partner.

Except... the raw data show that variations between cultures in the jealousy response are much greater than the claimed effects which remain after averaging across all cultures. Except we don't know at all how stone-age humans felt about jealousy and raised children - they might have raised kids in a large group, not caring about whose offspring a particular child was. The stone-age setup proposed is not based on evidence, but imagined such that it accounts for the facts. Without solid data on how stone-age humans actually thought and behaved, I can justify anything.

4) Enlightenment

If you do brain imaging of Buddhist monks during meditation, you can observe certain areas of their brains go active. The supposed implication of this is that they don't really experience enlightenment; it's just a function of this particular brain center.

Of course, if you were to do brain imaging of scientists reading and understanding a research paper, you would observe certain areas of their brains go active. If the above implication were correct, then they wouldn't really understand the paper either; it could all be explained in terms of a particular brain center being active.

Correlation isn't causation - this is a fairly elementary reasoning error.

I could go on with this, but I think you may spot where my problems with research in psychology and brain science reside...


--- Quote ---However, continued from the middle - you don't think there is evidence to suggest that our experience as "I," as cognitive agents, is illusory, insomuch as colours and perceptions would not exist as they do without corresponding physiological changes (we're attracted to the redness of an apple because that attraction has been beneficial to our survival)?
--- End quote ---

It all depends on what you prefer to call 'real' or 'illusory'.

I think normally the word 'illusion' is used to describe something which has a certain appearance, but where you can verify that the appearance is not correct while remaining in a given framework, just by changing perspective. For instance, mirror illusions don't require you to switch to a description of the situation in terms of quantum theory - you can walk around the setup, and it becomes apparent that there is a mirror.

In contrast, you (and Bakker) seem to be using the word 'illusion' here for something that exists in a high-level effective theory (the mind as we ourselves would describe it, or psychology as used by Freud, Jung, Adler, ...) but not in a low-level, more fundamental theory (interactions among interconnected neurons).

This is, I think, a very important difference. Let me illustrate this with an example where it is well understood what happens.

A rock has mass. In a very high-level effective theory (Newtonian mechanics) we can view it as a rigid body with a given mass and use that to compute the trajectories of the rock when we throw it.

At a lower level, the rock is composed of molecules which have mass. More fundamentally still, it is composed of protons and neutrons which have mass, with electrons thrown in which contribute very little mass. At the most fundamental level, the rock is described by the Standard Model of particle physics in terms of quarks, gluons, and photons. And in the bare Lagrangian of this theory, there is no mass.

So mass is a property of high-level effective theories only; it is not a fundamental property of the world. The illusion of mass of a rock arises largely because there is a lot of field energy in the binding of quarks and gluons, which makes an empty vacuum energetically disfavoured, and thus stuff plowing through the field energy contained in the vacuum effectively acquires mass.
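To hang rough numbers on this point (my own back-of-envelope figures using approximate textbook values, not anything from the original post): the current-quark masses account for only about one percent of the proton's mass; essentially all the rest is QCD field energy.

```latex
% Approximate values: up- and down-quark current masses vs. proton mass
m_u + m_u + m_d \;\approx\; (2.2 + 2.2 + 4.7)\,\mathrm{MeV}/c^2
  \;\approx\; 9\,\mathrm{MeV}/c^2 ,
\qquad
m_p \;\approx\; 938\,\mathrm{MeV}/c^2 .
```

So more than 99% of the mass that can crack your skull is not carried by any fundamental mass term at all.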

Yet, this 'illusory' mass is quite capable of killing you when the rock hits your head. Which goes a long way toward illustrating that just because something is not a property of the fundamental theory, it cannot be dismissed as meaningless or 'not real'.

The fact that a theory of the mind in terms of connected neurons doesn't have certain traits cannot be used to argue that these traits are not meaningful, or not real, when seen on a different scale. There are dozens (if not more) of counterexamples in physics where effective high-level theories gain new properties, or lose properties which the fundamental theory has.

The world behaves as if there were mass when seen at a certain scale; this is what gives meaning to the concept. The world behaves as if there were an 'I' when seen at a certain scale, and this is what gives meaning to the concept.

If you want to limit 'real' to 'what is contained in our most fundamental theory only', you declare pretty much everything an illusion, and you're left with a description of the world in terms of operators acting on Fock spaces, obeying certain commutation and anticommutation relations - which manifestly isn't what is real, but just describes how the real world behaves. So in essence nothing is real then. It doesn't lead anywhere in particular to accept only the fundamental as real.

