Show Posts



Messages - Thorsten

1
Neuropath / Re: Countering the Argument with Thorsten
« on: May 31, 2013, 08:02:20 am »
Quote
From the mindset I'm working from, credibility doesn't mean much at all. It's about, as you say, true/false.

Please don't misquote me; this is not what I said. What I wrote is: 'the proposition should be evaluated for true/false, not for pleasant/unpleasant', but also 'I don't go so much for facts as I go for likelihood', and 'May I gently remind you that I am an academic researcher at a university, and that it's therefore unlikely that I have been misled into thinking anything wrong about how science works?'

Credibility does not establish fact, but it does increase likelihoods, and that is an important difference.

Quote
I'm sorry, this comes off as social leverage, blending social with investigation.

Let me clear this up for you. I feel in no way compelled to convince you, or to win this argument, or to prove my knowledge or experience to you. My primary motivation in having a discussion is that I might learn something, my secondary is that I might share knowledge and ideas.

As a result, I don't mind spending an hour typing a long explanation of some convoluted thought if I have the feeling it helps my counterpart understand something new. But I do mind wasting my time talking to someone who isn't willing to listen.

We have established:

Quote
I don't think that's getting into speculative fiction and trying out the idea - you're folding the idea into your axe notion and not entertaining the idea of something that can carpet bomb the forest to hell and back.

When someone says 'What if it's this way?' - it's not really a counter to say 'No, it's not'. It's just unimaginative.

You're of the opinion that I am unimaginative, at least insofar as this discussion is concerned.

Quote
I think you've been misled to think science involves reasoning. At least reasoning in the sense of coming to a conclusion, rather than a conclusion coming to it. Do you think scientists reason a conclusion, then run thousands of experiments just for fun, even though they've already come to a conclusion?

You're of the opinion that I don't know how science is done.

(Note that your reply to my question is factually wrong - detectors really are built to test hypotheses, not just to see what happens, there really is a confirmation bias, and scientists are aware of it and try to deal with it - so you do not know how science is in fact done, you're arguing based on how you think it should be done - alas, we don't have infinite funds.)

Quote
Also 'proof' and 'incompleteness theorem' in the one sentence seems a little jarring.

You are ready to judge one of the most profound mathematical results of the 20th century based on how it sounds.

Also, see above, you are unable or unwilling to represent my positions correctly.

Quick reality check - why should you listen to some guy who, you're convinced, doesn't know how science is done, who is completely unimaginative and is just playing games with words? Answer - you shouldn't.

Second reality check - what would it take to make you listen? I would have to convince you that I in fact do know how science works, that the incompleteness theorem is real and to be taken very seriously, and I would have to reiterate my positions again and again until you acknowledge them as they are rather than as you want them to be.

I frankly think you have no real idea how math actually works, what science is fundamentally based on, that there are several ways to analyze fiction, or what epistemic relativism is and how it connects to the present discussion, and I am also of the opinion that you have no intention of catching up on these things.

So why on earth should I spend time and effort arguing things you don't want to understand? I am happy to concede the argument to you; you may happily continue in your belief that I am misled about how science works and that I am biased, I don't care.

I'm not using any social leverage to win this argument - I simply don't want to spend my time convincing you of the many things you're apparently not aware of when you demonstrate again and again that you have no interest in learning something new.

2
Neuropath / Re: Countering the Argument with Thorsten
« on: May 30, 2013, 07:00:01 am »
@Madness:

Quote
Quote
a) Perception must be a meaningful representation of properties of reality as it is, otherwise science breaks.
b) Decisions between true and false propositions must be free (and refer to reality as is) and not determined by circumstance, otherwise science breaks.
How does this statement account for much scientific research using advancing technological prosthetics to test hypotheses, which we cannot with our (possibly) fiveish biological sensory perceptions?

Ultimately it is down to our biological senses. The ATLAS detector takes particle tracks in proton-proton collisions and writes huge amounts of data to tape, over which an analysis is run. But in the end, scientists have to read the result plot from a computer screen with their bare eyes. They have to believe that what appears in their mind correlates with the state of the detector. The whole calibration process of such machines proceeds from the known to the unknown - one first demonstrates that the detector responds as expected to something which one knows, then goes into the unknown. If you follow the chain, you end up comparing detector readouts with visible particle tracks in a cloud chamber.

We can extend and augment the biological senses, but all that requires that we can trust them somewhere. If the perception of the eye could not be assumed to refer to properties of anything real, no detector could ever be calibrated, nor could its data be read and understood.

Quote
Quote
b) Decisions between true and false propositions must be free (and refer to reality as is) and not determined by circumstance, otherwise science breaks.
b) Could you offer some exposition? Like instances, which you found in my words (apologies for lack of clarity), I find this contextual and lacking coherency based on possible connotations...

As this is a conceptual point, it's only possible to come up with gedankenexperiments, so it's difficult. But what about this one:

Consider a people D with very simple deterministic minds based on a set of rules. If a member of D classifies the scene as being outside, and if he detects a small, white moving object with fluttering motion in the scene, and if he has ever been told before how to observe butterflies, he will be convinced that he is observing a butterfly. Now, given this set of rules, he will be convinced he is observing a butterfly even if the object is really a small sheet of paper drifting in the wind, and he will hold to this conviction even if he can clearly see that the object has a rectangular shape rather than a body and wings --- because his mind is determined by circumstances. He has no way to reconsider his classification, nor would he see any reason to. Many others of his people would readily agree with him; there'd be some who would not, but he would be convinced that they've never been told how to observe butterflies, and that once they were educated, they would agree with him.

It is terribly obvious what is going on when observing the problem from outside, with a human mind which is not tied to such simple deterministic rules. But even talking to members of D, we could not convince them that their conviction of having truly seen a butterfly has really not so much to do with there being a butterfly --- they'd probably argue that we are missing the obvious.
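To make the thought experiment concrete, here is a minimal sketch of such a rule-determined mind (my own illustration; the function name, the rule set and the object fields are made up for the purpose). Once the fixed conditions fire, the verdict follows necessarily - the rectangular shape changes nothing, because no rule refers to it:

```python
# A hypothetical member of people D: a purely rule-driven classifier.
# Its conviction is fixed entirely by the circumstances its rules mention.

def d_mind_sees_butterfly(scene, obj, has_been_taught):
    return (
        scene == "outside"
        and obj["small"]
        and obj["white"]
        and obj["motion"] == "fluttering"
        and has_been_taught
    )

drifting_paper = {"small": True, "white": True,
                  "motion": "fluttering", "shape": "rectangular"}

# Verdict: True ("butterfly"), even though the shape plainly contradicts it -
# there is no rule by which the shape could prompt a reconsideration.
print(d_mind_sees_butterfly("outside", drifting_paper, has_been_taught=True))
```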


Quote
Honestly, I think Bakker's arguments, while seeming ultimately pessimistic, take acceptance as a matter of degree. Even if his methodology weren't assumed, Neil-it would still be the Jester, the ultimate Shaman of your example, because of the efficacy with which he might induce any experience, the sum of all, not the some of many.

I honestly can't even buy into Neil's bare reasoning. Suppose you can show with a suitable machine that you can artificially induce any perception or experience. What this proves is that perceptions and experiences can be deceiving insofar as their cause is determined - under certain circumstances you see an apple where there is no external reality corresponding to the perception, under certain circumstances you can feel causation where there is none, under certain circumstances you feel pleasure where there is damage to your body.

This doesn't prove that there are no real apples, no real causation or no real damage to your body. The ability of a conjurer to pull a rabbit out of a hat doesn't tell me anything about where real rabbits come from.

What Neil reasons is that because a system may deceive, it does deceive, so he goes about de-activating it. But putting out my eyes because they may be deceiving isn't really such a good idea, because I end up not seeing anything. Neil sort of assumes he can get the illusion machinery of the brain out of the way and then see things closer to how they really are - but I don't think that works; illusionary perceptions use the same brain structures as real perceptions, so getting rid of illusions isn't so easy.

Quote
Are you assuming everything in relation to consciousness-as-experienced as fact?

I tend to be very careful with the word 'fact'. For me, it's a bootstrap process. Things have to be either evident, or justified by something evident, which leads to chains of things depending on other things to be true.

The existence of a conscious observer seeing a scene is evident. The existence of an internal stream of thought, a higher consciousness, is evident. Perceptions of an external world are evident.

These things lead to experiences. I have to assume that the perceptions and experiences are meaningful in relation to the external world; then I can start to build observation and reasoning techniques to infer properties of the external world from my experiences (if I were in the Matrix, I could not do that, I would be much mistaken).

These start out simple - correlating visual and tactile information with my mental picture of the scene usually works (and here it already starts going wrong: there are perceptual illusions where it doesn't, so already at that level I have to be aware that my knowledge of the external world as reflected in the senses is incomplete and flawed). Making predictions based on recurring events (the sun will rise tomorrow) usually works.

Then it gets more complex (with the possibility of more flaws) - math and an axiom system (where again the axioms are taken to be true because they are obvious, not because they can be justified by other means). Consistency conditions, i.e. that several perspectives must lead to the same result (I'm still not sure if they're real - the higher consciousness has consistency as a necessary condition to exist, but physics does not - so consistency may matter not so much for reality as it is but for reality as perceived). Science.

All relies on what has come before, and I always trust the lower building blocks more than the higher, because the higher have so much more room for error. Statistical analysis - do you know what probability actually is? Can you explain it? I know perhaps a handful of scientists who really know what it is, although many more can apply the techniques (and even more can't). Detectors - they do go wrong, and in fact experimentalists trust their eyes over the machine and look at actual event displays to see if the detector does what it's supposed to do rather than trust the machine-internal checks. I trust my ability to experience pain much, much more than a brain scan to tell whether I am in pain or not.

You can sometimes get a dissonance - higher-level reasoning contradicting lower level. So then I might dismiss an experience as wrong because of science telling me so - but then the only remaining guiding principle is consistency, I would only want to do that to make my picture of the world more consistent, not less so.

I don't go so much for facts as I go for likelihood - I try to investigate a problem from as many angles as I can, and then see if there is a coherent picture suggesting itself or not.

@Callan S.

Quote
I think you've been misled to think science involves reasoning. At least reasoning in the sense of coming to a conclusion, rather than a conclusion coming to it. Do you think scientists reason a conclusion, then run thousands of experiments just for fun, even though they've already come to a conclusion?

May I gently remind you that I am an academic researcher at a university, and that it's therefore unlikely that I have been misled into thinking anything wrong about how science works?

How do you think experiments are planned? Do you really think people spend a few billion to build an accelerator and detectors just to see what happens? Experiments are done to test hypotheses, which in turn are based on conclusions drawn from past experiments. Of course reasoning features at every stage.

Quote
Also 'proof' and 'incompleteness theorem' in the one sentence seems a little jarring.

Evidently you have not read up on this. Your loss - math can actually prove that it is incomplete and that there are true statements not provable within math.

In general: Admittedly I find the line of discussion about whether Neuropath is fiction bizarre. I may be dumb enough to miss out on some experimental facts of neuroscience or psychology, and I am willing to learn, should this prove necessary, but I am not dumb enough to miss the fact that I am discussing a book. There's a whole chain of ideas related to the 'willing suspension of disbelief' in discussing fiction - but I feel this gets tedious for me and I won't elaborate.

I am frankly not sure that I want to continue the whole discussion. I've tried to explain my points, I haven't seen anything which I would really recognize as a counter-argument (which may be my bias, or may not), and so I feel I am not really learning anything I haven't known before (which again may be my fault or not). I am not a Bakker fan and moved on to the next book a while ago, and I increasingly start to wonder if I shouldn't be doing something else with my time.

Personally I think Neil (aka Bakker) pulls a conjuring trick, and part of the trick is that the outcome is unpleasant, so if you reject the conclusion, there's always the 'But you only reject it because it is unpleasant.' - which distracts from the fact that the proposition should be evaluated for true/false, not for pleasant/unpleasant, and that there's no correlation. But I think I am done trying to illustrate that trick - if you don't see it by now, then you're either right, or you won't or can't see it.

3
Neuropath / Re: Countering the Argument with Thorsten
« on: May 22, 2013, 06:40:10 am »
Quote
No, the question of 'What if it's this way?' is aimed at you!

Yes, and my answer to that is still 'Then it is not consistent.'

Speculative fiction can still be consistent or inconsistent. I can conceive of a fictional world in which magic exists and society adjusts to it in a plausible way. That'd be consistent. I can conceive of a fictional world in which characters are still in the room after they left the room. That'd be inconsistent. I tend to value consistent fiction higher than inconsistent fiction, because I know how difficult it is to get it consistent.

Quote
But if you can't bring yourself to argue it, then it's speculative fiction - but instead you're addressing it exactly as you say - as if someone made an argument for the world as it is.

Yes, strangely enough, I am doing exactly as I say I am doing :-)

Look, we can treat this as a bit of fiction, then all I can say is 'Hey, it's made up, so it could be anything anyway. Book finished, next please.' That's not interesting. The interesting thing is to treat it as 'What if all the premises were really true - what would it imply?' In the present case, it's even more interesting because I consider it reasonably plausible that at least some of Neil's experiments could really be done and that they would have the described outcome, so I don't see the premises as quite as fictional as, say, hidden dragons (plausibility comes in degrees, remember?).

It's funny that you'd attest to my lack of imagination, while the position 'Well, it's just fiction.' requires a lot less imagination than to map out the consequences of a world in which all the assumed premises of a story were real.

Quote
You're disinclined to consider it - so much so you describe the author as just talking with himself

Not sure how you imagine books are written, but my writing is essentially me answering a question I posed to myself. I am fairly certain it worked the same way for Bakker - he asked the question 'What if it were this way?' and then worked out the answer for himself and wrote it up.

Quote
From your computer analogy, you seem to think the mind isn't gone on damaged hardware.

I think if someone downloaded the program from damaged hardware and then compared it to the original disks program, you'd find quite a difference.

No, that's not what I said. If I trash a computer, the algorithms which were represented on it are gone. If I blow someone's brain apart, the mind isn't likely to continue. I said that if you want to understand and possibly answer the question posed, i.e. how 3d rendering works or how free will works, looking at the hardware is pointless (because you confuse the specific representation of information with the information itself).

(The difference would be quite relevant if it were possible to copy and store the patterns which make up a mind - then you could destroy a brain, but still have the mind continue. Whether this is ultimately possible or not depends on whether the mind has a 'simple' representation in terms of information only, or if there is a missing ingredient like a soul or the quantum nature of the world.)

Quote
It seems your best argument is 'It can't scale'.

The ax is a metaphor to illustrate a point, please don't confuse it with a genuine argument. The genuine argument is that science, in the real world as well as in the world of Neuropath, is a tool based on certain premises and justified by certain means. From these premises and means follows a region of applicability outside of which science isn't valid, and in the novel, science is applied outside this region (and so it is in the real world).

On general grounds you can show (math is quite amazing) that every self-consistent and sufficiently complex formal system has just the same problems as science, so the question of whether there is a scalable tool is conclusively settled with a 'No - there isn't!'. Formal reasoning can never be complete and self-consistent at the same time, and knowledge of the world obtained by formal reasoning will thus always be a patchwork switching from one system to the next. Please read up on Gödel's incompleteness theorem for the proof - it's really worth understanding, there is an immensely profound insight into the fundamental nature of logical reasoning in there, and it delivers some very substantial insights into the nature of self-referencing problems as well.
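For reference, a rough schematic statement of the first incompleteness theorem (my own sketch; the precise hypotheses concern consistent, recursively axiomatizable theories containing enough arithmetic, here abbreviated as extensions of Peano Arithmetic):

```latex
% Goedel (first incompleteness theorem, schematically): every such theory T
% leaves some sentence G_T that it can neither prove nor refute.
\forall\, T \supseteq \mathrm{PA}\ \text{(consistent, recursively axiomatizable)}:\quad
\exists\, G_T:\;\; T \nvdash G_T \ \text{ and }\ T \nvdash \neg G_T
```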

The only way to get around this is to invent a fictional world in which logic as we know it doesn't work, and I don't quite think Neuropath does that.

4
Neuropath / Re: Countering the Argument with Thorsten
« on: May 21, 2013, 09:12:26 am »
Quote
When someone says 'What if it's this way?' - it's not really a counter to say 'No, it's not'. It's just unimaginative.

Okay, the game here is:

Bakker, in the guise of Neil, makes a statement about the mind and science in the context of a future world resembling our own. I interpret this as a statement made about the mind in our world, because

a) Bakker doesn't indicate that science works significantly differently in the world of Neuropath
b) Bakker refers to experiments done in our world and claims made in our world and
c) the limitations of science as we know it are universal

So I respond to the argument as if someone had made it about our world.

Since Neuropath is a work of fiction, the author is entirely free to say 'Hey, but in my world it is all different and I am right.' I concede the point that an author is free to invent a world in which he answers 'What if it's this way?' with 'This comes out'.

If that is Bakker's answer, then logic doesn't really work too well in the world of Neuropath, because self-defeating arguments are somehow okay, but that's not interesting for me.

Quote
One might not be utterly convinced - but the only way to utterly convince of a speculation about technology is if Bakker went and invented the brain augmentation technology. Surely to require that is to show no interest in speculation at all?

You missed the point. Even if a real Neil could demonstrate everything he sets out to do, it would still not prove what he claims it does. He faces a conceptual problem, not a technological one.

Quote
I don't think that's getting into speculative fiction and trying out the idea - you're folding the idea into your axe notion and not entertaining the idea of something that can carpet bomb the forest to hell and back.

Which would make it an eminently useless tool to cut branches, illustrating my point yet again.

What I am doing is illustrating the implications of the idea (which Neil doesn't for obvious reasons) - and they're not compatible with the assumptions on which the idea is based. Which is to say, there's a consistency problem. Now, you can chuck out consistency (and modern fiction even does that at times), but then the results get really strange.

In other words, my answer to 'What if it's this way?' is 'Then you're forced to believe at least two mutually exclusive statements to be true.'

5
Neuropath / Re: Countering the Argument with Thorsten
« on: May 21, 2013, 06:55:20 am »
Have been trying for yet another hour to digest this. This is in essence your response when I asked about real and illusionary.

Quote
For my personal coin, I've subscribed to the thought that unless we're a completely necessary component for the existence of the Multiverse, then Objective-Truth, the state of affairs as they truly are (Reductionism to it's holistic conclusions...) exists outside of a human account in sound. I mean, we more or less exist happily based on how much we maintain our habitable environment, our niche. I'm not necessarily advocating naturalist truth but living, and accounts of it, in more accord with the actual state of the Multiverse seems to have its dividends.


I've always found the German word Umwelt helpful. Basically, we perceive only what we can perceive. Which leaves for the unperceived...
(...)
Let's start from illusion because I'm never sure that real applies.

We experience illusory correlation, indiscriminately, insofar as we are either/both ignorant of their occurrence or where the hard wall of experience doesn't immediately contradict our, otherwise simple, cognitive dissonance.

However, in terms of how our languages, our philosophies, even math to an extent, describe "reality as it exists, necessarily consequent of us, or without us antecedent as it is," we might say those descriptions (which we experience more or less immediately visceral than other articulate abstractions) are an illusion, a misconception, because we don't know what kind of information BB interacts with before our conscious, sufficient, experience. Even in cases of expertise, of conscious learning and practice, individuals render behaviors and their cognitive tools, unconscious, implicit, and part of a system that functions beyond our ability to experience it.

That information could well be redundant - but it might well as shatter the box as we push its thresholds as it may well push the boundaries of scientific falsification.

1) I don't know what your Multiverse is. It is not a well agreed-on concept. I know it from fantasy literature, where Moorcock uses it to characterize all the worlds his eternal hero acts in. Speculative cosmology has a multiverse creating all possible universes, so that one can use the anthropic principle to justify why some things in this universe are as they are. Speculative string theory generates the landscape with 10^600 different basic gauge theories, all corresponding to possible universes. One interpretation of quantum mechanics has the many-worlds hypothesis that makes all state amplitudes real, not only the ones projected by a measurement. 'Multiverse' is in essence a buzzword that may mean anything and has been used by almost everyone speculating about physics to mean something completely different.

There's no bit of actual evidence that any of these is true.

2) You seem to distinguish 'reality as is' from 'reality as perceived' here and declare everything which is not 'reality as is' as an illusion.

It seems to me that consciousness is a part of 'reality as it is, being itself' and hence the only part of 'reality as is' we have access to - whatever else we perceive is always reality as perceived. So I would argue that, given your definition, 'I' is real but incomplete, the rest of the conscious perceptions are 'real' as far as the perception (the qualia) goes but not as far as the perceived object goes, and the 'I' reflected upon by the conscious mind is hence less real than the 'I' being itself.

I don't find the definition particularly useful, because it declares pretty much everything illusionary.

What I find more interesting is the degree of reality we can assign. 'Mass' and other physical concepts do not require belief - a rock will kill you when it drops onto your head, regardless of whether you believe it will or are even aware of it. 'Money' or 'police' work only if you are aware of them, understand them and share a common belief, i.e. they are real in quite a different sense. A Beethoven symphony has a yet more complicated degree of reality, because it is a pattern and can jump across many different carriers, from sheets of paper to a DVD to sound waves to the orchestra members' memory of how to perform it - it doesn't require belief, but it requires a key to decode.

Illusions are then things which claim a different category of reality than they actually have - a hologram or a mirror may create the illusion of a rock, but the illusion has quite different properties, it cannot kill. Someone dressed up as a police officer may create the illusion of a police force, but will not actually be backed up by one. It's not clear to me that something can give the illusion of being a Beethoven symphony.

This is primarily a distinction of usefulness - I recognize that there's very little I can do in absolute terms with reality, so I classify the rest according to degree.

Using such references to 'reality as is', it is quite easy to declare things 'illusions'. The point is that most people do not realize that the same argument declares everything else an illusion. A clearer statement would be that 'I as self-aware, reflected upon' has the same degree of reality as 'grain of sand' or 'SiO2' - i.e. a reflected perception - and a higher degree of reality than 'money' - i.e. a social construction.

3) I have no idea what 'Reductionism to it's holistic conclusions...' is supposed to mean - reductionism and holism are logically incompatible principles, so the sentence is a paradox.

If you mean to imply that reality as it is is paradoxical, I'm probably with you.

4) I'm not sure what to make of

I've always found the German word Umwelt helpful. Basically, we perceive only what we can perceive. Which leaves for the unperceived...

It seems like a truism and as such trivial. I don't think anyone here means to say that what we see is all there is. I don't even know of anyone who ever wrote that the conscious mind is all there is to the mind. I think it is algorithmically impossible to create any pattern which is aware of all its internal state at once (so even Kellhus must have an unconscious mind). So what does this really argue?

5) You seem to go for some variant of Solipsism - but that's not a scientific proposition.

Even in cases of expertise, of conscious learning and practice, individuals render behaviors and their cognitive tools, unconscious, implicit, and part of a system that functions beyond our ability to experience it.

That information could well be redundant - but it might well as shatter the box as we push its thresholds as it may well push the boundaries of scientific falsification.


Yes. So it could all be wrong - and there is Solipsism that argues precisely that, and it can't be refuted.

The point is, if it's too wrong, science doesn't work.

a) Perception must be a meaningful representation of properties of reality as it is, otherwise science breaks.
b) Decisions between true and false propositions must be free (and refer to reality as is) and not determined by circumstance, otherwise science breaks.

What you can't do is scientifically prove something which implies science doesn't work. You can't falsify something that your falsification method requires to work.

So you can strike all possibilities from Bakker's Bestiary which violate a) or b) as untestable within science. They could still be true, but they're never accessible scientifically.

Conversely, if science works (as Bakker and Neil seem to believe), then some properties of the mind follow necessarily.

6
Neuropath / Re: Countering the Argument with Thorsten
« on: May 20, 2013, 08:32:12 am »
Quote
But if you do think it could be conducted more rigorously...why attribute the half assed effort as being science?

Because Neil (and Bakker) do. I think science isn't applicable to some problems. An ax can be blunt, then it doesn't work well, and you can sharpen it, but it still remains an inadequate tool to cut down a forest.

7
Neuropath / Re: Countering the Argument with Thorsten
« on: May 20, 2013, 07:25:21 am »
Quote
I have a feeling we're coming to limits of mutually beneficial communication.

Well, we do seem to get lost somewhere. Admittedly I have a hard time understanding what your position actually is, even re-reading your words I still get contradictory information. In return, I feel quite misunderstood.

Quote
I never invoked "science!"?

But Neil (and hence Bakker) does - that's what all this is about, right?

Quote
I offered examples (albeit, without retrieving specific studies and defending those results, in specific) yet you summarily dismissed by suggesting that because you can point out fallacious results in the studies you've been exposed to, and, perhaps, therefore in any other you may encounter.

I can't recall you posting any specific studies as examples here. I gave four examples of studies I dismissed, citing my reasons for doing so. You offered interpreted results without telling me how the underlying studies were done, thus denying me any possibility to make up my mind about the study. I apologize, but I don't take such interpretations on faith - I want to be able to judge the full methodology. I will certainly dismiss any other study with similar reasoning errors which I may encounter. I would be happy to discuss any concrete example with you rather than be forced to make blanket statements.

It occurred to me in the course of the weekend that Akka in the 'White Luck Warrior' actually spells out what is done in Neuropath.

Neil in essence pulls off a conjuring trick. What he can demonstrate in the end is that you can artificially induce alternate states of consciousness. People licking toads, eating mushrooms and inhaling holy smoke have known this for millennia, though. If Neil were a shaman, claiming that alternate states of consciousness are proof of the gods, then everyone would dismiss his results out of hand. But he's a scientist.

By framing all this in science and writing for readers who are socialized within science, Bakker manages something remarkable. By appealing to the readers' prior knowledge, he creates a frame of established fact and casts everything in the light of knowledge - and this in turn hides the boundaries of said knowledge and leads to an absence of questions.

In particular, the whole setup distracts from the fact that science has boundaries. For instance, questions like 'Free will' or 'The supernatural' are quite outside of science for the simple reason that it's not possible to formulate a testable hypothesis to prove or disprove the notions. There are no concepts to formulate these properly (I invite you to try if you think differently). It also hides the fact that science is justified from somewhere.

And since the reader has the notion that he knows so much, he fails to realize his ignorance and fails to ask further. Like 'What is actually going on here?'

Suppose for a moment Neil were to investigate something we do understand properly. Say real-time 3d rendering (because it's a hobby of mine). What we see is the output - we see, for instance, rainbow reflections of light passing through dew drops rendered on-screen, which prompts the question 'How does this work?'

Now here's Neil investigating 3d rendering. He shows you an IR image of the mainboard where the graphics card lights up brightly when the rendering runs: 'Look, we can identify the centers where it is done!' He asks you to look at the screen, then says 'Look what I can do!' and waves a magnet in front of the screen - and because it is an old cathode ray screen, the rendered image swirls with the magnet. 'Look what I can do!' - and he pulls the red color channel cable from the monitor, and as a result the image gets a strange yellow-green hue. 'Look what I can do!' - and he applies some current to the graphics chip, short-circuiting parts of the fragment-rendering pipeline, and as a result rainbow-colored static fluctuates across the screen. 'I can make you see all these things. It's all the hardware, see!'

But as anyone doing 3d rendering knows, he would completely miss the point. The question is about the algorithm - we're asking after the few hundred lines which describe how light interacts with a dew drop in terms of vertices, light vectors, textures, specular highlights and so on. The algorithm is high-level information; it works independently of the realization in GLSL or DirectX, or on an Intel, NVIDIA or ATI chipset. The algorithm is not dependent on a particular low-level realization on a particular graphics card, and you have about zero chance of reverse-engineering it from Neil's techniques. The best chance to re-engineer it is to look at the output and try to understand it at the high conceptual level.
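As a minimal illustration of what "the algorithm is high-level information" means (my own sketch, not anything from the book): the Blinn-Phong specular term, a standard textbook shading formula, written here in plain Python rather than GLSL. The same few lines could run on any chipset or API, because they describe how light behaves, not a particular piece of hardware:

```python
# Blinn-Phong specular highlight for one surface point of a dew drop.
# The logic is the information Neil's hardware probing never touches.

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def specular_highlight(normal, light_dir, view_dir, shininess=64.0):
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))  # half vector
    return max(dot(n, h), 0.0) ** shininess

# Example: droplet surface facing up, lit from above-left, viewed from above-right.
print(specular_highlight((0, 1, 0), (-1, 1, 0), (1, 1, 0)))
```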

Which is to say, I think Neil obtains information, but the information he obtains is completely disconnected from the question he proposes to address. And this would be completely obvious if he weren't a scientist but a shaman, so that the applicability boundaries of what he is doing weren't hidden so well.

Which is also to say, I don't think neuroscience claims of an 'illusionary I' or 'no free will' will change society in any way. If anything, they will discredit neuroscience, or even worse, science in general. People ascribe a high degree of credibility to science, but not to the point that they'd accept something which contradicts their good and direct experience.


Quote
It's weird how when science does something like drop a watermelon and a marble in order to show they both fall at the same speed, we're all 'okay, we'll just accept that'. But somewhat like they say, the more complicated science gets, the more it seems just another form of magic. And then the questions become 'Why does your magic trump my magic?'

No, that's not so weird if you accept science as a tool.

An ax is a very good tool if you want to cut branches from a trunk. So you might think to apply it further. It still works decently to cut down a small tree. So you might apply it further. It barely works to cut down a large tree. It certainly doesn't work well enough to cut down a whole forest - so at some place, you take a chainsaw or even a harvester vehicle.

But of course, taking the harvester to cut branches from a single tree is extremely cumbersome.

So science is a very good tool (the best, I think) to get your mind around falling watermelons - but as you expand its region of application, it doesn't necessarily stay a good tool (although this is usually assumed).

Consider for instance statistical analysis. If you look into what it actually is, it is a tool for managing your lack of knowledge (if in a medical study the result is that a drug helped 55% of the subjects, this is equivalent to saying that the researchers have no idea of the causal relationship between the drug and the body, because otherwise they would know in each case - when I drop watermelons, I don't answer that 55% of melons will do that, I give you the time at which each will reach the ground). So statistics is only useful if you have a controlled lack of knowledge (i.e. an idea of what you do not know) - if you don't, you can still go through all the same motions and get answers, but your results cease to mean anything (apart from 'I don't know'). For instance, it is statistically completely true that a bit less than 50% of all parents who plan on having children will get pregnant. Of course, if you happen to be a man, the implication is not that you should have a fair chance of getting pregnant... so the 50% figure is mathematically true, but completely misses the point. In this case you can spot it easily, but there are way more subtle cases. And the formalism simply doesn't tell you when it becomes meaningless - so there's no warning from inside statistics.
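A hypothetical toy simulation of the point about the drug study (my own, not from the post; the hidden "trait" and the 55% figure are made up to mirror the example above): the underlying causal rule is fully deterministic, and the statistic "about 55% were helped" only measures our ignorance of that rule, not a property of the drug:

```python
# If we don't know about the trait, all statistics can report is a frequency;
# knowing the trait, the "probability" collapses to certainty in every case.
import random

random.seed(0)
subjects = [{"has_trait": random.random() < 0.55} for _ in range(10_000)]

def drug_helps(subject):
    # The hidden, deterministic causal rule.
    return subject["has_trait"]

helped = sum(drug_helps(s) for s in subjects)
print(f"Helped: {helped / len(subjects):.1%}")   # roughly 55%, if the trait is ignored

for s in subjects[:3]:
    print(s["has_trait"], "->", drug_helps(s))   # case by case, no uncertainty at all
```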

So if you frame it like 'Why do you think your tool is better for the purpose than my tool?', it is no longer such a mystery, it becomes a valid question.

8
Neuropath / Re: Countering the Argument with Thorsten
« on: May 18, 2013, 02:55:05 pm »
Quote
We're primarily invoking language games, which I'd like to avoid as much as possible but you tend to hit on those meriting distinction.

No, I don't want to do that at all.

I think the issue is simpler and more fundamental. You have (presumably) been socialized with science, and learned to apply and accept scientific reasoning.

Imagine for a moment you meet someone from a hunter-gatherer culture and talk to him about your mode of truth-finding. When you start talking about statistical analysis, he will tell you something like 'Why should it matter for me if something happens for these other people I don't even know?' If you talk about brain scans, he will tell you 'Why do you trust this machine more than your own senses?' He will look at you strangely and ask 'Can't you feel the spirits of nature?' In short, he comes from a completely different system of thought.

You, being born into science and reasoning from within science, will find it obvious that he goes astray. But he, being born into a spirit-world, will find you equally mentally deficient and regard your inability to feel the spirits as some kind of insanity. For him, science makes no sense whatsoever, he will regard it as some dysfunction of the mind.

Science as seen from within science is self-justifying - but that's just circular reasoning, albeit a bit hard to spot. So is animism from within animism. What you really need to do in order to justify science is to step out of it, try seeing the world from different perspectives, understand how it is conceptualized from different perspectives, and then see where science wins out.

(As a side note, Moenghus the elder is a fictional example of that trap of never leaving one's own perspective. He is in essence a scientist (rational reasoner), finds his own set of rational beliefs quite justified from within his perspective of science, but then completely misses out on the fact that the Gods of Earwa appear to be completely real rather than a figment of people's imagination and that he will  be damned as a result. Kellhus, in contrast, does leave his own perspective (or probably rather is pushed beyond it by the circumfixion).)

Quote
What happens in cases of understanding or, even, imagination? Neural accounts seem to deal with embodied cognition, embodied simulation, or some account of neurons.

Well, I was sort of using 'making sense' in reference to 'truth'  - I readily agree that there is the additional complication that the feeling of 'making sense' as experienced by a mind may or may not correlate with any truth.


Quote
What about language? In the Sapir-Whorf hypothesis (which was primarily asserted by a student of Whorf's), it's suggested that we can't account for the existence of phenomena which we do not experience or interact with linguistically (describing) in some way.

I don't buy Sapir-Whorf. I can talk with colleagues all over the world (who come from completely different cultural contexts) just fine about Quantum Field Theory (which can't really be put into words), and if Sapir-Whorf were right, none of this should work. Poking holes in Sapir-Whorf is not really difficult.

Quote
How about the ways in which language warp our very perception of our environments (seeing more colours or feeling more or less emotions in relation to words in your language, cognition of time and space, or an example Wilshire brought up in chat the other day - which I will find as I'm starting to remember - about language and orientation,

There are lots of urban legends floating around; for instance, you may have heard of the alleged timelessness of the Hopi language, or the way the Piraha language supposedly contradicts universal grammar. I have bothered to read up on several of these cases in detail, and it always boiled down to bad research. I am not aware of good evidence that languages really warp the perception of the physical environment in a significant way. The social environment is a different beast.

Quote
Regardless, your disdain for many of these results seems to come down to inapplicable discussion on the part of the authors

Yes, I think that's what I said previously - I don't doubt that if one does these things, one gets the results quoted, I just doubt that the interpretation attached to them is correct.

Quote
Illusory correlation is almost the quintessential human problem, if we are to be understood in terms of pattern-recognizing machines - we see correlation where there is in fact none.

That's easy to say - how do you know where there is in fact none? How many correlations do you miss which are in fact there? All you have is a discrepancy between two systems of reasoning, where one claims a correlation and the other doesn't. For you, it seems obvious that one of these systems (science) must be correct for the problem at hand, but I don't see this as obvious at all.

Science isn't a fact-producing machinery. There are very, very few scientific results which have survived even 100 years without revision. It used to be 'fact' that the influence of the environment can't possibly be inherited by the offspring. Well, now there's epigenetics - turns out that it can after all. It used to be 'fact' that there's an ether in which light propagates. Now we have Quantum Electrodynamics for the job.

Science is a machinery for producing descriptions and predictive models - it delivers better descriptions and more predictive models, but it doesn't ever deal in facts. Taking something to be fact may be a pretty dangerous error for a scientist, because it means you will never be ready to revise it.

Let me ask a very simple question - can you name anything which you would consider 'real' and compare it with a different thing which you would call 'illusion'? So maybe we can get the definitions from there.

9
Neuropath / Re: Countering the Argument with Thorsten
« on: May 17, 2013, 11:50:59 am »
Quote
Would like to hear your post-WLW thoughts on Earwan metaphysics if you have the time/inclination.

I'm currently halfway through - I plan to write up anything interesting. Btw. - I wonder if anyone has a copy of my analysis note on languages in the first trilogy - I can't seem to find it on my new computer.

10
Neuropath / Re: Countering the Argument with Thorsten
« on: May 17, 2013, 08:30:10 am »
Quote
However, continued from the middle - you don't think there is evidence to suggest that our experiences as "I," as cognitive agents, isn't illusory insomuch as colours, perceptions, would not exist as they do without corresponding evidence of physiological changes (we're attracted to the redness of an apple because it has been beneficial towards our survival)?

It all depends on what you prefer to call 'real' or 'illusionary'.

I think normally the word 'illusion' is used to describe something which has a certain appearance, but where you can verify that the appearance is not correct by staying within a given framework and just changing perspective. For instance, mirror illusions don't require you to change to a description of the situation in terms of quantum theory - you can walk around the setup, and it becomes apparent that there is a mirror.

In contrast you (and Bakker) seem to be using the word 'illusion' here for something that is in a high-level effective theory (the mind as we ourselves would describe it, or psychology as used by Freud, Jung, Adler,...) but not in a low-level more fundamental theory (interactions among interconnected neurons).

This is, I think, a very important difference. Let me illustrate this with an example where it is well understood what happens.

A rock has mass. In a very high level effective theory (Newtonian mechanics) we can view it as a rigid body with a given mass and  use that to compute the trajectories of the rock when we throw it.

At a lower level, the rock is composed of molecules which have mass. Even more fundamentally, it is composed of protons and neutrons which have mass, with electrons thrown in which have very little mass to contribute. At the most fundamental level, the rock is described by the Standard Model of Particle Physics in terms of quarks, gluons, and photons. And in the bare Lagrangian of this theory, there is no mass.

So mass is a property of high-level effective theories only; it is not a fundamental property of the world. The illusion of mass of a rock arises largely because there is a lot of field energy in the binding of quarks and gluons, which makes an empty vacuum energetically disfavoured, and thus stuff plowing through the field energy contained in the vacuum effectively acquires mass.
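As a sketch of what 'no mass in the bare Lagrangian' means for the strong-interaction part (my own illustration, in the chiral limit, i.e. neglecting the small quark masses and everything electroweak): no mass parameter appears in the formula, yet the bulk of the proton's mass emerges from the field dynamics it describes.

```latex
% QCD Lagrangian in the chiral limit: gluon field strength plus massless quark
% kinetic terms - no mass term anywhere, yet hadrons come out massive.
\mathcal{L}_{\mathrm{QCD}} \;=\; -\tfrac{1}{4}\, G^{a}_{\mu\nu}\, G^{a\,\mu\nu}
\;+\; \sum_{q} \bar{\psi}_q\, i\gamma^{\mu} D_{\mu}\, \psi_q
```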

Yet, this 'illusionary' mass is quite capable of killing you when the rock hits your head. Which goes a long way towards illustrating that something cannot be dismissed as meaningless or 'not real' just because it is not a property of the fundamental theory.

The fact that a theory of the mind in terms of connected neurons doesn't have certain traits cannot be used to argue that these traits are not meaningful, or not real, when seen on a different scale. There are dozens (if not more) of counterexamples in physics where effective high-level theories gain new properties or lose properties which the fundamental theory has.

The world behaves as if there were mass when seen at a certain scale, and this is what gives meaning to the concept. The world behaves as if there were an 'I' when seen at a certain scale, and this is what gives meaning to the concept.

If you want to limit 'real' to 'what is contained in our most fundamental theory only', you declare pretty much everything an illusion, and you're left with a description of the world in terms of operators acting on Fock spaces satisfying certain commutation and anticommutation relations - which manifestly isn't what is real, but just describes how the real world behaves. So in essence nothing is real then. Accepting only the fundamental as real doesn't lead anywhere in particular.

11
Neuropath / Re: Countering the Argument with Thorsten
« on: May 17, 2013, 06:53:28 am »
Quote
Well, we humans seem to use a combination of statistic descriptions (math) and logic (linguistic justifications) with varying degrees of validity to convey holistic packets of information - Truth seems to satisfy making the most sense out of a(ll) given occurrence of phenomena. Data and concise, valid communication are ideal as is the heuristic strategy (Occam's razor) of positing and falsifying fewer, rather than more assumptions to support your actual hypothesis. We're talking averages across averages, right?

Well, the catch is 'making the most sense' - how do you define that?

Within a formal system (science), making sense is defined in terms of deviation from the data, given a hypothesis - so within the formal system I can decide what makes sense and what doesn't. However, that doesn't tell you whether the system itself makes sense.

Adopt a different formal system - comparison to scripture. Then the definition of 'making sense' becomes 'is sufficiently close to something that's described in the scripture'.

If you're thinking within science, looking into scripture for answers makes no sense. If you are thinking within scripture, doing a statistical analysis makes no sense.

What you need to argue is that science is the best among all possible formal systems to apply to some particular problem (the mind in this case). Because clearly there are problems to which science doesn't apply. 'Is the Mona Lisa great art?' is not a question you could address with a measurement or statistical analysis.

Quote
Perhaps, you've some underlying disdain for psychological research?

Perhaps... I want to be careful, there is lots of research, and I don't want to make blanket statements. But I guess most of the time when some result is popular and often cited by writers and I look into it, I just think it misses the point. Some examples:

1) Pattern recognition

The usual story goes that the human mind has a strong tendency to see patterns in randomness (this can be connected to the evolutionary cost of running from what is not a tiger, as compared with not running from what is a tiger). An often-cited setup involves a researcher generating strings of random numbers on a computer and asking the subjects to see if they could find a pattern - the vast majority could. The interpretation given is that humans are obviously quite good at spotting a pattern where there is none.

Except the subjects were factually correct.

First, computers don't generate random numbers, they generate pseudorandom numbers, and given enough of these, a vast intellect could unravel the algorithm and predict the next number in the sequence. So people were quite correct in assuming the existence of a pattern, as it really was there.

Second, mathematically it is impossible to determine for any finite sequence whether it is truly random or not. Most people would complete 2, 4, 6, 8, 10, ... with 12, assuming the rule is 'add 2', but the rule may be 'next number has to be larger', in which case 11 would be valid; the rule may be 'must be even', in which case 2 would be valid; the rule may be 'count by adding 2 up to 10, then go negative', in which case -2 would be the next answer... There's nothing logical in preferring one rule over another, it's just down to habit: we like counting.

So mathematically the subjects were doing something very reasonable - they were trying to find the rule for a sequence which could well have a rule. What the experiment actually shows is that the mind tries to spot a pattern where there could be a pattern, if prompted to do so by the question (if the question had been 'are these good random numbers?', the answer would possibly have been different...).
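A small illustration of the underdetermination (my own code; the rules are the ones listed above): several different rules are all consistent with the finite prefix 2, 4, 6, 8, 10, yet each predicts a different continuation, so 'finding the pattern' in a finite sequence is never forced by the data alone:

```python
# Each rule is consistent with the prefix but continues it differently.
prefix = [2, 4, 6, 8, 10]

rules = {
    "add 2":                            lambda seq: seq[-1] + 2,  # -> 12
    "any larger number":                lambda seq: seq[-1] + 1,  # -> 11 (one valid choice)
    "any even number":                  lambda seq: 2,            # -> 2  (one valid choice)
    "add 2 up to 10, then go negative": lambda seq: -2,           # -> -2
}

for name, rule in rules.items():
    print(f"{prefix} -> {rule(prefix)}   (rule: {name})")
```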

2) Happiness research

One often-quoted study claims that children do not increase happiness in a family. What was done is that the researchers selected samples of women with and without children in similar social positions, phoned them several times and interviewed them about their current happiness, estimated on a scale from 0 to 10. The claimed result was that children do not lead to any happier life.

I think this one gets hit by the applicability of logic, here the inference rule that if you have a proposition A and not-A, and you can show not-A to be true, A must be false.

I myself am capable of experiencing a mental state with contradicting emotions. I can, for instance, feel desperate and angry at myself and at the same time feel deep satisfaction with the way my life is going. Or I can be madly angry at my children for breaking something and love them at the same time. Describing my emotional state at any given time would require about two pages of written text using a symbolic language (I couldn't do it in English for lack of words, but I could, for instance, do it using astrological symbols - I think Jung was the first to realize how useful symbolic expressions are in this context).

If I were asked to report my surface emotion, the result would be a hopelessly inadequate picture, and any inference drawn from it about my happiness would be completely wrong. Even on a day when the kids annoy me to the point of screaming, I am still aware that seeing them grow up is a source of deep satisfaction for me. So in this case, proving not-A doesn't imply the falseness of A. If that is so for me, why would it be different for others? I think the study simply doesn't ask within the right framework, and so it obtains a meaningless answer.

3) Evolutionary psychology

One celebrated result here concerns jealousy over sexual vs. emotional infidelity - the results of questions posed to subjects are that women feel more disturbed by the thought of their partner forming an emotional attachment to another woman than by his just having sex, whereas men feel the other way round. Supposedly this proves that evolution strikes and we see a stone-age hunter-gatherer setup at work - men are worried they'd have to care for offspring not carrying their genes, whereas women are concerned with losing a nutrition-providing partner.

Except... the raw data shows that variations between cultures in the jealousy response are much greater than the claimed effects which remain after averaging across all cultures. Except we don't know at all how stone-age humans felt about jealousy and raised children - they might have raised kids in a large group, not caring about whose offspring a particular child is. The stone-age setup proposed is not based on evidence, but imagined such that it accounts for the facts. Without solid data on how stone-age humans actually thought and behaved, I can justify anything.

4) Enlightenment

If you do brain imaging of Buddhist monks during meditation, you can observe certain areas of their brains go active. The supposed implication of this is that they don't really experience enlightenment, it's just a function of this particular brain center.

Of course, if you did brain imaging of scientists reading and understanding a research paper, you would observe certain areas of their brains go active. If the above implication were correct, then they wouldn't really understand the paper either - it could all be explained in terms of a particular brain center being active.

Correlation isn't causation - this is a fairly elementary reasoning error.

I could go on with this, but I think you may spot where my problems with research in psychology and brain science reside...

12
Neuropath / Re: Countering the Argument with Thorsten
« on: May 15, 2013, 07:02:13 am »
Quote
I'm not privy to which academic discipline(s) you practice but your post seems philosophical, especially in that your assertions don't encounter real-life examples (I actually think Reductionism had a better chance of countering the argument, in this post).

I am a theoretical physicist by profession, working mostly in applied quantum field theory (Quantum Chromodynamics mostly).

The applicability of logic is a very deep one. How do you decide on something being true? Where do the logical deduction rules come from? Why do we think a principle like Occam's razor ('If there are several competing explanations, take the one requiring the least amount of assumptions.') is good for establishing truth?

You can't say 'Well, I just know it's true' - history tells that in the past people used very different criteria based on the same 'I just know - it's obvious'.

You can't argue 'Science must be true, because so many brilliant people are doing it and society spends so much money for it.' - in the past, brilliant people studied theology and society invested money to build churches.

If you start up-front from the position that science must be true, then you're no better than someone starting from the position that scripture must be true. Scientific principles aren't really self-justifying or self-evident - they are justified from somewhere, and I think this somewhere is experience.

Occam's razor is in my view a good principle because in my experience it turns out to be true most of the time. The whole set of scientific principles applied to physics is justified, because it actually works out for me - I can calculate a phenomenon, and my experience tells me that I successfully predicted what I will experience.

(As a side note, modern physics is all about what you experience and not at all about what things really *are* - all Quantum Field Theory is concerned with are 'observables', and it is very clear that we don't have a clue what nature is, only how it behaves when we look at it).

If you follow the chain that sometimes we discard experiences because of science, but science is ultimately justified by experience only, things start getting very, very murky. I do not think one can automatically assume that the same deduction principles continue to hold - they have to be justified anew when applied to the mind. Especially because the mind is self-referencing, and several principles are known to break when applied to self-referencing systems. If psychologists tested the foundations of their own field with the same level of rigor they apply to, say, religious experiences, they'd be in for a bad surprise.

Or, to be slightly mean: Imagine one of the experimental papers supposedly disproving the notion of free will is sent to a journal. The referee recommends not to publish the paper. I am prepared to bet a lot of money that the researchers do not think 'Well, the referee has no free will, he is determined by circumstances to come to this decision to decline publication, so there's nothing we can do.' I am very sure what they will do is appeal to the free decision-making ability of the referee to change this decision based on new arguments. Because science requires the ability to decide between a true proposition and a false one. If we could not make that decision, because we'd be compelled by circumstance to believe something, science wouldn't work conceptually. So that's why researchers disproving free will don't act in any way as if their research were actually true.

I've written a longer text about 'belief in evidence' in a discussion of Dawkins' The God Delusion in case you're interested - it's the second part.

Quote
You've treated this like a logic problem - my first thoughts are a couple of strange examples from neurological studies.(...)While these might support your Reductionism metaphor, I don't think it helps the Applicability of Logic.

I know these cases do exist, but what can we really deduce from them?

I have no doubt that there is a deep connection between mind and body at some level. A very simple example is the experience that when I drink alcohol, my mental experience changes.

Yet, on the next day, my mind reverts back to how it was.

But then there seems to be something like mind-internal experience - I might have a crucial insight, or come to a major decision in my life. And on the next day, my mind does not revert back but remains changed from that point on.

So could this not suggest a hardware/software model, in which in the first case I temporarily change the hardware, and as a result the software runs differently but reverts back to its normal operation once the hardware operates normally, whereas the second case represents a change in the software which isn't easily reverted?

I would assume that if someone over night severs the neural connections to my leg and attaches the same nerve bundles to my arms, then my intention to move my leg will lead to some motion of my arms. In a similar way, I would assume that wrong wiring of senses can lead to all sorts of weird perceptions - like pleasure where pain would be expected. Such rewirings would be, unlike in the case of alcohol, more permanent hardware damage, with little chance of the software resuming normal operation.

Yet, in many cases the mind seems to be able to work around damage. I vaguely remember an experiment in which people were asked to wear mirror glasses which showed the world upside-down, and while this was initially very confusing, their minds learned to undo the effect, and after a few days they saw the world normally again - and then saw it inverted once they took the glasses off, until again after a few days the perception adjusted back to normal.

The point seems to be that being able to prove that changes to the body/brain change the mental experience isn't the same thing as proving that there is no software equivalent and that the hardware is all there is to the problem.

Quote
Also, aside - I'm ecstatic that you've found your way here

Thanks - I appreciate that. I am reading the White-Luck Warrior at the moment, so I wanted to read up on the details of my Metaphysics (and partially languages) - so then I started looking again for where the discussions are.

13
Neuropath / Re: Countering the Argument with Thorsten
« on: May 14, 2013, 06:24:55 am »
You may perhaps, upon closer inspection, find me not quite as ignorant of psychological research as you think. But certainly I am not a psychologist, just someone who is very interested in the nature of mind and consciousness and tries to read up from different corners and to think things through - so if you find a genuine lack of understanding, I would ask you to educate me here.

I do feel you might have missed my main point though - I think the mind is not one of the problems to which science can be safely applied (cf. the section on the applicability of logic), and hence I'm not sure how even perfect knowledge of the state of the art of psychological research would change my position. I do not dispute that psychologists who make certain experiments get certain results, but I doubt the common interpretations attached to these.

To give a simple example (which I borrow from an exchange I had  with Axel Honneth, a German philosopher):

Assume you are in severe pain. Assume you are taken to a hospital, and your brain is scanned with the most sophisticated brain scanner. The neuroscientists then look at the results and conclude that the pain center in your brain isn't active, so you must be mistaken about feeling pain and ask you to go home. Do you

a) insist that you really feel pain and that something is wrong with the scanner, and no amount of additional scanning or scientific testimony would convince you that you're mistaken about being in pain
b) go home, concluding that your pain isn't really there because science must be right

If you select a), you might be willing to acknowledge the point that there is at least something unscientific to the mind.
