Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - sciborg2

Pages: 1 2 [3] 4 5 ... 16
What is the Historical Study of Science and Magic Good for?

Andreas Sommer

Even though I was explicitly writing as a historian, I was perhaps a little light on actual history, at least concerning the periods we typically look at here on Forbidden Histories. In fact, I only gave a couple of examples to illustrate typical Enlightenment responses to reported ‘spirit-seership’ and briefly mentioned studies of hallucinations and apparitions of the dead in non-pathological populations by William James and English colleagues in the late nineteenth century.

The remainder of the short piece is mainly concerned with relatively recent medical findings concerning constructive functions of certain hallucinations and ‘mystical’ experiences. Whereas previous generations of medics have regarded hallucinations, apparitions of the dead and similar experiences as inherently pathological and undesirable, these views began to be drastically modified in the early 1970s with new research on so-called ‘hallucinations of widowhood’.

From then on, it seemed like friendly ghosts and otherworldly visions were gradually making an entry into the mainstream medical literature not only in the shape of comforting visitations from the departed in widowhood, but also in often profoundly moving end-of-life experiences in palliative and hospice care. At around the same time, mystical experiences sometimes occurring during close brushes with death began to be recognized by mainstream medicine as often having constructively transformative effects. Not least, similar but psychedelically induced (rather than spontaneously occurring) experiences have been shown to be effective in the treatment of severe conditions including treatment-resistant depressions and post-traumatic stress disorder.

Following a summary of these clinical revisions, I touched on a point that’s not usually raised: questions of the ultimate reality of spirits and ‘magic’ aside, if otherworldly experiences can have constructive and even therapeutic functions at least for a part of humanity, could it be harmful to follow blindly the outdated historical standard narrative of Western modernity...

How Brain Scientists Forgot That Brains Have Owners

John Krakauer, a neuroscientist at Johns Hopkins Hospital, has been invited to BRAIN Initiative meetings before, and describes the experience as “Maleficent being invited to Sleeping Beauty’s birthday.” That’s because he and four like-minded friends have become increasingly disenchanted by their colleagues’ obsession with their toys. And in a 2017 paper that’s part philosophical treatise and part shot across the bow, they argue that this technological fetish is leading the field astray. “People think technology + big data + machine learning = science,” says Krakauer. “And it’s not.”

He and his fellow curmudgeons argue that brains are special because of the behavior they create—everything from a predator’s pounce to a baby’s cry. But the study of such behavior is being de-prioritized, or studied “almost as an afterthought.” Instead, neuroscientists have been focusing on using their new tools to study individual neurons, or networks of neurons. According to Krakauer, the unspoken assumption is that if we collect enough data about the parts, the workings of the whole will become clear. If we fully understand the molecules that dance across a synapse, or the electrical pulses that zoom along a neuron, or the web of connections formed by many neurons, we will eventually solve the mysteries of learning, memory, emotion, and more. “The fallacy is that more of the same kind of work in the infinitely postponed future will transform into knowing why that mother’s crying or why I’m feeling this way,” says Krakauer. And, as he and his colleagues argue, it will not.

That’s because behavior is an emergent property—it arises from large groups of neurons working together, and isn’t apparent from studying any single one. You can draw parallels with the flocking of birds. Biologists have long wondered how they manage to wheel about the skies in perfect coordination, as if they were a single entity. In the 1980s, computer scientists showed that this can happen if each bird obeys a few simple rules, which dictate their distance and alignment relative to their peers. From these simple individual rules, collective complexity emerges.

But you would never have been able to predict the latter from the former....
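The 1980s result alluded to above is Craig Reynolds’s “boids” model. Here is a minimal, self-contained sketch of it — the rule weights, neighbourhood radius, and flock size are illustrative choices, not Reynolds’s original parameters. Each bird follows only three local rules (alignment, cohesion, separation), yet the group as a whole settles onto a shared heading:

```python
import math
import random

def step(positions, velocities, radius=10.0,
         align_w=0.05, cohere_w=0.01, separate_w=0.05):
    """One tick: each bird adjusts course using only its nearby peers."""
    new_vel = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        near = [j for j in range(len(positions))
                if j != i and math.dist(p, positions[j]) < radius]
        vx, vy = v
        if near:
            # Alignment: steer toward the neighbours' average heading.
            vx += align_w * (sum(velocities[j][0] for j in near) / len(near) - vx)
            vy += align_w * (sum(velocities[j][1] for j in near) / len(near) - vy)
            # Cohesion: drift toward the neighbours' centre of mass.
            vx += cohere_w * (sum(positions[j][0] for j in near) / len(near) - p[0])
            vy += cohere_w * (sum(positions[j][1] for j in near) / len(near) - p[1])
            # Separation: back away from any neighbour that is too close.
            for j in near:
                if math.dist(p, positions[j]) < 1.0:
                    vx += separate_w * (p[0] - positions[j][0])
                    vy += separate_w * (p[1] - positions[j][1])
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

def order(velocities):
    """Mean resultant heading length: 1.0 means a perfectly aligned flock."""
    n = len(velocities)
    ux = sum(vx / math.hypot(vx, vy) for vx, vy in velocities) / n
    uy = sum(vy / math.hypot(vx, vy) for vx, vy in velocities) / n
    return math.hypot(ux, uy)

random.seed(0)
pos = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(20)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]

before = order(vel)            # random headings: order is low
for _ in range(300):
    pos, vel = step(pos, vel)
after = order(vel)             # the flock has converged on a shared heading
```

Nothing in the three per-bird rules mentions a flock, which is the point: coordination is a property of the population, not of any individual rule.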

The lure of ‘cool’ brain research is stifling psychotherapy

Allen Frances

The more we learn about genetics and the brain, the more impossibly complicated both reveal themselves to be. We have picked no low-hanging fruit after three decades and $50 billion because there simply is no low-hanging fruit to pick. The human brain has around 86 billion neurons, each communicating with thousands of others via hundreds of chemical modulators, leading to trillions of potential connections. No wonder it reveals its secrets only very gradually and in piecemeal fashion.

Genetics offers the same baffling complexity. For instance, variation in more than 100 genes contributes to vulnerability to schizophrenia, with each gene contributing just the tiniest bit, and interacting in the most impossibly complicated ways with other genes, and also with the physical and social environment. Even more discouraging, the same genes are often implicated in vulnerability to multiple mental disorders – defeating any effort to establish specificity. The almost endless permutations will defeat any easy genetic answers, no matter how many decades and billions we invest.

The NIMH has boxed itself into a badly unbalanced research portfolio. Playing with ‘cool’ brain and gene research toys trumps the much harder and less intellectually rewarding task of helping real people.

Contrast this current NIMH failure with a great success story from NIMH’s distant past...

Philosophy & Science / Semantic Apocalypse in Space?
« on: March 03, 2020, 02:16:39 am »
Why We Should Think Twice About Colonizing Space

In one article in Futures, which was inspired by political scientist Daniel Deudney’s forthcoming book Dark Skies, I decided to take a closer look at this question. My conclusion is that in a colonized universe the probability of the annihilation of the human race could actually rise rather than fall.

Consider what is likely to happen as humanity hops from Earth to Mars, and from Mars to relatively nearby, potentially habitable exoplanets like Epsilon Eridani b, Gliese 674 b, and Gliese 581 d. Each of these planets has its own unique environment that will drive Darwinian evolution, resulting in the emergence of novel species over time, just as species that migrate to a new island will evolve different traits than their parent species. The same applies to the artificial environments of spacecraft like “O’Neill Cylinders,” which are large cylindrical structures that rotate to produce artificial gravity. Insofar as future beings satisfy the basic conditions of evolution by natural selection—such as differential reproduction, heritability, and variation of traits across the population—then evolutionary pressures will yield new forms of life.

But the process of “cyborgization”—that is, of using technology to modify and enhance our bodies and brains—is much more likely to influence the evolutionary trajectories of future populations living on exoplanets or in spacecraft. The result could be beings with completely novel cognitive architectures (or mental abilities), emotional repertoires, physical capabilities, lifespans, and so on.

In other words, natural selection and cyborgization as humanity spreads throughout the cosmos will result in species diversification. At the same time, expanding across space will also result in ideological diversification. Space-hopping populations will create their own cultures, languages, governments, political institutions, religions, technologies, rituals, norms, worldviews, and so on. As a result, different species will find it increasingly difficult over time to understand each other’s motivations, intentions, behaviors, decisions, and so on. It could even make communication between species with alien languages almost impossible. Furthermore, some species might begin to wonder whether the proverbial “Other” is conscious. This matters because if a species Y cannot consciously experience pain, then another species X might not feel morally obligated to care about Y. After all, we don’t worry about kicking stones down the street because we don’t believe that rocks can feel pain. Thus, as I write in the paper, phylogenetic and ideological diversification will engender a situation in which many species will be “not merely aliens to each other but, more significantly, alienated from each other.”

But this yields some problems. First, extreme differences like those just listed will undercut trust between species. If you don’t trust that your neighbor isn’t going to steal from, harm, or kill you, then you’re going to be suspicious of your neighbor. And if you’re suspicious of your neighbor, you might want an effective defense strategy to stop an attack—just in case one were to happen. But your neighbor might reason the same way: she’s not entirely sure that you won’t kill her, so she establishes a defense as well. The problem is that, since you don’t fully trust her, you wonder whether her defense is actually part of an attack plan. So you start carrying a knife around with you, which she interprets as a threat to her, thus leading her to buy a gun, and so on. Within the field of international relations, this is called the “security dilemma,” and it results in a spiral of militarization that can significantly increase the probability of conflict, even in cases where all actors have genuinely peaceful intentions.

Ancient Mud Reveals an Explanation for Sudden Collapse of the Mayan Empire

During their 3,000-year dominance over Mesoamerica, the Mayans built elaborate architectural structures and developed a sophisticated, technologically progressive society. But immediately after reaching the peak of its powers over the entire Yucatan Peninsula, the Mayan Empire collapsed, falling apart in just 150 years. The reasons for its sudden demise remain a mystery, but in a 2018 Science study, scientists found clues buried deep in the mud of Lake Chichancanab.

Deforestation, overpopulation, and extreme drought have all been proposed as the reason for the empire’s collapse. The most probable of those, argue the University of Cambridge and University of Florida scientists in their study, is drought. The evidence they gathered in the muddy sediments underlying Lake Chichancanab, which was once a part of the empire, underscores the devastating power of a drought on a population.

The sediment cores that the scientists dug up from the depths of the lake are like a time machine, giving a glimpse of what past environments looked like. In the study, the team specifically looked at precipitated gypsum, a soft mineral that incorporates oxygen and hydrogen isotopes of water molecules into its crystalline structure. Looking at it was like peering into fossil water, and in this case, it showed that the area surrounding the lake had gone through extremely arid periods. During periods of drought, more of the lake’s water evaporates, and because the lighter isotopes evaporate more readily, a higher proportion of heavier isotopes in the gypsum indicates a period of drought.

The team determined that between the years 800 and 1000, annual rainfall in the Maya lowlands decreased by nearly 50 percent on average and up to 70 percent during peak drought conditions. This means the rainfall in this region essentially stopped at about the same time that the empire’s city-states were abandoned.

How to Make the Study of Consciousness Scientifically Tractable

Strangely, modern science was long dominated by the idea that to be scientific means to remove consciousness from our explanations, in order to be “objective.” This was the rationale behind behaviorism, a now-dead theory of psychology that took this trend to a perverse extreme.

Behaviorists like John Watson and B.F. Skinner scrupulously avoided any discussion of what their human or animal subjects thought, intended or wanted, and focused instead entirely on behavior. They thought that because thoughts in other people’s heads, or in animals, are impossible to know with certainty, we should simply ignore them in our theories. We can only be truly scientific, they asserted, if we focus solely on what can be directly observed and measured: behavior.

Erwin Schrödinger, one of the key architects of quantum mechanics in the early part of the 20th century, labeled this approach the “principle of objectivation” in his philosophical 1958 book Mind and Matter, and expressed it clearly:

“By [the principle of objectivation] I mean … a certain simplification which we adopt in order to master the infinitely intricate problem of nature. Without being aware of it and without being rigorously systematic about it, we exclude the Subject of Cognizance from the domain of nature that we endeavor to understand. We step with our own person back into the part of an onlooker who does not belong to the world, which by this very procedure becomes an objective world.”

Schrödinger did, however, identify both the problem and the solution. He recognized that “objectivation” is just a simplification that is a temporary step in the progress of science in understanding nature.

He concludes: “Science must be made anew. Care is needed.”

We are now at the point, it seems to a growing number of thinkers who are finally listening to Schrödinger, where we must abandon, where appropriate, the principle of objectivation. It is time for us to employ a “principle of subjectivation” and in doing so understand not just half of reality—the objective world—but the whole, the external and internal worlds.

Philosophy & Science / Nautilus Panpsychism Issue
« on: February 27, 2020, 08:57:04 pm »


The Forest Spirits of Today Are Computers

We’ve made an artificially panpsychic world, where technology and nature are one.

Consciousness Isn’t Self-Centered

Think of consciousness like spacetime—a fundamental field that’s everywhere.

Is Matter Conscious?

Why the central problem in neuroscience is mirrored in physics.

A Clash of Perspectives on Panpsychism

What panpsychism does—and does not—explain about consciousness.

Philosophy & Science / Why your brain is not a computer
« on: February 27, 2020, 08:46:11 pm »
Why your brain is not a computer

Matthew Cobb

Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.

The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.

First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.

Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”

This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress.

In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.

A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
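The contrast can be made concrete with a toy comparison — this is not from Cobb’s article, just a sketch assuming a standard sigmoid rate model for the “analogue” side. A switch-like element jumps between two states at a threshold; a graded element changes its output smoothly as stimulation changes:

```python
import math

def binary_switch(x, threshold=1.0):
    """What a transistor-style component does: on or off, nothing in between."""
    return 1 if x >= threshold else 0

def firing_rate(x, gain=2.0, threshold=1.0, max_rate=100.0):
    """A graded (analogue) response: output in spikes/s varies smoothly
    with input strength, rather than snapping between two states."""
    return max_rate / (1.0 + math.exp(-gain * (x - threshold)))

stimuli = [0.5, 0.9, 1.0, 1.1, 1.5]
switch_responses = [binary_switch(x) for x in stimuli]   # [0, 0, 1, 1, 1]
graded_responses = [firing_rate(x) for x in stimuli]     # smoothly increasing
```

Even this understates the difference: in the brain, the functional unit is not one such element but a population of thousands whose collective pattern carries the signal.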

Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.

This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.

Ancient animistic beliefs live on in our intimacy with tech


Recently, science has come to understand the emotions of social bonding, and I think it helps us understand why it’s so easy to fall into these ‘as-if intimacies’ with things. Care or bonding is a function of oxytocin and endorphin surging in the brain when you spend time with another person, and it’s best when it’s mutual and they’re feeling it too. Nonhuman animals bond with us because they have the same brain chemistry process. But the system also works fine when the other person doesn’t feel it – and it even works fine when the other person isn’t even a ‘person’. You can bond with things that cannot bond back. Our emotions are not very discriminating and we imprint easily on anything that reduces the feeling of loneliness. But I think there’s a second important ingredient to understanding our relationship with tech.

The proliferation of devices is certainly amplifying our tendency for anthropomorphism, and many influential thinkers claim that this is a new and dangerous phenomenon, that we’re entering into a dehumanising ‘artificial intimacy’ with gadgets, algorithms and interfaces. I respectfully disagree. What’s happening now is not new, and it’s more interesting than garden-variety alienation. We are returning to the oldest form of human cognition – the most ancient pre-scientific way of seeing the world: animism.

General Misc. / Doki Doki Literature Club (Trigger Warning: EVERYTHING)
« on: February 23, 2020, 12:04:24 am »
Played Doki Doki Literature Club...fuck me one of those times when every Trigger Warning is rightfully applied, not for the depressed/anxious/etc...

Can't say more without spoiling, will post some more but decided this could have its own thread given the meta-textual aspects.

The Decoy Effect: How You Are Influenced to Choose Without Really Knowing It

How Decoys Work

When consumers are faced with many alternatives, they often experience choice overload – what psychologist Barry Schwartz has termed the tyranny or paradox of choice. Multiple behavioural experiments have consistently demonstrated that greater choice complexity increases anxiety and hinders decision-making.

In an attempt to reduce this anxiety, consumers tend to simplify the process by selecting only a couple of criteria (say price and quantity) to determine the best value for money.

Through manipulating these key choice attributes, a decoy steers you in a particular direction while giving you the feeling you are making a rational, informed choice.

The decoy effect is thus a form of “nudging” – defined by Richard Thaler and Cass Sunstein (the pioneers of nudge theory) as “any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options”. Not all nudging is manipulative, and some argue that even manipulative nudging can be justified if the ends are noble. It has proven useful in social marketing to encourage people to make good decisions such as using less energy, eating healthier or becoming organ donors.
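The mechanics of a decoy can be sketched in a few lines. The prices and quantities below are hypothetical, chosen only to illustrate the classic “asymmetric dominance” structure on the two criteria the article mentions, price and quantity:

```python
# Three hypothetical drink sizes, judged on just two criteria: price and quantity.
options = {
    "small": {"price": 3.00, "quantity": 250},   # quantity in ml
    "large": {"price": 7.00, "quantity": 700},
    "decoy": {"price": 7.00, "quantity": 500},   # same price as "large", less drink
}

def dominates(a, b):
    """a dominates b: no worse on either criterion, strictly better on at least one."""
    no_worse = a["price"] <= b["price"] and a["quantity"] >= b["quantity"]
    strictly = a["price"] < b["price"] or a["quantity"] > b["quantity"]
    return no_worse and strictly

# The decoy is "asymmetrically dominated": the target ("large") beats it
# outright, while the competitor ("small") does not. That asymmetry is what
# makes the target feel like the obviously rational choice.
target_beats_decoy = dominates(options["large"], options["decoy"])
competitor_beats_decoy = dominates(options["small"], options["decoy"])
```

Nobody is expected to buy the decoy; its job is to make the comparison against the target easy to win, which nudges choices toward the target without removing any option.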

Philosophy & Science / The Unicorn, the Phoenix, and the Dragon
« on: February 14, 2020, 06:53:42 am »
The Unicorn, the Phoenix, and the Dragon

The key to the magical dimension of historical cycles lies in a detail of history that Vico and Barfield both grasped firmly: the fact that human beings don’t think the same way at one stage in the historical process as they do at other stages. Barfield’s claim was that all of humanity passes through a single process of change in consciousness, starting with his hypothesized “original participation” and ending in his equally hypothetical “final participation.” Vico’s, far more troubling to the modern mind, was that each nation goes through predictable changes in consciousness, and that modern societies are repeating the same stages that can be traced in the classical world. It’s always possible to claim that Barfield is right on the largest scale, since it’s possible to claim anything at all about that without risk of disproof, but in terms of time frames that are subject to verification, the facts support Vico instead.

There are various ways to talk about “the course the nations run,” the cycle of consciousness through which each society passes over the course of its history, but I’m going to use a few of Vico’s own examples here. As mentioned earlier, the earliest law codes in any civilization are specific, concrete lists of crimes and their punishments.  The final law codes in any civilization you care to name are intricately crafted tissues of abstract reasoning. That movement from the concrete to the abstract, from the richly sensory image to the richly intellectual concept, is among the consistent features of the history of a civilization—and so is the collapse of abstraction in the final era of a civilization and its replacement by a newly concrete consciousness rooted, once again, in sensory images.

Both Vico and Barfield understood, each in his own way, this movement from concrete to abstract consciousness, and it’s one of the things that makes Barfield’s Saving the Appearances and his works on the history of language worth reading despite the Procrustean bed of linear time into which he forces his data. Take any word in modern English that has an abstract connotation—for example, the word “abstract” itself. English got that word from Latin, and in Latin its original sense is clear: ab- is a prefix meaning “from, away from, out of,” and traho is a verb meaning “to pull.” (A tractor, similarly, is something that pulls.) An abstraction is thus a set of perceptions that have been pulled out of their original setting amid the other details of everyday life and turned into a concept. Put another way—and this will be crucial for our further work—an abstraction is a model of experience, created by cherry-picking certain features of that experience and treating those as the things that matter, while dismissing every other feature as secondary or irrelevant.


We can call the first kind concrete concepts, and the second abstract concepts. There’s a continuum connecting them, created by repeated abstraction—that is to say, repeated construction of categories that moves further away from the concrete experience at its root. “Sally,” “girl,” “human,” “primate,” “mammal,” “animal,” and “life” are all descriptions of the same child playing in a sandbox; each movement further into abstraction allows something to be said about wider and wider circles of other concrete phenomena, which is what gives abstract thinking its power; at the same time, it allows less and less to be said accurately about those phenomena, which is what gives abstract thinking its vulnerability to delusion.

Now it so happens, as already pointed out, that civilizations start out thinking in concrete concepts. That’s true of their law codes and their literature, their political institutions and their practical arts, and every other dimension of their lives. In the earliest stage, the stage Vico called the barbarism of sense, those concrete concepts aren’t related to one another in any compelling way, and the result is chaos—mental chaos, but also cultural, social, and political chaos, because people who can’t assemble a meaningful world in their heads aren’t going to be able to do so in any more concrete sense either.

What puts an end to the barbarism of sense is the emergence of a pattern that reduces the cognitive chaos to order: not an abstract pattern, as the capacity for abstraction is just beginning to develop within the newborn culture, but a set of concrete mental representations charged with emotional force. The social form that gives context to this emergent pattern is a religion—one could as well say that the religion is the emergent pattern. North of the Mediterranean, for example, the representations around which a new society crystallized in the wake of Rome were the core images of Christianity. Images, not abstract concepts: what mattered in the post-Roman chaos was not abstract theology but the tremendous images of God born in a stable, wandering with his disciples in Galilee and Judea, dying a brutal death on the cross, emerging alive from the grave, and rising miraculously into the sky.

Thinking in the early stage of a civilization always centers on some such set of emotionally charged representations that bring order to the cognitive chaos of a fallen civilization. Such thinking differs in important ways from the sort of thinking that’s common nowadays, or more generally in the last centuries of any civilization. We think abstractly, analytically, sorting out our perceptions into one or another scheme of categories; people in dark ages think concretely, synthetically, relating their perceptions to one or another set of compelling images. Thus it never occurred to medieval authors to suggest that Christmas should be celebrated at the time of year when shepherds in Judea actually keep watch in the fields, as the Biblical narrative specifies.  To the medieval mind, the birth of Christ and the winter solstice, when the first slight northward movement of the sun’s apparent path in the sky announces the return of light and life to the world, belong so self-evidently to the same synthetic pattern of imagery that mere history had no power to separate them.

The transition from the numinous, emotionally charged images that surround a civilization’s cradle to the finely wrought but passionless abstractions that gather around its deathbed takes place, broadly speaking, in three stages. It so happens that very often, those three stages are assigned distinct names by historians, which makes the process easy to trace. In the modern Western world, those three stages are called the Middle Ages, the Renaissance, and the Modern Era; in the history of ancient Greece, they were the Archaic period, the Classical period, and the Hellenistic period, and so on. I propose to give them more general names, and since this is a blog about occult philosophy, I don’t propose to limit myself to the sort of dry nomenclature historians think they have to use these days.  The names I’ll use for these periods are the time of the Unicorn, the time of the Phoenix, and the time of the Dragon.

Let’s take them one at a time....

Mysterious Quantum Rule Reconstructed From Scratch

In other words, Born’s rule connects quantum theory to experiment. It is what makes quantum mechanics a scientific theory at all, able to make predictions that can be tested. “The Born rule is the crucial link between the abstract mathematical objects of quantum theory and the world of experience,” said Lluís Masanes of University College London.

The problem is that Born’s rule was not really more than a smart guess — there was no fundamental reason that led Born to propose it. “It was an intuition without a precise justification,” said Adán Cabello, a quantum theorist at the University of Seville in Spain. “But it worked.” And yet for the past 90 years and more, no one has been able to explain why.

Law Without Law

The project pursued here is one that has become popular with several researchers exploring the foundations of quantum mechanics: to see whether this seemingly exotic but rather ad hoc theory can be derived from some simple assumptions that are easier to intuit. It’s a program called quantum reconstruction. Cabello has pursued that aim too, and has suggested an explanation of the Born rule that is similar in spirit but different in detail. “I am obsessed with finding the simplest picture of the world that enforces quantum theory,” he said...

His approach starts with the challenging idea that there is in fact no underlying physical law that dictates measurement outcomes: Every outcome may take place so long as it does not violate a set of logical-consistency requirements that connect the outcome probabilities of different experiments. For example, let’s say that one experiment produces three possible outcomes (with particular probabilities), and a second independent experiment produces four possible outcomes. The combined number of possible outcomes for the two experiments is three times four, or 12 possible outcomes, which form a particular, mathematically defined set of combined possibilities.
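Both the rule itself and the combinatorial bookkeeping in that example are easy to state numerically. A minimal illustration (the amplitudes below are made up for the example, not taken from the article): Born’s rule says the probability of outcome k is the squared magnitude of amplitude k, and for two independent experiments with 3 and 4 outcomes the joint space has 3 × 4 = 12 outcomes whose probabilities multiply:

```python
import itertools

# A three-outcome state written in the measurement basis (complex amplitudes).
psi = [0.6, 0.8j, 0.0]

# Born's rule: probability of outcome k is |amplitude_k|^2.
probs = [abs(a) ** 2 for a in psi]
total = sum(probs)  # a properly normalised state gives total probability 1

# A second, independent experiment with four equally likely outcomes.
phi = [0.5, 0.5, 0.5, 0.5]

# Joint outcomes: 3 x 4 = 12, with probabilities given by products.
joint = {(i, j): abs(a) ** 2 * abs(b) ** 2
         for (i, a), (j, b) in itertools.product(enumerate(psi), enumerate(phi))}
```

The consistency requirements Cabello starts from are constraints of exactly this kind — relations the outcome probabilities of different experiments must satisfy jointly — rather than any underlying law fixing which outcome occurs.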

Philosophy & Science / No One Can Explain Why Planes Stay in the Air
« on: February 11, 2020, 08:36:01 pm »
No One Can Explain Why Planes Stay in the Air

On a strictly mathematical level, engineers know how to design planes that will stay aloft. But equations don't explain why aerodynamic lift occurs.

There are two competing theories that illuminate the forces and factors of lift. Both are incomplete explanations.

Aerodynamicists have recently tried to close the gaps in understanding. Still, no consensus exists.

In December 2003, to commemorate the 100th anniversary of the first flight of the Wright brothers, the New York Times ran a story entitled “Staying Aloft; What Does Keep Them Up There?” The point of the piece was a simple question: What keeps planes in the air? To answer it, the Times turned to John D. Anderson, Jr., curator of aerodynamics at the National Air and Space Museum and author of several textbooks in the field.

What Anderson said, however, is that there is actually no agreement on what generates the aerodynamic force known as lift. “There is no simple one-liner answer to this,” he told the Times. People give different answers to the question, some with “religious fervor.” More than 15 years after that pronouncement, there are still different accounts of what generates lift, each with its own substantial rank of zealous defenders. At this point in the history of flight, this situation is slightly puzzling. After all, the natural processes of evolution, working mindlessly, at random and without any understanding of physics, solved the mechanical problem of aerodynamic lift for soaring birds eons ago. Why should it be so hard for scientists to explain what keeps birds, and airliners, up in the air?

Philosophy & Science / An Existential Crisis in Neuroscience
« on: February 11, 2020, 08:32:23 pm »
An Existential Crisis in Neuroscience

The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.

Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Were humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.
