Clearing Things Up

I’ve spent the past few weeks considering various aspects of the techniques and concepts I’ve been writing about here, and I realized I’ve made at least one error that I consider to be pretty serious. I’ve also failed to make some crucial distinctions clear – and in doing so, I’ve done an injustice to some of my favorite topics.

Default Mode Network, I never meant to wrong you.

So I’d like to use this post to lay out – in a way that makes sense – just what I’m getting at when I mention concepts like gods in the same breath as, say, the DMN.

My biggest error was failing to clearly demarcate the boundary between science and pseudoscience in some of my earlier posts. As obvious as it may seem to point this out, science is defined by a few general characteristics:

a) It seeks to systematize observed phenomena in a testable way.
b) It makes specific falsifiable predictions (hypotheses).
c) It performs repeated experiments, producing concrete results that other scientists can replicate independently.
d) It constantly revises its hypotheses and principles in light of new data.

[SCIENCE UPDATE! As Joshua Vogelstein pointed out in the comments section, some scientific fields (such as genomic and connectomic mapping) don’t fit all these criteria – they’re oriented more toward data-gathering than toward falsifying specific hypotheses. Thus, fields like these might be called descriptive sciences. This leaves us with the question of where, exactly, the boundary between science and pseudoscience falls – and I’m no longer sure there’s a cut-and-dried answer to that.

As Vogelstein puts it, “One can fit any data with an infinite number of functions, each fitting with arbitrarily good accuracy. Thus, goodness-of-fit does not mean correct; [it] just means sufficient for that particular measure of goodness-of-fit.”
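Vogelstein’s point can be made concrete with a small sketch. Given any n observations with distinct x-values, the unique degree n−1 polynomial through them fits perfectly – even when the “data” are arbitrary values with no underlying law at all. (The points below are made-up numbers, purely for illustration.)

```python
# Lagrange interpolation: a degree n-1 polynomial passes exactly through
# any n points with distinct x-values -- regardless of what generated y.
def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree len(xs)-1 interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Six arbitrary "observations" -- values chosen with no pattern in mind
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [3.1, -1.4, 0.7, 2.2, -0.5, 1.9]

# The fit reproduces every observation essentially exactly:
# goodness-of-fit is perfect, yet the polynomial reveals no underlying law.
residual = max(abs(lagrange_eval(xs, ys, x) - y) for x, y in zip(xs, ys))
print(residual)  # ~0 (floating-point noise only)
```

A perfect residual here tells us nothing about whether the fitted curve captures any real structure – which is exactly why goodness-of-fit alone can’t separate a correct model from mere curve-fitting.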

It seems to me that one distinguishing feature of science – as opposed to pseudoscience – is that scientists are (ideally) open to revising any of their premises in view of new data, and that they work to make sure their measures of goodness-of-fit are applicable to the topics being investigated. For instance, studying astrology might be highly useful for understanding the social interactions of people who believe in astrology – but it won’t tell you much of anything about astrophysics.

At any rate, the concept of “science” is itself a human conception, and subject to interpretation. I fully admit that this is all getting rather Wittgensteinian, and I’m still working it out myself… END OF SCIENCE UPDATE!]

Now, in the real world, the process of scientific inquiry is often beset by hierarchical jockeying and personal bias. But science as an idea represents a distinct way of looking at reality – a process that treats organized rational analysis of empirical facts as more reliable than intuition, tradition, or belief. So it’s ironic that it’s my passion for logic and empiricism that’s driven me to analyze all of human experience – and even human consciousness itself – within a rational framework.

Which brings me to my equally ardent (but very different) love for mysteries, rituals, gods, memes, self-programming, and a cornucopia of other fascinating topics that are emphatically not rational or scientific in nature. What I mean is that science can only study the behavior of humans who believe in these concepts – it’s not possible to precisely detect and define a meme or a god in repeatable lab experiments. And yet, no matter how much I adore the logic of a well-performed experiment or a beautiful equation, I can’t deny the parts of my personality that want to dive into a novel by China Miéville or Arthur Machen, and enjoy mysteries just for being so mysterious – because I like them better that way.

What I want to try to do here is explain what happened when my love for reason collided with my love for mystery.

The idea that imaginary concepts can produce physiological effects – sometimes as potent as those induced by stimuli from the external world – has fascinated me for years. If imaginary monsters in closets can make us afraid of real closets, why can’t imaginary friends – or consciously constructed self-images – make us more confident in ourselves?

[Image: fMRI activation patterns associated with empathy, with the IFG labeled.]

When you or I sit down to watch a movie or read a novel, we’re cognitively aware that the characters aren’t real people, and we certainly know they won’t corporeally inhabit the same physical time and space as we do. Still, our brains respond to fictional characters in much the same way as they do to real people. So here we have something intriguing: an intersection between measurable (if not yet well-understood) phenomena (fMRI data) and completely imaginary characters.

Well, right here is where I think I’ve “leaped” in previous posts, when I should have taken time to clearly lay out what this intersection between the imaginary and the real actually means. For starters, it’s crucial to a scientific approach that I distinguish between 1) external matter – or light, or electromagnetic waves, or anything else that can be measured objectively – and 2) subjective internal experiences, which – while they may have neurological correlates in the physical world – exist in the consciousness of the person who experiences them.*

Some of our subjective experiences correspond closely to external phenomena, and enable us to interact with the physical universe around us; and current neuroscience has unraveled quite a bit about how we see, hear, and otherwise sense the external world. But then there are concepts our minds synthesize from various external experiences, without any direct material correlates at all – things such as unicorns, superheroes, the Middle Ages, and hedonism. None of those is a specific collection of matter and energy that you can point to in the physical world. Nevertheless, concepts dreamed up by humans can produce measurable effects on the physical world, through the human nervous system.

But this intersection between the physical and the abstract implies even more – it implies that if we can teach ourselves to consciously choose what we’re going to believe about which concepts, we can not only hack our own connectomes, but also modify our body’s interactions with the physical world. Just as we “suspend disbelief” in a movie, and feel genuine fear or happiness for the characters, it’s not hard to learn to “suspend disbelief” – or “suspend belief” – in just about anything. That’s (obviously) not to say you can learn to walk through walls – physical reality has its own set of laws. But it is possible to reshape your own thought patterns and behavioral responses.

I’m also not trying to say that all belief is worthwhile – or that any belief can fit into a rational world-view. The difference, I think, is whether you’re in control of your beliefs, or your beliefs are controlling you. It’s not so much what the beliefs are about that matters – it’s how aware you are of their correct context within a world-view that’s logically consistent with all observed phenomena.

Beliefs become dangerous when they grow to the point that we’re no longer able to analyze and question them rationally – when we can no longer properly contextualize them in a rational way. And that’s equally true whether those are beliefs about my own self-worth or beliefs about a vengeful deity.

For example, I know that Frodo Baggins never really lived on Earth, but I can still choose to be inspired by his actions. On the other hand, I can’t know beforehand the results of, say, a date I’m about to go on, or a meeting I’m about to walk into – but I can choose to believe in a positive outcome, which will keep me focused on behaviors I think are likely to produce one.

But over time, it becomes easier for our minds to reinforce existing beliefs, and ignore evidence to the contrary – which is why it’s crucial to keep tabs on your beliefs. Some of your beliefs actually correspond to reliable data, while you might choose to suspend disbelief in non-empirical ideas for the sake of accomplishing a goal. The point is, some of your beliefs are useful in terms of your current reality and goals, while others aren’t – and the most useful beliefs are not always the most rational ones. Sometimes, choosing to hold a non-rational belief can be the most rational choice.

ProTip: Choose to believe in a deity that doesn't demand regular human sacrifices.

What I’m trying to say is, beliefs seem to be multi-layered mental tools: we can believe in a character’s feelings, while also keeping on hand the belief that the character is fictional. We can believe in a positive outcome while remaining aware that we don’t know empirically whether it’s going to be true. We can rejoice in the mysteriousness of mysteries, while understanding the line between fiction and reality. And we can choose to believe, on weekday afternoons only (except Tuesdays), that certain gods or ancestors are watching out for us – so long as we keep a clear sense of what we actually mean when we say that.

We may be the only animals on this planet that can consciously choose to alter their perceptions and expectations of physical reality, and to imagine beings and spaces apart from the physical reality we can test and study. Those ideas can motivate us to heights of artistic or athletic achievement, or launch us into depths of childish behavior (or worse).

But at the very least, isn’t this a toolkit whose potential benefits are worth investigating further?

I’ll leave you with a quote:

Tales of strange creatures, occurrences, and sightings persist from the first days of man until today. To discard them out of hand is foolishness, as is to believe blindly.


* Somewhere in between these two realms, we have what might be called “data space” – the information encoded on computer hard drives and in books, which is represented by physical matter and energy, yet can produce meanings within the consciousness of a reader or viewer who perceives its output.

9 thoughts on “Clearing Things Up”

  1. various connectome projects: human connectome project, open connectome project, whole mouse brain, etc., don’t seem to meet your criteria for science. do you not consider exploratory analysis of brains and/or other things science?

  2. You raise an interesting point, and I’ll do my best to explain my perspective on it.

    The distinction I was mainly trying to draw was between a scientific mindset, which openly invites independent testing to validate or disprove its claims, and a pseudoscientific mindset, which uses sciencey-sounding words to supposedly provide “backup” or “proof” for a pre-formed and/or non-falsifiable system of thought.

    I don’t mean to be dense, but I’m not sure what about those projects doesn’t meet the criteria I listed. They seek to systematize observed phenomena; their methods are based on hypotheses that can be validated or falsified through independent testing; they test their data through repeated experimentation, with the goal of (eventually) generating independently replicable datasets; and they seem to be open to revising their hypotheses in light of new data. I’ll grant you, though, that the field of connectomics as a whole might still be considered to be in its developmental stages…one might say it’s been crossing the border from philosophy into hard science over the past five years or so.

    I hope that makes sense. If I’m still missing something, I’d be very curious to hear your perspective in more detail.

    1. one might argue that all the “ome” fields are fundamentally *not* hypothesis driven, rather, they are “observational” or “descriptive” science. true, one could posit that they are operating on the hypothesis that “the ome will have many useful applications,” but i believe that is not generally thought of as a hypothesis, because it makes the category of hypothesis testing too broad (eg, all of math could be stated in those terms). moreover, i doubt anybody will ever test that hypothesis. so, they are not hypothesis testing in the classical sense. the publications do not necessarily posit hypotheses, rather, they mostly present data. often, getting funding is difficult for them, because they are not doing standard hypothesis testing, and popper seems to have confused people (imho) that the only science worth investigating is “falsifiable”. i think our world, science funding, science publication, etc., would all benefit by explicitly broadening our view of what counts as good science, to include efforts to generate data that many scientists want, even in the absence of specific hypotheses to test (note that i recognize that the data will facilitate many hypotheses being tested, but generating the data itself is not that). is that more clear? curious if that changes your perspective at all?

  3. Thanks for explaining that in more detail, Joshua. I understand much better now what you mean – the “ome”-mapping projects don’t test specific hypotheses, but rather seek to generate data that can be used to test hypotheses. I agree that in spite of this, the “ome” projects are scientific in nature.

    So, after reading your comment, I realized that several problems do exist with my definition – my criteria also don’t make room for what might be called the “practical” sciences (such as physical therapy and surgery), which focus more on applying gathered data than on testing hypotheses. And I agree that what you describe as “descriptive” science should fall within any definition of science.

    So it seems that what unites all scientific enterprises is an underlying drive to update and revise practices, T/F designations, working hypotheses, etc. in light of new data. Although scientists try to be as accurate as possible in their day-to-day work, a more significant goal than just being (or seeming to be) right is constantly working toward being “less wrong” overall – to actively seek out and acknowledge any shortcomings in our methodologies or datasets, and try to be truer to *all* known data tomorrow than we were yesterday… if that makes sense.

    This is by no means a complete definition of what science is – in fact, in view of your points above, I’m no longer sure that such a single hard-and-fast definition is possible. But that distinction does seem to be a helpful barometer for differentiating between, say, astronomy and astrology.

    I’m interested to hear if that view seems more reasonable to you. If so, I’ll add a revision to my post.

  4. my view, based on my understanding of wittgenstein, is that we cannot have hard-and-fast definitions of anything, including the word “science”. moreover, i take a more pragmatic view: the pursuit of descriptions of the world that are better with regard to some particular task, eg, predicting the outcome of future experiments, or suggesting future experiments. i would be surprised if one could find a set of necessary and sufficient conditions to distinguish astronomy and astrology; i find astronomy’s predictions of the universe to be more accurate, but others might have other experiences.

    ps – sorry for the delay, somehow i didn’t get an email of your comment.

  5. “It seems to me that one distinguishing feature of science – as opposed to pseudoscience – is that scientists are (ideally) open to revising any of their premises in view of new data, and that they work to make sure their measures of goodness-of-fit are applicable to the topics being investigated.”

    well, the dalai lama adapts from science, does that make tibetan buddhism science or religion?

    discrete boundaries are harder to draw, imho, than relative rankings. i’d encourage searching for criteria of fitness for a particular task – to decide to what extent one is going to trust an explanation or prediction – over drawing boundaries. continuous dimensions have infinite cardinalities, humanly binned spaces will be finite. binning (ie, drawing lines) can help for bias-variance trade-off purposes, and i think it advisable to consider what bias we might be introducing.

    1. “well, the dalai lama adapts from science, does that make tibetan buddhism science or religion?”

      In my personal view, certain schools of Buddhism are more philosophical than they are scientific or religious. I admit I’m pretty ignorant of the Dalai Lama’s philosophy – but Theravada Buddhism (for instance) actively encourages its students to doubt and field-test its principles, and to value real-world observations over received dogma. While I wouldn’t claim that this is “hard” science by any means (or representative of Buddhism in general), it seems to hold more in common with philosophy than with religion.

      You make an excellent point, though, that attempting to “bin” many of these methodologies only results in further confusion. I did my best to address your statement about “criteria for fitness” (or, as I tend to call them, “ranges of relevancy”) in my update in the post; I certainly agree, and I hope I was clear about that.

      I also agree that drawing lines (verbal or mental) is likely to introduce bias. Interestingly enough, a related idea is fundamental to many schools of Buddhist thought – that by giving anything a name (i.e., a “bin”), we cordon off the “that” from the “not-that,” and may thus exclude data that could be helpful.

      I suppose my response wandered a bit, but your points are well taken – “continuous dimensions have infinite cardinalities, humanly binned spaces will be finite.” I think we’d all do well to apply that principle more in our day-to-day work.
