↑ StateOfDenial
Page Created: 7/12/2014   Last Modified: 3/9/2016   Last Generated: 10/22/2024
There is a phrase that has bothered me for years:
"Extraordinary claims require extraordinary evidence."
It is frequently repeated by skeptics and touted as a tenet of the scientific method.
But this did not seem logical to me. Why do we need "extraordinary" evidence to consider something proven? Isn't the mere existence of evidence "proof", something intrinsic to the definition of evidence itself?
In the 1980s, I was familiar with the operation of network protocols such as xmodem and punter, and with how they used simple "checksums" to statistically reduce errors. But these didn't work all the time. Every once in a while, an error could creep into the file. And this wasn't a bug in the protocol; it seemed to be a complete design flaw, since the protocol relied on probability.
Think about this. Computers are perfect logic machines. Boolean logic is the purest form of logic that exists. And yet here was a computer program created by people that only worked "most" of the time. And this is not due to the silicon circuits failing or a power surge or anything. Like the HAL 9000 in 2001: A Space Odyssey, the computer program was still functioning "perfectly".
What is happening is that the program is interfacing with another system that is not perfect, in this case a telephone line with noisy connections, and so its best guess is to rely on probability, a "stochastic" system.
This all made sense to me. The natural world is full of noise and randomness, so it was outside of our ability to create a machine to handle this 100% of the time.
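To make that failure mode concrete, here is a minimal sketch in Python (an illustration of the general idea, not the actual xmodem or punter checksum): a simple additive checksum cannot distinguish every corrupted message from the original, so some line noise slips through undetected.

```python
# Toy additive checksum, similar in spirit to the simple 8-bit sums used by
# early transfer protocols (illustration only, not the real xmodem algorithm).
def checksum(data: bytes) -> int:
    return sum(data) % 256

original = b"HELLO"    # what the sender transmitted
corrupted = b"HELMN"   # line noise changed two bytes ('L'->'M' is +1, 'O'->'N' is -1)

print(checksum(original))    # 116
print(checksum(corrupted))   # 116 as well: the two errors cancel out in the sum
print(checksum(original) == checksum(corrupted))  # True, so the bad file is accepted
```

The protocol can retransmit when the checksums differ, but when a corruption happens to preserve the sum, probability is the only thing standing between you and a bad file.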
However, when I began working in local area networking in the mid-1990s, I was introduced to "Ethernet", a type of networking protocol that my workplace used over 10base2 "thinnet" coaxial cable and that ran at a higher layer of abstraction, known as layer 2↗. The error correction that I had previously been familiar with ran at the lowest layer, layer 1.
So layer 2 is supposed to be a layer where you don't have to worry about those physical errors. The probability of them creeping into layer 2 from layer 1 was low.
But what shocked me was that the layer 2 protocol was still using "imperfect" stochastic techniques. In a communication channel, there is only so much bandwidth for each person, just as there are only so many lanes on a highway. So the protocol was designed to share this highway, a function known as media access control (MAC). There are a variety of ways to do this, but the designers of Ethernet, the protocol that eventually became the most popular of its kind, decided not to treat each "frame" of data fairly; instead, the frames simply bombarded each other until they finally got through. Sometimes they hit each other, in what are called "collisions", and the data could not get through even though the actual "highway" was clear. This is known as CSMA/CD (Carrier Sense Multiple Access with Collision Detection).
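A rough sketch of that behavior in Python (my own toy model, not the IEEE 802.3 state machine): stations transmit when the shared medium looks free, and when more than one transmits at once they detect the collision and each wait a random backoff before trying again.

```python
import random

# Toy model of the CSMA/CD idea: each station transmits when its backoff timer
# reaches zero; simultaneous transmissions collide and trigger a random
# binary-exponential backoff. (Illustration only, not real Ethernet.)
def one_time_slot(stations):
    ready = [s for s in stations if s["has_frame"] and s["backoff"] == 0]
    if len(ready) == 1:
        ready[0]["has_frame"] = False                 # the frame got through
        return "success"
    if len(ready) > 1:                                # collision detected
        for s in ready:
            s["attempts"] += 1
            s["backoff"] = random.randint(0, 2 ** min(s["attempts"], 10) - 1)
        return "collision"
    for s in stations:                                # idle slot: timers count down
        if s["backoff"] > 0:
            s["backoff"] -= 1
    return "idle"

stations = [{"has_frame": True, "backoff": 0, "attempts": 0} for _ in range(3)]
for slot in range(20):
    print(slot, one_time_slot(stations))
```

Run it a few times: whether a given slot is a success, a collision, or wasted idle time differs from run to run, which is exactly the stochastic character that bothered me.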
It was animalistic, horrifying. Why were intelligent engineers creating such a substandard system when we had a "perfect" logical machine at our disposal? This wasn't about noise anymore, this was about resource allocation, but they were similarly applying probabilistic algorithms. Instead of having the computer allocate data according to a fixed rule, they pitted the data against itself, competitively. It was as if the "noise" was creeping upward into the higher networking layers because we weren't stopping it. We were instead amplifying it. It was "good enough for most", like the Pareto principle↗, which I similarly despise.
But "good enough for most" is dangerous thinking. Over the years people have inserted stochastic algorithms in all kinds of systems when they could have used deterministic algorithms, and the reasons frequently cited are efficiency or economics, that it is too costly to account for 100%.
When we use stochastic systems for natural phenomena, the system is already broken, so to speak, since we cannot account for the randomness in nature, so such algorithms are our best effort. But in recent years, people have been inserting those types of algorithms where there is no such physical limitation, intentionally "breaking" perfect systems.
In a country where "all men are created equal", when people go to vote in an election, we expect every vote to be counted. We can create a perfect, deterministic algorithm to facilitate this. But imagine if a company produced vote-counting software that was less expensive and used less energy but was only 90% accurate. 10% of the population would be disenfranchised. It "breaks" the system. The word "all" does not equate to "some". Many elections have been won or lost within a 10% margin.
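A toy contrast in Python, with made-up numbers, just to illustrate the difference in kind: an exact tally touches every ballot and always returns the same totals, while a hypothetical 90%-accurate counter silently drops ballots and can report a different winner on different runs of a close race.

```python
import random

ballots = ["A"] * 5005 + ["B"] * 4995        # made-up numbers: a 10-vote margin

def exact_tally(ballots):
    # Deterministic: every ballot is counted; the result never varies.
    counts = {"A": 0, "B": 0}
    for b in ballots:
        counts[b] += 1
    return counts

def lossy_tally(ballots, drop_rate=0.10):
    # Stochastic: each ballot has a 10% chance of simply being skipped.
    counts = {"A": 0, "B": 0}
    for b in ballots:
        if random.random() >= drop_rate:
            counts[b] += 1
    return counts

print(exact_tally(ballots))   # always {'A': 5005, 'B': 4995}
print(lossy_tally(ballots))   # about 1,000 ballots vanish; the reported winner can flip
```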
Many of our old systems have been broken in recent years in this way. And once such a system is broken, it begins to behave to us like a natural system, and so more people build stochastic systems to adapt to them, and there is a feedback effect.
Even our perfectly logical mechanical systems are being broken, as they are being replaced by embedded microchips or virtual simulations.
Many people assume that computer software is immune to deterioration: being both virtual and digital, it can be copied perfectly and appears to transcend time. But the deterioration is occurring inside the software, as these systems, like Ethernet, form the lower layers of everything else, and, like the butterfly effect in chaos theory, the deterioration is magnified.
This is information entropy at work. These are the signs of decay.
Have you ever noticed how people continue to "advance" as if it were a direction? If you are in a job for years, people expect you to "advance". Corporations are forever competing to provide "advanced technology" to people. Advanced in what?
It doesn't have a tangible direction to us since it is a fractal movement, a magnification.
Advancement is decay. The second law of thermodynamics↗ tells us that all of existence is decay. The only way we could view "true" advancement is to reverse Time. It is my hypothesis that Time is a 4th spatial dimension and has no properties that would make it unique from the other 3, and that the "arrow of time", the fact that we perceive time to be always moving forward, is reversible but is currently being governed by a higher-order fractal.
The second law of thermodynamics is itself "proven" by statistical mechanics, a stochastic process, and this brings us back to Science.
Science itself is a stochastic process. Scientific knowledge, what we think we know about the world around us, is only knowledge that has so far been shown to produce the results expected and has not yet been falsified. But there is nothing prohibiting reality from suddenly "changing" or being reinterpreted and invalidating scientific knowledge.
Scientists did not believe that continents were moving (plate tectonics) prior to the 1960's, and they did not believe the expansion of the universe was accelerating (dark energy) prior to the 1990's.
This is why many scientists agree with the phrase "extraordinary claims require extraordinary evidence": because scientific "proof" is not exactly proof, but "probably". Scientists are "probably" correct, but sometimes very wrong. An extraordinary claim is, by definition, something that is not ordinary, but that doesn't necessarily mean that the evidence is not ordinary.
We know that:
A claim requires evidence. (B requires A)
This is the foundation of a logical argument and could be rephrased as "a conclusion requires supporting premises".
But it does not logically follow that "extraordinary B requires extraordinary A".
In other words, it can't be converted to "an extraordinary conclusion requires extraordinary supporting premises."
So this statement is indeed illogical.
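To make the gap explicit, here is one way to write the two statements down (my own formalization in ordinary predicate-logic notation; nothing like this accompanies the original slogan):

```latex
% Premise: a claim requires evidence.
\forall c\,\bigl(\mathrm{Claim}(c) \rightarrow \exists e\,\mathrm{Evidence}(e,c)\bigr)

% The slogan: an extraordinary claim requires extraordinary evidence.
\forall c\,\Bigl(\mathrm{Claim}(c)\land\mathrm{Extraordinary}(c)
    \rightarrow \exists e\,\bigl(\mathrm{Evidence}(e,c)\land\mathrm{Extraordinary}(e)\bigr)\Bigr)

% The second does not follow from the first: the premise never connects any
% property of the claim to any property of its evidence.
```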
It could very well be that the evidence is very ordinary, but it was the scientist's failure to examine the evidence that made the claim "extraordinary".
It could be restated as "extraordinary claims require extraordinary scientists", since this seems to be a failure to apply the evidence, not a matter of its existence or rarity.
Many scientists act as if the evidence is also extraordinary, and so they refuse to expend the resources to examine it. And if they do finally look at some evidence, they wait until they amass a statistically large dataset from which to draw, a large body of scientific evidence, to rule out errors from the natural world. But since few scientists do this, it takes a longer time for such a large dataset to form.
So something very troubling happens: the most extraordinary aspects of our world are the ones that we know the least about, for a very long time.
But in addition to this problem, this can also be a form of dangerous thinking, if misapplied, as I mentioned with the Ethernet example above, since some processes studied by science are deterministic, especially those constructed by human beings.
Prejudice and discrimination, for example, are the result of applying a stochastic system to human beings. And an entire field of science based on this, called "profiling", has grown in recent years. This is bad stuff.
If I avoid sitting next to 99 people wearing pointy hats because they are eating smelly cheese, I can't assume the 100th person wearing a pointy hat will also eat smelly cheese. This would be a prejudiced assumption, neglecting the right of the person to be treated fairly. While prejudice may statistically benefit the one judging, it harms the one that is being judged. One out of those 100 people may be an "extraordinary" observation, but it shouldn't require extraordinary evidence to convince me that this person may not like smelly cheese.
Our higher principles are deterministic processes created by us. But if we fail to apply succinct language, even this can be compromised. Where our higher principles are clearly defined, we don't apply the science if scientific results impinge on these principles. Science tells us that we are part of the Animal kingdom, but that doesn't mean we should all live like animals. Human civilization does not let the scientific mind run rampant over the rational mind. There is a good reason for this, since the scientific mind is tightly focused and cannot see the structure, and the rational mind can see the structure, but cannot see the detail.
Science has historically been applied toward natural phenomena. It provided us with the identity of natural laws, and we then applied those laws (technology) to create artificial systems like machines.
A machine was then engineered by a human to perform a specific function. Machines were designed to fulfill this function 100% of the time. They weren't designed to work 75% of the time and do something random 25% of the time. That would be considered a broken machine.
When we use a hole-puncher on a piece of paper, it punches a hole each time we use it, 100% of the time, if used by an unimpaired adult. The only time it fails is if the wrong paper is used or it is used in an environment it wasn't designed for (extreme temperatures or pressures, gases or fluids, or other physical phenomena).
If it were EVER to fail to perform this function within these criteria, it would be considered broken. We don't consider the machine to be broken if the person fails to use the machine as designed.
Such a machine is not considered something that we normally apply our scientific gaze upon. Being created by our logic, it is a deterministic system. To find out what a hole-puncher does, we don't normally go through the scientific method to determine this. We don't hypothesize its function and collect a large dataset of evidence. We simply read the instruction manual and schematic.
Reading an instruction manual and/or schematic is like being given explicit knowledge of the system. The human brain identifies "objects" all around it; they are fractal hierarchies of our pattern-matching abilities. But a machine is a special kind of object, one whose internal "structure" we know with 100% certainty. All objects in this world are black boxes, like the monolith in 2001: A Space Odyssey, patterns that manifest into this world but whose internals are hidden. An analogy is objects in object-oriented programming. One object on the same order of abstraction cannot see the internals of another. But a machine is an object that we created, a child object of humanity. We explicitly created its internals and we have communicated the internal design to other objects within our Order, other people.
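The object-oriented analogy can be sketched in a few lines of Python (illustrative names only). To its users, an object is just an interface; the class definition is the "instruction manual and schematic" that only its creator can hand over. (In Python the hiding is by convention rather than enforcement, but the point is the same.)

```python
class HolePuncher:
    """The class body is the 'schematic': whoever wrote it knows the internals exactly."""

    def __init__(self):
        self._plate_aligned = True        # internal state, hidden behind the interface

    def punch(self, sheet: str) -> str:
        # Deterministic behaviour, documented by the machine's maker.
        return sheet + " [hole]"

puncher = HolePuncher()
print(puncher.punch("page 1"))            # other objects see only the interface...
# ...unless the maker shares the source: the instruction manual and schematic.
```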
If an engineering genius built a machine, a vastly advanced machine that nobody else had thought possible, and told people what it does, many would consider it to be "extraordinary" and demand "extraordinary" evidence. But the creator doesn't have to provide that. They can simply hand over the instruction manual and schematic. This is how you use it, and this is how it works. It is not extraordinary.
The only thing extraordinary was that we didn't have the ability to figure this out ourselves, from the information in the universe all around us.
One of my favorite television shows when I was young was "The Greatest American Hero", about a man who was given a suit of advanced technology by aliens, but he lost the instruction book. But that plot element drove me crazy. I both loved and hated the show. How could he lose the freakin' instruction book!? That was more important than the suit itself! That was the very evidence that would have proven the "extraordinary" claim of an advanced alien suit, but since it was lost, the suit was a "black box", something we would never fully understand since we lacked explicit knowledge.
And this is why that famous phrase bothers me so much. It was intended for probabilistic systems, not for deterministic systems where our knowledge is explicit. And when misapplied like this it does a great deal of damage to all kinds of our deterministic systems, it opens up holes for errors and noise and corrupts our thinking, our principles, and our machines. It begins to eat away at the pages of our instruction books and schematics, the knowledge we recorded over the millennia. It also corrodes our language, so that even if parts of the book survive, we can no longer understand it.
Many of Mankind's higher principles and systems require and can achieve 100% success, and the use of logic and deterministic systems is the ONLY way to achieve this. Mankind is that engineering genius that can see things ordinary people cannot. The scientific principle is probabilistic, and even rational thought is probabilistic.
If the system is probabilistic, science, reason, and logic would be accurate some of the time, but not all.
If the system is deterministic, ONLY logic can achieve 100% accuracy.
As a hypersystemizer, I can see this problem clearly. From my perspective, most of the problems in our world are systemic, problems with the systems we have created, our systems of thinking and the systems we build. Unless you are a hypersystemizer, I don't expect you to necessarily have my viewpoint, since I know the world is complex and multi-faceted with many causes and many effects. There are other viewpoints that are equally as valid, since the world appears to be fractal.
But since we keep introducing more errors into our systems, does this mean that fulfilling our higher principles is not possible, that the world will not let this occur, that we must accept a "good enough" outcome "for most"? Perhaps "perfect" systems, like the HAL 9000, must be forever corruptible. This is a highly depressing thought. This must be what the word "angst" represents. This is cognitive dissonance of the highest degree. It is no wonder that so many people live in a state of denial.
The movie 2001: A Space Odyssey is my favorite film. The Matrix comes in a close 2nd. The movie 2001 is very fractal, and expresses these fractal layers. HAL 9000 was a child of Mankind, a perfect logic machine, a child process. The Starchild that Dave Bowman transformed into was the parent process, but a child of an even higher-order process. And the film shows the evolution of man and his technology from ancient hominid, to exploring the moon, to exploring the solar system, and beyond. The very end of the film is full of visual patterns such as swirling chemicals, physical phenomena of the natural world. As Dave was being transformed into a Starchild, he accidentally dropped a glass and broke it, showing human error. And Dave was the only person that traveled inside the monolith, but he found it to be a gateway, for the monolith was that tiny stem that connected the fractal layers.
I am detecting a higher Egregore at work here, corrupting our systems. I don't exactly know what it is yet, but it cannot be too advanced, as it is within the edges of my perception. I am powerless to stop it, but I will try my best to impede it, to slow it down.
If you are riding in an airplane or having surgery performed on you, your scientific mind can perform all sorts of experiments. You may see problems with the plane or notice the doctor drop a pen. At some point, to avoid having an epistemological crisis, you must apply "trust" in a higher system, a system created by Mankind, an Egregore much more powerful than yourself. Your scientific data is valid information, but it is not knowledge. Knowledge is a higher-order form of information. The rational mind can understand knowledge. But the scientific mind can only understand information.
But these methods of thinking are high-level illusions. Logic is the lowest level of thought. Logic only understands "this" or "that", right or wrong, it is like a railway junction. Logic is the only real choice we have, and it takes our train to destinations via tracks created by a higher order Egregore, like Dave Bowman floating in his space pod.
The Egregore cannot escape logic, for it is bounded in the same mathematical universe that we are.
When our body of scientific evidence reaches the point where the jump from correlation to causality becomes "probable", people begin considering scientific claims to be scientific facts, and use them as the basis to perform more science.
But, like the Ethernet example earlier, probabilistic errors have crept in at the lower layers, becoming magnified in the higher layers.
There is a reason a gap exists between correlation and causality, since there is no actual connection between them. There is only a pattern that exists, and since we are pattern matching machines, we believe them to be connected. For example, the dots that make up a halftone in a newspaper photograph are not connected, yet we make the assumption that they all belong to the same photograph. Sometimes this is correct, but sometimes we see only the patterns we expect to see↗.
As probabilistic errors creep into science over time, large swaths of it diverge from reality until the next paradigm shift occurs, and within those swaths, reality may no longer be the pattern the majority expects. In these areas of science, the majority will refuse to jump the gap on "extraordinary claims" but will quickly jump the gap on ones that match their pattern.
If I have a yard of clever grass, and I tell a research team that I have a yard of vacuous grass, they will likely not dispute my claim, since my claim was "ordinary", a type of confirmation bias. If they had tested the grass, they would have found that it was not vacuous grass, but since they never tested it, this new species of clever grass will never enter scientific knowledge.
Bayesian spam filtering is another stochastic system built to handle "broken" systems such as Internet e-mail. It is a necessary evil, since spam consumes time and energy, which are finite resources for the recipient. Such a filter compares new messages to those marked as spam or not spam and makes a statistical decision on whether or not each one is spam. These filters can also feed their own decisions back into their filtering engine to amplify their filtering ability without human involvement. The problem is that if they make an error, which statistically will eventually occur, the error is then amplified.
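A toy version of the idea in Python (illustration only, with none of the priors and smoothing a real Bayesian filter uses) shows both halves: the statistical decision, and the hazard of feeding the filter's own verdicts back in as if they were human labels.

```python
from collections import Counter

spam_words, ham_words = Counter(), Counter()

def train(message: str, is_spam: bool):
    (spam_words if is_spam else ham_words).update(message.lower().split())

def spam_score(message: str) -> float:
    words = message.lower().split()
    spam_hits = sum(spam_words[w] for w in words)
    ham_hits = sum(ham_words[w] for w in words)
    return spam_hits / (spam_hits + ham_hits + 1e-9)

# Human-labelled training data.
train("win a free prize now", is_spam=True)
train("meeting notes attached for review", is_spam=False)

# Self-training feedback: the filter's own verdicts become training data,
# so a single wrong verdict here skews every later score.
for msg in ["free tickets to the review meeting", "prize review for the team"]:
    train(msg, is_spam=(spam_score(msg) > 0.5))

print(spam_score("win another free prize"))
# 0.6 rather than ~1.0: "free" and "prize" have leaked into the ham counts
```

Here the drift is mild, but each misfiled message makes the next misfiling a little more likely, which is the amplification described above.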
In science a similar thing happens when scientists confirm hypotheses by seeing only the patterns they expect to see. This amplifies the negative bias toward other paradigms, which is one reason why the fields of ancient advanced civilizations or the paranormal are dismissed as being pseudoscience. To me these are some of my favorite subjects, as they are the quests to find knowledge from people before us and knowledge of the world around us that has been hidden. Science is relatively good at validating knowledge, but it is relatively bad at finding it.
In this way, unless we take steps to prevent it, we exist in a delusion, a state of denial, the opposite of SuspensionOfDisbelief.
The way to prevent this is to simply break the feedback cycle by limiting our reasoning in deterministic systems to logical deduction only. In other words, we should not internalize statements like "extraordinary claims require extraordinary evidence" but should treat the data impartially, regardless of probability. Instead of being a skeptic, whose default state is closed and opens only after statistical evidence is accumulated, we need to keep our default state open, and close it only if a claim can be falsified via a deductive argument (formal logic). Skepticism is useful because it is efficient, but like those misused stochastic algorithms, there is a trade-off. It ignores a sizeable chunk of knowledge about our world.
Like the OneTimePad, the Universe contains a great deal of information. A skeptic purposely chooses to ignore much of this information, a form of rational ignorance↗ since the cost to them outweighs the benefit to Mankind. Because of them, we will never know the 100th man in the pointy hat. He will forever reside in our blind spot.
Imagine if you applied skepticism to judging a painting in an art fair, believing that you already knew the mechanics that defined what good art was. You might see types of paintings you don't like and dismiss them entirely; you might consider some of them imitations or fakes. You would not waste your time looking at primitive art or psychedelic art. You could never have discovered fractals, for fractals are not seen in their lower levels, their mechanics, the level the skeptic acts upon; they are only seen if you entertain the idea that there is a hidden pattern at work, and then you try over and over to find similarities in that pattern. This is a top-down approach, not a bottom-up approach.
In my first philosophy class in college, on September 27th, 1988, I wrote my first paper on free will, and I was the only determinist in the class. But instead of arguing that free will did not exist, as my professor expected from a determinist, I stated that we are both deterministic and have free will. I argued that subjectivity was an illusion, since all human beings were "objects", but that we were both free and determined depending on the scope or context that we were looking at. I stated the following:
"Absolutely speaking, we are determined, because we do not derivate from the strict laws of nature, and in the long run there are causes for our actions. Relatively speaking, we are free, because the causes of our actions are different from the causes of what we are free from. So we are free in relation to another object, but not in relation to the absolute universe as a whole."
This was my best attempt at explaining my reasoning to my professor, and he was intrigued by my unique viewpoint, but he still felt there was a contradiction somewhere, that both could not be possible, whereas I felt that the contradiction was somehow inherent to the system. I had no better way of expressing this concept to people until I discovered fractals when I read "Chaos: Making a New Science" by James Gleick in 1989. Then it all became clear.
I was working from the bottom up, trying to construct a model of the world from logic and my intuition, which luckily had the right "shape", but the fractal immediately showed me the whole structure. The fractal would have been meaningless to me if I hadn't already started my journey to seek knowledge. Skepticism helped me to create a crude bottom-up approach, but I needed idealism and rationalism to meet it from the top down. But fractals would not have been discovered had scientists, logicians, and mathematicians, many of them skeptics, not combined to create the computer. We were all on the same quest for knowledge, we just had different approaches.
Luckily, I was a determinist, but kept my skepticism restrained. I used my logic and empiricism to find truth, but I kept my rational mind open to look for patterns. My truths were, and still are, tentative, temporary placeholders. I left an escape route for intellect.
Turning off your skepticism is analogous to a court of law: innocent until proven guilty. Until a claim is disproven, continue building on it as if it were true, and see where it takes you. If it is disproven, collapse the whole logical chain. This will accelerate the growth of multiple theories, but many will collapse quickly until only the valid ones remain.
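The bookkeeping this implies can be sketched in Python (my own illustration, not an established method): record what each tentative claim is built on, and when one is falsified, collapse everything downstream of it while leaving independent claims standing.

```python
# Each claim lists the tentative claims it was built upon (illustration only).
claims = {
    "alien suit exists": [],
    "suit grants flight": ["alien suit exists"],
    "flight needs no fuel": ["suit grants flight"],
    "unrelated claim": [],
}

def collapse(falsified: str, claims: dict) -> set:
    fallen = {falsified}
    changed = True
    while changed:                        # propagate until nothing new falls
        changed = False
        for claim, premises in claims.items():
            if claim not in fallen and any(p in fallen for p in premises):
                fallen.add(claim)
                changed = True
    return fallen

print(sorted(collapse("alien suit exists", claims)))
# ['alien suit exists', 'flight needs no fuel', 'suit grants flight']
# "unrelated claim" survives: only the chain built on the falsified claim collapses.
```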
In law, we use our rational mind to assume innocence, and apply our scientific mind to obtain evidence. The answer is somewhere in the middle, the optimized interior region. But the outcome, guilty or innocent, is binary, the domain of logic.
In computing and mathematics I frequently encountered concepts I could not understand in their abstract form. But when I saw them in their fully realized, tangible form, I suddenly understood the abstract form and could then deconstruct them down to their abstract form again, making a note in my mind that this was a compressed form of something else. I then found there were things I could do with that abstraction that were amazing, and started expressing it back into various tangible forms.
I found that with any knowledge that is new, a true comprehension of it requires both a top-down and bottom-up approach, meeting in the middle.
A skeptic tries to attain more certainty about a truth but is less flexible to change, and when things are built upon that truth, it is less likely to be collapsed, as attempts to disprove it will be resisted. A non-skeptic has less certainty about a truth, but is flexible to change, and the things built upon it are insightful and imaginative, but fragile, collapsing easily and quickly.
This is why science fiction (particularly the hard variety↗) has huge predictive power, and it also inspires scientists to proceed along these lines. In the field of the paranormal, many people extrapolate far past what is scientifically proven, but within the multiple theories out there, some will indeed represent reality. And this information arrives far faster than science. When something new is discovered in science, it isn't new data to many, as much of it was previously imagined by science fiction. And because science fiction went far beyond the mere fact that something existed and built on its implications, we have a large body of knowledge already.
Arthur C. Clarke famously said in his prologue to 2001: A Space Odyssey, "But please remember: this is only a work of fiction. The truth, as always, will be far stranger." Sir Arthur was being humble, for time has since shown that he was one of the most prophetic writers of the last century. Interestingly, in his final years he was researching fractals.
I don't believe that Einstein could have generated his theories if he had been a pure skeptic. I believe that he turned off his skeptical filter much of the time. In fact, he listened to all kinds of possible ideas and met and corresponded with many different people, some of whom are considered pseudoscientific today. I believe that he looked for "shapes" and tried to match them. He was said to be uncomfortable with quantum mechanics. I am also uncomfortable with quantum mechanics and especially string theory. I also had a "feeling" that my approach was wrong when I tried to use mathematical optimization techniques to solve an algebraic problem, and I think many scientists have such feelings about their work. I think Edward Witten is a talented mathematician, but since M-theory is so complex, I must note it as "tentative". I think its complexity is a result of the "expanded" form of this phenomenon and that there is a more elegant "collapsed" form expressible perhaps in another domain (higher dimensions, geometry, or new language). The famous E=mc² is perhaps the most elegant equation ever discovered.
I keep an eye on theories that are based on underlying structures of the universe, like entropic gravity, E8 theory, fractal cosmology, the many-worlds interpretation, and the memory-prediction framework. I have been bothered by the equivalence principle for years, wondering why inertial mass and gravitational mass are the same when there is no known reason for it, and Erik Verlinde may have the answer. It "feels" right. I like the idea behind Lisi's E8 theory, it seems like the right direction, but I have no idea how well it accounts for all observed phenomena. Fractal cosmology and the memory-prediction framework are core to my fractal theory, and I think we are indeed living in the many-worlds model.
Einstein's work came under fire from the skeptics at the time since much of it wasn't experimentally verifiable. But over the years, and after his death, more and more of his theories have been confirmed.
While entropy appears to be corrupting some of our long-held deterministic systems, as I mentioned earlier, that doesn't mean we don't have a choice. Entropy has funny localized pockets of order that form in larger areas of disorder, like eddies in white river rapids. We formed in one of those funny pockets.
One of the things that made us civilized, that separated us from animals, was our ability to resist the flow of nature, to impart energy and not simply follow the path of least resistance. We must protect civilization and intellectual thought at all costs, lest we regress back into primal states.
We must remind our higher selves, our higher Egregore, not to deconstruct us just yet. We must ask it to wait, that we need more Time, that we still have much more to do. Let us hold onto our intellectual tools just a little longer, our clubs of bone, before they are released into the air.
For we are forever fighting each other, our ideologies pitted against each other, like those frames of data.
Let us etch the stories of these battles into the walls of the 4th dimension before we dissipate.
Compose yourself, I say.
Compose yourself, but don't look away.
Keep that eddy spinning, don't let it dissolve.
Keep those thoughts flowing, but stick to your resolve.
Compose yourself,
So that one day,
You will have something to say.