TransHuman


Page Created: 1/15/2015   Last Modified: 7/1/2016   Last Generated: 12/11/2017

To kill a vampire, you must drive a wooden stake through its heart. To kill a werewolf, you must shoot it with a bullet made of silver. To kill sorcerer Koshchey the Deathless↗, you must destroy his soul which is inside the eye of a needle, inside an egg, inside a duck, inside a hare, and inside a chest which lies buried under an oak tree somewhere in a field on an island somewhere in the ocean...

The most fearsome monsters are those like ourselves, with an intelligence that is unmatched in the animal kingdom. And if they are intelligent enough that we cannot find their weakness, we have no hope of destroying them. We fear what we cannot understand and cannot control. Sometimes, we cannot even control ourselves, and that is perhaps the greatest fear, to lose our power of choice, the reason for our existence.

I remember the time I discovered that my childhood dream of being a scientist had changed. In my 8th grade science class, my favorite science teacher defined a word called "technology", a word that I had heard but never thought much about, as it was not used as frequently as it is today. He told us that technology was the application of science.

It was at that moment that I knew that I wanted to ride that train. That was why I wanted to become a scientist in the first place, to put some of those scientific discoveries into real use to change the world. Now the train had a name. It wasn't a Shinkansen↗, it wasn't a Hyperloop↗, it was Technology.

For me, technology provided meaning to science. It showed me the implications of scientific laws; it showed me the possibilities. Science gave me the knowledge to change my world, but technology gave me the power, and eventually, the meaning. Technology created a feedback loop that turned science into an iterated function, amplifying itself. I wondered, how great is this power?

If you study mathematics or science, you'll notice that they are very modest disciplines. They don't make many conclusions, and in many cases, they never make conclusions at all. They just show you patterns that can be derived from previous patterns. What you do with the patterns and how you interpret them is up to you. But that is outside the scope of these pattern-matching disciplines, and lies more in the domain of the arts, humanities, or social sciences.

Futurology, the prediction of a future Time, is one such interpretive discipline, and it was not modest at all; it was bold and in-your-face. As a teenager I had a subscription to Omni, a futurist magazine, and I considered myself a futurist too, making many attempts to predict the future based on what I knew about the past. But I didn't have a lot of historical knowledge from which to draw, so I extrapolated from a smaller subset, a smaller range of patterns that I could observe in the world around me. And as any scientist knows, extrapolation is not the same as interpolation.

I lost some friends by loudly proclaiming in 1989 that "Robots will take over the earth!", that it wasn't just possible, it was inevitable. I read and wrote about robots, neural networks, artificial intelligence (AI), and studied the ethical implications, including the famous "Turing test", which has not yet been passed at the time of this writing (2015). My conclusions were that, in our lifetime, robot intelligence would be indistinguishable from human intelligence, and this indistinguishability meant that we must treat these machines with the same respect that we treat ourselves, that such a machine would be one and the same, for all intents and purposes, my counterargument to any critics being that "we don't even know how people think, so how would we know that a robot that could pass as a human is not just a person inside an artificial body?" My logic was sound, a well-designed strongbox of unfalsifiability, and I knew that this strongbox could not be broken. We may never know exactly what it means to be a living person, but perhaps intelligence was an emergent property that did not depend on the medium, but only on the pattern formed within that medium, like a song, whether it is sung on wax cylinders, vinyl, laser-etched aluminum, or a radio wave traveling to Alpha Centauri.

Since then, I haven't disagreed with that earlier prediction, but I have drawn some additional conclusions, that intelligence should not be a factor for determining this need for respect, that we need to respect everything, the creatures around us, the mechanisms around us, the heavenly bodies, and even the inanimate matter, for it is still infused with energy, motion, and molecular activity. The entire Universe is intelligent and deserves our respect.

You can see the thought processes in the world around us, as plants manipulate insects for their benefit, viruses and bacteria manipulate animals, the trees manipulate the weather, and vice-versa. The Universe is a talking, thinking, symbiotic ecosystem, an organism. It is easy to create simple computer models to demonstrate the concept of emergent intelligence. The halls, walls, and doors are as important as the objects that navigate them.
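As a concrete illustration of how easy such models are to build, here is a minimal sketch in Python of Conway's Game of Life, one classic (and hypothetical, not the author's own) example of complex, lifelike behavior emerging from a few dead-simple local rules:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cell coordinates."""
    # Count how many live neighbors touch each candidate cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives if it has exactly 3 neighbors, or 2 and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that walk diagonally across the grid forever,
# a behavior nowhere stated in the rules above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):   # after 4 steps the glider reappears, shifted by (1, 1)
    glider = step(glider)
print(sorted(glider))
```

Nothing in the two rules mentions "movement", yet the glider travels; the walls and doors of the grid matter as much as the cells themselves.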

I began to see intelligence as not being a qualitative virtue in itself, but a consequence of complexity.

One thing that I failed to predict or foresee that other futurists did was the emergence of a growing subculture of people that wanted to become transhuman, that instead of only using technology to transform the world, they also wanted to use technology to transform themselves while they were still alive; to repair, replace, and augment their biology for their own benefit, even surpassing human limitations. I had known about CyberPunk, but cyberpunks were trapped in a world of technology and control, trying to free themselves of this control. The cyberpunk ethos was not to enhance themselves for the sake of enhancement, it was to keep the soul free at whatever layer the modern world had trapped it within, free it from that needle inside the egg, inside the duck, inside the hare, inside the chest...

But today, it is becoming very possible for people to become transhuman, and we might be at the beginnings of a new socio-technological era. Some futurists today believe that man and machine will soon combine into a great intelligence, and this intelligence will transform our earth, solar system, and eventually the Universe into an intelligent entity. But isn't the Universe an intelligent entity already?

Some people are trying to keep themselves healthy and alive until the time of a technological singularity, a time of immortality, where they will no longer have to worry about facing death and can be part of this entity. Some of this feels uncomfortably close to eugenics↗, and as Star Trek fans know, Khan Noonien Singh of SS Botany Bay was both a product and champion of this, and became perhaps the greatest enemy of Starfleet, and one of the greatest fictional villains ever depicted in film.

Transhumanism is a blanket word that means different things to different people. I am using the term to represent the aspect that concerns me the most, the use of technologies to reach that so-called "singularity", to become immortal.

I see great contradiction in this form of thinking, transforming a human being into an immortal being using technology. Using technology as that feedback engine might very well amplify humanity, but just as an amplifier magnifies the noise within it, it will magnify our imperfections, too. In other words, a true transformation is not really occurring. A true transformation would be more like viewing the output of a chaotic system, appearing totally random to us. Isn't death such a transformation?
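The amplifier point can be made numerically. This toy sketch is my own illustration of the essay's metaphor (the numbers are arbitrary assumptions): a signal carrying a small flaw is run through an amplifying feedback loop, and the gain magnifies signal and flaw by exactly the same factor, so the imperfection is enlarged, never transformed away.

```python
# Arbitrary starting values: a "humanity" signal with a 1% imperfection,
# doubled by each pass through the technological feedback loop.
signal, flaw, gain = 1.0, 0.01, 2.0

for generation in range(10):
    signal *= gain   # the amplifier magnifies what we can do...
    flaw *= gain     # ...and magnifies our imperfections identically

# The flaw-to-signal ratio is unchanged: amplification is not transformation.
print(signal, flaw, flaw / signal)
```

However large the gain, the ratio of flaw to signal never improves; only a qualitatively different process could change it.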

Many adults consider themselves to be transformed beings, very different than the children they once were. I disagree and believe that there is no such transformation, that adults are simply children with more tools, and the same mistakes that those children made with their simple tools are made with their advanced tools, whether those tools are technologies or knowledge. Knowledge does not prevent us from making poor choices, it is more akin to a drug that hides the poorness of those choices until we gain more knowledge to understand our mistakes. Poor choices are part of the human condition. The complexity and duality of the physical universe is a cage, and an augmented human is just placing a larger animal in that cage.

Pride is a virtue to those addicted to that drug, but a vice to those that see through its masking properties.

As I got older, I began to see some of my own predictions fail, which tempered my pride, and I analyzed why and discovered that there were a few main reasons:

  • I didn't account for external unknown factors that changed the course of events.
  • The prediction itself was meaningless to both the present and the future.
  • I predicted changes too early, and many came true much later than expected.

The first is obvious--we know our own worldline well, but cannot see the worldlines of others or other things that suddenly sideswipe it. There is cause and effect going on in areas that we cannot see that will eventually cross our path.

The second is less obvious. In my 2002 screenplay, for example, the main character in my film built a mobile keyboard device to communicate with his brother. He also used a set of metallic gloves to control a robot through telepresence, instead of via a traditional joystick, and was able to view through his robot's eyes by wearing glasses that had a translucent overlay display. I had written both of these scenes to match my character's needs, being extensions of how he did things, and what he might invent for himself. At the time of the writing, those scenes were very strange and unique, even laughable. People wondered why he didn't just call his brother on his cell phone, and I said that this character is symbolic of an Eye, and his choice of communication is visual. And when he controlled his robot, he did a sort of dance with those gloves, and wore these glasses, which people thought was ridiculous.

This was because, during 2002, where I lived, "texting" was shunned in favor of calling someone on a mobile phone. Many people had just started replacing their mobile pagers with cell phones, and the last thing they wanted to do was to go back to the old "pager days" when they could finally just use their phone to call someone directly. Talking was in style again. The heavy use of SMS was uncommon until later, and today, more people text their friends instead of calling them, doing exactly what my main character did, reverting to a more rudimentary method of communication.

And the Microsoft Kinect, now widely known for allowing full-body gesture control, like a type of dance, wasn't released to the general public until 2010, and today many companies are making touchless gesture devices and combining them with telepresence for robotic control, such as quadcopters. Google Glass, computerized glasses that contain an overlay display and allow gestures, wasn't released to the general public until 2014.

So within a 12-year period, people went from "this is ridiculous!" to "so what?". They have a new language now, and when you write something in the language of the future, it is not understandable to the people in the present and is not important to the people in the future, an unexpected result of things that are "ahead of their time".

So if I were to show someone my old screenplay today, they would say, so what?

So I have to enter time into my equation. Have you ever noticed that "what we want" must also be combined with "when we want it" for it to have any value, like a tiny point on a happiness graph where conditions converge just right?

For example, I have a giant Sig Kadet Mark II↗ model airplane that I hand-built out of balsa wood as a teenager in the mid 1980's, but never flew, being so expensive and difficult for me to finish that I gave up on trying to get it into the air. I was mowing lawns and saving up to buy a nitromethane engine for it, but in 1991, the very transmitter I had for it, a 72 MHz, AM, Hi-Tec Challenger 4000, became obsolete when the FCC changed its rules and disallowed RC transmitters with wide bandwidths on those frequencies. This set me back even more, as I would have to buy a new transmitter, costing hundreds of dollars. Note that 25 years later, a different US agency, the FAA, classified my balsa-wood model airplane as a UAS (Unmanned Aircraft System) and requires me to pay a fee and register my personal information in a national database before I am allowed to even fly it.

By the time I had the money to finish it, ready-to-fly, electric, polystyrene planes were invented that eliminated the difficulties of internal combustion, hand-built planes. Now every child was flying his own plane.

But I never had that as a child. My ideas of telepresence were first formed when I read Danny Dunn, Invisible Boy, a book published in 1974, a book 40 years ahead of its time. It gave me a glimpse of the future and I wanted it badly. Now that the future is here, I can create all of the telepresent quadcopters I want, but sadly, I don't want to. The authors predicted the future, but they didn't predict how I might feel about it by the time it arrived. I focused on filmmaking instead, a form of telepresent storytelling, a medium that could provide glimpses of both Space and Time. In 2013, I received an Air Hogs Fly Crane for Christmas, an indoor, infrared-controlled flying helicopter with a tiny aerial crane for picking up things, and as amazing as it is, and despite the awe I experience knowing what it is, flying it makes me feel that this universe is illusory. It is simply too good to be true and this realization saddens me. It reminds me of that scene at the end of the movie Cast Away, where Tom Hanks' character clicks that incredulous, piezoelectric, butane firestarter... Can such things be real?

Telepresence itself, if you think about it, is the spatial analogy of futurology, using technology not to see a point in Time but to see a point in Space. Years ago, I had a dream about my late father, and he was excited to see me and gave me a box with a tiny television display on it. And I found that this box could show me any point in space I wanted to see, controlled by my own thoughts, like a form of remote viewing, and I remember the first thing I did was point it at a conversation between a bank teller and a customer in a car at a bank drive-through, watching and listening to their rather mundane conversation (which is futuristic in itself, since bank drive-throughs today use video intercoms, but the ones in my neighborhood didn't at the time of that dream, having audio-intercoms only). Anyway, in my dream, my father was saddened that I became obsessed with the device itself, obsessed over the technology and not the gift, that I had missed the whole point. The gift.

Perhaps, transhumanism, by fixating on technology, is missing the whole point. Perhaps it is missing the point of giving, dispersing oneself, instead of taking, assimilating, becoming a Tetsuo↗.

There are many things in my original screenplay that haven't come true (yet), a screenplay shaped by my subconscious, and that main character I mentioned who built the telepresence technologies and robotic intelligence also refused to allow an RFID device to be subdermally implanted into the back of the hand, a device used for wireless, mobile monetary payments in his society. He refused to become transhuman, yet he built an artificial "human". This was an important distinction for me and captured the point of the whole film.

In the Star Trek universe, Dr. Arik Soong, a 22nd-century genetic engineer who raised Augments of the same engineered line as the villainous Khan, was played by the same actor, Brent Spiner, who played Lieutenant Commander Data, an AI robot in the 24th century. This was because the writers made Arik Soong a biological ancestor of Dr. Noonien Soong, Data's creator, who was also played by Brent Spiner. And Data is perhaps best known for representing the best of humanity, while Khan is known for representing its worst. Data is a distinct creation, while Khan is a modification of ourselves.

This distinction or gap between the creator and his creations goes far back into ancient mythology, like the gap between the hands depicted in Michelangelo's The Creation of Adam. This gap is significant and does not appear to be an order transition, part of a fractal regeneration. Crossing this gap requires traversing the bounds of the fractal itself and is a true transformation. It is like comparing the gaps you experience jumping from paragraph to paragraph on this page, versus the gap you experience when you stop reading and step away from it.

An immortal transhuman could never step away, and would be forever trapped inside that fractal, on the inside of that event horizon↗, a very beautiful cage, but still a cage.

I've studied this enigmatic gap for a long time. Ten years before I wrote that screenplay, I incorporated it into the ending of my short college film, Untitled 249.

And thirdly, some predictions that hadn't come true when I made them came true at a much later date, too late for anyone to care.

For example, I'm a competitive person, and when I compete against someone, I give it my best. And if that person beats me, I request a re-match (muttering "Again!"), and given enough re-matches I will usually find a way to win, either by taking advantage of a mistake my opponent made, learning their patterns while I improve via practice over time, tiring them out, making them apathetic about winning, or simply encountering a random scenario where my skills happen to be better suited for that scenario than theirs. Does that mean that I beat them?

If someone beats me 9 out of 10 times in trivia, ping pong, or Soul Calibur↗, are they the better player? Is this ratio over time the determining factor?

When I was in college, I could not beat Olympic sprinters, for example, but if you hypothetically lengthened the race to middle distances and put them in my race, I could beat them. So who is the faster runner, the one who runs 100 meters the fastest, or the one who runs 1500 meters?

They did indeed beat me, for given enough time, almost anything can be achieved due to complexity, and determination...

We gauge success within certain time constraints. All of us can see patterns, but nobody cares unless you attach a specific window to them, a window of meaning and a window of time. The random universe contains all the information there is, but we want to know what, out of this randomness, is meaningful to us, and we want to know it on-demand, with clock-like precision.

Some of my predictions were almost forgotten but came true when least expected. Does that mean that my predictions failed? Yes, they did fail. They were not meaningful at the time they were needed. But... they captured a future time, they captured a different reality. They described something, like an artist that hasn't fully attained mastery, they painted something, it just wasn't what they intended.

This is why science is so obsessed with falsifiability: something science cannot disprove is immune from categorization and cannot be tagged with meaning. Like a pyroclastic cloud blasting through the city of Pompeii, the information is there all around us, but we don't know what it means. And whatever we reach out and try to grab, we will be burned.

We can build deterministic systems that generate the most complex forms of information (randomness), but, in my opinion, these systems are not generators, but simply "windows" into the random Universe. In other words, we cannot build a system or thought process to accurately predict the future, but we can peek through windows at different places at different times.
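One such deterministic "window" can be sketched in a few lines. This is my own illustration (not anything from the original essay), using Wolfram's Rule 30 cellular automaton: every step is completely determined by the previous one, yet the center column is irregular enough that it has been used as a pseudorandom source.

```python
def rule30_center_bits(n, width=201):
    """Return the first n bits of the center column of Rule 30,
    started from a single live cell on a ring of `width` cells."""
    cells = [0] * width
    cells[width // 2] = 1
    bits = []
    for _ in range(n):
        bits.append(cells[width // 2])
        # Rule 30: new cell = left XOR (center OR right), applied everywhere.
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bits

# A fully deterministic rule, yet the column looks statistically random.
print(''.join(map(str, rule30_center_bits(32))))
```

Run it twice and you get the identical "random" string, which is the essay's point: the system is not generating randomness so much as opening a fixed window onto it.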

All futurists will encounter the same problems: how much meaning we attribute to these glimpses is up to us as individuals.

The proponents of transhumanistic immortality will not be able to account for external factors. If the Universe is fractal, there are most likely other intelligent entities sharing it with us, some of which have even regulated us, like the mysterious builders of Arthur C. Clarke's Rama↗. Children, for example, may not fully realize the importance of a proper diet, the discipline of their school environment, or the requirements of daily living until their parental regulation is removed as they reach adulthood. They may not notice that food is always stocked, that teachers are always on time, that things are always cleaned before disease has a chance to move in. And these false assumptions about the world can lead to false conclusions about it, like the just-world hypothesis.

They realize, sometimes through trial and error, that they must regulate themselves to keep this balance. It will be a turbulent time when transhumanists discover that unregulated growth will eventually lead to an atrophy, and they will attempt to insert this regulation, like groups of children trying to tell other children to put down their candy. Good luck with that.

We do know that human society is becoming more complex, and many of us view this as "advancement", but a fractal model shows us that advancement is complexity within smaller and smaller bounds, like branches on a tree giving way to the tiny leaves at the ends. It is entropy and an increase in disorder. We are those tiny leaves, and the more we advance, the tinier we become. We are not at the root of the tree, or on any of the other branches, and what we believe to be the Universe, the domain in which we feel that we will have more mastery over the more advanced we are, becomes a more localized thing, an illusion, or dream, a shadow of the whole, created from our position in the tree. The tree itself may have other plans for us, and may drop that leaf before we take over the Tree.

To see the whole Universe, we have to go backwards, to a state of higher order, to the root of the tree, not out to its leaves, something immortality, riding a boat downstream into disorder, does not allow. A man named Siddhartha Gautama figured this out↗ a long time ago...

Transhumanism is really a meaningless word, for we are already "transhuman" compared to our past. We groom ourselves with man-made tools, we use language, we use technologies to communicate over vast distances, we wear man-made fabrics called clothing, we educate ourselves, augmenting our thoughts with mental tools or abstractions, we use medicine to stop sickness, we wear glasses to correct our eyesight. But it was nature that shaped us into our current form, that giant Tree.

So what if mankind in the future looks as different to us today as we look to a bacterium? To worry about this is meaningless, and would be like saying that life in the Archaeozoic eon↗ wasn't right, that we need to go back and fix it, the life that gave rise to us. We are going to give rise to something, given enough time, but that something will take care of itself. Like my own failed predictions, any predictions we make now are meaningless to us and unimportant to that entity of the future.

But there is a problem in wanting to transform ourselves into an immortal being within our lifetime. When divisions get so great, between what we are now, and what we might have been or might become in the distant past or future, the terms inhuman, dehuman and monster are more suitable, something the film AlteredStates tried to convey.

Due to these divisions, whatever we become would most likely appear to be monsters to us today, just as our bodies full of armies of lymphocytes are monsters to many single-celled organisms. They would not want to get in our way, and we would not think twice about killing them.

I have believed for many years now, that we don't just predict the future based on past patterns, we can actually "feel" some of the future. People forget that our bodies have length, not only in space, such as how tall we are, but also in Time. If we were standing in a dark stream, with the bottom half of us under the water, we would still know that we have feet, that even though they are hidden from us, we are still connected to them. Our feet reside in a different space, many "feet" away, but we believe them to exist in time right alongside us, connected to us, part of us. Similarly, our bodies have width in Time, and even though we cannot see those parts of us, they exist in the PastAndFuture. And, just like in space, when we bump into things or cannot pass through spaces that are too narrow for us, we also bump into things in Time and cannot pass through times that are too narrow for us. And just like we can feel our feet and know if they are wet, cold, hot, cramped, or tired, we can feel our past and future feet, too.

And, in my opinion, spacetime curvature and choice requires higher fractal dimensions, so that we also feel the shape of the branches of the future, like climbing a tree, blindfolded, feeling with our hands. This would explain the existence of those true singularities, the gravitational singularities in the center of black holes, spaces in our universe that aren't spaces at all, for spacetime does not exist within them, like the Swiss cheese voids in a fractal shape. In a way, our moon really is made out of cheese↗. But the cheese-making machine is most likely a 1-dimensional Turing tape, like a long block of Velveeta cut by back and forth motions of a wire cheese slicer...

The most identifying characteristic of transhumanism to me is perhaps a personal one. I've been a futurist since I was young, a fan of science-fiction, a technologist, interested in Computing, Robotics, and Virtual Worlds, but today the term "neo-Luddite" is directed haphazardly by some transhumanists at anyone that explores the potential impingement of technology on human rights. Whether it is Mary Shelley, Isaac Asimov, or Philip K. Dick, science fiction writers throughout history have explored these consequences and warned us what can happen. And when I, of all people, raise the red flag and am called a Luddite, I raise that red flag even higher.

It is disturbing to know that some people have placed so much value on their own quest for immortality, that their "survival" requires the perpetual advancement of technology, no current technology ever being good enough, like a pyramid scheme where it doesn't matter how many products you sell, it matters how many people you brainwash to sell that product. Whatever they may see as an "obstacle" to this advancement would not be allowed to stand in their way, like those lymphocytes consuming that bacterium. This is counter to most of human morality, where our respect for the individual rights of others is paramount.

So the question is:

Do you want to be an astronaut who is genetically modified to breathe Martian air and eat Martian cheese, or do you want to be an astronaut who returns to the ship and analyzes the atmosphere and substances he/she has collected? There is no right answer; it is up to you to decide what provides you with more meaning.

I wonder, though, if the exponential growth curve of human technology will not lead to a technological singularity as some futurists predict, but is instead, as other futurists predict, a fragment of the chaotic graph of the logistic equation, nearing a point of annihilation, and that the "singularity" is the highest point before a rapid drop in technology, like the demise of Easter Island when the inhabitants ran out of resources and stopped building their moai. Even the deathless soul of Koshchey couldn't live on such an island, for all of its trees had disappeared...
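The logistic equation alluded to here can be written as x[n+1] = r·x[n]·(1 − x[n]), and a short sketch shows its two faces (my illustration; the growth rates below are arbitrary assumptions, not values from the essay): a tame growth rate settles into a steady state, while a higher rate produces bounded but never-settling chaos, with outright collapse lying just beyond.

```python
def logistic_orbit(r, x0=0.5, n=100):
    """Iterate the logistic map x -> r * x * (1 - x) for n steps."""
    x = x0
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

tame = logistic_orbit(2.5)    # settles toward the fixed point 1 - 1/r = 0.6
chaos = logistic_orbit(3.9)   # past ~3.57: bounded, yet it never settles

print(round(tame[-1], 3))     # -> 0.6
print(min(chaos), max(chaos))
```

Beyond r = 4 the map can throw trajectories out of the unit interval entirely, a crude analogue of a technological curve that peaks and then falls off a cliff.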

From the perspective of a biological organism, immortality does have a peculiar analogy. There is a particular type of cell that sometimes forms in living things that does not die, does not experience apoptosis↗, and tries to become immortal, and this is known as cancer. It is unknown why the Universe created it, but the lymphocytes of the host body often cannot kill enough of the cancer cells to stop its growth and they end up killing the host organism, and thus themselves as well, strangely self-limiting. It is something we cannot understand and cannot control.

That satisfies my definition of a monster.
