SINGULARITY and Humanity

Ray Kurzweil - Singularity Summit at Stanford

The Singularity is the technological creation of smarter-than-human intelligence. There are several technologies that are often mentioned as heading in this direction. The most commonly mentioned is probably Artificial Intelligence, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity – several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.

A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge originally coined the term "Singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human.

Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds. This loop appears most clearly in the example of an Artificial Intelligence improving its own source code, but it would also arise, albeit initially on a slower timescale, from humans with direct brain-computer interfaces creating the next generation of brain-computer interfaces, or biologically augmented humans working on an Artificial Intelligence project.
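The feedback loop described above can be sketched as a toy numerical model. The equation, growth constant, and step count below are illustrative assumptions for the sake of the sketch, not figures from the text.

```python
# Toy sketch of the positive feedback loop: intelligence I improves itself
# at a rate proportional to its current level, dI/dt = k * I.
# The values of i0, k, dt, and steps are arbitrary illustrative choices.

def simulate(i0=1.0, k=0.1, steps=100, dt=1.0):
    """Euler-integrate dI/dt = k * I and return the trajectory."""
    i = i0
    history = [i]
    for _ in range(steps):
        i += k * i * dt
        history.append(i)
    return history

trajectory = simulate()
# Each improvement makes the next improvement larger: the gain in the
# final step dwarfs the gain in the first step.
print(trajectory[1] - trajectory[0])
print(trajectory[-1] - trajectory[-2])
```

A self-improving mind would not follow this equation literally; the point is only that proportional feedback compounds, so later improvements arrive faster than earlier ones.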

Some of the stronger Singularity technologies, such as Artificial Intelligence and brain-computer interfaces, offer the possibility of faster intelligence as well as smarter intelligence. Ultimately, speeding up intelligence is probably comparatively unimportant next to creating better intelligence; nonetheless the potential differences in speed are worth mentioning because they are so huge. Human neurons operate by sending electrochemical signals that propagate at a top speed of 150 meters per second along the fastest neurons. By comparison, the speed of light is 300,000,000 meters per second, two million times greater. Similarly, most human neurons can spike a maximum of 200 times per second; even this may overstate the information-processing capability of neurons, since most modern theories of neural information-processing call for information to be carried by the frequency of the spike train rather than individual signals. By comparison, speeds in modern computer chips are currently at around 2GHz – a ten millionfold difference – and still increasing exponentially. At the very least it should be physically possible to achieve a million-to-one speedup in thinking, at which rate a subjective year would pass in 31 physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to modern-day humanity would pass in under twenty-two hours.
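The ratios quoted above can be checked directly; every input figure below is taken from the paragraph itself.

```python
# Verifying the speed comparisons above, using the figures given in the text.

neuron_signal_speed = 150        # m/s, fastest myelinated axons
light_speed = 300_000_000        # m/s
print(light_speed // neuron_signal_speed)   # two-million-fold difference

neuron_spike_rate = 200          # maximum spikes per second
chip_clock = 2 * 10**9           # 2 GHz, the text's contemporary figure
print(chip_clock // neuron_spike_rate)      # ten-million-fold difference

seconds_per_year = 365.25 * 24 * 3600
speedup = 1_000_000
subjective_year = seconds_per_year / speedup
print(subjective_year)           # about 31.6 physical seconds

years_since_socrates = 2400      # rough span from Socrates to the present
hours = years_since_socrates * subjective_year / 3600
print(hours)                     # about 21 hours, i.e. under twenty-two
```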

Humans also face an upper limit on the size of their brains. The current estimate is that the typical human brain contains something like a hundred billion neurons and a hundred trillion synapses. That's an enormous amount of sheer brute computational force by comparison with today's computers – although if we had to write programs that ran on 200Hz CPUs we'd also need massive parallelism to do anything in realtime. However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one to two years. The original Moore's Law says that the number of transistors in a given area of silicon doubles every eighteen months; today there is Moore's Law for chip speeds, Moore's Law for computer memory, Moore's Law for disk storage per dollar, Moore's Law for Internet connectivity, and a dozen other variants.
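The doubling times quoted above compound quickly. A short sketch, with an arbitrary starting value of 1.0, makes the point:

```python
# Exponential growth under Moore's Law as stated above: one doubling every
# eighteen months. The starting count is normalized to 1.0 for illustration.

def growth_factor(years, doubling_time_years=1.5):
    """Multiplicative growth after the given number of years."""
    return 2 ** (years / doubling_time_years)

print(growth_factor(1.5))   # one doubling time: exactly 2.0
print(growth_factor(10))    # roughly a hundredfold per decade
```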

By contrast, the entire five-million-year evolution of modern humans from primates involved a threefold increase in brain capacity and a sixfold increase in prefrontal cortex. We currently cannot increase our brainpower beyond this; in fact, we gradually lose neurons as we age. (You may have heard that humans only use 10% of their brains. Unfortunately, this is a complete urban legend; not just unsupported, but flatly contradicted by neuroscience.) An Artificial Intelligence would be different. Some discussions of the Singularity suppose that the critical moment in history is not when human-equivalent AI first comes into existence but a few years later when the continued grinding of Moore's Law produces AI minds twice or four times as fast as human. This ignores the possibility that the first invention of Artificial Intelligence will be followed by the purchase, rental, or less formal absorption of a substantial proportion of all the computing power on the then-current Internet – perhaps hundreds or thousands of times as much computing power as went into the original Artificial Intelligence.

But the real heart of the Singularity is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of the Singularity to discuss – it's easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of the Singularity tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe. This doesn't mean the subject is impossible to discuss; section III of our "Levels of Organization in General Intelligence" does take a stab at discussing some specific design improvements on human intelligence, but that involves a specific theory of intelligence, which we don't have room to go into here.

However, that smarter minds are harder to discuss than faster brains or bigger brains does not show that smarter minds are harder to build – deeper to ponder, certainly, but not necessarily more intractable as a problem. It may even be that genuine increases in smartness could be achieved just by adding more computing power to the existing human brain – although this is not currently known. What is known is that going from primates to humans did not require exponential increases in brain size or thousandfold improvements in processing speeds. Relative to chimps, humans have threefold larger brains, sixfold larger prefrontal areas, and 98.4% similar DNA; given that the human genome has 3 billion base pairs, this implies that at most twelve million bytes of extra "software" transforms chimps into humans. And there is no suggestion in our evolutionary history that evolution found it more and more difficult to construct smarter and smarter brains; if anything, hominid evolution has appeared to speed up over time, with shorter intervals between larger developments.
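The "at most twelve million bytes" figure follows directly from the numbers in the paragraph:

```python
# Deriving the "twelve million bytes" estimate from the text's own figures.
genome_base_pairs = 3_000_000_000
differing_fraction = 0.016       # 100% minus 98.4% similarity
bits_per_base = 2                # four possible bases encode in 2 bits each

differing_bases = round(genome_base_pairs * differing_fraction)  # 48 million
extra_bytes = differing_bases * bits_per_base // 8
print(extra_bytes)               # 12,000,000 bytes
```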

But leave aside for the moment the question of how to build smarter minds, and ask what "smarter-than-human" really means. And as the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don't know because we're not that smart. We're trying to guess what it is to be a better-than-human guesser. Could a gathering of apes have predicted the rise of human intelligence, or understood it if it were explained? For that matter, could the 15th century have predicted the 20th century, let alone the 21st? Nothing has changed in the human brain since the 15th century; if the people of the 15th century could not predict five centuries ahead across constant minds, what makes us think we can outguess genuinely smarter-than-human intelligence?

Because we have a past history of people making failed predictions one century ahead, we've learned, culturally, to distrust such predictions – we know that ordinary human progress, given a century in which to work, creates a gap which human predictions cannot cross. We haven't learned this lesson with respect to genuine improvements in intelligence because the last genuine improvement to intelligence was a hundred thousand years ago. But the rise of modern humanity created a gap enormously larger than the gap between the 15th and 20th century. That improvement in intelligence created the entire milieu of human progress, including all the progress between the 15th and 20th century. It is a gap so large that on the other side we find, not failed predictions, but no predictions at all.

Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence are all interrelated. Being smarter makes it easier to figure out how to build faster brains or to improve your own mind. In turn, being able to reshape your own mind isn't just a way of starting up a slope of recursive self-improvement; having full access to your own source code is, in itself, a kind of smartness that humans don't have. Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite its own source code can potentially make itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind doesn't make it smarter, but adding more processing power to the cognitive processes underlying intelligence is a different matter.

But despite the interrelation, the key moment is the rise of smarter-than-human intelligence, rather than recursively self-improving or faster-than-human intelligence, because it's this that makes the future genuinely unlike the past. That doesn't take minds a million times faster than human, or improvement after improvement piled up along a steep curve of recursive self-enhancement. One mind significantly beyond the humanly possible level would represent a Singularity. That we are not likely to be dealing with "only one" improvement does not make the impact of one improvement any less.

Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence, and the result is an event so huge that there are no metaphors left. There's nothing remaining to compare it to.

The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human intelligence exists, that mind will find it easier to create still smarter minds. In this respect the dynamic of the Singularity resembles other cases where small causes can have large effects; toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright object balanced on its tip. (Human technological civilization occupies a metastable state in which the Singularity is an attractor; once the system starts to flip over to the new state, the flip accelerates.) All it takes is one technology – Artificial Intelligence, brain-computer interfaces, or perhaps something unforeseen – that advances to the point of creating smarter-than-human minds. That one technological advance is the equivalent of the first self-replicating chemical that gave rise to life on Earth.

If you travelled backward in time to witness a critical moment in the invention of science, or the creation of writing, or the evolution of Homo sapiens, or the beginning of life on Earth, no human judgement could possibly encompass all the future consequences of that event – and yet there would be the feeling of being present at the dawn of something worthwhile. The most critical moments of history are not the closed stories, like the start and finish of wars, or the rise and fall of governments. The story of intelligent life on Earth is made up of beginnings.

Imagine traveling back in time to witness a critical moment in the dawn of human intelligence. Suppose that you find an alien bystander already on the scene, who asks: "Why are you so excited? What does it matter?" The question seems almost impossible to answer; it demands a thousand answers, or none. Someone who valued truth and knowledge might answer that this was a critical moment in the human quest to learn about the universe – in fact, the beginning of that quest. Someone who valued happiness might answer that the rise of human intelligence was a necessary precursor to vaccines, air conditioning, and the many other sources of happiness and solutions to unhappiness that have been produced by human intelligence over the ages. There are people who would answer that intelligence is meaningful in itself; that "It is better to be Socrates unsatisfied than a fool satisfied; better to be a man unsatisfied than a pig satisfied." A musician who chose that career believing that music is an end in itself might answer that the rise of human intelligence mattered because it was necessary to the birth of Bach; a mathematician could single out Euclid; a physicist might cite Newton or Einstein. Someone with an appreciation of humanity, beyond the individual humans, might answer that this was a critical moment in the relation of life to the universe – the beginning of humanity's growth, of our acquisition of strength and understanding, eventually spreading beyond Earth to the rest of the galaxy and the universe.

The beginnings of human intelligence, or the invention of writing, probably went unappreciated by the individuals who were present at the time. But such developments do not always take their creators unaware. Francis Bacon, one of the critical figures in the invention of the scientific method, made astounding claims about the power and universality of his new mode of reasoning and its ability to improve the human condition – claims which, from the perspective of a 21st-century human, turned out to be exactly right. Not all good deeds are unintentional. It does occasionally happen that humanity's victories are won not by accident but by people making the right choices for the right reasons.

Why is the Singularity worth doing? The Singularity Institute for Artificial Intelligence can't possibly speak for everyone who cares about the Singularity. We can't even presume to speak for the volunteers and donors of the Singularity Institute. But it seems like a good guess that many supporters of the Singularity have in common a sense of being present at a critical moment in history; of having the chance to win a victory for humanity by making the right choices for the right reasons. Like a spectator at the dawn of human intelligence, trying to answer directly why superintelligence matters chokes on a dozen different simultaneous replies; what matters is the entire future growing out of that beginning.

But it is still possible to be more specific about what kinds of problems we might expect to be solved. Some of the specific answers seem almost disrespectful to the potential bound up in superintelligence; human intelligence is more than an effective way for apes to obtain bananas. Nonetheless, modern-day agriculture is very effective at producing bananas, and if you had advanced nanotechnology at your disposal, energy and matter might be plentiful enough that you could produce a million tons of bananas on a whim. In a sense that's what nanotechnology is – good-old-fashioned material technology pushed to the limit. This only raises the question "So what?", but the Singularity advances on this question as well; if people can become smarter, this moves humanity forward in ways that transcend the faster and easier production of more and more bananas. For one thing, we may become smart enough to answer the question "So what?"

In one sense, asking what specific problems will be solved is like asking Benjamin Franklin in the 1700s to predict electronic circuitry, computers, Artificial Intelligence, and the Singularity on the basis of his experimentation with electricity. Setting an upper bound on the impact of superintelligence is impossible; any given upper bound could turn out to have a simple workaround that we are too young as a civilization, or insufficiently intelligent as a species, to see in advance. We can try to describe lower bounds; if we can see how to solve a problem using more or faster technological intelligence of the kind humans use, then at least that problem is probably solvable for genuinely smarter-than-human intelligence. The problem may not be solved using the particular method we were thinking of, or the problem may be solved as a special case of a more general challenge; but we can still point to the problem and say: "This is part of what's at stake in the Singularity."

If humans ever discover a cure for cancer, that discovery will ultimately be traceable to the rise of human intelligence, so it is not absurd to ask whether a superintelligence could deliver a cancer cure in short order. If anything, creating superintelligence only for the sake of curing cancer would be swatting a fly with a sledgehammer. In that sense it is probably unreasonable to visualize a significantly smarter-than-human intelligence as wearing a white lab coat and working at an ordinary medical institute doing the same kind of research we do, only better, in order to solve cancer specifically as a problem. For example, cancer can be seen as a special case of the more general problem "The cells in the human body are not externally programmable." This general problem is very hard from our viewpoint – it requires full-scale nanotechnology to solve the general case – but if the general problem can be solved it simultaneously solves cancer, spinal paralysis, regeneration of damaged organs, obesity, many aspects of aging, and so on. Or perhaps the real problem is that the human body is made out of cells or that the human mind is implemented atop a specific chunk of vulnerable brain – although calling these problems raises philosophical issues not discussed here.

Singling out "cancer" as the problem is part of our culture's particular outlook and technological level. But if cancer or any generalization of "cancer" is solved soon after the rise of smarter-than-human intelligence, then it makes sense to regard the quest for the Singularity as a continuation by other means of the quest to cure cancer. The same could be said of ending world hunger, curing Alzheimer's disease, or placing on a voluntary basis many things which at least some people would regard as undesirable: illness, destructive aging, human stupidity, short lifespans. Maybe death itself will turn out to be curable, though that would depend on whether the laws of physics permit true immortality. At the very least, the citizens of a post-Singularity civilization should have an enormously higher standard of living and enormously longer lifespans than we see today.

What kind of problems can we reasonably expect to be solved as a side effect of the rise of superintelligence; how long will it take to solve the problems after the Singularity; and how much will it cost the beneficiaries? A conservative version of the Singularity would start with the rise of smarter-than-human intelligence in the form of humans whose brains have been enhanced by purely biological means. This scenario is more "conservative" than a Singularity which takes place as a result of brain-computer interfaces or Artificial Intelligence, because all thinking is still taking place on neurons with a characteristic limiting speed of 200 operations per second; progress would still take place at a humanly comprehensible speed. In this case, the first benefits of the Singularity probably would resemble the benefits of ordinary human technological thinking, only more so. Any given scientific problem could benefit from having a few Einsteins or Edisons dumped into it, but it would still require time for research, manufacturing, commercialization and distribution.

Human genius is not the only factor in human science, but it can and does speed things up where it is present. Even if intelligence enhancement were treated solely as a means to an end, for solving some very difficult scientific or technological problem, it would still be worthwhile for that reason alone. The solution might not be rapid, even after the problem of intelligence enhancement had been solved, but that assumes the conservative scenario, and the conservative scenario wouldn't last long. Some of the areas most likely to receive early attention would be technologies involved in more advanced forms of superintelligence: broadband brain-computer interfaces or full-fledged Artificial Intelligence. The positive feedback dynamic of the Singularity – smarter minds creating still smarter minds – doesn't need to wait for an AI that can rewrite its own source code; it would also apply to enhanced humans creating the next generation of Singularity technologies.

The Singularity creates speed for two reasons: First, positive feedback – intelligence gaining the ability to improve intelligence directly. Second, the shift of thinking from human neurons to more readily expandable and enormously faster substrates. A brain-computer interface would probably offer a limited but real version of both capabilities; the external brainpower would be both fast and programmable, although still yoked to an ordinary human brain. A true Artificial Intelligence, or a human scanned completely into a sufficiently advanced computer, would have total self-access. At this point one begins to deal with superintelligence as the successor to current scientific research, the global economy, and in fact the entire human condition; rather than a superintelligence plugging into the current system as an improved component. At this point human nature sometimes creates an "Us Vs. Them" view of the situation – the instinct that people who are different are therefore on a different side – but if humans and superintelligences are playing on the same team, it would be straightforward for the most advanced mind at any given time to offer a helping hand to anyone lagging behind; there is no technological reason why humans alive at the time of the Singularity could not participate in it directly. In our view this is the chief benefit of the Singularity to existing humans; not technologies handed down from above but a chance to become smarter and participate directly in creating the future.

One idea that is often discussed along with the Singularity is the proposal that, in human history up until now, it has taken less and less time for major changes to occur. Life first arose around three and a half billion years ago; it was only eight hundred and fifty million years ago that multi-celled life arose; only sixty-five million years since the dinosaurs died out; only five million years since the hominid family split off within the primate order; and less than a hundred thousand years since the rise of Homo sapiens sapiens in its modern form. Agriculture was invented ten thousand years ago; Socrates lived two and a half thousand years ago; the printing press was invented five hundred years ago; the computer was invented around sixty years ago. You can't set a speed limit on the future by looking at the pace of past changes, even if it sounds reasonable at the time; history shows that this method produces very poor predictions. From an evolutionary perspective it is absurd to expect major changes to happen in a handful of centuries, but today's changes occur on a cultural timescale, which bypasses evolution's speed limits. We should be wary of confident predictions that transhumanity will still be limited by the need to seek venture capital from humans or that Artificial Intelligences will be slowed to the rate of their human assistants (both of which I have heard firmly asserted on more than one occasion).

We can't see in advance the technological pathway the Singularity will follow, since if we were that smart ourselves we'd already have done it. But it's possible to toss out broad scenarios, such as "A smarter-than-human AI absorbs all unused computing power on the then-existent Internet in a matter of hours; uses this computing power and smarter-than-human design ability to crack the protein folding problem for artificial proteins in a few more hours; emails separate rush orders to a dozen online peptide synthesis labs, and in two days receives via FedEx a set of proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which can build more advanced nanotechnology." This is not a smarter-than-human solution; it is a human imagining how to throw a magnified, sped-up version of human design abilities at the problem. There are admittedly initial difficulties facing a superfast mind in a world of slow human technology. Even humans, though, could probably solve those difficulties, given hundreds of years to think about it. And we have no way of knowing that a smarter mind can't find even better ways.

If the Singularity involves not just a few smarter-than-usual researchers plugging into standard human organizations, but the transition of intelligent life on Earth to a smarter and rapidly improving civilization with an enormously higher standard of living, then it makes sense to regard the quest to create smarter minds as a means of directly solving such contemporary problems as cancer, AIDS, world hunger, poverty, et cetera. And not just the huge visible problems; the huge silent problems are also important. If modern-day society tends to drain the life force from its inhabitants, that's a problem. Aging and slowly losing neurons and vitality is a problem. In some ways the basic nature of our current world just doesn't seem very pleasant, due to cumulative minor annoyances almost as much as major disasters. This may usually be considered a philosophical problem, but becoming smarter is something that can actually address philosophical problems.

The transformation of civilization into a genuinely nice place to live could occur, not in some unthinkably distant million-year future, but within our own lifetimes. The next leap forward for civilization will happen not because of the slow accumulation of ordinary human technological ingenuity over centuries, but because at some point in the next few decades we will gain the technology to build smarter minds that build still smarter minds. We can create that future and we can be part of it.

If there's a Singularity effort that has a strong vision of this future and supports projects that explicitly focus on transhuman technologies such as brain-computer interfaces and self-improving Artificial Intelligence, then humanity may succeed in making the transition to this future a few years earlier, saving millions of people who would otherwise have died. The planetary death rate is around fifty-five million people per year (UN statistics): roughly 150,000 lives per day, over 6,000 lives per hour. These deaths are not just premature but perhaps actually unnecessary. At the very least, the amount of lost lifespan is far greater than modern statistics would suggest.
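The per-day and per-hour figures follow from the annual rate; the text's "6,000 lives per hour" is a round-down of roughly 6,300.

```python
# The daily and hourly figures above follow from the annual death rate.
deaths_per_year = 55_000_000
per_day = deaths_per_year / 365
per_hour = per_day / 24
print(round(per_day))    # about 150,000 lives per day
print(round(per_hour))   # about 6,300 lives per hour
```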

There are also dangers for the human species if we can't make the breakthrough to superintelligence reasonably soon. Albert Einstein once said: "The problems that exist in the world today cannot be solved by the level of thinking that created them." We agree with the sentiment, although Einstein may not have had this particular solution in mind. In pointing out that dangers exist it is not our intent to predict a dystopian future; so far, the doomsayers have repeatedly been proven wrong. Humanity has faced the future squarely, rather than running in the other direction as the doomsayers wished, and has thereby succeeded in avoiding the oft-predicted disasters and continuing to higher standards of living. We avoided disaster by inventing technologies which enable us to cope with complex futures. Better, more sustainable farming technologies have enabled us to support the increased populations produced by modern medicine. The printing press, telegraph, telephone, and now the Internet enable humanity to apply its combined wisdom to problem-solving. If we'd been forced to move into the future without these technologies, disaster probably would have resulted. The technology humanity needs to cope with the coming decades may be the technology of smarter-than-human intelligence. If we have to face challenges like basement laboratories creating lethal viruses or nanotechnological arms races with just our human intelligence, we may be in trouble.

Finally, there is the integrity of the Singularity itself to safeguard. This is not necessarily the most difficult part of the challenge, compared to the problem of creating smarter-than-human intelligence in the first place, but it needs to be considered. It is possible that the integrity of the Singularity needs no safeguarding; that any human from Gandhi to Stalin, if enhanced sufficiently far beyond human intelligence, would end up being wiser and more moral than anyone alive today; that the same holds true for all minds-in-general from enhanced chimpanzees to arbitrarily constructed Artificial Intelligences. But this is not something we know in advance. Since we don't know how many moral errors persist in our own civilization, safeguarding the integrity of the Singularity – in our view – consists more of ensuring the will and ability to grow wiser with increased intelligence than of trying to find perfect candidates for human intelligence enhancement. An analogous problem exists for Artificial Intelligence, where the task is not enforcing servitude on the AI or coming up with a perfect moral code to "hardwire", but rather transferring over the features of human cognition that let us conceive of a morality improving over time (see the section on Friendly Artificial Intelligence for more information).

Safeguarding the integrity of the Singularity is another reason for facing the challenge of the Singularity squarely and deliberately. It may be that human intelligence enhancement will turn out well regardless, but there is still no point in taking unnecessary risks by driving the projects underground. If human intelligence enhancement is banned by the FDA, for example, this just means that the first experiments will take place outside the US, slightly later than they otherwise would have; increasing the possible risks, delaying the possible benefits. If human intelligence enhancement is banned by the UN this means the experiments will take place offshore, out of the public eye, and perhaps sponsored by groups that we would prefer not be involved – although there is a significant chance it would turn out well regardless. In the case of Artificial Intelligence there are certain specific things that must be done to place the AI in the same moral "frame of reference" as humanity – to ensure the AI absorbs our virtues, corrects any inadvertently absorbed faults, and goes on to develop along much the same path as a recursively self-improving human altruist. Friendly Artificial Intelligence is not necessarily more difficult than the problem of AI itself, but it does need to be handled along with the creation of Artificial Intelligence. In both cases, we can best safeguard the integrity of the Singularity by confronting the Singularity intentionally and with full awareness of the responsibilities involved.

What does it mean to confront the Singularity? Despite the immensity of the Singularity, sparking the Singularity – creating the first smarter-than-human intelligence – is a problem of science and technology. The Singularity is something that we can actually go out and do, not a philosophical way of describing something that inevitably happens to humanity. It takes the sweep of human progress and a whole technological economy to create the potential for the Singularity, just as it takes the entire framework of science to create the potential for a cancer cure, but it also takes a deliberate effort to run the last mile and fulfill that potential. If someone asks you if you're interested in donating to AIDS research, you might reply that you believe that cancer research is relatively underfunded and that you are donating there instead; you would probably not say that by working as a stockbroker you support the world economy in general and thereby contribute as much to humanity's progress toward an AIDS cure as anyone. In that sense, sparking the Singularity is no different from any other grand challenge – someone has to do it.

At this moment in time, there is a tiny handful of people who realize what's going on and are trying to do something about it. It is not quite true that if you don't do it, no one will, but the pool of other people who will do it if you don't is smaller than you might think. If you're fortunate enough to be one of the few people who currently know what the Singularity is and would like to see it happen – even if you learned about the Singularity just now – we need your help because there aren't many people like you. This is the one place where your efforts can make the greatest possible difference – not just because of the tremendous stakes, though that would be far more than enough in itself, but because so few people are currently involved.
