RESTORED: 8/14/22
The more I learn about computers, CERN, the World Wide Web and Artificial Intelligence, the more convinced I am that demonic entities are behind it, inside it and in control of it. I never believed for one minute that man is so clever he could make a machine capable of thinking for itself. Now we are learning that that is exactly what is happening, and man is no longer in control of Artificial Intelligence, if indeed he ever was.
There are many people who believe that CERN created the World Wide Web in order to communicate with entities from other dimensions. I am convinced that is true, but in my mind the demonic entities are driving the mad scientists to create these things in order to gain access to and control of the minds of the masses. I don’t believe they need the internet to open portals into our world, because I believe that GOD has released them and removed his protective veil because mankind has turned its back on HIM. They are crying for Lucifer and all the gods of the Ancient past… so that is exactly what they are getting.
But I believe the internet is what Satan has devised to gain access to the minds of individuals, where he is required to get permission. He presents these things to mankind as something attractive and even productive, when in truth it is a ruse not only to gain permission and information about what we think, hope and desire, but to take full control of our thoughts and even change our chemical makeup and DNA to conform to his plan. Once you have viewed everything in this series, you may come to the same conclusion.
Published on May 23, 2013
Defense Advanced Research Projects Agency: About Us
The Defense Advanced Research Projects Agency (DARPA) is an agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. Originally known as the Advanced Research Projects Agency (ARPA), the agency was created in February 1958 by President Dwight D. Eisenhower.
ARPA Becomes DARPA
By Christina Sarich
In the movie 21 Grams, the idea is presented that upon the moment of death, the human body instantaneously loses exactly 21 grams – supposedly the weight of the soul. Though this is not scientifically proven, there seems to be evidence that our consciousness is indeed a transferable entity – and the ancient Egyptians likely knew exactly how to transfer consciousness, or the soul, from one person to another.
There is a similar aim behind DARPA and the Shadow Government’s pricey experiments. The merging of humans and machines is already happening, with human consciousness being “uploaded” into AI computers. Mind uploading is expected to be in full force by 2045, but has it been done already, long ago?
Before we examine the ancient history of uploading consciousness, let’s look at a few examples in recent Hollywood history which suggest this is already being done.
In the movie What Dreams May Come, starring the late Robin Williams, his character dies and then traverses gleefully through a place that looks just like the paintings his wife used to create when he was still alive. The film pictorially demonstrates that Williams’s character is in his own eternity, developed by his “KA” state – a term I’ll explain momentarily. His wife has a similar but negative experience of a seeming hell, based on her suicide. She enters a dreamlike “hallucination” that she cannot escape, based on her own “KA” state.
The ancients understood that consciousness separates from the physical body at death. The life we live in physical form is repeated – perhaps over and over again – in the KA state, with all our wisdom, all our ignorance, our mistakes and triumphs, loves lost, and friends won.
Janet Cunningham, a researcher of ancient Egyptian rituals, writes:
“There is a growing view that the King’s Chamber of the Great Pyramid was where the highest initiation took place. Brunton (1984), whose experiences in the Great Pyramid confirm some of the esoteric writers (Hall, Leadbeater, and Haich), says that in the beginning there is terror, uncertainty, wandering and darkness. This terror was described to me by a client who had prayed for two years to have an out-of-body experience; when it actually occurred, it was uncontrolled and horrific to her (Cunningham, 1997).
According to Brunton, that experience is followed by a miraculous and divine light. Again and again, in the Egyptian Book of the Dead (Faulkner, 1994), one reads, I go to Osiris. I am One with Osiris; I am Osiris. This gives support to the theory of guiding the initiate to become a god living eternally with the gods. It becomes evident that, if one views these writings as guidance for the initiate as well as guidance for the deceased after death, the reader is encouraged to become One with the resurrected god, Osiris.”
Among many theories of the purposes of the pyramids, there are those who suggest that these structures acted as a means to transfer “KA” from one pharaoh or king to another as they passed from earthly form. The Egyptians described the soul and the spirit as two separate entities, the BA and the KA, respectively. The BA, or soul, never dies, but simply reincarnates so that it can reach higher levels of conscious evolution. The Egyptians sometimes depicted the BA as a human head with bird’s wings.
The KA is the part of consciousness which stays here on planet earth. It was represented in hieroglyphs as two up-stretched arms reaching toward the sky. The KA, or spirit, is the part of us which can roam illusory creations of the mind such as we witnessed in What Dreams May Come.
The Egyptians also understood, however, that the KA and BA were very significantly intertwined. There were specific practices which were carried out based on their understanding of the physical and non-physical realms of space and time.
KA, according to the ancient Egyptians, included all genetic material and cellular memories from parents and ancestors. That “residue” of their genetic and epigenetic experience helped to form us. This is why homage was paid to ancestors – as they were thought to exist in a very real way in our present form. With the right awareness, we are also able to tap into their reserves of knowledge and wisdom.
Another reason that pharaohs’ tombs were packed with jewels, food, and other personal items was that the Egyptians believed the KA would carry physical objects acquired in one life along to the next life, as long as they still existed at that time. This is also why it is believed that psychics can hold an article of clothing or a personal item of someone to understand more about them without ever having met them. They are simply picking up on traces of KA energy.
The KA and BA needed to be transferred in an unbroken manner at death in order for one King to receive the wisdom of the previous King. The practice of mummification is part of this process. The Egyptians believed that the KA could be preserved if the decay of the body was slowed.
Moreover, the ancients would attempt to stop the BA from falling back into reincarnation, and the KA from reliving a fantasy based on their former state(s) of consciousness. The KA had to be “grounded” and preserved so that an ethereal link between BA and KA was not severed. If this was successfully accomplished, one would become an etheric shaman, capable of directing one’s soul and spirit in a conscious way.
If this sounds strangely like a new tech start-up that wants to transfer your consciousness into an artificial body so that you can live forever, that’s because the technology is no sci-fi fantasy, but more likely a reality. Are modern-day techies simply the wanna-be pharaonic priests of ancient ages? One thing is for certain, the quest for immortality is as old as human history itself.
This article (Did the Egyptians Know How to Transfer Consciousness From One Entity to Another?) was originally published on Waking Times and syndicated by The Event Chronicle.
From 1983 to 1993 DARPA spent over $1 billion on a program called the Strategic Computing Initiative. The agency’s goal was to push the boundaries of computers, artificial intelligence, and robotics to build something that, in hindsight, looks strikingly similar to the dystopian future of the Terminator movies. They wanted to build Skynet.
Much like Ronald Reagan’s Star Wars program, the idea behind Strategic Computing proved too futuristic for its time. But with the stunning advancements we’re witnessing today in military AI and autonomous robots, it’s worth revisiting this nearly forgotten program, and asking ourselves if we’re ready for a world of hyperconnected killing machines. And perhaps a more futile question: Even if we wanted to stop it, is it too late?
“The possibilities are quite startling…”
If the new generation technology evolves as we now expect, there will be unique new opportunities for military applications of computing. For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. The possibilities are quite startling, and suggest that new generation computing could fundamentally change the nature of future conflicts.
That’s from a little-known document presented to Congress in October of 1983 outlining the mission of the new Strategic Computing Initiative (SCI). And like nearly everything DARPA has done before and since, it’s unapologetically ambitious.
The vision for SCI was wrapped up in a completely new system spearheaded by Robert Kahn, then director of the Information Processing Techniques Office (IPTO) at DARPA. As it’s explained in the 2002 book Strategic Computing, Kahn wasn’t the first to imagine such a system, but “he was the first to articulate a vision of what SC might be. He launched the project and shaped its early years. SC went on to have a life of its own, run by other people, but it never lost the imprint of Kahn.”
The system was supposed to create a world where autonomous vehicles not only provide intelligence on any enemy worldwide, but could strike with deadly precision from land, sea, and air. It was to be a global network that connected every aspect of the U.S. military’s technological capabilities—capabilities that depended on new, impossibly fast computers.
But the network wasn’t supposed to process information in a cold, matter-of-fact way. No, this new system was supposed to see, hear, act, and react. Most importantly, it was supposed to understand, all without human prompting.
An Economic Arms Race
The origin of Strategic Computing is often associated with the technological competition brewing between the U.S. and Japan in the early 1980s. The Japanese wanted to build a new generation of supercomputers as a foundation for artificial intelligence capabilities. Pairing the economic might of the Japanese government with Japan’s burgeoning microelectronics and computer industry, they embarked on their Fifth Generation Computer System to achieve it.
The goal was to create unbelievably fast computers that would allow Japan to leapfrog other countries (most importantly the United States and its emerging “Silicon Valley”) in the race for technological dominance. They gave themselves a decade to accomplish this task. But much like the United States, no matter how much faster they made their machines, they couldn’t seem to make them “smarter” with strong AI.
Japan’s ambition terrified many people in the U.S. who worried that America was losing its technological edge. This fear was stoked in no small part by a 1983 book called The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World by Edward A. Feigenbaum and Pamela McCorduck, which was seen as a must-read on Capitol Hill.
“The consumer electronics industry will integrate new-generation computing technology and create a home market for applications of machine intelligence.”
Reaching out to the private sector and the university system would also ensure that the best and brightest were contributing to DARPA’s mission for the program:
Equally important is technology transfer to industry, both to build up a base of engineers and system builders familiar with computer science and machine intelligence technology now resident in leading university laboratories, and to facilitate incorporation of the new technology into corporate product lines. To this end we will make full use of regulations of Government procurement involving protection of proprietary information and trade secrets, patent rights, and licensing and royalty arrangements.
The long and short of it? The government gave assurances to private industry that the technology developed wouldn’t be handed off to competing companies.
But economic competition with the Japanese, while very much a motivator, was almost a sideline concern for many policymakers embroiled in Cold War politics. Military build-up was the prime concern for the more hawkish members of the Republican party. The military threat from the Soviet Union was seen by many of them as the larger issue. And SCI was designed to address that threat head-on.
The Star Wars Connection
The launch of the Strategic Computing program and DARPA’s requests for proposals in 1983 and 1984 set off a heated debate in the academic community—the same community that would ultimately benefit from DARPA funding from this project. Some were skeptical that the ambitious plans for advanced artificial intelligence could ever be accomplished. Others worried that advancing the cause of AI for the military would usher in a terrifying era of autonomous robot armies.
It was a valid concern. If the goal of Star Wars—the popular nickname for Ronald Reagan’s Strategic Defense Initiative (SDI), and a popular political football at the time—was an automated response (or semi-automated response) to any missile threat from the Soviets, it would seem absurd not to tie it into a larger network of truly intelligent machines. The missions of the two projects—not to mention their originating institutions—overlapped too much to be a coincidence, despite everyone’s insistence that it was just that.
From a 1988 paper by Chris Hables Gray:
The Star Wars battle manager, probably the most complex and the largest software project ever, is conceptually (though not administratively) a part of [Strategic Computing Initiative]. Making the scientific breakthroughs in computing that the SDI needs is a key goal of the [Strategic Computing Initiative].
If you ask anyone who worked on the SCI at the highest levels (as Alex Roland did for his 2002 book on the project) they’ll insist that SCI had nothing to do with Ronald Reagan’s dream for Star Wars. But right from Strategic Computing’s early days, people were making connections between the SCI and the SDI. The connective tissue came in part simply because the programs shared similar names, and were even named by the same man: Robert Cooper, DARPA director from 1981 until 1985. And perhaps people saw a thread because the interconnecting computing power being developed for SCI just made sense as an application for a space-based strategy of missile defense.
Whether or not you believe SCI was going to function as an arm of the Star Wars mission for space-based defense, there’s no denying that if both had worked out, they would’ve been natural collaborators.
Applying Strategic Computing on Land, Sea and Air
The 1983 chart above outlined the mission of Strategic Computing. The goal was clear: develop a broad base of machine intelligence tech to increase national security and economic strength. But to do that, Congress and the military institutions that would eventually benefit from SCI would need to see it in action.
SCI had three applications that were supposed to prove its potential, though it would acquire many more by the late 1980s. Leading the charge were the Autonomous Land Vehicle, the Pilot’s Associate, and the Aircraft Carrier Battle Management System.
These applications were built on top of the incredibly advanced computers that were being developed at places like BBN, the Cambridge company probably best known for its work on developing the early internet, and would allow for advancements in things like vision systems, language comprehension, and navigation—vital tools for an integrated military force of man and machine.
The Driverless Vehicle of 1985
The most ominous-looking product to emerge from SCI was the Autonomous Land Vehicle. The 8-wheeled unmanned ground vehicle was 10 feet tall and 13.5 feet long, with a camera and sensors mounted on the roof guiding its vision and navigation system.
Martin Marietta, which merged with the Lockheed Corporation in 1995 to become Lockheed Martin, won the bid in the summer of 1984 to create the experimental ALV. They would get $10.6 million in the three and a half years of the program (about $24 million adjusted for inflation) with an optional $6 million after that if the project met certain benchmarks.
The October 1985 issue of Popular Science included a story about the tests that were being conducted at a secret Martin Marietta facility southwest of Denver.
Writer Jim Schefter described the scene at the test facility:
The boxy blue-and-white vehicle crawls sedately along a narrow Colorado valley road, never venturing far from the center line. A single window, set cyclops-like in the vehicle’s slab face, gives no clue about the driver. The tentative trek looks out of character for the massive 10-foot-tall eight-wheeled vehicle. Although three on-board diesel engines roar, the wheels creep along at three mph.
After about a half-mile, the hulking vehicle stops. But nobody climbs out. There is no one aboard — just a computer. Using laser and video for eyes, a seminal — yet still advanced — artificial-intelligence program has sent the vehicle down the road without human intervention.
DARPA paired Martin Marietta with the University of Maryland, whose earlier work in vision systems was seen as instrumental to make the autonomous vehicle portion of the program a success.
As it turns out, creating a vision system for an autonomous vehicle is incredibly difficult. The system was fooled by light and shadows, and thus couldn’t work with any degree of consistency. It might be able to detect the edge of the road at noon just fine, only to be thrown off by the shadows cast during the early evening.
Any environmental change (like mud tracked along the road by a different vehicle) also threw the vision system for a loop. This, of course, was unacceptable even in the highly controlled testing area. If it couldn’t handle such seemingly simple obstacles, how would such a vehicle deal with the countless variables it would surely encounter out in the battlefield?
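To make that failure mode concrete, here is a minimal, purely hypothetical sketch (Python, with invented pixel values and thresholds, not Martin Marietta’s actual software) of the kind of fixed-threshold road-edge detection early vision systems relied on. A brightness cutoff tuned for noon lighting works until a shadow falls across the pavement, at which point the shadow itself is mistaken for the edge of the road.

```python
# Hypothetical illustration only: a fixed brightness threshold separating
# "road" from "shoulder" on a single scanline of pixels. The cutoff is tuned
# for midday lighting, so a shadow in the evening image is misread as the edge.
import numpy as np

THRESHOLD = 120  # brightness cutoff chosen for bright midday pavement

def road_edge(scanline):
    """Return the index of the first pixel darker than the assumed road surface."""
    below = np.where(scanline < THRESHOLD)[0]
    return int(below[0]) if below.size else None

# Same stretch of road: bright pavement (~160) ending in a dark shoulder (~60).
noon = np.array([160, 158, 161, 157, 159, 62, 60, 58])
# Early evening: a tree shadow drops mid-road brightness to ~100.
evening = np.array([150, 148, 102, 100, 101, 58, 55, 52])

print(road_edge(noon))     # 5  -- the real edge
print(road_edge(evening))  # 2  -- the shadow is reported as the edge
```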
Despite meeting significant milestones by November of 1987, the ALV component of SCI was effectively abandoned by the end of the year. Though the autonomous vehicle was still quite primitive, some people at DARPA thought it was being dumped way too soon.
In the end, it couldn’t overcome its battle unreadiness. As Alex Roland notes in the book Strategic Computing, “One officer, who completely misunderstood the concept of the ALV program, complained that the vehicle was militarily useless: huge, slow, and painted white, it would be too easy a target on the battlefield.” DARPA formally cancelled work on the ALV in April of 1988.
R2-D2 in Real Life
The pilot would still make the final decisions in this scenario. But the Pilot’s Associate was going to be smart enough to know not only who, what, and how to ask questions, but also why.
From the Strategic Computing planning document:
Pilots in combat are regularly overwhelmed by the quantity of incoming data and communications on which they must base life or death decisions. They can be equally overwhelmed by the dozens of switches, buttons, and knobs that cover their control handles demanding precise activation. While each of the aircraft’s hundreds of components serve legitimate purposes, the technologies which created them have far outpaced our skill at intelligently interfacing the pilot with them.
It’s here that we see DARPA’s case emerge for needing a Skynet of its own. The overwhelming nature of combat—overwhelming, DARPA implies, only because battlefield technology had already advanced so quickly—could only be managed with new machines. The pilot may still be the one pushing the button, but these computers would do at least half the thinking for him. When mankind can’t keep up, hand it off to the machines.
The Pilot’s Associate application never got the same exposure in the American press that the ALV did, probably because it was harder to visualize than an enormous, driverless tank rolling down the road. But looking at the speech recognition tech of today, it’s easy to see where all that research into a Pilot’s Associate ended up.
The Invisible Robot Advisor
The Battle Management System was the third of the three applications originally planned to prove that SCI was a practical endeavor.
As it’s described in Strategic Computing (2002):
In the naval battle management system envisioned for SC, the expert system would “make inferences about enemy and own force order-of-battle which explicitly include uncertainty, generate strike options, carry out simulations for evaluating these options, generate the [operations plan], and produce explanations.”
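Stripped of the jargon, the loop being described is simple: generate candidate plans, simulate each one under uncertainty, rank them, and explain the ranking. The toy sketch below (Python, with invented options and made-up success probabilities, none of it drawn from the actual Navy prototype) is only meant to make that generate-simulate-evaluate-explain pipeline concrete.

```python
# Toy stand-in for the generate / simulate / rank / explain loop described above.
# All option names and probabilities are invented for illustration.
import random

OPTIONS = ["strike from the north", "strike from the south", "hold position"]

def simulate(option: str, trials: int = 1000) -> float:
    """Crude Monte Carlo proxy for 'simulations that explicitly include uncertainty'."""
    assumed_success = {"strike from the north": 0.60,
                       "strike from the south": 0.45,
                       "hold position": 0.20}[option]
    wins = sum(random.random() < assumed_success for _ in range(trials))
    return wins / trials

def recommend():
    scored = {option: simulate(option) for option in OPTIONS}
    best = max(scored, key=scored.get)
    explanation = ", ".join(f"{opt}: {score:.0%}" for opt, score in scored.items())
    return best, f"chose '{best}' because estimated success rates were {explanation}"

best_option, why = recommend()
print(best_option)
print(why)
```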
The Battle Management System was essentially the brain of the entire operation, and for that reason it was kept out of the spotlight more so than grunts like the ALV. Robots rolling down the road without human control is terrifying enough for some people. Invisible robots with their invisible finger on the very real nuclear button? You don’t exactly send press releases out for that one.
The Battle Management System was devised as an application specifically for the Navy (just as the ALV had been designed for the Army, and the Pilot’s Associate for the Air Force) but it was really just a showcase for the broader system. Every one of these technologies was intended to eventually be used wherever it was most needed. The voice recognition software developed for the Pilot’s Associate would need to work for every branch of the military, not just the Air Force. And the Battle Management System would have to play nice with everyone—except the enemy target, of course.
Piecing Together Skynet
All of the various components of the Strategic Computing Initiative were part of a larger hypothetical system that could have radically changed the nature of war in the 21st century.
Imagine a global wireless network overseeing various subnetworks within the U.S. military. Imagine armies of robot tanks on the ground talking to fleets of drones in the sky and unmanned submarines in the sea—all coordinating their activities faster than any human commander ever could. Now imagine it all being that much more complicated, with nukes waiting to be deployed in space.
The vision for the Strategic Computing Initiative was incredibly bold, and yet somehow quaint when we look at just how far it could have gone. The logical extensions of strong AI and a global network of killing machines are not hard to envision, if only because we’ve seen them played out in fiction countless times.
The Future of War and Peace
What finally killed the Strategic Computing Initiative in the early 90s was the acceptance—after nearly a decade of trying—that strong artificial intelligence on the level DARPA had imagined was simply unattainable. But if all of these various technologies developed in the 1980s sound eerily familiar, it’s probably because they’re all making headlines here in the early 21st century.
We see the vision systems that were imagined for ALV emerging in robots like Boston Dynamics’ Atlas, we see the Pilot’s Associate’s Siri-like understanding of speech being utilized by the US Air Force, and we see autonomous vehicles being tested by Google, among a host of other companies. They’re all the future of war. And if companies like Google are to be believed, they’re the future of peace as well.
Google’s recent purchase of Boston Dynamics has raised quite a few eyebrows among those concerned about a future filled with autonomous robot armies. Google says that Boston Dynamics will honor old contracts with military clients, though they’ll no longer accept any new ones.
But whether or not they continue to accept military contracts (and it’s certainly possible that they could do so under the radar within a secretive black budget) there’s no question that the line between military and civilian technology has always been blurred. If Boston Dynamics never again works for organizations like DARPA, and yet Google benefits from research paid for by the military, then ostensibly the system worked.
The military got what it needed by advancing the science of robotics with a private company. And now lessons from that military tech will show up in our everyday civilian lives—just like countless other technologies, including the internet itself.
In truth, this post barely scratches the surface of DARPA’s aspirations for Strategic Computing. But hopefully, by continuing to explore yesterday’s visions of the future we can gain some historical perspective to better appreciate that these new advancements don’t emerge out of thin air. They’re not even that new. They’re the product of decades of research and billions of dollars being spent by hundreds of organizations—both public and private.
Ultimately, Strategic Computing wasn’t derailed by some fear of what creating such a program would do to our world. The technology to build it—from the advanced AI to the autonomous vehicles—simply wasn’t evolving fast enough. But here we are, two decades after SCI faded away; two decades further into the development of this vision for smart machines.
Our future of super-smart, interconnected robots is nearly here. You don’t have to like it, but you can’t say you weren’t warned.
Sources: Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993 by Alex Roland with Philip Shiman (2002); Strategic Computing: New Generation Computing Technology: A Strategic Plan for its Development and Application to Critical Problems in Defense by DARPA (28 October 1983); Strategic Computing at DARPA: Overview and Assessment by Mark Stefik (1985); Arms and Artificial Intelligence: Weapons and Arms Control Applications of Advanced Computing edited by Allan M. Din (1988); The Strategic Computing Program at Four Years: Implications and Intimations by Chris Hables Gray (1988).
Images: ALV with laser sight via Lockheed Martin; Strategic Computing logo from the 1983 DARPA planning document; Cover of Fifth Generation scanned from the book cover; PIM/p computer via JipDec; Ronald Reagan and Star Wars screenshot taken from the PBS American Masters program; Black and white ALV outside Denver, scanned from an archival press photo; ALV illustration scanned from the October 1983 issue of Popular Science; Artists’ concept illustration for the Pilot’s Associate found in an early online draft of The Quest for Artificial Intelligence by Nils Nilsson (2009); Black and white Pilot’s Associate illustration scanned from the book Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993 by Alex Roland and Philip Shiman (2002); Atlas Robot and Google driverless car taken from WikiCommons; Future autonomous military fighter via the Unmanned Systems Integrated Roadmap FY 2011-2036 published by the U.S. Department of Defense
Voices and Images From Beyond: Using Electronics to Communicate with the Spirit World
You Will Wish You Watched This Before You Started Using Social Media | The Twisted Truth
Absolute Motivation
Published on Apr 20, 2018
She was HEAD of DARPA and has gone on to head up departments at MOTOROLA, GOOGLE, and FACEBOOK. She has never really LEFT any of those; trust me, she is working for the ELITE, seeing that all their interconnected and interfaced projects come to fruition. She is Regina Dugan.
Regina Dugan at D11 2013 – Former DARPA Director
THE ALEX JONES RADIO SHOW
EX-DARPA HEAD WANTS YOU TO SWALLOW ID MICROCHIPS
Infowars.com | January 7, 2014
Former DARPA director and now Google executive Regina Dugan is pushing an edible “authentication microchip” along with an electronic tattoo that can read your mind. No, this isn’t a movie script about a futuristic scientific dictatorship; it’s trendy and cool!
Dugan, who is Head of Advanced Technology at (Google-owned) Motorola, told an audience at the All Things D11 Conference that the company was working on a microchip inside a pill that users would swallow daily in order to obtain the “superpower” of having their entire body act as a biological authentication system for cellphones, cars, doors and other devices.
“This pill has a small chip inside of it with a switch,” said Dugan. “It also has what amounts to an inside-out potato battery. When you swallow it, the acids in your stomach serve as the electrolyte and that powers it up. And the switch goes on and off and creates an 18-bit ECG-like signal in your body and essentially your entire body becomes your authentication token.”
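To picture what “your entire body becomes your authentication token” might mean in practice, here is a deliberately simplistic sketch. The token format, the check, and every name in it are invented for illustration; neither Dugan nor Motorola published implementation details. It is also worth noting that an 18-bit code has only 262,144 possible values, a very small space for a security credential.

```python
# Hypothetical sketch of a body-borne authentication token. Nothing here
# reflects a real Motorola design; the 18-bit code and functions are invented.
import secrets

REGISTERED_TOKEN = 0b101101001011010010  # example 18-bit code for one wearer

def read_body_signal(sensor_reading: int) -> int:
    """Pretend to decode an 18-bit code from a skin-contact sensor reading."""
    return sensor_reading & 0x3FFFF  # keep only the low 18 bits

def unlock(device: str, sensor_reading: int) -> bool:
    token = read_body_signal(sensor_reading)
    if token == REGISTERED_TOKEN:
        print(f"{device}: unlocked for registered wearer")
        return True
    print(f"{device}: no valid token detected")
    return False

unlock("phone", REGISTERED_TOKEN)         # unlocked
unlock("car door", secrets.randbits(18))  # almost certainly rejected
```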
Dugan added that the chip had already been FDA approved and could be taken 30 times a day for someone’s entire life without affecting their health, a seemingly dubious claim.
Would you swallow a Google microchip every day simply to access your cellphone?
Privacy advocates will wince at the thought, especially given Dugan’s former role as head of DARPA, the Pentagon agency that many see as being at the top of the pyramid when it comes to the Big Brother technocracy.
Indeed, when host Walt Mossberg asked Dugan, “Does Google now know everything I do and everywhere I go because let’s face it….you’re from Google,” she responded by laughing and saying Mossberg should just swallow the pill.
In addition to the edible microchip, Motorola is also working on a wearable e-tattoo that could also read a user’s mind by detecting the unvocalized words in their throat.
“It has been known for decades that when you speak to yourself in your inner voice, your brain still sends neural spike volleys to your vocal apparatus, in a similar fashion to when you actually speak aloud,” explains Extreme Tech’s John Hewitt, noting that the device could allow covert voice activation as well as being used to detect stress and emotion (because Big Brother cares about your feelings).
During the D11 conference, Dugan predicted that if the e-tattoo was made to look cool with different artistic designs, young people would want to have it fused to their skin, “if only to piss off their parents.”
The edible microchip and the wearable e-tattoo are prime examples of how transhumanism is being made “trendy” in an effort to convince the next generation to completely sacrifice whatever privacy they have left in the name of faux rebellion (which is actually cultural conformism) and convenience.
Dugan is departing after just 18 months to “lead a new endeavor.”
Regina Dugan had been working with Facebook since 2016, where she headed the “Building 8” division. The article refers to this project as FACEBOOK’s “secretive brain-computer interface” division and calls it a “controversial project about which the Media Giant has revealed few details.” At its unveiling, Dugan referred to it as Building 8’s mind-reading project, named Silent Voice First. She says their goal is to use optical imaging to scan our brains and read the silent words we speak to ourselves. WHAT? Talk about invasion of PRIVACY!
She says it will enable us to control computers and virtual reality experiences. I say, with the way AI is developing, it will only give the AI more control over us! Give them more details they can use to deceive and enslave us.
She stated that she was leaving Facebook to build and lead a new endeavor, working with LEADERSHIP to ensure Building 8’s smooth transition in 2018. So you can be sure they are already using this technology.
The article goes on to say that Facebook is not the only entity working on a brain-machine interface. DARPA is also pursuing its plans for a neural connection, which it states will open THE CHANNEL between brain and electronics.
Bill Gates’ New Population Control Microchip Due for Launch in 2018
By Jay Greenberg, Neon Nettle
Multi-billionaire Bill Gates has developed a new microchip, along with researchers at MIT, that will allow for adjustments to be made to a person’s hormone levels via remote control, in a bid to reduce the planet’s population.
The Bill and Melinda Gates Foundation has been working in conjunction with a small Massachusetts startup to develop the “digital pill” that will enable women’s fertility to be switched on or off, remotely, with the touch of a button.
The new “digital version of the contraceptive” pill will be tested in Africa this year where the Bill and Melinda Gates Foundation has spent years developing vaccination and family planning programs.
Following testing, the microchips are due to be rolled out globally in 2018 with “every woman in America” replacing their regular contraceptive pill with the new remote-controlled chips, according to Gates.
According to the Guardian, the chip is implanted under the skin and releases small doses of the contraceptive hormone levonorgestrel on a daily basis, with enough capacity to last 16 years.
About the same size as a Scrabble tile, it houses a series of micro-reservoirs covered by an ultra-thin titanium and platinum seal. The hormone is released by passing a small electric current from an internal battery through the seal, which melts it temporarily, allowing a 30 microgram dose of levonorgestrel to seep out each day. And it can be simply switched off by a wireless remote, avoiding the clinical procedures needed to deactivate other contraceptive implants.
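As a quick sanity check on those figures (taking the reported 30 microgram daily dose and 16-year capacity at face value), the total hormone payload the Scrabble-tile-sized chip would have to carry works out to roughly 175 milligrams:

```python
# Back-of-the-envelope arithmetic using only the numbers quoted above.
daily_dose_ug = 30           # micrograms of levonorgestrel per day, as reported
years = 16                   # reported reservoir lifetime
total_ug = daily_dose_ug * 365 * years
print(f"{total_ug:,} micrograms, or about {total_ug / 1000:.0f} mg over {years} years")
# 175,200 micrograms, or about 175 mg over 16 years
```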
“The ability to turn the device on and off provides a certain convenience factor for those who are planning their family,” says MIT’s Dr. Robert Farra, adding that “the idea of using a thin membrane like an electric fuse was the most challenging and the most creative problem we had to solve.”
But just as hackers can spoof wifi remotes to operate neighbors’ garage doors and flip their TV channels, could a remote-controlled contraceptive open the floodgates for a new form of ovarian hacking?
“Someone across the room cannot reprogramme your implant,” says Farra.
“Communication with the implant has to occur at skin contact-level distance. Then we have secure encryption. That prevents someone from trying to interpret or intervene between the communications.”
The idea for micro-dispensing chips was first developed in the 1990s by Professor Robert Langer at MIT, the founder of innumerable biotech companies and holder of more than 800 patents, known in the industry as “the most cited engineer in history”. His lab caught the attention of Bill Gates in 2012, during his search for a revolution in birth control (which has already spawned plans for a graphene condom), and Langer subsequently leased the technology to Microchips, a company already working on a micro-dosing implant for osteoporosis.
Langer says that the implant will be available by 2018, once the coming trials are complete, and that the device will be “competitively priced” in a bid to ensure it replaces conventional contraception.
Bill Gates Admits Vaccines Are Used for Human Depopulation!
Published on May 11, 2011
Bill Gates recently caused controversy after he spoke out about the immigration crisis in Europe saying that the continent will be “devastated by African refugees” unless severe and immediate action is taken to reduce the population in Africa.
This has left many questioning Gates’ motives behind his vaccination programs, after the Bill and Melinda Gates Foundation had previously been accused by doctors in Kenya of secretly sterilizing millions of women in Africa when abortion drugs were discovered in tetanus vaccines.
The program, which is funded by Bill Gates, has been accused of conducting a mass depopulation experiment on the people of Kenya without their consent.
Read more at: http://www.neonnettle.com/features/1046-bill-gates-new-population-control-microchip-due-for-launch-in-2018
© Neon Nettle
Humans, Gods and Technology – VPRO Documentary – 2017
Yuval Harari – Historian
“If you have enough Data and Computing power, you can understand a person better than they understand themselves. Then you can control them, manipulate them and make decisions for them. You don’t know yourself, but Facebook, Google and the Chinese Government know you better than you know yourself.”
“What is the Authority that answers your big questions in life…GOOGLE!” YOU MUST WATCH THIS WHOLE VIDEO.
“The new Algorithmic Religion, which will tell people the source of Authority is the big data Algorithm.”
I don’t know what planet this guy lives on, but technology has not solved all the world’s problems. People are dying, and so are animals and plants, and it is not due to global warming. It is due to technology, insane mad scientists and AI. It is about GOD, and HE is the Authority, whether you like it or not, whether you believe it or not. GOD is in control and HE is the one who makes the rules. He is laughing at these buffoons who think that they are smart enough to be GOD. What a joke. Thank GOD that they are not in control.
PLEASE CONTINUE: Part 2 of THE FALLEN in THE NET