A trailer (see below) was recently released for an upcoming movie about the technological singularity – the hypothetical point when AI begins to vastly exceed human intelligence. Premiering in theaters in April, ‘Transcendence’ will portray what might occur during the early stages of the singularity. While technology still has a long way to go before this becomes reality, the movie will likely give us at least one vision of what life on our planet might be like once the singularity occurs.
With the production of portable digital computers at an all-time high, intracranial nanowires are looking like a realistic emerging technology. Because the brain’s cortical plasticity is so high, it can accept neuroprosthetic devices as though they were natural sensors. The implant might come with an uncomfortable adaptation period, but once a patient works through it, they should be otherwise fine.
Various mind-machine interface devices have been used in experiments in the pursuit of giving sight to the blind. Other units intended to aid the disabled have been worked on since at least the 1970s. However, most devices aren’t designed to carry any additional signals. Researchers are now looking at the possibility of electroencephalographic equipment that can deliver informational and instructional content to users.
Recent commercial experiments have largely focused on using heads-up displays in goggles to beam images directly onto a user’s retina. This is not much different from the way that some virtual reality headsets work. Intracranial nanowires, by contrast, interface directly with a user’s neural tissue, meaning that they supplant the natural senses rather than overlaying them.
Like many emerging technologies, intracranial nanowires have been a long time coming. The groundwork for brain-computer interfaces was laid in the 1920s: Hans Berger developed electroencephalography, and by 1924 he had become the first individual to record human brain activity. He identified oscillatory activity in the brain, and the 8-12 Hz alpha wave is still sometimes referred to as Berger’s wave.
He was able to insert silver wires under the scalps of test subjects. Eventually he started to use rubber bands to attach silver foil plates to the heads of patients. While he was never able to directly influence human thought patterns, his research was vital in the diagnosis of certain brain disorders. Researchers are now rediscovering his work and learning new ways to interface neural activity with computer devices.
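Berger’s alpha wave is still easy to spot in modern recordings. As a rough illustration (the function and the synthetic signal below are invented purely for this example), the fraction of EEG power falling in the 8-12 Hz band can be estimated with a Fourier transform:

```python
import numpy as np

def alpha_band_power_ratio(signal, fs):
    """Fraction of total spectral power in the 8-12 Hz alpha band.

    signal: 1-D array of EEG voltage samples; fs: sampling rate in Hz.
    A minimal sketch -- real EEG pipelines also filter, remove
    artifacts, and average across many epochs.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].sum() / spectrum.sum()

# Synthetic 4-second trace: a 10 Hz "alpha" rhythm plus noise.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
ratio = alpha_band_power_ratio(eeg, fs)  # most power sits in-band
```

On a clean oscillation like this the ratio approaches 1; on real scalp recordings it rises and falls with the subject’s state, which is exactly what made the rhythm diagnostically useful.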
Motor neuroprosthetics is a rapidly emerging field that uses brain interfaces coupled with highly portable computers to restore motion to paralyzed limbs. Electronic neural networks can take the place of organic neurons in some patients. Neuroprosthetic units can also help patients control robotic replacement limbs merely by thinking about it. Most current models use clunky physical controls instead of a neural interface, so this would be a huge step forward.
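The decoding step in such devices can be sketched in miniature. The toy model below simulates everything (the numbers and tuning model are invented for illustration, not any particular device’s algorithm): it fits a linear map from neural firing rates to intended 2-D limb velocity, in the spirit of classic population decoding:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 20 neurons whose firing rates are linearly tuned to an
# intended 2-D velocity, plus measurement noise.
n_neurons, n_samples = 20, 500
tuning = rng.standard_normal((n_neurons, 2))       # per-neuron preferences
velocity = rng.standard_normal((n_samples, 2))     # intended velocities
rates = velocity @ tuning.T + 0.1 * rng.standard_normal((n_samples, n_neurons))

# Least-squares decoder: find W so that rates @ W approximates velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ W

mse = float(np.mean((decoded - velocity) ** 2))  # small residual error
```

Real systems face nonstationary signals, electrode drift, and closed-loop feedback, but the core idea – learn a mapping from recorded activity to intended movement – is the same.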
Neurogaming might be a bit more interesting to those without a flair for medicine. Joysticks and controllers are starting to give way to motion capture units in some modern gaming consoles. By using brain interfaces coupled with intracranial nanowires, gamers could feed a virtual reality environment directly to their auditory and visual receptors. This would make them truly feel as though they were in another environment.
Some experiments with motion controllers wired to receive neural impulses have already been conducted, but few test subjects seem willing to actually take the plunge and receive neurological implants. Nevertheless, a new breed of sentient being that is neither fully organic nor machine might be coming in the decades ahead. Neural implants could potentially make it possible for people to live in cyberspace, as though it were a physical place. Could this be the next state of human evolution?
Part of the reason why fierce debates rage around the origins of man – in the conflicts between Creationism and Darwinism that we see within many schools, for example – is because our beliefs about where we came from can strongly influence our sense of identity and our feelings of self-worth. It’s impossible to separate our self-image from our life philosophies in that regard. The stories we cling to will paint our inner pictures of who we are, where we come from and what our race can achieve.
Unfortunately, the stories that we’ve inherited in our culture paint a fairly unflattering picture that does little to inspire us to discover and express our true potential in this world.
Science spins its own version of reality. If you believe that the sky is blue because of the chemical composition of the gases that exist up there, and the way that light refracts off of them, then that’s all you’ll ever see. You won’t perceive the unfathomable mystery of it all. What is the true nature of light, or gases, or the color blue? Questions like these are beyond our ken. The theory of evolution teaches us that it’s useless to ask such questions anyhow, though. This theory, which forms the backbone of so much scientific thought and of our very definitions of humanity, maintains that matter came first and consciousness emerged later – almost as an afterthought; and certainly by accident.
What if the mind formed matter? What if consciousness preceded everything else, and created form? Our scientific indoctrination has convinced us that reality works the other way around, but we’ve been offered little actual proof of this. What is obvious, however, is that the belief that consciousness always comes first would do much more to uphold the beauty, grace and potential of our natures than does the belief that our existence was the random result of accidental evolution.
We would do well to adopt stories that inspire us and offer us a new vision of what humanity can aspire to. When trying to grasp the nature of our reality as human beings, and drawing upon the resources that civilization offers us, we’ve thus far been essentially left with a choice between atonement (the predominant religious thinking of the West), accepting that the world we exist in is illusory (the predominant religious thinking of the East), or the theory of evolution. Typically, we are never taught or encouraged to believe that we are, ourselves, divine.
None of the arguments that uphold a notion of a barren and sterile universe can hold water. Most children know better than to believe in those wet-blanket descriptions of reality. Sadly, though, they eventually learn to accept them. How could they not, when our cultural beliefs make their survival virtually dependent upon it?
Love has to come from somewhere. But within the world’s established religions, love always has its conditions; and within the world of science, love can be explained away in terms of neurological transmissions and chemical interactions. It seems that our race, by and large, is willing to accept practically any belief except for one that maintains that what we are is something miraculous.
Most scientists or religious scholars would dispute that we are miraculous by virtue of being conscious beings. Could it be that consciousness came first; that we did not become humans by accident? What if consciousness created our world in order to express all that it is, and to become better acquainted with itself? If this is true, how might it change the idea that consciousness will arise in machines once we’ve reverse-engineered the brain?
One of the most talked-about subjects in robotics today is the uncanny valley hypothesis. Robots in relationships with humans have become a cliché of speculative fiction, but the hypothesis itself states that there is a dip in the graph of human comfort levels as machines come to look almost, but not quite, like people. Devices that are disturbingly close to organic life forms often repulse human observers. However, the emotional response becomes far more positive as the machine comes even closer to humanity.
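Mori’s original figure is a qualitative sketch rather than a fitted model, but its shape is easy to caricature. The toy function below uses invented constants purely for illustration: comfort rises with human likeness, dips sharply just short of full likeness, then recovers:

```python
import math

def affinity(likeness):
    """Toy uncanny-valley curve: likeness in [0, 1] -> comfort score.

    A steady rise with likeness, minus a Gaussian "valley" centered
    near likeness = 0.8. The constants are illustrative, not measured.
    """
    valley = 0.9 * math.exp(-((likeness - 0.8) ** 2) / 0.005)
    return likeness - valley

samples = [affinity(x / 100) for x in range(101)]
# The curve dips below zero near the valley, then peaks at full likeness.
```

The qualitative point survives any choice of constants: an almost-human machine can score *worse* than an obviously mechanical one.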
The term comes from a robotics professor named Masahiro Mori, who referred to the idea as Bukimi no Tani Gensho. The hypothesis has been linked to the much earlier essay “On the Psychology of the Uncanny,” published by Ernst Jentsch in 1906. Even Sigmund Freud’s 1919 essay “Das Unheimliche” has been linked with the idea that humans are repulsed by devices that are too close to humans.
Several Japanese and Korean companies have built robots that are eerily close to their creators. People are often unsettled when they view images of these androids. Overcoming the uncanny valley opens up a new can of worms. A society in which people are indistinguishable from machinery would be filled with ethical quandaries.
Artificial eyes are a common theme in science fiction. A certain television character from the early 1990s made the idea popular. While there have been a few prototypes in the real world, mechanical ocular implants aren’t regular medical devices just yet. When they come out, however, they will be welcome additions to many ophthalmology programs.
Presbyopia comes from Greek, and the term means “old eyes.” As adults age, they slowly start to lose their near vision. In a normal eye, small muscles bend the clear lens at the front of the eye to bring objects into focus. As people get older, these muscles and the stiffening lens don’t work quite as well. Patients with these problems are prime candidates for future treatments.
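The optics here are simple enough to put numbers on. Under the thin-lens approximation, focusing on an object a distance d away requires an extra 1/d diopters of lens power beyond distance vision, and that extra power is exactly what the aging eye loses (the example distances below are typical textbook figures, not clinical data):

```python
def accommodation_diopters(near_point_m):
    """Extra optical power (diopters) needed to focus at near_point_m,
    relative to focusing at infinity, under the thin-lens approximation."""
    return 1.0 / near_point_m

# A young eye can focus as close as ~10 cm; a presbyopic eye may
# manage only ~50 cm. Reading glasses supply the missing difference.
young = accommodation_diopters(0.10)   # ~10 diopters of accommodation
older = accommodation_diopters(0.50)   # ~2 diopters remaining
```

Any mechanical ocular implant would need to reproduce this variable focusing power on demand, which is part of what makes the engineering hard.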
Patients with presbyopia are usually given eyeglasses, but many people don’t wish to wear them; they would rather squint. Mechanical eyes would make such social stigmas a thing of the past, but ethical guidelines stand in the way of medical science once again. People could very easily interface infrared cameras with their brains. While some people might develop the harmless “x-ray vision” promised in countless old comic book ads, others might use their newfound mechanical powers to collect sensitive information. Privacy issues are already coming to a head today, so one can imagine that cyberization will only continue to shine the spotlight on privacy in the years ahead.
While there have been a number of compelling essays on gender equality, physical differences continue to define human beings. Cyborg life forms would be without these constraints. Certain feminist authors have actually focused on this as a radical way to achieve gender equality, though other commentators have held less optimistic views.
Computer culture is probably a good indication of the way that society is moving as it approaches a technological singularity. Even though Internet technology has continued to shape the way people think, it seems that people are still slaves to their genetic code. Role-play characters are an excellent example of this.
Gamer culture focuses heavily on the ability to take on the roles of different characters. While players can create any sort of being that they wish to in a freeform role-play, many of them seem to be uncomfortable playing characters that completely lack gender roles. Neuter characters must seem alien to the human psyche.
While it is a primitive desire, humans are currently still required to reproduce to survive as a species. Cyborg life is somewhat unsettling in this respect. Few people are ready to throw away their ability to reproduce organically, for instance. Mass production is frowned upon in many circles, and it might be difficult to find people who would like to apply that technique to humanity as a whole.
As people continue to struggle with problems involving organ donation, a few robotic engineers continue to push the boundaries between humanity and machinery. A recent report in Nature (cited below) showed that two patients were able to overcome some aspects of their paralysis by way of an implant. Reaching and grabbing motions were possible by way of a carefully designed robotic arm. One individual involved in the study was able to enjoy a drink by herself. She didn’t seem to require assistance outside of the prosthetic limb.
The Associated Press, Wall Street Journal, NPR (audio interview below) and other media outlets and blogs reported on this last week, but I wanted to mull over the implications before posting about it here. Here are my initial thoughts.
NPR Interview: Reporting in Nature, researchers write that two individuals, both paralyzed by stroke, made reach-and-grasp movements using a thought-controlled robotic arm. One participant was even able to sip a drink by herself. Neuroengineer Dr. Leigh Hochberg discusses the paper and the ongoing trial.
The Potential Upside:
This provides real hope for stroke victims who have lost the use of their limbs. Everyone wants to be independent. This is simply a fact of life. For better or worse, there are a number of serious ethical and philosophical questions that come with robotic implants and organ donation. However, one might suggest that programs such as these are far less concerning.
They don’t interfere with any concepts of what constitutes life. Whatever futurists might suggest about the path of humanity after total industrialization, it isn’t hard to see robotic limbs used purely for medical purposes as a positive, and most people probably wouldn’t put much thought into their implications.
However, patients are surely glad to regain this feeling of independence. While one might be able to receive an organic heart, grafting a donor limb onto a different body is no simple matter. In fact, a transplanted limb might come with far more complicated ethical questions than a mechanical one ever would.
The Potential Downside:
I’m hesitant to even mention these but there were a few things that came to mind once I got over the initial “cool” factor.
How might the military use this technology in the future? What is the potential that it might be hacked to control humans (that’s what led to Tuesday’s post)? These are the two immediate issues that concern me.
Then of course there is the problem scientists have with understanding how the brain actually works. It still surprises them on a regular basis, and we have a long, long way to go in this regard. The other issue is overcoming the human body’s reaction to invasive brain implants. As with the advancement above, the researchers are trying to mitigate negative reactions via the use of bio-friendly materials (as opposed to gold/silicon). Much research is being done on both invasive and non-invasive interfaces, but the results still leave much to be desired. The problem isn’t necessarily with the research itself but rather the fragmentation that is occurring within the scientific community here. There is more than one way to “skin a cat” when it comes to replacement limbs and brain interfaces. For example, work is being done on making replacement limbs respond to direct neural stimuli, while similar work is being done using robotics, as we’ve seen above.
This is a step in the right direction. The start of any ground-breaking science is going to be rough-going early on but progress is almost always a good thing. I have high hopes for this and believe we’re moving in the right direction. I don’t want to take away from this important research at all. As I’ve written on here frequently, I believe this is just another step towards the next phase of human evolution. As always, feel free to share your thoughts below.
Hochberg, L., Bacher, D., Jarosiewicz, B., Masse, N., Simeral, J., Vogel, J., Haddadin, S., Liu, J., Cash, S., van der Smagt, P., & Donoghue, J. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372–375. DOI: 10.1038/nature11076
Often when I write or speak about synthetic biology or the future merging of humans and technology at the biological level, one of the primary concerns expressed by others involves the possibility of virus corruption. The fear of hackers who engineer viruses or bacteria to control humans may be a valid concern. What do you think? Is mind hacking a real threat to humans down the road once we’re “plugged in”?
A Look Back at the Alabama Virus
Computer viruses are never a positive thing. They’re malicious by design, and there has been a great deal of debate over them. However, the Alabama virus can be used as an interesting thought experiment. Considering that it infected computers back in October 1989, using it as a point of reference is probably innocuous.
In its day, the Alabama virus infected executable DOS files. It was loaded into memory when a user executed an infected program. Infected programs grew by 1,560 bytes. Each Friday, the virus started to manipulate the file allocation tables in order to ensure that infected files were run preferentially over uninfected ones. This process was dangerous, and caused people to lose files.
Interestingly enough, it had an anti-piracy message. After staying in memory for an hour, the virus would tell the user that software copies were forbidden under international law. It then displayed a PO box address located in Tuscambia, Alabama. Tuscambia actually doesn’t exist. Since the virus was probably developed in Israel, the author may have confused the spelling of Tuscumbia.
Additional infections were carried out by carefully inspecting the directory to note which files were clean. The virus attacked the program being run if there were no further files to infect. Considering this selective nature, the virus program almost seemed like a living thing. It was apparently supposed to impart a moral lesson and act on behalf of its creator. In that respect, it almost seems like the living arm of the individual who programmed it.
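That selective behavior is easy to model as a thought experiment. The sketch below reconstructs the selection logic described above entirely in memory – no real files are touched, the file names are invented, and only the 1,560-byte growth figure comes from the reports:

```python
INFECTION_GROWTH = 1560  # bytes added to an infected program

def infect_step(directory, running_program):
    """One infection pass over an in-memory 'directory'.

    directory maps filename -> {"size": int, "infected": bool}.
    Prefer a clean file other than the running program; fall back
    to attacking the running program when nothing clean remains.
    """
    clean = [name for name, meta in directory.items()
             if not meta["infected"] and name != running_program]
    target = clean[0] if clean else running_program
    directory[target]["size"] += INFECTION_GROWTH
    directory[target]["infected"] = True
    return target

files = {
    "GAME.EXE": {"size": 40_000, "infected": False},
    "EDIT.EXE": {"size": 25_000, "infected": False},
    "HOST.EXE": {"size": 30_000, "infected": True},   # the running carrier
}
first = infect_step(files, "HOST.EXE")    # a clean file is hit first
second = infect_step(files, "HOST.EXE")   # then the next clean one
last = infect_step(files, "HOST.EXE")     # nothing clean left: attack itself
```

A dozen lines of conditional logic is all it takes to produce the “almost living” selectivity described above, which says something about how low the bar for lifelike behavior really is.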
While it would certainly be foolish to call it a completely independently acting piece of artificial intelligence, the Alabama virus does have some aspects that make it resemble a living thing. It also may represent the dangers of letting computers act in a totally autonomous fashion.
So do we need to worry about viruses/hacking used to control robots and/or humans in the future? What are your thoughts?
Gaining access to the inner workings of a neuron in the living brain offers a wealth of useful information: its patterns of electrical activity, its shape, even a profile of which genes are turned on at a given moment. However, achieving this entry is such a painstaking task that it is considered an art form; it is so difficult to learn that only a small number of labs in the world practice it.
Future humans might very well be a variety of organic-machine hybrid. But what about right now? Are we witnessing the emergence of cyborg life within our society today? Consider that artificial hearts and prosthetic limbs are probably the best-known types of machinery intended to be grafted onto or within human flesh. Such artificial designs are meant to fulfill a medical need; they are used when human organs have failed or tissue has been amputated. Is this not a form of cybernetic integration? One would not usually receive a prosthetic for any recreational reason today. However, I can see a future where this may be commonplace (or even trendy), with prosthetic devices or artificial organs enhancing human performance and/or prolonging life. Given that we’re headed into the weekend, I thought this might be a good topic to wrap up the week.