Caption: Perseus galaxy cluster. [D. W. Hogg/M. Blanton/SDSS Collaboration].
Four previously unknown galaxy clusters, each potentially containing thousands of individual galaxies, have been discovered some 10 billion light years from Earth.
An international team of astronomers, led by Imperial College London, used a new way of combining data from two European Space Agency satellites, Planck and Herschel, to identify more distant galaxy clusters than was previously possible. The researchers believe up to 2,000 further clusters could be identified using this technique, helping to build a more detailed timeline of how clusters form.
Share the post "Researchers Discover Four New Galaxy Clusters"
Caption: Jamie Tyler, Assistant Professor, Virginia Tech Carilion Research Institute (VTCRI).
Virginia Tech Carilion Research Institute scientists have demonstrated that ultrasound directed to a specific region of the brain can boost performance in sensory discrimination.
Whales, bats, and even praying mantises use ultrasound as a sensory guidance system — and now a new study has found that ultrasound can modulate brain activity to heighten sensory perception in humans.
Share the post "Using Ultrasound to Boost Brain Performance"
Dipole-mediated energy transport of Rydberg excitations (glowing balls) in an atomic sea – artist’s impression. Picture credits: S. Whitlock / G. Günter
Physicists discover new properties of energy transport in experiments on “atomic giants”
By realizing an artificial quantum system, physicists at Heidelberg University have simulated key processes of photosynthesis on a quantum level with high spatial and temporal resolution. In their experiment with Rydberg atoms the team of Prof. Dr. Matthias Weidemüller and Dr. Shannon Whitlock discovered new properties of energy transport. This work is an important step towards answering the question of how quantum physics can contribute to the efficiency of energy conversion in synthetic systems, for example in photovoltaics. The new discoveries, which were made at the Center for Quantum Dynamics and the Institute for Physics of Heidelberg University, have now been published in the journal “Science” [citation below].
In their research, Prof. Weidemüller and his team begin with the question of how the energy of light can be efficiently collected and converted elsewhere into a different form, e.g. into chemical or electric energy. Nature has found an especially efficient way to accomplish this in photosynthesis. Light energy is initially absorbed in light-harvesting complexes – an array of membrane proteins – and then transported to a molecular reaction centre by means of structures called nanoantennae; in the reaction centre the light is subsequently transformed into chemical energy. “This process is nearly 100 per cent efficient. Despite intensive research we’re still at a loss to understand which mechanisms are responsible for this surprisingly high efficiency,” says Prof. Weidemüller. Based on the latest research, scientists assume that quantum effects like entanglement, where spatially separated objects influence one another, play an important role.
In their experiments the researchers used a gas of atoms cooled to a temperature near absolute zero. Some of the atoms were excited with laser light into highly excited electronic states. The outer electron of these “atomic giants”, which are called Rydberg atoms, is separated from the atomic nucleus by nearly macroscopic distances of almost a hair’s breadth. These atoms therefore present an ideal system for studying phenomena at the transition between the macroscopic, classical world and the microscopic quantum realm. Similar to the light-harvesting complexes of photosynthesis, energy is transported from Rydberg atom to Rydberg atom, with each atom transmitting its energy packets to surrounding atoms, much like a radio transmitter.
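The “giant” scale of these atoms follows from elementary quantum mechanics: the orbital radius of a hydrogen-like atom grows with the square of the principal quantum number n. A minimal back-of-envelope sketch (the value n = 50 is an illustrative assumption, not a figure taken from the study):

```python
# Why Rydberg atoms are "atomic giants": the orbital radius of a
# hydrogen-like atom scales as a0 * n^2, where a0 is the Bohr radius
# and n the principal quantum number. n = 50 below is an illustrative
# assumption; the study's exact states are not given in the text above.
A0_NM = 0.0529  # Bohr radius in nanometres

def rydberg_radius_nm(n):
    """Approximate orbital radius a0 * n^2 for principal quantum number n."""
    return A0_NM * n ** 2

print(rydberg_radius_nm(1))    # ~0.05 nm: a ground-state hydrogen atom
print(rydberg_radius_nm(50))   # ~132 nm: thousands of times larger
```

Even a modest n of around 50 inflates the atom by a factor of 2,500 in radius, which is what makes the dipole-mediated interactions between neighbouring Rydberg atoms so strong and long-ranged.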
“To be able to observe the energy transport we first had to find a way to image the Rydberg atoms. At the time it was impossible to detect these atoms using a microscope,” explains Georg Günter, a doctoral student in Prof. Weidemüller’s team. A trick from quantum optics ensured that up to 50 atoms within a characteristic radius around a Rydberg atom were able to absorb laser light. In this way each Rydberg atom creates a tiny shadow in the microscope image, allowing the scientists to measure the positions of the Rydberg atoms.
The fact that this technique would also facilitate the observation of energy transport came as a surprise, as PhD student Hanna Schempp emphasizes. However, the investigations with the “atomic giants” showed how the Rydberg excitations, which are immersed in a sea of atoms, diffused from their original positions to their atomic neighbors, similar to the spreading of ink in water. Aided by a mathematical model the team of Prof. Weidemüller showed that the atomic sea crucially influences the energy transport from Rydberg atom to Rydberg atom.
“Now we are in a good position to control the quantum system and to study the transition from diffusive transport to coherent quantum transport. In this special form of energy transport the energy is not localized to one atom but is distributed over many atoms at the same time,” explains Prof. Weidemüller. As with the light-harvesting complexes of photosynthesis, one central question will be how the environment of the nano-antennae influences the efficiency of energy transport and whether this efficiency can be enhanced by exploiting quantum effects. “In this way we hope to gain new insights into how the transformation of energy can be optimized in other synthetic systems as well, like those used in photovoltaics,” the Heidelberg physicist points out.
G. Günter, H. Schempp, M. Robert-de-Saint-Vincent, V. Gavryusev, S. Helmrich, C.S. Hofmann, S. Whitlock, & M. Weidemüller (2013). Observing the Dynamics of Dipole-Mediated Energy Transport by Interaction Enhanced Imaging. Science Express. DOI: 10.1126/science.1244843
Share the post "Photosynthesis Simulated on the Quantum Level"
Christoph Deutsch, Martin Brandstetter and Michael Krall in the cleanroom at TU Vienna.
Whether used in diagnostic imaging, analysis of unknown substances or ultrafast communication – terahertz radiation sources are becoming more and more important, and a recent breakthrough in this area has been made at the Vienna University of Technology [citations below]. Terahertz waves are invisible but incredibly useful: they can penetrate many materials that are opaque to visible light, and they are perfect for detecting a variety of molecules. Terahertz radiation can be produced using tiny quantum cascade lasers, only a few millimetres wide. This special kind of laser consists of tailor-made semiconductor layers on a nanometre scale. At the Vienna University of Technology (TU Vienna) a new world record has now been set: using a special merging technique, two symmetrical laser structures have been joined together, quadrupling the intensity of the laser light.
Jumping Electrons Create Terahertz Light
The newly developed quantum cascade laser (QCL) at the Vienna University of Technology
For the electrons in each layer of the quantum cascade laser, only certain discrete energy levels are allowed. If the right electrical current is applied, the electrons jump from layer to layer, in each step emitting energy in the form of light. This way, the exotic terahertz radiation with wavelengths in the sub-millimetre regime (between microwaves and infrared) can be produced with high efficiency.
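As a rough sanity check on those numbers, one can convert a terahertz frequency into a free-space wavelength and photon energy. The 3 THz value below is an illustrative assumption, not a figure from the TU Vienna device:

```python
# Back-of-envelope conversion for terahertz radiation. The 3 THz input is
# an assumed, typical value for THz quantum cascade lasers, not a quoted
# specification of the laser described above.
C = 299_792_458.0        # speed of light, m/s
H = 6.62607015e-34       # Planck constant, J*s
EV = 1.602176634e-19     # joules per electronvolt

def thz_wavelength_mm(freq_thz):
    """Free-space wavelength in millimetres for a frequency given in THz."""
    return C / (freq_thz * 1e12) * 1e3

def thz_photon_energy_mev(freq_thz):
    """Photon energy in milli-electronvolts for a frequency given in THz."""
    return H * freq_thz * 1e12 / EV * 1e3

print(thz_wavelength_mm(3.0))       # ~0.1 mm: sub-millimetre, between microwaves and infrared
print(thz_photon_energy_mev(3.0))   # ~12 meV: far below ionizing (X-ray) photon energies
```

The two outputs line up with the claims in the surrounding text: the wavelength sits in the sub-millimetre regime, and the photon energy is orders of magnitude below ionizing radiation.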
Many molecules absorb light in this spectral region in a very characteristic way – they can be said to have an “optical fingerprint”. Because of this, terahertz radiation can be used in chemical detectors. It also plays an important role in medical imaging: on the one hand, it is non-ionizing radiation whose energy is considerably lower than that of X-rays, so it is not dangerous. On the other hand, its wavelength is shorter than that of microwave radiation, which means it can be used to create higher-resolution images.
Two lasers are connected, creating a larger and much more efficient one.
These applications may bring back memories of the legendary “Tricorder” from Star Trek, a portable multi-purpose analytical instrument. For measuring objects at a distance and for medical imaging, compact light sources with a very high optical power are required.
A possible way to increase the laser power is to use more semiconductor layers. With more layers, the electron changes its energy state more often as it passes through the structure, and the number of emitted photons increases accordingly. The production of such multi-layer structures, however, is extremely difficult. Prof. Karl Unterrainer’s team at the Institute of Photonics at the Vienna University of Technology has now succeeded in merging two separate quantum cascade lasers in a so-called bonding process.
“This only works for a very specific design of the quantum cascade structure”, says Christoph Deutsch (TU Vienna). “With standard quantum cascade lasers, this would definitely be impossible.” Symmetrical lasers are required, through which electrons can pass in both directions. The team had to study and compensate for the asymmetries that usually arise in the laser.
The World Record Laser
The higher the number of layers, the more photons are produced. In addition, the efficiency is increased due to improved optical properties. “This is why doubling the number of layers eventually leads to quadruple power”, explains Martin Brandstetter (TU Vienna). The previous world record for terahertz quantum cascade lasers, almost 250 milliwatts, was held by the Massachusetts Institute of Technology (MIT). The laser at TU Vienna now produces one watt of radiation. This is not just another record for TU Vienna; breaking the one-watt barrier is considered an important step for the application of terahertz lasers in a variety of technological fields.
Genome-based identification of drugs (Image: University of Basel)
Through analysis of the human genome, Basel scientists have identified molecules and compounds that are related to human memory. In a subsequent pharmacological study with one of the identified compounds, the scientists found a drug-induced reduction of aversive memory. This could have implications for the treatment of post-traumatic stress disorder, which is characterized by intrusive traumatic memories. The findings have been published in the latest edition of the journal PNAS.
In the last decade, the human genome project has led to an unprecedented rate of discovery of genes related to human disease. However, so far it has not been clear to what extent this knowledge can be used for the identification of new drugs, especially in the field of neuropsychiatric disorders. The research groups of Prof. Andreas Papassotiropoulos and Prof. Dominique de Quervain of the Psychiatric University Clinics, the Department of Psychology and the Biozentrum of the University of Basel performed a multinational collaborative study in order to analyze the genetic basis of emotionally aversive memory – a trait central to anxiety disorders such as post-traumatic stress disorder. In a gene-set analysis the scientists identified 20 potential drug target genes that are involved in the process of remembering negative events.
Known Antihistamine shows Effect
In a double-blind, placebo-controlled study based on the results of the genetic analysis, the scientists examined a compound that interacts with one of the previously identified gene products. Surprisingly, the compound in question is a known antihistamine. A single dose of the drug led to a significant reduction in recall of previously seen aversive pictures; however, it did not affect memory of neutral or positive pictures. These findings could have implications for the treatment of post-traumatic stress disorder.
With this study, the scientists were for the first time able to demonstrate that human genome information can be used to identify substances that can modulate memory. “The rapid development of innovative methods for genetic analysis has made this new and promising approach possible”, says Papassotiropoulos. The scientists are now planning subsequent studies: “In a further step, we will try to identify and develop memory enhancing drugs”, explains de Quervain. The scientists hope to provide new input for the development of urgently needed improved drugs for the treatment of neuropsychiatric diseases.
Company for Clinical Applications
In order to bring their findings to clinical application, de Quervain and Papassotiropoulos founded the company GeneGuide Ltd. this year. The company specializes in the human genome-based research approach and the discovery of new drugs for neuropsychiatric diseases. This novel approach has been met with great interest by the pharmaceutical industry, since so far the development of improved neuropsychiatric drugs has been rather disappointing.
Even when neurons in the visual cortex are cut off from their main source of information, within 48 hours their activity returns to a level similar to that prior to the disruption. Under the microscope the currently active cells light up thanks to the addition of a calcium indicator. Credit: MPI of Neurobiology/Hübener
The brain is an extremely adaptable organ – but it is also quite conservative. That, in short, is what scientists from the Max Planck Institute of Neurobiology in Martinsried and their colleagues from the Friedrich Miescher Institute in Basel and the Ruhr-Universität Bochum have now been able to show. The researchers found that neurons in the brain regulate their own activity in such a way that the overall activity level in the network remains as constant as possible. This remains true even in the event of major changes: after the complete loss of information from a sensory organ, for example, the almost silenced neurons re-establish activity levels similar to their previous ones within only 48 hours. The mean activity level thus achieved is a basic prerequisite for a healthy brain and for the formation of new connections between neurons – an essential capacity for regeneration following injury to the brain or a sensory organ, for example.
Neurons communicate using electrical signals. They transmit these signals to neighboring cells via special contact points known as synapses. When new information needs processing, the nerve cells can develop new synaptic contacts with their neighboring cells or strengthen existing synapses. To be able to forget, these processes can also be reversed. The brain is consequently in a constant state of reorganization, yet individual neurons need to be prevented from becoming either too active or too inactive. The aim is to keep the level of activity constant, as the long-term overexcitement of neurons can result in damage to the brain.
Too little activity is not good either. “The cells can only re-establish connections with their neighbors when they are ‘awake’, so to speak, that is when they display a minimum level of activity”, explains Mark Hübener, head of the recently published study (citation below). The international team of researchers succeeded in demonstrating for the first time that the brain is able to compensate even massive changes in neuronal activity within a period of two days, and can return to an activity level similar to that before the change.
Up to now, only cell cultures gave an indication of this astonishing ability of the brain. It was also unclear how neurons could control their own activity in relation to the activity of the entire network. Now, the scientists have made significant progress towards answering this question. In their study, they examined the visual cortex of mice that had recently lost their sight. As expected, but never previously demonstrated, the activity of the neurons in this area of the brain did not fall to zero but to half of the original value. “That alone was an amazing finding, as it shows the extent to which the visual cortex also processes information from other areas of the brain,” explains Tobias Bonhoeffer, whose department at the Max Planck Institute of Neurobiology has been investigating processes in the visual cortex for many years. “However, things became really exciting when we continued to observe the area over the following hours and days.”
The scientists were able to witness “live” through the microscope how the neurons in the visual cortex became active again. After just a few hours, they could clearly observe how the contact points between the affected neurons and their neighboring cells increased in size. When synapses get bigger, they also become stronger and signals are transmitted faster and more effectively. As a result of this synaptic upscaling, the activity of the affected network returned to its starting value after a period of between 24 and 48 hours. “To put it simply, due to the absence of visual input, the cells had less to say – but when they did say something, they said it with particular emphasis,” explains Mark Hübener.
Due to the simultaneous strengthening of all of the synapses of the affected neurons, major reductions in the neuronal activity can be normalized again with surprising speed. The relatively stable activity level thereby achieved is an essential prerequisite for maintaining a healthy, adaptable brain.
Tara Keck, Georg B. Keller, R. Irene Jacobsen, Ulf T. Eysel, Tobias Bonhoeffer, & Mark Hübener (2013). Synaptic scaling and homeostatic plasticity in the mouse visual cortex in vivo. Neuron, 80 (2), 327-334. DOI: 10.1016/j.neuron.2013.08.018
Share the post "When Neurons Have Less to Say, They Speak Up"
The right supramarginal gyrus plays an important role in empathy
Egoism and narcissism appear to be on the rise in our society, while empathy is on the decline. And yet, the ability to put ourselves in other people’s shoes is extremely important for our coexistence. A research team headed by Tania Singer from the Max Planck Institute for Human Cognitive and Brain Sciences has discovered that our own feelings can distort our capacity for empathy. This emotionally driven egocentricity is recognised and corrected by the brain. When, however, the right supramarginal gyrus doesn’t function properly or when we have to make particularly quick decisions, our empathy is severely limited.
When assessing the world around us and our fellow humans, we use ourselves as a yardstick and tend to project our own emotional state onto others. While cognition research has already studied this phenomenon in detail, nothing is known about how it works on an emotional level. It was assumed that our own emotional state can distort our understanding of other people’s emotions, in particular if these are completely different to our own. But this emotional egocentricity had not been measured before now.
This is precisely what the Max Planck researchers have accomplished in a complex marathon of experiments and tests. They also discovered the area of the brain responsible for this function, which helps us to distinguish our own emotional state from that of other people. The area in question is the supramarginal gyrus, a convolution of the cerebral cortex located approximately at the junction of the parietal, temporal and frontal lobes. “This was unexpected, as we had the temporo-parietal junction in our sights. This is located more towards the front of the brain,” explains Claus Lamm, one of the publication’s authors.
On the empathy trail with toy slime and synthetic fur
Using a perception experiment, the researchers began by showing that our own feelings actually do influence our capacity for empathy, and that this egocentricity can also be measured. The participants, who worked in teams of two, were exposed to either pleasant or unpleasant simultaneous visual and tactile stimuli.
While participant 1, for example, could see a picture of maggots and feel slime with her hand, participant 2 saw a picture of a puppy and could feel soft, fleecy fur on her skin. “It was important to combine the two stimuli. Without the tactile stimulus, the participants would only have evaluated the situation ‘with their heads’ and their feelings would have been excluded,” explains Claus Lamm. The participants could also see the stimulus to which their team partners were exposed at the same time.
The two participants were then asked to evaluate either their own emotions or those of their partners. As long as both participants were exposed to the same type of positive or negative stimuli, they found it easy to assess their partner’s emotions. The participant who was confronted with a stinkbug could easily imagine how unpleasant the sight and feeling of a spider must be for her partner.
Differences only arose during the test runs in which one partner was confronted with pleasant stimuli and the other with unpleasant ones. Their capacity for empathy suddenly plummeted. The participants’ own emotions distorted their assessment of the other person’s feelings. The participants who were feeling good themselves assessed their partners’ negative experiences as less severe than they actually were. In contrast, those who had just had an unpleasant experience assessed their partners’ good experiences less positively.
Particularly quick decisions cause a decline in empathy
The researchers pinpointed the area of the brain responsible for this phenomenon with the help of functional magnetic resonance imaging, generally referred to as brain scanning. The right supramarginal gyrus ensures that we can decouple our perception of ourselves from that of others. When the neurons in this part of the brain were disrupted in the course of this task, the participants found it difficult not to project their own feelings onto others. The participants’ assessments were also less accurate when they were forced to make particularly quick decisions.
Up to now, the social neuroscience models have assumed that we mainly draw on our own emotions as a reference for empathy. This only works, however, if we are in a neutral state or the same state as our counterpart – otherwise, the brain must counteract and correct.
Giorgia Silani, Claus Lamm, Christian C. Ruff, & Tania Singer (2013). Right Supramarginal Gyrus Is Crucial to Overcome Emotional Egocentricity Bias in Social Judgements. The Journal of Neuroscience, 33(39), 15466-15476. DOI: 10.1523/JNEUROSCI.1488-13.2013
The first of these collisions was observed by A. Wesley from Australia and C. Go from the Philippines on June 3, 2010. The second object was observed by three Japanese amateur observers (M. Tachikawa, K. Aoki and M. Ichimaru) on August 20 that year, and a third collision was observed by G. Hall from the USA on September 10, 2012, after a report of a visual observation from D. Petersen from the USA. Credit: Hueso/Wesley/Go/Tachikawa/Aoki/Ichimaru/Petersen
The solar system is crowded with small objects like asteroids and comets. Most have stable orbits which keep them out of harm’s way, but a small proportion of them are in orbits that risk them colliding with planets.
The smaller the objects, the more numerous they are, and the more frequently these collisions should occur. Events like the meteor seen over Chelyabinsk, Russia, in February 2013 are rare because the object was relatively large, around 17 meters across. The giant planet Jupiter — a big target with tremendous gravitational attraction — gets hit far more often than the Earth, and these collisions are much faster, happening at a minimum speed of about 60 kilometers per second.
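That minimum speed is simply Jupiter's escape velocity, which any infalling body must have reached by the time it hits the cloud tops. A quick check with standard values for Jupiter's gravitational parameter and equatorial radius:

```python
import math

# Minimum impact speed at Jupiter = escape velocity v = sqrt(2*GM/R).
# GM and R below are standard reference values for Jupiter.
GM_JUPITER = 1.26687e17   # gravitational parameter, m^3/s^2
R_JUPITER = 7.1492e7      # equatorial radius, m

v_esc = math.sqrt(2 * GM_JUPITER / R_JUPITER)   # m/s
print(v_esc / 1000)   # ~59.5 km/s, consistent with the ~60 km/s minimum quoted above
```

For comparison, Earth's escape velocity is about 11.2 km/s, which is why impacts on Jupiter release far more energy for an object of the same size.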
Amateur astronomers observing Jupiter with video cameras have been able to observe three of these collisions in the last 3 years and a detailed report of these collisions has been presented at the European Planetary Science Congress at UCL this week by Ricardo Hueso (University of the Basque Country, Spain).
“Our analysis shows that Jupiter could be impacted by objects around 10 meters across between 12 and 60 times per year,” Hueso says. “That is around 100 times more often than the Earth.”
The study, a broad collaboration between professional and amateur astronomers, also includes detailed simulations of objects entering Jupiter’s atmosphere and disintegrating at temperatures above 10,000 °C, as well as observations of the impact area, taken only tens of hours after the impact, from telescopes such as the Hubble Space Telescope and the Very Large Telescope. Despite observing the planet soon after the impact, Hubble and the VLT saw no signature of the disintegrated objects, showing that such impacts are very brief events.
Because the glow of these impacts is so short-lived, and they happen at unpredictable times, major observatories like Hubble and the VLT cannot reliably observe them — these telescopes have packed observing schedules and cannot be dedicated to long-term monitoring of a planet. Amateur astronomers, who can dedicate night after night to observing a planet, have a far better chance of spotting these impacts, even if their equipment is far more rudimentary.
Perhaps one of the most defining features of humanity is our capacity for empathy – the ability to put ourselves in others’ shoes. A new University of Virginia study strongly suggests that we are hardwired to empathize because we closely associate people who are close to us – friends, spouses, lovers – with our very selves.
“With familiarity, other people become part of ourselves,” said James Coan, a psychology professor in U.Va.’s College of Arts & Sciences, who used functional magnetic resonance imaging brain scans to show that people closely associate those to whom they are attached with themselves. The study appears in the August issue of the journal Social Cognitive and Affective Neuroscience (citation below).
“Our self comes to include the people we feel close to,” Coan said. In other words, our self-identity is largely based on whom we know and empathize with. Coan and his U.Va. colleagues conducted the study with 22 young adult participants who underwent fMRI scans of their brains during experiments to monitor brain activity while under threat of receiving mild electrical shocks to themselves or to a friend or stranger.
The researchers found, as they expected, that regions of the brain responsible for threat response – the anterior insula, putamen and supramarginal gyrus – became active under threat of shock to the self. Under threat of shock to a stranger, those regions displayed little activity. However, when the threat of shock was to a friend, the brain activity of the participant became essentially identical to the activity displayed under threat to the self.
“The correlation between self and friend was remarkably similar,” Coan said. “The finding shows the brain’s remarkable capacity to model self to others; that people close to us become a part of ourselves, and that is not just metaphor or poetry, it’s very real. Literally we are under threat when a friend is under threat. But not so when a stranger is under threat.”
Coan said this likely is because humans need to have friends and allies who they can side with and see as being the same as themselves. And as people spend more time together, they become more similar.
“It’s essentially a breakdown of self and other; our self comes to include the people we become close to,” Coan said. “If a friend is under threat, it becomes the same as if we ourselves are under threat. We can understand the pain or difficulty they may be going through in the same way we understand our own pain.”
This likely is the source of empathy, and part of the evolutionary process, Coan reasons.
“A threat to ourselves is a threat to our resources,” he said. “Threats can take things away from us. But when we develop friendships, people we can trust and rely on who in essence become we, then our resources are expanded, we gain. Your goal becomes my goal. It’s a part of our survivability.” People need friends, Coan added, like “one hand needs another to clap.”
Lane Beckes, James A. Coan, & Karen Hasselmo (2013). Familiarity promotes the blurring of self and other in the neural representation of threat Social Cognitive and Affective Neuroscience, 8 (6), 670-677 DOI: 10.1093/scan/nss046
Share the post "Human Brains Hardwired for Empathy & Friendship"
University of Illinois researchers developed a cradle and app for the iPhone to make a handheld biosensor that uses the phone’s own camera and processing power to detect many kinds of biological molecules and cells. | Photo by Brian T. Cunningham
University of Illinois at Urbana-Champaign researchers have developed a cradle and app for the iPhone that uses the phone’s built-in camera and processing power as a biosensor to detect toxins, proteins, bacteria, viruses and other molecules.
Having such sensitive biosensing capabilities in the field could enable on-the-spot tracking of groundwater contamination, allow the phone’s GPS data to be combined with biosensing data to map the spread of pathogens, or provide immediate and inexpensive medical diagnostic tests in field clinics, as well as contaminant checks along the food processing and distribution chain.
“We’re interested in biodetection that needs to be performed outside of the laboratory,” said team leader Brian Cunningham, a professor of electrical and computer engineering and of bioengineering at the U. of I. “Smartphones are making a big impact on our society – the way we get our information, the way we communicate. And they have really powerful computing capability and imaging. A lot of medical conditions might be monitored very inexpensively and non-invasively using mobile platforms like phones. They can detect molecular things, like pathogens, disease biomarkers or DNA, things that are currently only done in big diagnostic labs with lots of expense and large volumes of blood.”
The wedge-shaped cradle contains a series of optical components – lenses and filters – found in much larger and more expensive laboratory devices. The cradle holds the phone’s camera in alignment with the optical components.
At the heart of the biosensor is a photonic crystal. A photonic crystal is like a mirror that only reflects one wavelength of light while the rest of the spectrum passes through. When anything biological attaches to the photonic crystal – such as protein, cells, pathogens or DNA – the reflected color will shift from a shorter wavelength to a longer wavelength.
For the handheld iPhone biosensor, a normal microscope slide is coated with the photonic material. The slide is primed to react to a specific target molecule. The photonic crystal slide is inserted into a slot on the cradle and the spectrum is measured. Its reflected wavelength shows up as a black gap in the spectrum. After exposure to the test sample, the spectrum is re-measured. The degree of shift in the reflected wavelength tells the app how much of the target molecule is in the sample.
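The analysis step can be pictured as locating the dark gap in each spectrum and comparing the two positions. The sketch below is purely illustrative: the synthetic Gaussian dip, wavelength range and shift are invented numbers, not the app's actual algorithm.

```python
import math

# Illustrative sketch of the measurement principle: find the dark gap (the
# reflected wavelength) in a "before" and an "after" spectrum and report the
# shift. The dip shape and every number here are invented for illustration.
WAVELENGTHS = [500.0 + 0.1 * i for i in range(2001)]   # nm grid, 500-700 nm

def spectrum_with_gap(center_nm):
    """Transmission spectrum with a narrow dip at the reflected wavelength."""
    return [1.0 - 0.8 * math.exp(-((w - center_nm) / 2.0) ** 2)
            for w in WAVELENGTHS]

def gap_position_nm(spectrum):
    """Wavelength of the minimum, i.e. the photonic crystal's reflected color."""
    i_min = min(range(len(spectrum)), key=spectrum.__getitem__)
    return WAVELENGTHS[i_min]

before = spectrum_with_gap(600.0)   # primed slide, before the test sample
after = spectrum_with_gap(601.5)    # after target molecules attach
shift_nm = gap_position_nm(after) - gap_position_nm(before)
print(shift_nm)   # ~1.5 nm: a larger shift means more target molecules bound
```

The real app presumably does something more sophisticated than a simple minimum search, but the principle is the same: the size of the red-shift of the gap is the quantity that encodes how much target material has bound to the slide.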
The entire test takes only a few minutes; the app walks the user through the process step by step. Although the cradle holds only about $200 of optical components, it performs as accurately as a large $50,000 spectrophotometer in the laboratory. So now, the device is not only portable, but also affordable for fieldwork in developing nations.
In a paper published in the journal Lab on a Chip (citation below), the team demonstrated sensing of an immune system protein, but the slide could be primed for any type of biological molecule or cell type. The researchers are working to improve the manufacturing process for the iPhone cradle and are working on a cradle for Android phones as well. They hope to begin making the cradles available next year.
Cunningham’s group is now collaborating with other groups across campus at the U. of I. to explore applications for the iPhone biosensor. The group recently received a grant from the National Science Foundation to expand the range of biological experiments that can be performed with the phone, in collaboration with Steven Lumetta, a professor of electrical and computer engineering and of computer science at the U. of I. They are also working with food science and human nutrition professor Juan Andrade to develop a fast biosensor test for iron deficiency and vitamin A deficiency in expectant mothers and children.
In addition, Cunningham’s team is working on biosensing tests that could be performed in the field to detect toxins in harvested corn and soybeans, and to detect pathogens in food and water.
“It’s our goal to expand the range of biological experiments that can be performed with a phone and its camera being used as a spectrometer,” Cunningham said. “In our first paper, we showed the ability to use a photonic crystal biosensor, but in our NSF grant, we’re creating a multi-mode biosensor. We’ll use the phone and one cradle to perform four of the most widely used biosensing assays that are available.”