How the brain recognises faces

This post was contributed by Dr Clare Sansom, Senior Associate Lecturer, Department of Biological Sciences 

The first of two evening lectures on the Wednesday of Birkbeck Science Week 2015 was given by Martin Eimer of the college’s Department of Psychological Sciences.

He, like the other Science Week lecturers, was introduced by the Dean of the Faculty of Science, Nicholas Keep; Professor Keep explained that Eimer, a native of Germany and a recently elected Fellow of the German Academy of Sciences, had built up his research lab at Birkbeck over the last fifteen years.

His internationally recognised research concerns the relationship between brain function and behaviour in health and disease. The topic he selected for his lecture was a fascinating one: how our brains recognise human faces and what happens when this automatic process goes wrong.

Eimer began by outlining some reasons why we find faces so interesting to look at. When we look at a face we may be able to recognise that individual, either immediately or with difficulty, but – if our brains are working correctly – we will be able to tell what the person is feeling, or what they are looking at.

It seems that the facial expressions associated with basic emotions such as happiness, surprise, fear and disgust are common to most, if not all, cultures. And we also use faces to lip-read. People with hearing impairments depend on this and learn to do it very well, but we all have some intrinsic lip-reading ability that we use automatically in noisy environments.

Next, he used perceptual demonstrations to illustrate that we process faces rather differently to other objects. If we look at a photo of a familiar or famous person that has been turned upside down we automatically think it looks odd, and we find the face hard to identify. This so-called ‘inversion effect’ is also seen with other objects but is much more pronounced with faces.

A stranger effect occurs if the photo of a face is altered so that only the eyes and mouth are upside down. This looks grotesque, but turning the altered photo upside down so that the eyes and mouth only are the right way up makes it look surprisingly normal. This was named the ‘Thatcher illusion’ by the scientists who discovered it in 1980, perhaps as an imaginative way of taking revenge for an early round of education cuts.

It is likely that we instinctively respond so differently to faces out of the normal upright orientation because our brains have an inbuilt ‘face template’. Even young infants respond to ‘face-like’ stimuli with two eyes, a nose and a mouth in approximately the right proportions and positions.

Face recognition, too, depends on small differences in these parameters between individuals (e.g. the height of the eyes above the nose and the distance between them). Contrast polarity is also important, and we find it much harder to identify face images if their contrast is inverted (as in a photographic negative). Interestingly, however, the task becomes easier if only the eye region is restored to normal contrast. This suggests that we attach particular importance to that region. It is also difficult to determine gaze direction if the contrast polarity around the eyes is inverted.
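As a small aside (this was not part of the lecture), the “photographic negative” manipulation is easy to reproduce: inverting contrast polarity simply means replacing each pixel value v with 255 − v. A minimal sketch, assuming the Pillow imaging library and a local photograph called face.jpg (both are my examples, not materials from the talk):

```python
# A minimal sketch of contrast-polarity inversion (a photographic negative).
# Assumes a local image file "face.jpg"; Pillow is a widely used imaging library.
from PIL import Image, ImageOps

face = Image.open("face.jpg").convert("L")   # load the photograph and convert to greyscale
negative = ImageOps.invert(face)             # every pixel value v becomes 255 - v
negative.save("face_negative.jpg")           # faces are much harder to identify in this form
```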

Eimer introduced another optical illusion, in which half of George Clooney’s face and half of Harrison Ford’s had been combined into a single composite. The audience found it almost impossible to distinguish the two actors until the half-faces were separated. We had all instinctively fused the two halves into a new face and, for obvious reasons, failed to match it to either individual. This effect, a consequence of what is known as holistic face processing, is also specific to faces.

The second half of the lecture dealt with the neuroscience of face recognition, and what happens when it goes wrong. When we look at a face (or any object) information from the image focused on the retina is initially transferred to a part of the back of the brain known as the primary visual cortex. It is then transferred to other parts of the brain, including the inferior temporal cortex, where objects are recognised.

Several types of experiments have been developed for measuring exactly what goes on in the brain. These include functional magnetic resonance imaging (fMRI), which generates brightly coloured images associated with changes in blood flow to parts of the brain, and electroencephalography (EEG) which records electrical activity on the scalp.

These techniques are complementary: fMRI shows precisely where in the brain activity is taking place, while EEG is much faster but can only record signals from the surface of the brain. Between them, they have allowed scientists to identify several areas of the brain that are activated when faces, but not other objects, are perceived, as well as a rapid, strong electrical response that seems to be unique to faces.

It is much easier to recognise the face of a familiar individual – family member, friend or celebrity – than to distinguish between the faces of unknown people. The latter task, however, is required in many professions: most obviously of passport officers and detectives, but also, for example, of teachers at the beginning of each new school year. Some people are much better at it than others, but even the most skilled make mistakes, and the UK immigration service (and, no doubt, its equivalents in other countries) is looking into ways of doing it automatically.

People at the other end of the spectrum – who find it particularly difficult to recognise faces – are said to have a condition called prosopagnosia, or ‘face blindness’. These people have a severe but very specific defect in recognising faces: their intellect and their vision are normal, and they can recognise individuals easily enough from their voice, gait or other cues.

This condition is divided into two types: acquired prosopagnosia, which arises after brain damage, and developmental prosopagnosia, which can be apparent from early childhood without any obvious brain damage. The acquired type is typically more severe; the eponymous patient in Oliver Sacks’ fascinating book The Man Who Mistook His Wife for a Hat suffered from it. The rapid brain response to faces is missing from the EEG of a person with acquired prosopagnosia, and other tests show that the brain regions specifically associated with face processing have been damaged.

About 2% of the population can be said to have some degree of developmental prosopagnosia. There is no association with intelligence and it affects many successful professionals. Eimer showed part of a TV programme featuring an interview with a woman who is particularly badly affected. She explained the problems she has encountered throughout her life, ranging from following characters in films to telling her own daughter from other little girls with bunches in the school playground. Her father had also suffered from the condition, and she had been very relieved to receive a formal diagnosis.

The EEG patterns of individuals with developmental prosopagnosia are less different from normal than those of people with brain damage, but they are recognisable. Interestingly, differences in brain responses to upright as compared to inverted faces are not seen in people with developmental prosopagnosia.

Face recognition abilities form a continuum, and many people who think of themselves as being ‘terrible’ at recognising faces will find that they are in the normal range. Eimer’s group has a website that includes an online test, the Cambridge Face Memory Test. Participants are asked to memorise a face and then pick it out from a group of three; the tests start easy but become more challenging. People with very high and very low scores will be invited to take part in further research in the Brain and Behaviour Lab at Birkbeck.


Asteroids

This post was contributed by Paola Bernoni and Anja Lanin, students on Birkbeck’s BSc Geology.

What can asteroids tell us about the formation of the solar system about 4.6 billion years ago, and how are we able to extract such information from objects located in a region, the Main Asteroid Belt, somewhere between Mars and Jupiter, several hundred million miles from the Sun? This was the subject of the lecture delivered by Professor Hilary Downes of Birkbeck’s Department of Earth and Planetary Sciences for this year’s Science Week on Thursday 3 July – her debut talk for the event, despite her long association with Birkbeck.

Asteroids: what, where, when?

First of all, what are asteroids? Remnants of cosmic material that were unable to accrete and form a planet-sized object. In the Main Asteroid Belt this was due to the gravitational pull of the giant planet Jupiter: asteroids are therefore the remains of a “failed planet”, not – as one might be led to believe – fragments of broken-up ones. We are mainly interested in asteroids whose orbits cross those of the Earth and Mars, as they are the most likely to yield useful information about our own planet.

What do they look like? “Potato shaped”, or long and thin, but invariably irregularly shaped, their surfaces pock-marked with impact craters … not volcanic craters as, unlike the volcanically active Earth, asteroids are dead bodies that have lost all of their internal heat.

What do we know about asteroids and how do we know it?  

Near-Earth asteroids are occasionally knocked out of the Main Belt and can even end up colliding with the Earth; the fragments that survive the fall to the ground are known as meteorites. In 2008, for the first time ever, an asteroid was detected prior to impact and predicted to land in northern Sudan; it exploded in the atmosphere, and researchers flocked to the area to recover some 600 fragments. The more recent impact at Chelyabinsk in the southern Urals, Russia, was even caught on film.

Space missions have also collected useful information: in 2005 a spacecraft even landed on the asteroid Itokawa and managed to collect some dust. The ongoing Dawn mission, launched in 2007, reached Vesta in the Main Belt in 2011, orbited the asteroid for a year and then departed for Ceres, where it is expected to arrive in 2015. Why the interest in Vesta and Ceres? They are two of the largest surviving protoplanetary bodies – bodies that nearly became planets – and can therefore help us gain a better understanding of the evolution of the solar system and of the processes that led to the formation of differentiated, layered bodies like the Earth (and Vesta) and less differentiated bodies (Ceres).

Measurements of the radioactive decay of different isotopes in meteorite fragments have yielded consistent results for their age: they are as old as the solar system (4.6 billion years), a figure matched by measurements on the oldest terrestrial zircons. A few meteorites are younger; these come from the Moon or Mars.
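As an aside (this was not covered in the lecture), the principle behind such age measurements can be written in a single line. In the simplest case, a mineral crystallises with P₀ atoms of a radioactive parent isotope and none of its decay product; after a time t the number of parent atoms remaining is P = P₀ e^(−λt), where λ is the isotope’s decay constant. Counting the surviving parent atoms P and the daughter atoms D they have produced then gives the age as t = (1/λ) ln(1 + D/P). Obtaining the same answer from several independent parent–daughter pairs in the same meteorite is what makes the 4.6-billion-year figure so robust.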

How do we classify asteroids and why?

The traditional classification of meteorites based on composition – iron, stony and stony-iron – does not really tell us much; a classification based on provenance might be a better option: whether or not a meteorite comes from a layered body like the Earth, with a nickel-iron core, an olivine-rich mantle and a silicate, feldspar-rich outer shell, the crust. Hence the interest in the layered Vesta and the less layered Ceres, which is made of a rocky core, a water-ice layer and a thin crust.

Many of the recovered meteorites, however, especially those from Antarctica, show no sign of having come from a layered body. These are called “chondrites” because they contain small globules, chondrules, which are among the earliest materials formed in our solar system; unfortunately they are not very useful in the quest for a better understanding of our layered Earth. Iron meteorites, compositionally similar to the Earth’s core, are thought to represent the cores of small asteroids that blew apart and lost their encasing mantles. We have some 50 specimens, but they are a biased sample: iron meteorites are more resistant to passage through the atmosphere and easier to spot on the ground. Stony-iron meteorites, by contrast, are very rare: as they contain both an iron-nickel alloy and olivine, one of the main components of the Earth’s mantle, they are thought to represent the core-mantle boundary of a parent asteroid that was hot enough to begin differentiating.

Asteroids and Research at Birkbeck

Professor Downes then gave some highlights of the research under way at Birkbeck, where stony meteorite samples from a very old, unknown asteroid are being studied to establish similarities with the Earth’s mantle. Their olivine and other silicates are surrounded by carbon, including tiny diamonds, and by nickel-iron rims, whereas on Earth these metals have segregated into the core and carbon is found in organic matter. The meteorite minerals show evidence of shock from impact, and the carbon component shows that graphite has been shocked into diamond. Compositional analyses have revealed the presence of a known mineral, suessite, and of an unknown mineral made of 91% iron and 9% silicon, which is the most likely composition of the Earth’s core. The composition of meteorites originating from the outer shell of layered asteroids, by contrast, is similar to that of the basaltic rocks we find at the Earth’s surface.

Professor Downes finally underlined the uniqueness of the Earth amongst the rocky planets: the continued presence of water – lost on Venus and Mars – and, especially, of life, which is not known to have ever developed on any of the other terrestrial planets. The question of where Earth’s water came from is still open. A “meteor shower” of questions then followed, on the provenance of water and life on Earth, the age of meteorites found in Antarctica and what drives differentiation; for some of these matters the audience was referred to courses offered by the Department of Earth and Planetary Sciences … for others, to Birkbeck’s astrobiologists. Finally, the talk and the Q&A session came to an end, but there was still the opportunity to carry on the discussion over a glass of wine and nibbles.

Crystallography: past, present and future (Science Week 2014)

This post was contributed by Dr Clare Sansom, Senior Associate Lecturer in Birkbeck’s Department of Biological Sciences

Prof Paul Barnes sets the scene for one of the experiments he carried out in the Crystallography lecture

The second of the Science Week lectures from the Department of Biological Sciences, which was presented on 2 July 2014, was a double act from two distinguished emeritus professors and Fellows of the College, Paul Barnes and David Moss. Remarkably, they both started their working lives at Birkbeck on the same day – 1 October 1968 – and so had clocked up over 90 years of service to the college between them by Science Week 2014.

The topic they took was a timely one: the history of the science of crystallography over the past 100 years. UNESCO has declared 2014 to be the International Year of Crystallography in recognition of the seminal discoveries that started the discipline, which were made almost exactly 100 years ago; a number of the most important discoveries of that century were made by scientists with links to Birkbeck.

The presenters divided the “century of crystallography” into two, with Barnes speaking first and covering the first 50 years. In giving his talk the title “A History of Modern Crystallography”, however, he recognised that crystals have been observed, admired and studied for many centuries. What changed at the beginning of the last century was the discovery of X-ray diffraction. Wilhelm Röntgen was awarded the first-ever Nobel Prize for Physics for his discovery of X-rays in 1895, but it was almost two decades before anyone thought of directing them at crystals. The breakthroughs came when Max von Laue showed that a beam of X-rays can be diffracted by a crystal to yield a pattern of spots, and the father-and-son team of William Henry Bragg and William Lawrence Bragg showed that it was possible to derive information about the atomic structure of crystals from their diffraction patterns. These discoveries also settled – to some extent – the debate about whether X-rays were particles or waves, as only waves diffract; we now know that all electromagnetic radiation, including X-rays, can be thought of as both particles and waves.
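The Braggs’ insight can be stated in a single short equation – Bragg’s law – which was not quoted in the lecture but is worth recalling here: nλ = 2d sin θ, where λ is the wavelength of the X-rays, θ is the angle at which a diffracted beam emerges, d is the spacing between parallel planes of atoms in the crystal and n is a whole number. Measuring the angles at which the spots appear therefore reveals the spacings between planes of atoms, the first step towards reconstructing a crystal’s atomic structure.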

Von Laue and the Braggs were awarded Nobel Prizes for Physics in 1914 and 1915 respectively, and between 1916 and 1964 no fewer than 13 more Nobel Prizes were awarded to 18 more scientists for discoveries related to crystallography. Petrus Debye, who won the Chemistry prize in 1936, showed how to quantify the thermal motion of atoms as vibrations within a crystal. He also invented one of the first powder diffraction cameras, used to obtain diffraction patterns from powders of tiny crystallites. Another Nobel Laureate, Percy Bridgman, studied the structures of materials under pressure: it has been said that he would “squeeze anything he could lay his hands on”, often up to intense pressures.

Scientists and scientific commentators often argue about which of their colleagues would have most deserved to win the ultimate accolade. Barnes named three who, he said, could easily have been Nobel Laureates in the field of crystallography. One, Paul Ewald, was a theoretical physicist who had studied for his PhD under von Laue in Munich, and the other two had strong links with Birkbeck. JD “Sage” Bernal was Professor of Physics and then of Crystallography here; he was famous for obtaining, with Dorothy Crowfoot (later Hodgkin), the first diffraction pattern from a protein crystal, but his insights into the atomic basis of the very different properties of carbon as diamond and as graphite were perhaps even more remarkable. After she left King’s College, he took on Rosalind Franklin, whose diffraction patterns of DNA had led Watson and Crick to deduce its double helical structure, and she did pioneering work on virus structure here until her premature death in 1958.

Barnes ended his talk and led into Moss’s second half-century with a discussion of similarities between the earliest crystallography and today. Then, as now, you only need three things to obtain a diffraction pattern: a source of X-rays, a crystalline sample, and a recording device; the differences all lie in the power and precision of the equipment used. He demonstrated this with a “symbolic demo” that ended when he pulled a model structure of a zeolite out of a large cardboard box.

David Moss then took over to describe some of the most important crystallographic discoveries from the last half-century. His talk concentrated on the structures of large biological molecules, particularly proteins, and he began by explaining the importance of protein structure. All the chemistry that is necessary for life is controlled by proteins, and knowing the structure of proteins enables us to understand, and potentially also to modify, how they work.

Even the smallest proteins contain thousands of atoms; in order to determine the positions of all the atoms in a protein using crystallography you need to make an enormous number of measurements of the positions and intensities of X-ray spots. The process of solving a protein structure is, in principle, no different from that of solving a small-molecule crystal structure, but it is more complex and takes much more time. Very briefly, it involves crystallising the protein, shining an intense beam of X-rays on the resulting crystals to produce diffraction patterns, and then doing some extremely complex calculations. The first protein structures, obtained without the benefit of automation and modern computers, took years and sometimes even decades to solve.
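For readers curious about what those “extremely complex calculations” actually compute (a detail that was not spelled out in the lecture), the heart of the process is a Fourier synthesis: the electron density ρ at each point (x, y, z) in the crystal is built up by adding together a contribution from every measured spot, ρ(x, y, z) = (1/V) Σ F(hkl) exp[−2πi(hx + ky + lz)], where V is the volume of the crystal’s repeating unit, the sum runs over all the spots (indexed by h, k and l) and the amplitudes of the structure factors F come from the measured intensities. The catch is that the phases of the F terms cannot be measured directly – the famous “phase problem” – and much of the ingenuity of crystallography goes into estimating them.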

Thanks to Bernal’s genius, energy and pioneering spirit, Birkbeck was one of the first institutes in the UK to have all the equipment that was needed for crystallography. This included some of the country’s first “large” computers. One of the first electronic stored-program computers was developed in Donald Booth’s laboratory here in the 1950s. In the mid-1960s the college had an ATLAS computer with a total memory of 96 kB. It occupied the basements of two houses in Gordon Square, and crystallographers used it to calculate electron density maps of small molecules. Protein crystallography only “took off” in the 1970s with further improvements in computing and automation of much of the experimental technique.

Today, protein crystallography can almost be said to be routine. The first step, crystallising the protein, can still be an important bottleneck, but data collection at powerful synchrotron X-ray sources is extremely rapid and structures can be solved quite easily with user-friendly software that runs on ordinary laptops. There are now over 100,000 protein structures freely available in the Protein Data Bank, and about 90% of these were obtained using X-ray crystallography. The techniques used to obtain the other 10,000 or so, nuclear magnetic resonance and electron microscopy, are more specialised.

Moss ended his talk by describing one of the proteins solved in his group during his long career at Birkbeck: a bacterial toxin that is responsible for the disease gas gangrene. This destroys muscle cells by punching holes in their membranes, and its victims usually have to have limbs amputated to save their lives. Knowing the structure has allowed scientists to understand how this toxin works, which is the first step towards developing drugs to stop it. But you can learn even more about how proteins work if you also understand how they move. Observing and modelling protein motion in “real time” still poses many challenges for scientists as the second century of crystallography begins.

Redesigning Biology. Birkbeck Science Week 2014

This post was contributed by Dr Clare Sansom, Senior Associate Lecturer in Birkbeck’s Department of Biological Sciences

Dr Vitor Pinheiro (right) and Professor Nicholas Keep, Dean of the School of Science

The first of two Science Week talks on Wednesday 2 July was given by one of the newest lecturers in the Department of Biological Sciences, Dr Vitor Pinheiro. Dr Pinheiro holds a joint appointment between Birkbeck and University College London, researching and teaching in the new discipline of synthetic biology. In his talk, he explained how it is becoming possible to re-design the chemical basis of molecular biology and discussed a potential application of this technology in preventing contamination of the natural environment by genetically modified organisms.

Synthetic biology is a novel approach that turns conventional ways of doing biology upside down. Biologists are used to a “reductionist” approach to their subject, breaking complex systems down into, for example, their constituent genes and proteins in order to understand them. In contrast, synthetic biology is more like engineering, a “bottom-up” approach that tries to assemble biological systems from their parts. Pinheiro introduced this concept using a quotation from the famous US physicist Richard Feynman: “What I cannot create, I do not understand”. Synthetic biologists often use vocabulary that is more characteristic of engineers or computer scientists: words like “modules”, “devices” and “chassis”.

All life on Earth is dependent on nucleic acids and proteins; the former store and carry genetic information, and the latter are the “workhorses” of cells. They are linked through the Central Dogma of Molecular Biology which states, put somewhat simplistically, that “DNA makes RNA makes protein”. The information that goes to make up the complexity of cells and organisms is held in DNA and “translated” into the functional molecules, the proteins, via its intermediate, RNA. The mechanism through which the biology we see now arose – evolution – is well enough understood, but it is not yet clear whether evolution had to create the biology we see today or if it is a kind of “frozen accident”. There is, after all, only one “biology” for us to observe. But synthetic biologists are trying to build something different.

DNA is made up of three chemical components and structured like a ladder: the rungs are the information-carrying bases, and sugar rings and phosphate groups make up the sides. All three components can be chemically modified, affecting the physical properties and the information-storage potential of the resulting nucleic acid. Any modification that does not disrupt the natural base-pairing seen in DNA and RNA can be exploited to make a nucleic acid that can exchange information with nature. And if the enzymes that in nature replicate DNA or synthesise RNA can also be exploited to synthesise and replicate these modified nucleic acids, that process will be substantially more efficient than chemical replication. Modifying different components presents different re-engineering challenges and different potential advantages. Sugar modifications are not common in biology and are expected to be harder to engineer than modified bases; on the other hand, they are expected to increase the modified nucleic acid’s resistance to biological degradation. These synthetic nucleic acids have been generically termed “XNA”.

Pinheiro, as part of a European consortium, led the development of a synthetic nucleic acid in which the natural five-membered sugar rings had been replaced by six-membered ones. These molecules are more resistant than DNA to chemical and biological breakdown and have low toxicity, but they are poor substrates for the polymerases that catalyse DNA replication and RNA synthesis. In addition, he has harnessed the power of evolution to create “XNA polymerases” through a process called directed evolution, in which hundreds of millions of variant polymerases are created and those that happen to be better at synthesising the selected XNA are isolated. The process is repeated until the best polymerases have been identified or the isolated polymerases have the required activity.
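Purely as an illustration of the logic of that select-and-repeat loop (the real experiments are, of course, done at the bench, not in software, and every name and number below is invented), directed evolution can be sketched in a few lines of code:

```python
import random

# A toy sketch of the directed-evolution loop described above. Real experiments
# screen hundreds of millions of enzyme variants; here each "variant" is just a
# number and "fitness" is a stand-in for how well it synthesises the chosen XNA.

def mutate(variant):
    """Return a randomly tweaked copy of a variant (placeholder for mutagenesis)."""
    return variant + random.gauss(0, 0.1)

def fitness(variant):
    """Placeholder score: how well this variant copies the XNA (imaginary optimum at 1.0)."""
    return -abs(variant - 1.0)

population = [random.uniform(-2.0, 2.0) for _ in range(1000)]   # the starting library

for generation in range(20):                          # repeated rounds of selection
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]                      # keep the best performers
    population = [mutate(random.choice(survivors)) for _ in range(1000)]
    if max(fitness(v) for v in population) > -0.01:   # required activity reached
        break

best = max(population, key=fitness)
print(f"best variant after {generation + 1} rounds: {best:.3f}")
```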

These synthetic nucleic acids, however, still cannot take part in cell metabolism, and this is a current research bottleneck that prevents the development of XNA systems in bacterial cells. An alternative route towards redesigning biology would be to modify how information stored in DNA and RNA is converted into proteins: redesigning and replacing the genetic code. The exquisite fidelity of the genetic code depends on another set of enzymes, tRNA synthetases, which attach each amino acid to a small “transfer” RNA molecule that recognises its corresponding three-base sequence, or codon. This allows the amino acid to be incorporated into the right place in a growing protein chain. In nature, almost all organisms use the same genetic code. Synthetic biologists, however, are now able to build in subtle changes so that, for example, a codon that in nature signals a stop to protein synthesis is linked to an amino acid, or one that is rarely used by a particular species is linked to an amino acid that is not part of the normal genetic code.
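As a toy illustration of what changing the genetic code means in practice (this example is mine, not the speaker’s, and the reassigned amino acid name is invented), one can picture the code as a lookup table from codons to amino acids and simply change one entry – for example, giving the “amber” stop codon UAG a meaning of its own:

```python
# A toy sketch of genetic-code reassignment, purely for illustration.
# Only a handful of codons are listed; in the standard code UAG is a stop signal.
standard_code = {
    "AUG": "Met", "UUU": "Phe", "UGG": "Trp", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

# A re-engineered code: the 'amber' stop codon UAG now encodes a
# non-standard amino acid (here given the invented placeholder name "Xaa").
recoded = dict(standard_code)
recoded["UAG"] = "Xaa"

def translate(mrna, code):
    """Read an mRNA string three bases at a time until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = code.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return "-".join(protein)

message = "AUGUUUUAGGGC"
print(translate(message, standard_code))  # Met-Phe         (translation stops at UAG)
print(translate(message, recoded))        # Met-Phe-Xaa-Gly (UAG now adds a new amino acid)
```

An organism reading its genes with the second table would interpret genetic messages differently from every natural organism – exactly the kind of information “firewall” described below.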

Any organism that has had its molecular biology “re-written” using XNA and non-standard genetic codes should be completely unable to exchange its information with naturally occurring organisms, and, therefore, would not be able to flourish or divide outside a contained environment: it could be described as being contained within a “firewall”.  It would therefore lack the risks that are associated with more conventionally genetically modified organisms: that it might compete with naturally occurring organisms for an ecological niche, or that modified genetic material might spread to them. If, or more likely when, these “genetically re-coded organisms” are released into the environment (perhaps to remove or neutralise pollutants) they will not be able to establish themselves in a natural ecological niche and will therefore pose negligible long-term risk. The more such organisms deviate from “normal” biology, the safer they will become.