Elite bargainers refuse low offers more than regular bargainers and make more generous offers – study

In a first-of-its-kind study undertaken by University of California researchers, elite bargainers–those responsible for making today’s most important policy and business decisions–were examined to find out whether they, like other people, reject low offers even when accepting would leave them better off. The research is expected to offer increased understanding of some of the problems faced in global economic and environmental dialogues.

“Professionals, who had a lot of experience in high-stakes bargaining, played even further from the predictions of classic economic models. Concerns about fairness and equity aren’t expunged by experience, and persist in a group of very smart and successful professionals,” Dr. Brad LeVeck, Assistant Professor at the University of California, Merced, and lead author of the study, told The Speaker.

Dr. Brad LeVeck

“Most experiments in human behavior are conducted on convenience samples of university undergraduates. So, when experimental results go against the assumptions in classic models from economics, many researchers are skeptical about whether those results will translate to the real world,” LeVeck told us.

“Inexperienced students at a university might just be making mistakes that more experienced professionals would avoid. At least when it comes to bargaining, our study shows that this isn’t the case.”

LeVeck’s study used a unique sample of 102 US policy and business elites who had an average of 21 years of experience conducting international diplomacy or policy strategy.

The names of the participating elites were withheld to prevent participants from altering their behavior out of concern for their reputations.

When participants bargained over a fixed resource–in the study, the samples played “ultimatum” bargaining games involving the division of a fixed prize, though the researchers had global agreements on international trade, climate change, and other important problems in mind–the elites actually made higher demands and refused low offers (below 25 percent of the prize) more frequently than non-elite bargainers. But elite bargainers also offered more.

“In our study, it wasn’t just the case that elite policy makers rejected low offers more often than the general public,” LeVeck said. “It was also the case that they made more generous offers.

“So, to a certain extent, these individuals have the right intuition about how to conclude a successful bargain. This suggests that considerations of equity and fairness are already taken into consideration by real world policy makers.”
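
The dynamic is easy to see in a toy simulation of the ultimatum game (a sketch of my own for illustration, not the study’s code; the acceptance thresholds and offers are invented):

```python
import random

random.seed(1)

def play_round(offer, threshold, prize=100):
    """Proposer keeps prize - offer if the responder's minimum share is met;
    a rejection leaves both players with nothing."""
    if offer >= threshold:
        return prize - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0

# Invented threshold pools: the "elite" responders demand more.
pools = {"students": [10, 20, 25], "elites": [25, 30, 40]}

for label, thresholds in pools.items():
    for offer in (15, 30, 50):
        results = [play_round(offer, random.choice(thresholds)) for _ in range(10_000)]
        avg = sum(p for p, _ in results) / len(results)
        print(f"{label}: offer {offer}% -> proposer averages {avg:.1f}")
```

Against the tougher pool, stingy offers are rejected and earn nothing, so a more generous offer maximizes the proposer’s expected take, which is the intuition LeVeck describes.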

Older, more experienced elites were found to bargain for higher gains all around.

“Our best evidence indicates that this finding is related to their professional experience… This could be because policy makers accommodate the possibility that low offers will be rejected, and therefore also learn that it’s generally ok to reject low offers.”

The positions from which the most important policy and business decisions are made, the researchers concluded, are occupied by elites who have either shifted toward high-demand bargaining or have been selected by some process that favors this type of bargainer.

Why bargainers reject low offers and why elite bargainers play for higher stakes are questions that are still unanswered. Past research has given weight to arguments that bargaining actions are not due to motives such as fairness, equity, or toughness, but may have more to do with spite, culture and social learning.

“Our study wasn’t designed to disentangle these explanations,” LeVeck said. “So, it’s difficult to know whether the people who reject low offers are individuals that intrinsically care about fairness for everyone, or are simply individuals who spitefully reject low offers (but would take more for themselves if it were possible). In the latter case, people would care about fairness for themselves, but not for everyone. I suspect both of these motivations exist and affect the behavior of different people.”

The researchers considered other motives for elite bargaining tactics, such as future opportunities, other bargaining partners and power relationships, but those did not play into the experiments.

“I do think these types of complex, real-world considerations shape professionals’ intuitions about how to bargain,” said LeVeck. “However, other parts of our study show that policy and business elites think carefully about strategic decisions. This makes it less likely that these individuals were misapplying a lesson from the real world when they played the bargaining game in our study.”

The researchers pointed out that the study encourages a reappraisal of aspects of international cooperation, such as bargaining with regards to trade, climate and other world issues.

“Analysts and researchers are understandably skeptical when leaders complain that an agreement is unfair. It’s very plausible that the complaint is just ‘cheap-talk’: When pressed, those leaders should actually accept any agreement that is in line with their self-interest.

“By contrast, our findings raise the possibility that these complaints are more than cheap talk. Policy and business elites have some willingness to reject inequitable offers.

“So, when formulating proposals on issues like global emissions reductions or trade policy, leaders should pay attention to whether the other side will reasonably regard the deal as fair.”

The report, “The role of self-interest in elite bargaining,” was completed by Brad L. LeVeck, D. Alex Hughes, James H. Fowler, Emilie Hafner-Burton, and David G. Victor, and was published on the PNAS website.

By Sid Douglas

Drought ended the Mayan civilization – Rice University Scientists

In an effort to understand why the Mayan civilization of Central America met its sudden demise, researchers studied the underwater caves of the Great Blue Hole, located some 40 miles off the coast of Belize. Minerals found at the site indicate an extreme drought in the region between 800 and 900 AD, which may have forced the Mayans to adapt and relocate, reducing the lush region to deserted ruins.

A civilization that thrived for over 2,000 years across the area of modern-day southern Mexico, Belize, Honduras, El Salvador, and Guatemala, the Mayans are known to have been skilled astronomers, architects, masons, artists, mathematicians and chroniclers–as well as for creating a calendar system and making doomsday predictions still referenced today. What spurred the team to investigate was the abrupt end of the once-thriving civilization, which is still widely referenced for its pottery, artifacts and monolithic structures, as well as for the desolate, ruined cities it left behind.

Andre Droxler from Rice University found that the mineral deposits in the caves of a 1,000-foot crater correlated with the period of the civilization’s demise.

Droxler’s team took core sediment samples and measured the ratio of titanium to aluminium. Heavy rainfall washes titanium from volcanic rocks into the Atlantic Ocean–and thus into the Great Blue Hole. Over time, the deposits turn the crater into a “sediment trap”–a big bucket of titanium–while drier periods leave less titanium in the sediment.
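
As a rough sketch of how such a proxy is read (my own illustration with invented numbers, not the team’s data), one computes the Ti/Al ratio for each core layer and flags unusually low intervals as dry-period candidates:

```python
# Hypothetical core layers: (depth_cm, titanium_ppm, aluminium_ppm).
# All values are invented; real cores are dated and calibrated first.
layers = [
    (10, 42.0, 800.0),
    (20, 40.5, 810.0),
    (30, 22.0, 790.0),  # low Ti/Al: less storm runoff, a dry interval
    (40, 23.5, 805.0),
    (50, 41.0, 795.0),
]

ratios = [(depth, ti / al) for depth, ti, al in layers]
mean_ratio = sum(r for _, r in ratios) / len(ratios)

for depth, r in ratios:
    flag = "  <- drought candidate" if r < 0.8 * mean_ratio else ""
    print(f"{depth} cm: Ti/Al = {r:.4f}{flag}")
```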

A relief depicting the ancient Mayans. Source: Flickr, Dennis Jarvis.

With this information, Droxler compared titanium levels in sediments dating to the Mayan era and found them to be significantly low. Live Science puts it in technical terms: “The team found that during the period between A.D. 800 and A.D. 1000, when the Mayan civilization collapsed, there were just one or two tropical cyclones every two decades, as opposed to the usual five or six.”

Although they wreak havoc, these cyclones were the only way the thirsty civilization was able to survive in the absence of a body of drinkable water. Besides water, the cyclones also redistribute titanium and other minerals, replenishing the land with what it needs to remain habitable. The new evidence corroborates a 2012 study published in the journal Science, in which a stalagmite from caves in Belize dating to the Mayan era was analyzed; those observations are consistent with a sharp decrease in rainfall coinciding with the decline of Mayan culture.

By Rathan Harshavardan

Scientists discover new method of cell division that allows cells to correct for larger and smaller birth sizes within a few generations

Light has been shed on the longstanding question of how cells regulate size and how they know when to divide. According to recent research at UC San Diego, some cells–from species billions of years divergent from each other–use a unique, robust and simple method that had not previously been observed by scientists. The research has ruled out both of the prevailing theories of cell division–the so-called “timer” and “sizer” theories. Instead, evidence points toward an “adder” paradigm that corrects for differences in birth size over successive generations.

Dr. Sattar Taheri

“Our experimental data and analysis of growth of Escherichia coli and Bacillus subtilis show that neither timer nor sizer is a correct model,” Dr. Sattar Taheri, postdoctoral fellow in the Jun Lab in the Physics Department of the University of California, San Diego and first author of the report, told The Speaker. “Instead, cells ‘add’ a constant mass from birth until division. That is, irrespective of cell size at birth, cells grow by a constant amount and then divide. This strategy automatically ensures that cells larger or smaller than average correct their size within several generations.”

The new “adder” paradigm is a simple mathematical principle. A further mathematical model developed by the researchers helped explain fluctuations and distributions in cells’ growth parameters.
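
A minimal simulation (my own sketch of the principle, not the researchers’ model; the added size Δ and the starting sizes are arbitrary) shows why adding a constant amount is self-correcting:

```python
# Adder rule: a cell born at size s divides at s + delta, and each
# daughter is born at (s + delta) / 2, so deviations from the
# steady-state newborn size delta are halved every generation.
delta = 1.0  # constant added size (arbitrary units; assumed value)

for s0 in (0.2, 1.0, 3.0):  # small, average, and large newborns
    s = s0
    trajectory = []
    for generation in range(6):
        s = (s + delta) / 2  # grow by delta, then divide in half
        trajectory.append(round(s, 3))
    print(f"born at {s0}: {trajectory}")
```

All three trajectories converge on the steady-state newborn size Δ within a few generations, matching the “correct their size within several generations” behavior Taheri describes.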

Read more: What causes cell division? Neither of the prevailing theories, but rather an extraordinarily simple quantitative principle of cell-size control, according to UC San Diego scientists

Time and size do not even factor into growth and division for “perfect adders.”

Taheri explained the problem approached by the research.

“In their life cycle, bacteria grow in size until they divide into two daughter cells. Scientists knew that cells have a ‘strategy’ to control their size–or, in other words, when to divide–but we did not know what that strategy is.

“In fact, this has been one of the long standing problems in biology.”

The research was conducted with a device that allowed the team to isolate individual genetic materials and observe the E. coli and B. subtilis over hundreds of generations and under various conditions. The process yielded statistical samples about a thousand times better than those previously available.

“Without a powerful technology to precisely acquire data on the growth of live cells, people could only suggest theories. ‘Timer’ and ‘sizer’ were two major ideas. Based on the timer model, cells have a clock. The clock starts when cells are born, and once a constant period of time passes, division is triggered–irrespective of cell size. The sizer model suggests that growing cells divide once they reach a critical size. This requires cells to continuously monitor their size.”

The research, as Taheri stated, found that the previously posed models could not explain growth and division. Instead, a surprising new concept emerged: the “adder” paradigm, which applies to most of the bacteria the team has studied so far–as well as to the data coming out of other labs.

However, the solution is only a part of a greater picture. Taheri noted that cell division was much more complex than a single theory could explain.

In particular, higher organisms “care more” about size, and add more mass before dividing if they are born smaller. That said, those cells also reach target size in the same way that perfect adders do, according to the researchers.

“Note that this adder principle is not the only possible strategy to maintain size homeostasis. It was unexpected to find this, especially in both E. coli and B. subtilis–which are a billion years apart in evolution. It’s a unique way. Robust and simple. However, some higher organisms, including yeast, seem to use other strategies.”

The two reports that resulted from the research, “Cell-size maintenance: universal strategy revealed” and “Cell-size control and homeostasis in bacteria,” were completed by Suckjoon Jun, Massimo Vergassola and Sattar Taheri-Araghi, and were published in the journal Current Biology. Both papers will be available at the Jun Lab webpage.

Personality at least as important as intelligence when it comes to doing well in school, research suggests

According to new psychology research, personality is at least as important as intelligence when it comes to school. Some personality traits are more important than others, according to the findings, and the study has led researcher Dr. Arthur Poropat of Griffith’s School of Applied Psychology to suggest that educators may do better to target the fluid, teachable capacities of personality rather than rely on the more static capacity of intelligence alone.

“Personality is at least as important as intelligence for education, if not more so,” Poropat told The Speaker. “And unlike intelligence, we can help people to develop their personality to improve their academic performance and life outcomes.”

Dr. Arthur Poropat

Poropat conducted the largest-ever reviews of personality and academic performance, based on the five fundamental personality factors–Conscientiousness, Openness, Agreeableness, Emotional Stability, and Extraversion. He found that Conscientiousness, Openness and Emotional Stability have the biggest influence on academic success.

“Students who scored highest on the three most relevant personality factors scored a full grade higher than students who scored lowest on those factors. The three factors are: Conscientiousness, which reflects things like making and carrying out plans, striving to achieve, and self-control; Openness (also called Openness to Experience and Intellect), encompassing being imaginative, curious, and artistic; and Emotional Stability, covering calmness and emotional adjustment (as opposed to being anxious, fearful or unstable). The two personality factors that are not so strongly linked with academic performance are Agreeableness (reflecting likability and friendliness), and Extraversion (talkative and socially-dominant).

“What my reviews of the research on personality and academic performance found was that Conscientiousness is at the very least just as important as intelligence for predicting academic performance.”

Who was doing the assessing also mattered in the research. A student’s self-assessment was found to be as useful a predictor of success at university as intelligence rankings, but assessments by other students–those who knew the individual in question well–were found to be much more accurate than either.

“If someone who knows the student well rates the student’s personality, Conscientiousness is nearly four times as important.

“So, students who habitually manage their effort, make and stick to plans, and stay motivated regardless of set-backs, do substantially better, and this is more important than how smart the student is. Likewise, both Openness and Emotional Stability are much more useful for predicting grades and GPA when rated by someone who knows the student well. In other words, the creative and intellectually-curious students, and the calm and emotionally well-adjusted students, will do better at school and university.”

In general, personality was found to be more important than intelligence when it came to academic careers. Poropat explained why this might be.

“One way of thinking about this is that intelligence is a bit like horsepower for a car: it gives a student their basic capacity to learn. Conscientiousness, Openness and Emotional Stability are more like the way in which the car is driven. With respect to cars, a great driver in an average car will outperform a bad driver in a great car. Similarly, a student with average intelligence who is high on Conscientiousness, Openness, and Emotional Stability will outperform an intelligent student who scores low on these factors.”

Poropat commented on some changes that could be made to education to improve its benefits to students.

“One thing that surprised me when I completed the first of my studies was that teachers already ‘knew’ what the results were. The many teachers I have spoken with typically say that hard-working, intellectually curious, and well-adjusted students perform better than smart students, in part because they are easier to teach. However, there is clear evidence from independent research–i.e., not mine–that students can be taught to change their personality in ways that help their studies. What I would like to see is education actively targeting personality development in ways that are closely linked to study and work. We already know this is possible and it produces good outcomes for students but we need more attention to this, and more research on how best to achieve this. Some of my postgraduate research students are already exploring this area.”

Not only can good personalities be taught to some degree, but students may be setting themselves up for failure by depending on the static capacity of intelligence, which is different from the fluid capacity of personality, according to Poropat.

“Professor Carol Dweck has done a lot of research on why teachers and parents should never tell a student they have done well because they are smart,” he explained. “The reason is that the students seem to know what research tells us: despite the mind-training software, it seems that it is not possible to truly improve someone’s intelligence. So, if a student thinks they have done well because they are smart, they conclude there is no point in making an effort so they stop trying and their performance gets worse.

“However, there is clear evidence that personality does change over time, and that it is possible to train people to change their personality–at least as far as changing how they consistently behave. In contrast with intelligence, students seem to know that they can learn new ways of managing themselves, and new ways of exploring ideas and skills, and new ways of managing their emotions. People typically develop higher levels of Conscientiousness with age, but they can also be taught this. And, people can also be taught to be higher on Openness and Emotional Stability. So, students of any age can develop their personality to improve their academic performance: the challenge is for educators to show them how.”

Poropat concluded that much of classroom success depends on how teachers bring out the best in students.

“Teachers need to help students develop their personalities in constructive ways. That is because, unlike intelligence, teachers can guide students to be more conscientious, open to experience, and emotionally-stable, which are the three personality factors that have the biggest effect on whether students learn well. Teachers should pay attention to whether students’ personalities support learning, and use that to guide teaching of individual students.”

The report, “Other-rated personality and academic performance: Evidence and implications,” was completed by Arthur E. Poropat, and was published in the journal Learning and Individual Differences.

Microscopic steam engines are the new world's smallest

How small are Dr. Pedro A. Quinto-Su’s steam engines? Smaller than red blood cells and most bacteria–between one and three millionths of a meter. The surprisingly strong pistons are powered by a combination of optical manipulation techniques that bypasses microfabrication and draws strength from simplicity, taking scientists one step closer to the “lab on a chip” miniaturization of everything.

“The piston–a microsphere–is powered by light, which also heats the sphere inducing the vapor microexplosions. Similar to an internal piston combustion engine,” Dr. Pedro Quinto-Su, physics professor at the Universidad Nacional in Mexico, told The Speaker.

“This is the first time that a steam engine has been miniaturized to a length scale of a micrometer,” Quinto-Su told us. “Also, the engine works in an environment dominated by fluctuations (Brownian), since it is immersed in liquid. In the context of optical micromanipulation the report shows that it is possible to have impulsive forces in an optical tweezer, which could extend even more the wide array of applications that use that technique.”

Quinto-Su placed the microscopic piston in historical context.

“In the past, the improved steam engine design of Watt started the industrial revolution and understanding the mechanism initiated modern thermodynamics. Steam engines were the foundation for all the engines that we have today.

“Now steam engines are mainly used for energy conversion in power plants, where steam turbines convert mechanical energy into electricity.”

There has been much interest in miniaturizing heat engines, Quinto-Su explained.

“In the last few decades there has been a trend in trying to miniaturize everything. In science this concept has been called “lab on a chip” and the idea is to have everything that is needed to make an experiment in a small chip. The interest in miniaturized versions of heat engines is that they could be used to do work in very localized volumes. For example, periodically displacing small objects including nanomaterials.”

Quinto-Su explained the challenges to miniaturization past the 1mm scale–the lower limit until the recent invention–and how his steam engine bypassed the previous obstacles.

“The main problem with tiny heat engines is that the efficiency is very poor. A few heat engines have been demonstrated at the micrometer scale with different working mechanisms. However, traditional heat engines that work with the expansion and compression of gas had not reached scales below 1mm, perhaps because most designs involved the assembly of microfabricated moving parts which made it more challenging–in addition to the expected poor efficiency.

“The implementation of the reported micrometer-sized piston steam engine is very simple and there is no need for microfabrication, only optical access is required. It needs an optical tweezer setup which is a widely available tool.”

Quinto-Su explained how the project began–as an attempt to combine two methods of microscopic manipulation to create the micro-piston.

“The project started by trying to combine two techniques of optical manipulation of microscopic objects immersed in liquids: optical tweezers and microscopic explosions (cavitation bubbles).”

Quinto-Su explained how small the steam engines are, and put in layman’s terms how the various elements of the engines work.

“In the reported engine a spherical microparticle (1 or 3 micrometers in diameter, the largest dimension of a human red blood cell is about 10 micrometers) is periodically displaced by light and microscopic explosions.

“Optical tweezers use a focused laser beam that attracts transparent microscopic objects towards the focused spot. The objects are usually immersed in liquid and they are called colloids, usually these objects are microscopic spheres. Once the microparticles reach the focus of the laser beam they stay trapped in there. This technique exerts very small controlled forces (pico Newtons) in the microscopic objects.

“In contrast, microscopic vapor explosions in liquids exert large impulsive forces in the vicinity where the explosions are created. A vapor explosion creates a rapidly expanding bubble that later collapses (cavitation bubble). The bubble displaces the liquid at a very fast speed which also displaces the objects in the vicinity, exerting impulsive forces several orders of magnitude larger than those of optical tweezers.

“In the reported work, a spherical microparticle that is not completely transparent to the laser beam is placed in an optical tweezer. In this way the sphere is attracted towards the focus of the beam, but at the same time it is heated because it is not transparent to the light. Once the sphere is close to the focus it is heated at a very fast rate and the liquid in contact with the microsphere explodes, pushing the sphere close to the starting position. Then the light forces take over and start attracting the sphere towards the focus, repeating the cycle.

“Hence the combination of optical tweezers and vapor explosions resulted in a microparticle (piston) that is periodically attracted towards the focused laser and then is pushed away at a fast speed by microscopic vapor explosions. In a sense it is similar to an internal piston combustion engine.”
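
A crude one-dimensional cartoon of that cycle (a toy model of my own, not from the paper; every number is invented) captures the two alternating forces:

```python
# Toy cycle: the optical trap slowly pulls the particle toward the focus
# at x = 0; once the particle is close enough, superheated liquid flashes
# to vapor and impulsively kicks it back out, restarting the cycle.
trap_pull = 0.2        # fraction of the distance to the focus closed per step (assumed)
trigger_radius = 0.05  # distance at which the microexplosion fires (arbitrary units)
kick_position = 1.0    # where the explosion throws the piston back to (assumed)

x = kick_position
for step in range(40):
    x -= trap_pull * x          # gentle pico-Newton pull toward the focus
    if x < trigger_radius:      # near the focus, the heated liquid explodes
        x = kick_position
        print(f"step {step}: microexplosion, piston kicked back to x = {x}")
```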

Quinto-Su noted that the microexplosions are not initiated by a spark, but by a sudden temperature increase, similar to a diesel engine.

The power created by the pistons is significant, Quinto-Su explained.

“The average power is about 0.3 pico Watts and the power density is about two orders of magnitude less than that typical of car engines. However the effects at the microscopic scale are significant and could be used to pump small volumes of liquid or exert impulsive forces in nearby objects.”

The micro-pistons can be compared with the effect of transducers when driven at acoustic and ultrasound frequencies.

“Because transducers can be used to induce oscillations in liquid-gas boundaries which produce flow. In this case, the operation of the engine is periodic displacements of the piston and periodic explosions which also produce flow, similar to the effect of transducers.”

The report, “A microscopic steam engine implemented in an optical tweezer,” was authored by Pedro A. Quinto-Su and was published in Nature Communications.

Three videos from Dr. Pedro A. Quinto-Su’s research – 1-3 μm particle engines:

https://www.youtube.com/watch?v=CovgaA20Mmc&list=PLkY-3tl_jYeCgAXAOU74h3p1uoxba7ofc

https://www.youtube.com/watch?v=T6gRqpHU4iY&index=2&list=PLkY-3tl_jYeCgAXAOU74h3p1uoxba7ofc

https://www.youtube.com/watch?v=TisxDL5dP_g&index=3&list=PLkY-3tl_jYeCgAXAOU74h3p1uoxba7ofc

Memories are stored in neurons, not synapses, and therefore can be restored, shows new research

Does the entire library of memories stored throughout our lifetime remain dormant within our minds, and can these memories be restored? New research has led UCLA neurobiologists to conclude that the storage mechanism for memories is actually independent of synaptic change–the mechanism that mediates the expression of memories. Rather, memories are stored in persistent epigenetic changes within the nuclei of neurons, and therefore memories are extremely stable over time and could be restored.

“The idea, long believed by most neuroscientists, that memories are stored in synapses may be incorrect. This implies that the apparent loss of a memory due to synaptic elimination might be reversible. Memories that appear to be lost forever may, in fact, be able to be fully restored,” Dr. David Glanzman, professor of integrative biology, physiology and neurobiology at UCLA and lead author of the study, told The Speaker.

So if the memories remain stored intact, why can’t we access them? The synapses that complete the circuit to the memories are destroyed or eliminated, Glanzman suspects. Glanzman qualified that the problem of memory storage was extremely complex and that he did not possess a complete answer, but he explained how his conception of memory formation was changed by the research.

“At present I believe that memories are formed in the brain by a combination of posttranslational changes—protein phosphorylation, protein dephosphorylation, etc.—gene transcription, protein synthesis and structural changes in neurons.  Pretty much everyone in the field of learning and memory also believes this.

Confocal fluorescence micrographs illustrating the structural effects of 5X5HT training, reconsolidation blockade, and chelerythrine treatment

“Where I differ is that I now believe that the storage mechanism for long-term memories is independent from the mechanism that mediates the expression of the memories, which is synaptic change,” Glanzman continued. “The storage mechanism, I believe, is persistent epigenetic changes within the nuclei of neurons.  Given this, I think that long-term memories are actually extremely stable; as long as the cell bodies of the neuronal circuit that contains the memory are intact, the memory will persist.  The memory can appear to be disrupted, however, by destroying or eliminating the synapses among the neurons in the neuronal circuit that retains the memory.  But the apparent elimination of a memory due to synaptic elimination can be reversed and the memory restored.  The data in our eLife study support this idea, although they do not explain how this is actually accomplished.”

In their research, the UCLA team studied Aplysia, a marine snail–particularly the way Aplysia learns to fear a memorable source of harm.

The team trained the snail to defend itself by withdrawing to protect its gills from the harm of mild electric shocks. The snail retained the withdrawal response for several days, indicating long-term memory of the stimulus.

The team found that the shock caused serotonin to be released in Aplysia’s central nervous system. New synaptic connections grow as a result of the serotonin, according to the team. The formation of memories can be disrupted by interfering with the synthesis of the proteins that contribute to the new synapses.

When a snail was trained on a task but its ability to immediately produce the proteins was inhibited, the animal would not remember the training 24 hours later, the researchers found, but if an animal was trained and protein synthesis was inhibited later–after 24 hours–the animal would retain the memory. Memories once formed last in long-term memory, the team found.

They also performed experiments with neurons in a Petri dish, and found the same results.

Then the team tested memory loss. Again in a Petri dish, they manipulated synaptic growth with a protein synthesis inhibitor and with serotonin. They found that when they stimulated synaptic growth some time after creating a memory, new synapses grew–not the old ones that would have been stimulated if prevailing theories of memory formation were accurate.

The researchers found that the nervous system appears to be able to regenerate lost synaptic connections–reconnecting memories with new synapses.

Does that mean that the entire library of a life’s memories could be dormant and could be restored? Possibly, but not in all cases, according to the study. Glanzman used an example to explain what his research had suggested.

“That is a fascinating question,” said Glanzman. “For example, what about the phenomenon of infantile amnesia, that is, the inability of adults to retrieve episodic memories before the age of 2–4 years?  Can those episodic memories from our earliest years be restored?  I frankly don’t know, but I suspect that they can be.  It is possible that the explanation for infantile amnesia is similar to that of the phenomenon we examined in our study, reconsolidation blockade.  There, we found that the apparent elimination of the memory was the result of the reversal of the synaptic growth and, perhaps, of some of the epigenetic changes, such as histone acetylation, that mediated the expression of the memory.

“Notice that some epigenetic changes are intrinsically more stable than others.

“What if infantile amnesia is also a consequence of synaptic loss and reversible epigenetic changes, but that the memory persists as persistent epigenetic changes?  Then, just as we were able to restore memory in the snail following its disappearance due to reconsolidation blockade, we might be able to reverse infantile amnesia.  Similarly, some memories lost in the early stages of Alzheimer’s disease due to the synaptic destruction might be restorable.  However, once the cell bodies of the neurons that make up the memory circuit die, I believe the memory is lost forever.”

Before the recent research, memory was believed to be stored in synapses. Glanzman explained the traditional belief and the remaining challenges to the new theory.

“First, until my work is confirmed by others, it is not fair to say that neuroscientists are mistaken in their belief that long-term memory is stored at synapses.  The idea that memory is not stored at synapses is going to be met with a great deal of skepticism, as any radical new scientific idea should be.

“Having said that, the idea that memory is stored at synapses grew out of the pioneering work of the Spanish neuroanatomist Ramon y Cajal (who was awarded the Nobel Prize in 1906).  Cajal was one of the first to propose that learning and memory involves the growth of new synaptic connections among neurons in the brain.  This idea is now overwhelmingly accepted by neuroscientists.  I certainly believe—and scientific research on Aplysia and other animals confirms—that when an animal learns, synapses in its brain (or nervous system, in the case of the snail) physically change; in some instances the result is synaptic growth, whereas in others it is synaptic retraction.  (The specific pattern of synaptic change depends on the specific type of learning and the particular part of the brain that mediates the learning.)

“Given this, it is natural to think that new memories will be stored, at least in part, as persistent molecular or structural changes in the synapses that grew as the memories formed. It is this part of the synaptic hypothesis of long-term memory that I disagree with.”

The research is expected to hold new hope for sufferers of Alzheimer’s disease. Because synapses–not neurons–are destroyed in the early stages of the disease, those memories still exist and could be regained, Glanzman suspects.

The report, “Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia,” was completed by Shanping Chen, Diancai Cai, Kaycey Pearce, Philip Y W Sun, Adam C Roberts, and David L Glanzman, and was published in eLife.

Photos: Christelle Nahas/UCLA

What causes cell division? Neither of the prevailing theories, but rather an extraordinarily simple quantitative principle of cell-size control, according to UC San Diego scientists

How do cells control their size? What causes them to divide? Contrary to what many biologists have expected, UC San Diego researchers have found evidence supporting an answer to one of the most fundamental and longstanding problems of biology. The study surprised even the researchers: a simple quantitative principle explains the phenomena without recourse to either of the currently prevailing theories.

“Life is very robust and ‘plastic,’ much more than what biology textbooks tell us. Bacteria probably do not care when they should start replicating their genomes or dividing,” Dr. Suckjoon Jun, assistant professor of physics and molecular biology at the University of California, San Diego and one of the lead authors of the study, told The Speaker.

Dr. Suckjoon Jun

“Simple mathematical principles help us understand fundamental biology, just like in physics.”

How do cells control their size? What causes them to divide?

Biologists had previously posited two possible solutions: either a cell reaches a certain size, at which it divides into two smaller cells; or after a certain time has passed, the cell divides. The two theories have been known as “sizer” and “timer.”

The results surprised the researchers as well: “adder.”

“The results were completely unexpected,” Jun told us.

Rather than either sizer or timer paradigms, cells were found to add a constant volume each generation, regardless of their newborn size.
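
The self-correction can be written as a one-line recursion (a standard way to state the adder argument; the notation is mine, not the paper’s). A cell born at size s_n divides at s_n + Δ, and each daughter inherits half:

```latex
s_{n+1} = \frac{s_n + \Delta}{2}
\qquad\Longrightarrow\qquad
s_{n+1} - \Delta = \frac{1}{2}\left(s_n - \Delta\right)
```

Any deviation from the steady-state newborn size Δ is therefore halved each generation, so sizes converge within a few divisions regardless of where they start.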

“This ‘adder’ principle quantitatively explains experimental data at both the population and single-cell levels, including the origin and the hierarchy of variability in the size-control mechanisms and how cells maintain size homeostasis,” the researchers concluded, whereas past research based on “sizer” and “timer” theories led to difficult-to-verify assumptions, population-averaged data and varied interpretations.

Time and size, while variable in some organisms, do not even factor into growth and division for “perfect adders” under the newly found and “extraordinarily simple” quantitative principle of cell-size control.

“It seems most bacteria we have studied so far–and more data is coming out of other labs–appear to be perfect adders,” said Jun. “Some higher organisms, such as yeast, do care about size more than bacteria do. For example, small-born yeast cells add more mass than large-born cells to reach division. That is, how much mass they add since birth is sensitive to how big the baby cell was. Nevertheless, the way they reach the target size, generation after generation, works exactly the same as for perfect adders such as bacteria, which is quite nice and surprising.”

Cell growth follows the growth law, the researchers found: cells grow exponentially at a constant rate.

Jun explained the challenge that had stood in the way of understanding this aspect of cell division in the past: “Two biggest obstacles have been (one) dogmas that cells somehow must actively sense space or time to control cell size, and (two) technology that did not exist until recently, which now allows monitoring the growth and division patterns of tens of thousands of individual cells under tightly controlled environment.”

The research team developed a tiny device that isolates individual genetic materials.

The tool allowed the researchers to observe thousands of individual bacterial cells–Gram-negative E. coli and Gram-positive B. subtilis–over hundreds of generations. The researchers manipulated the conditions in which the cells lived. A wide range of tightly controlled steady-state growth conditions were experimented with.

According to the researchers, the new method allowed them to produce statistical samples about a thousand times better than had previously been available.

“We looked at the growth patterns of the cells very very carefully, and realized that there is something really special about the way the cells control their size,” explained Jun.

“No one has been able to answer this question,” Jun said in their press release, noting that this was even the case for the E. coli bacterium, possibly the most extensively studied organism to date.

The research holds the promise of better informing the fight against cancer, since one of the most important problems in the fight is the process of runaway cell division.

The reports, “Cell-size maintenance: universal strategy revealed” and “Cell-size control and homeostasis in bacteria,” were completed by Suckjoon Jun, Massimo Vergassola and Sattar Taheri-Araghi, and were published in the journal Current Biology.

Images: the work of the researchers

Small farmers produce 80 percent of the world’s food, and they do it with less than 25 percent of the world’s farmland – study

The share of farmland tended by small farmers is shrinking. The land is changing hands. Although small farms are more productive than large farms and tend to grow food products–they produce 80 percent of the world’s food–they are being swallowed up by large corporate farms that grow high-profit crops for export markets. The land left to the largely food-producing small farms is currently only 24 percent of fertile land, and that number is declining sharply.

“Over the past decades, small farmers have been losing access to land at an incredible speed,” Henk Hobbelink, coordinator of GRAIN, told The Speaker. “If we don’t reverse this trend we will not only have more hungry farmers in the future, but the world as a whole will lose the capacity to feed itself.”

GRAIN investigated land use data worldwide to understand the global and specific trends currently taking place with regard to farmland.

“What became very clear from our research is that increasingly fertile farmland is being taken over by huge industrial operations that produce commodities for the global market, not food for people,” Hobbelink told us. “Small farmers, who continue to produce most of the food in the world, are being pushed into an ever diminishing share of the world’s farmland.

“This trend has to be reversed if we want to be able to feed a growing population,” he said.

However, many governments and international organizations are offering grossly incorrect or misleading figures, according to GRAIN, such as those announced by representatives of the UN Food and Agriculture Organization (FAO) in its recent “State of Food and Agriculture”–which was dedicated to family farming.

At this year’s inauguration of the International Year of Family Farming, UN FAO Director General Jose Graziano da Silva stated that 70 percent of the world’s farmland was managed by families, echoing previous conclusions by the UN and other world organizations.

The percentage of farm land currently in the hands of small farms (an average of 2.2 hectares) is, according to GRAIN, actually less than 25 percent. Excluding China and India–where about half of all small farms are located–the ratio is less than one-fifth.

Similar findings were in evidence for every region of the world.

For example, in Belarus small farmers produced over 80 percent of fruits, vegetables and potatoes with only 17 percent of the land. In Botswana, small farmers produced at least 90 percent of millet, maize and groundnuts with less than eight percent of the land.

“Because rural peoples’ access to land is under attack everywhere. From Honduras to Kenya and from Palestine to the Philippines, people are being dislodged from their farms and villages,” GRAIN found. “Those who resist are being jailed or killed. Widespread agrarian strikes in Colombia, protests by community leaders in Madagascar, nationwide marches by landless folk in India, occupations in Andalusia–the list of actions and struggles goes on and on.”

Eighty percent is also the figure given for the percentage of the world’s hungry people who live in rural areas. Many of these people are farmers or farmworkers.

Hobbelink explained this finding to us by saying that it was the result of small farmers simply not having enough land to produce food, and losing access to land at a rapid rate.

There were six general findings GRAIN found to be most compelling.

First, most of the world’s farms are shrinking. Second, these farms together account for less than 25 percent of the world’s farmland. Third, big farms are getting bigger, and small farms and farmers are losing ground to them. Fourth, despite this, small farms continue to be the world’s biggest food producers. Fifth, overall, small farms are more productive than big farms. Sixth, most small farmers are women.

Particularly surprising to the researchers was that land was becoming increasingly concentrated, despite extensive global agrarian reforms.

A “kind of reverse agrarian reform” is taking place in many countries, according to GRAIN. Most of this is happening through corporate land grabbing in Africa and foreign investment and massive farm expansion in Latin America and Asia.

Besides land concentration, among the forces causing small farms to collapse are population pressure and lack of access to land.

Even in India and China, farms have been shrinking. In India, the average farm is 50 percent of the size it was in the 1970s, and in China farm sizes shrunk 25 percent between 1985 and 2000.

In Africa, where no official statistics on farmland concentration were available to GRAIN, researchers based their conclusions on research papers indicating that small farms are shrinking there as well.

Why small farms produce so much, and why they are losing to big corporate farms, was explained by GRAIN in their report: small farms tend to focus on food production, which is then bought from local markets and eaten. Large farms focus on return on investment, and tend to grow more export commodities such as animal feed, biofuels, and wood products. Thus, big farms, with maximized profits, are able to buy more land to produce high-profit commodities.

“Corporate farms are backed by big money, often from the finance industry, investment firms, etc.,” Hobbelink told us. “They are also able to access and influence political decisions at a high level, and in this way often get handed huge swaths of land at incredibly low prices or for free. In the meantime, small farmers don’t have access to credit, and are up against agricultural policies that discriminate against them.”

Besides that, however, small farms tend to be more productive than large farms anyway, according to GRAIN. This phenomenon, which has been termed “the productivity paradox” because it runs contrary to what many people are told, is evinced in statistics. In nine EU countries, small farms have at least double the productivity of large farms, and the other countries show only slightly higher productivity for large farms. Based on its findings, GRAIN calculated that if large farms in some Central American and African nations were as productive as small farms, national agricultural production would double.
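
A back-of-envelope sketch (my own arithmetic on the article’s stylized global shares, not GRAIN’s country tables) shows how lopsided the implied yields are:

```python
# Stylized shares from the article: small farms grow ~80% of the food
# on ~25% of the land. The per-hectare comparison is illustrative only;
# large farms also grow non-food commodities, which this ignores.
small_food, small_land = 0.80, 0.25
large_food, large_land = 1 - small_food, 1 - small_land

small_yield = small_food / small_land  # 3.2 units of food per unit of land
large_yield = large_food / large_land  # ~0.27

print(f"small-farm food output per hectare is {small_yield / large_yield:.0f}x larger")
```

Under these stylized shares the ratio comes out to 12x; the real gap varies by country and crop, as GRAIN’s country data show.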

In its findings, GRAIN also pointed out that big farms are less productive even with greater resource consumption, the best land, most of the irrigation water, and better credit and technical assistance.

Much of the disparity is also due to differences in labor, GRAIN concluded. Big farms cut labor to maximize profits, and this labor is needed for better production.

“There are multiple factors at play,” Hobbelink told us. “In countries with big population growth, farmers without access to more land are forced to divide their land amongst their children. Expansion of urban areas into farmland, the same for mining, tourism, etc. But perhaps the most important factor is the global expansion of industrial plantation farming encroaching upon areas where small farmers and indigenous peoples live.

“The reason why small farmers still produce the majority of the world’s food is twofold. On one hand–as we show in our report–they are simply more efficient, more productive, than the large industrial plantations. And on the other hand they prioritize their land use towards producing food, while industrial plantations mostly produce commodities that no one can eat, or that need a lot of processing before they end up in our food: soybean, oilpalm, sugarcane, rapeseed, etc.”

“The bottom line is that land is becoming more and more concentrated in the hands of the rich and powerful, not that small farmers are doing well,” GRAIN concluded.

“Today, small farmers feed the world and we need them to continue to do so,” said Hobbelink. “If we don’t reverse the current trend of the corporate takeover of the world’s farmland to produce industrial commodities, they will not be able to do so and we will all lose out.”

Photos: all belong to the work of GRAIN

With English comes great power, and responsibility – new study maps global language potency

Language facilitates the global flow of ideas, and the elite languages in global communications are those that are both literate and online, according to new research that has mapped information flows across languages. Certain languages have been found to be much more powerful than others because they are more and better connected within the communications networks of the world, while others are relatively weak. Some smaller languages, however, are stronger than languages spoken by much larger groups of people, due to political and other reasons.

“The global influence of a language is determined by its connections to other languages, not by its number of speakers or their economic power, as these connections make possible the global transfer of ideas,” MIT’s Shahar Ronen, first author of the study, told The Speaker.

Shahar Ronen

Ronen explained how some language speakers–those who speak central languages–have a disproportionate amount of power and responsibility, because their communications are “tacitly shaping the way in which distant cultures see each other,” while other language groups handicap themselves with policies that restrict or disconnect people from global communications networks.

“Think about it this way: if the English-speaking world did not care about the 2014 events in Ukraine, the rest of the world would have a very hard time learning about it as well.”

“A government that disconnects its people from the internet–e.g., as China does with its Great Firewall–hamstrings its ability to gain global influence,” said Ronen. “Governments concerned with boosting international soft power should invest in translating more documents and encourage more people to tweet in their national language. Contemporary China and Russia produce very few international thought leaders, and leave little legacy for future generations.”

The study generated maps of connections between communications media–including 30 years’ worth of book translations in 150 countries, 550 million tweets in 73 languages, and multiple language editions of Wikipedia pages.

“We mapped three global language networks from three sources: Twitter, Wikipedia and book translations,” Ronen explained. “These sources are by no means representative of the world’s population; rather, they represent the elites that generate and propagate ideas around the world.”

The team found that the more central a given language was to the network, the more famous its speakers were predicted to be–centrality predicted fame better than other factors such as population and wealth.

Centrality was based on both the strength and the number of connections. The three global language networks identified by the team centered on English as a global hub, which is connected to secondary hubs–Spanish, French, German, Portuguese, Chinese and Russian.

“For example, it is easy for an idea conceived by a Spaniard to reach an Englishman through bilingual speakers of English and Spanish,” said Ronen. “An idea conceived by a Vietnamese speaker, however, might only reach a Mapudungun speaker in south-central Chile through a circuitous path that connects bilingual speakers of Vietnamese and English, English and Spanish, and Spanish and Mapudungun.”
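
A toy version of such a network (my own illustration, not the study’s data or code; the edges are invented and unweighted) makes the hub structure and the circuitous path concrete:

```python
import networkx as nx

# Invented links between languages; the study weighted real links using
# book translations, multilingual Wikipedia editors and bilingual tweeters.
G = nx.Graph()
G.add_edges_from([
    ("English", "Spanish"), ("English", "French"), ("English", "German"),
    ("English", "Portuguese"), ("English", "Chinese"), ("English", "Russian"),
    ("English", "Vietnamese"),
    ("Spanish", "French"), ("Spanish", "Portuguese"), ("French", "German"),
    ("Spanish", "Mapudungun"),
])

# Eigenvector centrality rewards connections to well-connected languages,
# so the global hub and the secondary hubs score highest.
centrality = nx.eigenvector_centrality(G)
for lang, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:4]:
    print(f"{lang:10s} {score:.3f}")

# The article's circuitous route from Vietnamese to Mapudungun:
print(" -> ".join(nx.shortest_path(G, "Vietnamese", "Mapudungun")))
```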

Many of the world’s people are to a degree left out of the global conversation–those who do not communicate in an elite language–and these people face profound limitations. This extends both to languages that are not spoken by relatively large numbers of people and to languages spoken by significant populations but limited for other reasons, Ronen said.

“The truly disenfranchised, at least as far as global communication is concerned, are those whose languages don’t even show up on our network. That said, speakers of languages such as Chinese or Arabic, both with low centrality given their number of speakers, are disadvantaged as well, as they are less likely to be exposed to the latest and greatest ideas in their fields and will not be able to communicate theirs globally. Consider a researcher who speaks only Chinese who will not be exposed to work by her peers abroad, or a CEO who speaks only Arabic.”

Ronen highlighted the importance of learning more than one language–particularly top elite languages.

“Learning a new language opens up a new part of the world for the learner and broadens his/her perspective. Achieving reasonable fluency in English is one of the best investments an individual in any country and any occupation can make. If you already speak English, learning another language is still valuable. If you have a specific goal such as doing business in a certain country, learning that country’s language will be highly beneficial.

“If you’re looking for a language that can open new opportunities for you in general, our study can inform you which languages to consider–Spanish or French are great options. From a global point of view, more multilinguals mean more and stronger connections between languages, which in turn facilitate the global flow of ideas.”

The report, “Links that speak: The global language network and its association with global fame” was completed by Shahar Ronen, Bruno Gonçalves, Kevin Z. Hu, Alessandro Vespignani, Steven Pinker, and César A. Hidalgo.

The mind has many rooms – their architecture is the architecture of memory – study

Memory was considered the greatest faculty of the mind before the invention of printing. Greek philosophers, Roman lawyers and medieval European priests practiced the art by storing memories within mentally constructed architecture and objects. Recently, a study has found that rooms may be exactly how memory is stored.

“Each place–or room in your house–is represented by a unique map, or memory, and because we have so many different maps we can remember many similar places without mixing them up,” Dr. Charlotte Alme of the Kavli Institute for Systems Neuroscience/Centre for Neural Computation, Medical Technical Research Centre, at the Norwegian University of Science and Technology (NTNU), and lead researcher on the study, told The Speaker.

Dr. Charlotte Alme

“[W]e just published a paper where we see that rats–and most likely humans–have a map for each individual place, and this is why the method of loci works.”

“Episodic memory is characterized by an apparently astronomical storage capacity,” the researchers wrote in framing their study, commenting on the thousands of new experiences that are encoded in the mind every day. These memories are thought to depend on the neural network properties of the hippocampus.

Alme provided some basic context for understanding the work.

“Memories are thought to be stored in physical networks of neurons that are connected to each other. Memories consist of many sensory pieces of information that are associated–when our brain encodes a personal experience, something in common is: what happened where and when.

“The connections between the neurons involved in encoding an experience are modified and strengthened in order to make a memory for that event. This means that when we years later smell an odor for example, we can remember the whole episode just by getting a cue–the odor–presented.

“We know now that the hippocampus is involved in the formation of episodic–autobiographical–and spatial memories, in both humans and rats. The hippocampus is an evolutionarily very old part of the brain, conserved across mammals. Accordingly, we can investigate the spatial component of memory in rats and at the same time figure out how our own memory system works. We have known for a long time that there are place-specific cells in the hippocampus.

Image: Experimental setup and procedure, with photographs of all 11 rooms

“The place cells that are active in restricted areas of an environment are also reactivated if a rat–or a human–experiences a previously known location–the place cell fires at the exact same location, meaning that a memory for that particular place is formed. When a rat is placed in a different room, the place cell changes its firing location, and that tells us that the rat knows it is in a different room.”
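To picture this remapping, here is a conceptual toy (our illustration, not the study’s recordings): each room’s map assigns every recorded neuron a firing location, a revisited room reproduces its old map, and a new room reshuffles the assignments.

```python
# Conceptual sketch of place-cell remapping across rooms (illustrative only).
import random

def make_map(neurons, room_id):
    """Assign each neuron a firing location; deterministic per room."""
    rng = random.Random(room_id)  # the room itself fixes the map
    return {n: (rng.uniform(0, 1), rng.uniform(0, 1)) for n in neurons}

neurons = range(50)  # roughly the number of cells recorded per rat
room_a = make_map(neurons, "room A")
room_b = make_map(neurons, "room B")
room_a_again = make_map(neurons, "room A")  # revisiting a known room

print(room_a == room_a_again)  # True: the same place reactivates the same map
print(room_a == room_b)        # False: a different room, an independent map
```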

The current research, Alme said, was a matter of conducting a test to validate one of two possible theories.

“We wondered: what happens if we introduce many very similar rooms to a rat while we look at the firing response of the place cells? Will we see a generalization over all–or some–of the environments, so that many place cells fire at the same location across all the rooms? Or will the rat create unique maps for each location and thus be able to separate many very similar experiences? We observed the latter scenario. The 11 rooms can be combined in 55 ways to compare the rooms with each other, and when none of the room representations or memories overlap, this tells us that we have an enormously large storage capacity for memories in the brain.”
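The figure of 55 is just the number of distinct pairs that can be formed from 11 rooms, as a one-line check confirms:

```python
from math import comb

print(comb(11, 2))  # 55 pairwise comparisons between 11 rooms
```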

In the research, the team found a complete lack of any overlap between spatial maps despite exposing the animals to a range of rooms with similar sensory features, which means that CA3 place cells may well form unique spatial representations for every single environment.

Image: Method of loci figure and place cell maps recorded from the rats, showing that each room is represented by its own map

“We recorded from on average 50 neurons in each rat, and the rat hippocampus consists of hundreds of thousands of cells–in other words, we can create and store very many memories throughout our lifetime.”

The research did not, however, produce evidence from which to make detailed estimates of the bounds of memory capacity in CA3.

The work has important implications for the traditionally practiced arts of memory. Throughout history, people across cultures have developed and trained themselves in mnemonic arts based on mentally constructing physical architecture and objects. For example, Romans would search through a mental apartment until they found a piece of legislation, and Medieval monks would memorize sermons by organizing their important points as mental objects stored appropriately within mental architecture.

“The findings of our paper also help to explain how the ancient Method of Loci works. Because we are extremely good at remembering places, and because we are very visual creatures, this combination can be utilized when you want to remember something.”

Alme explained how this can work.

“Associate what you want to remember with a place that you know well, e.g. your own house. Your own house consists of several rooms, or loci. Create a mental path through your house, and in each room connect or associate what you want to remember with something you place in that room. When you have made all the associations, you can recall everything by taking a mental walk through your house again.

“For example, as this cartoon shows: in the kitchen there are chilies flying in the air–the country to remember is Chile. Next to the kitchen, in the dining room, your friend Tina is very angry–Argentina. In the living room there is a seal playing brass–Brazil. Up the staircase there is a globe with a red band around the equator–Ecuador. In the attic Columbus is lying on a bed watching TV–Colombia.

“You create your own associations–the stranger the associations, the easier they are to remember!”
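As a toy illustration of the technique (our construction, not the researchers’), a memory palace maps naturally onto an ordered list of room–association pairs that is replayed in order at recall time:

```python
# Toy model of the method of loci: a fixed walk through rooms, each holding
# one vivid association (the South America example from the cartoon).
palace = [
    ("kitchen",     "chilies flying in the air",      "Chile"),
    ("dining room", "friend Tina, very angry",        "Argentina"),
    ("living room", "a seal playing brass",           "Brazil"),
    ("staircase",   "a globe with a red band",        "Ecuador"),
    ("attic",       "Columbus on a bed, watching TV", "Colombia"),
]

def recall(palace):
    """Walk the fixed path of rooms and read off the stored items."""
    return [item for _room, _cue, item in palace]

print(recall(palace))  # ['Chile', 'Argentina', 'Brazil', 'Ecuador', 'Colombia']
```

The fixed walking order is the point: the rooms supply both the storage slots and the retrieval sequence.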

The research has pushed forward our understanding of how memories are stored by place cells in the hippocampal area CA3. It had not previously been known whether the cells maintain independent representations as the number of memorized environments increases.

The report, “Place cells in the hippocampus: Eleven maps for eleven rooms,” was completed by Charlotte B. Alme, Chenglin Miao, Karel Jezek, Alessandro Treves, Edvard I. Moser, and May-Britt Moser, a joint team of researchers from the Norwegian University of Science and Technology’s (NTNU) Kavli Institute for Systems Neuroscience and Centre for Neural Computation along with colleagues from the Czech Republic and Italy, and was published on the Proceedings of the National Academy of Sciences (PNAS) website.

Images: the work of the researchers, Nature Reviews, the Palace Project


Dark matter signals, part two: Boiarskyi explains


Arguably the biggest science story of the week was the discovery of material evidence of dark matter. In this two-part article, two lead researchers on the report explain their findings and the significance of the work. Their accounts are full of–besides beautiful explanations of cutting-edge physics research in layman’s terms–potent philosophy and enthralling sentiment about what these scientists are doing and what will come.


Never-before detected elements have been found by a group of European physicists looking for anomalies in signals emanating from several galaxies. The findings–pieces of “missing Lego” in the words of researcher Dr. Oleksii Boiarskyi–will validate some of the approaches currently being undertaken to understand the universe while invalidating others, and will bring us one step closer to the complete picture of physics.

Image: Dr. Oleksii Boiarskyi

“If this signal is confirmed, we will have a completely new tool to study the structure of our Universe, its ‘dark side,’ and also its history–how it formed,” Dr. Oleksii Boiarskyi of the Instituut-Lorentz for Theoretical Physics at Leiden University told The Speaker.

“On the particle physics level, we will know more about the missing Lego elements that were used to build this Universe–and that we have not detected so far. This will support some approaches to extending our knowledge and will disfavour others.

“We would make one more step towards the complete picture of physics.”

Read more: Dark matter signals, part one: Ruchayskiy explains

The team detected anomalies in signals–photon emissions in X-ray spectra–using the European Space Agency’s (ESA) XMM-Newton telescope. The anomalies were something the team had been seeking–they were acting on a hypothesis that dark matter occasionally decays, and that they could pick up signals representing that decaying dark matter. They found just that.

The findings, if confirmed, would be the first ever evidence of the heretofore undetectable material that accounts for an estimated 80 percent of the matter in our universe.

Boiarskyi explained to us what the signal was that they had detected, and what it was like reading the data.

“We study the spectra of galaxies and clusters of galaxies,” said Boiarskyi. “A spectrum is a function: the number of photons detected at each energy.

“It has a smooth–continuous–part and narrow lines. The lines come from various atomic transitions, and the continuum from the emission of accelerated charged particles.

“We can find a model that describes all these emissions and fits the data well. If a statistically significant residual to this model exists, this means that there is another line, coming from some additional quantum transition.

“In our case, the position and normalisation of this line are not what you would expect from an atomic transition. Moreover, it changes over the sky as the DM density does–projected along the line of sight.

“That is why there is a conjecture that this could come from the decay of DM particles. You can check this conjecture by comparing signals from various DM-dominated objects. So far it is consistent.”
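A minimal sketch of that line-search logic (a toy in Python with numpy, not the authors’ pipeline; the power-law continuum, line energy, and width are all invented) might look like this:

```python
# Fit a smooth continuum to a toy spectrum, then flag a statistically
# significant residual as a candidate emission line.
import numpy as np

rng = np.random.default_rng(0)
energy = np.linspace(2.0, 5.0, 300)                        # keV
continuum = 200.0 * energy ** -1.5                         # toy power law
line = 30.0 * np.exp(-0.5 * ((energy - 3.5) / 0.03) ** 2)  # unexpected narrow line
counts = rng.poisson(continuum + line)                     # observed photon counts

# Fit only the smooth part: a power law is a straight line in log-log space.
slope, intercept = np.polyfit(np.log(energy), np.log(counts + 1), 1)
model = np.exp(intercept + slope * np.log(energy))

# Poisson residual significance: (data - model) / sqrt(model).
sigma = (counts - model) / np.sqrt(model)
print(f"strongest residual: {sigma.max():.1f} sigma at {energy[np.argmax(sigma)]:.2f} keV")
```

In the real analysis the model also includes the known atomic lines, so that only an unexplained residual–like the one reported here–stands out.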

It is not certain whether the type of dark matter found by the team accounts for the full 80 percent of matter that is expected to exist but has never been observed, or whether there is a variety of dark matter particles.

“Nobody knows,” said Boiarskyi, “both are possible. But of course one first tries a minimal model with one sort of DM.”

Confirmation could be a year away, Boiarskyi told us.

“[W]e still need to wait about a year, until new data are available, to check whether this hypothesis is correct. If DM is ever discovered, most likely it will be a story similar to the one we are having now…”

The report, “An unidentified line in X-ray spectra of the Andromeda galaxy and Perseus galaxy cluster,” was completed by A. Boyarsky, O. Ruchayskiy, D. Iakubovskyi, and J. Franse, and was published on the Cornell University website.

Read the thoughts of another physicist on this study, Dr. Oleg Ruchayskiy: Dark matter signals, part one: Ruchayskiy explains

Dark matter signals, part one: Ruchayskiy explains


Arguably the biggest science story of the week was the discovery of material evidence of dark matter. In this two-part article, two lead researchers on the report explain their findings and the significance of the work. Their accounts are full of–besides beautiful explanations of cutting-edge physics research in layman’s terms–potent philosophy and enthralling sentiment about what these scientists are doing and what will come.


Setting out with the hypothesis that dark matter occasionally, though rarely, decays, European physicists pointed the European Space Agency’s XMM-Newton telescope at faraway galaxies to seek evidence–in signal anomalies–of the common, though never seen, material that makes up an estimated 80 percent of the matter in our universe. In his explanation of the recent work, researcher Dr. Oleg Ruchayskiy pointed to the era we are now entering, in which our vision and understanding will reach what we have never before known–both in terms of nature and of time.

Image: Dr. Oleg Ruchayskiy

“Dark structures that surround us will become ‘visible,’” Dr. Oleg Ruchayskiy, physicist at the École Polytechnique Fédérale de Lausanne (EPFL) and one of the authors of the study, told The Speaker. “Dark matter distribution is not expected to be perfectly smooth, and its clumps and streams carry important information about the past of the Milky Way or the Local Group. We will be able to visualize and maybe even tell the story of ‘our Galactic past.’

“This will be the epoch of ‘astronomical archaeology’–looking at events that happened a billion years ago.”

“The fact that this can be possible–to explore by the power of our minds the regions of space and periods of time, incomparable with the lifespan of humans–isn’t it beautiful?”

Read more: Dark matter signals, part two: Boiarskyi explains

Ruchayskiy commented on the kinds of improvements that were in store for humanity as our ability to see our universe was expanded to the perception of dark matter.

“Another thing about our research: if this signal is real dark matter decay, we will soon be able to build ‘dark matter telescopes.’ We will be able to do 3D tomography of our own galaxy and of the nearby Universe. We will even be able to look into the past of our Universe.”

Ruchayskiy explained the team’s recent work.

“We were testing the hypothesis that this particle is _the same_ in different galaxies and in the galaxy clusters. Otherwise it would be very difficult to cross-check this signal.”

The physicist provided details about what the signal was that they found–how it was characterized and where it was located–and about the course of the study, which happily coincided with similar results from a completely separate team of physicists.

“Just to give you some background info: we were looking at X-ray spectra of galaxies and galaxy clusters. X-rays do not pass through the atmosphere, therefore all X-ray satellites are in space. So, the data is just files. In essence, a file is very simple: it tells you how many X-ray photons of a given energy arrived from a given direction.

“It is more or less known how the X-ray signal from a galaxy should look. That is, ‘given X photons of energy 1 keV, we expect Y photons of energy 2 keV,’ etc. This is called ‘an X-ray spectrum of a galaxy’–or any other object.

“So, analyzing the spectrum of the Andromeda galaxy, we found that there are ‘extra photons’ at an energy around 3.5 keV. Several hundred of them.

“This was not an ‘accident’–we were looking for these photons. We had a hypothesis that dark matter particles are not stable, but occasionally decay. This happens very rarely: any given particle has a probability of something like one in a billion of decaying. But in a galaxy like ours there are 10^70 such particles, so the resulting signal may be sizable. We searched for such a signal for many years–and so did other groups–so we knew that it should be small…
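A back-of-the-envelope version of that reasoning (our toy numbers, taken only from the orders of magnitude in the quote; the quote does not specify the time window for the decay probability):

```python
# Why a very rare decay can still produce a detectable signal: a tiny
# per-particle probability times an enormous particle count.
n_particles = 1e70   # dark matter particles in a galaxy (order of magnitude)
p_decay = 1e-9       # per-particle decay probability (toy value from the quote)
print(f"{n_particles * p_decay:.0e} decay photons at the source")  # 1e+61

# The telescope sees this diluted by distance, detector area, and exposure
# time, which is why the observed excess is only a few hundred photons.
```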

“We found such a signal in Andromeda.

“A signal in one particular object could be anything: a fluctuation, emission of ions, instrumental error. So, we looked at the data from a different object–the Perseus galaxy cluster. Galaxies and galaxy clusters have very different X-ray emissions, but the masses of both types of objects are ‘dark matter dominated,’ so seeing a line there was necessary. And we did find the signal. Moreover, we saw that the signal is redshifted–as Perseus is farther away from us–which excluded an instrumental origin of the signal. We saw that the signal’s intensity scales correctly (because the dark matter mass in the Perseus cluster is different from that of the Andromeda galaxy). We saw that the signal becomes weaker as one goes to the outskirts of the cluster, because dark matter is more concentrated towards the center–all this was consistent with the decaying dark matter hypothesis.
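The redshift check is simple to state: a line emitted at rest energy E should be observed at E/(1+z) for each object, which no instrumental artifact would mimic. A small sketch (the rest energy and redshifts below are approximate, for illustration only):

```python
# A genuine emission line must shift with each object's redshift z.
def observed_energy(e_rest_kev: float, z: float) -> float:
    return e_rest_kev / (1.0 + z)

E_REST = 3.55  # keV, approximate rest energy of the reported line
for name, z in {"Andromeda": -0.001, "Perseus cluster": 0.018}.items():
    print(f"{name}: line expected near {observed_energy(E_REST, z):.3f} keV")
```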

“Later we did another work where we found a similar line from the center of our own Galaxy, the Milky Way. And again, its intensity fell into the predicted range.

“Independently from us, and at the same time–even a couple of days earlier–another group, from Harvard CfA/NASA–you have probably seen the press releases of their result–found a signal at the same energy in various galaxy clusters, both nearby and distant. Their work is completely independent and uses different observations. It is of course extremely important that these are the independent results of two groups, and that these groups confirm each other–we hadn’t seen each other’s work prior to publication on arXiv.”

Next the team will probe space for signals that would corroborate their hypothesis.

“Our strategy now is: find this signal from many dark matter dominated objects, show that its intensity is proportional to the total amount of dark matter in each object.”

Ruchayskiy elaborated on the breadth of work that remained to fill in the full picture of dark matter–namely the possibility of various types of dark matter, which we asked him about.

“Testing a hypothesis that this is only one component of dark matter and there are other types of particles would be harder, because then we need to know whether the portion of decaying particles changes from object to object, and this depends on the model of dark matter.

“[W]e were specifically looking for these particles and for this type of signal. We did not know the energy, so we were scanning the energies.

“So, this road, from an idea that the dark matter particle could be unstable to seeing an actual signal–it is breathtaking. The fact that people can grasp with their minds something about the whole Universe–it is very fascinating for me. And any ‘prediction’ that becomes a ‘confirmed signal’–this is very impressive. It’s a feeling that is hard to express. Of course, our signal is not unique in this aspect. Every scientific discovery is like that. Which makes it even more fascinating.”

The report, “An unidentified line in X-ray spectra of the Andromeda galaxy and Perseus galaxy cluster,” was completed by A. Boyarsky, O. Ruchayskiy, D. Iakubovskyi, and J. Franse, and was published on the Cornell University website.

Read more: Dark matter signals, part two: Boiarskyi explains