Friday, May 20, 2011

CENTRO DE ESTUDOS - Instituto Oswaldo Cruz

Brazilian researchers increase an enzyme's luminescent activity

Scientists discover one of the "switches" that can make enzymes luminescent

The study was conducted by UFSCar researchers Rogilene Prado and Vadim Viviani

Agência FAPESP - Researchers in the Bioluminescence and Biophotonics group at the Federal University of São Carlos (UFSCar), Sorocaba campus, have taken an important step toward enabling, in the near future, certain enzymes of biomedical, biotechnological and environmental interest to emit light.

This property is important for studying diseases such as cancer and bacterial infections, for example. The scientists discovered one of the main "circuit breakers" in the "fuse box" of weakly luminescent enzymes of the same class as the luciferases -- the enzymes responsible for the cold, visible light emitted by fireflies -- and showed that it can be modified to increase the intensity of their light.

The study's finding, the result of a research project supported by FAPESP through a Regular Research Grant, will be published at the end of this month in the journal Photochemical and Photobiological Sciences.

In 2009, the same group cloned and isolated, from the larva of a non-luminescent insect (a beetle), a weakly luminescent enzyme of the same family as the luciferases (the AMP/CoA-ligases), known as a protoluciferase, in order to study how firefly luciferases evolved the ability to catalyze the oxidation of luciferin -- the compound responsible for insect bioluminescence -- and produce intense visible light.

In recent years, by comparing the amino acid sequences of the protoluciferase and the luciferases, the UFSCar researchers began to identify parts of their structures that could be involved in determining light-producing activity.

Using genetic engineering techniques, doctoral student Rogilene Prado and researcher Vadim Viviani, the project's coordinator, introduced amino acid mutations into the protoluciferase. The group has now found that mutating one of these amino acids greatly increases the enzyme's luminescent activity, making it very similar to that of a luciferase.

"It is as if the protoluciferase enzyme were an electronic circuit with a battery, represented by oxygen, and a lamp, which is the luciferin. We have now discovered one of the main switches in its structure, the one responsible for connecting the battery to the lamp -- that is, for making the reaction between luciferin and oxygen occur and turning on the light," Viviani told Agência FAPESP.

According to him, the discovery opens up the possibility of turning other, non-luminous enzymes of the AMP/CoA-ligase family that are of biomedical, biotechnological and environmental interest into light emitters.

Present in all organisms, including bacteria and humans, the AMP/CoA-ligases perform a wide range of metabolic functions, such as pigment biosynthesis (in plants), lipid metabolism, antibiotic synthesis, and the elimination of toxic substances and of chemical compounds foreign to an organism or biological system (xenobiotics).

What they have in common is the first reaction they catalyze: the activation of organic acids such as amino acids, fatty acids and firefly luciferin itself, which is oxidized by the luciferases to produce light. For this reason, the researchers intend to use these enzymes as indicators of certain organic acids of biomedical interest, such as toxic acids, and of biotechnological interest.

"The ability to serve as an indicator for detecting certain organic acids of pharmaceutical and biotechnological interest may well represent the greatest application potential of these enzymes," Viviani said.

Evolution in the laboratory

According to the project's coordinator, only a few luciferases -- from North American, European and Japanese fireflies -- are used as analytical reagents. They serve to assess the metabolic state of a biological sample, to act as biomarkers of gene expression, or to label cancer cells in biophotonic studies, for example.

Through research on the prototype luciferase enzyme that they cloned and whose luminescence they increased, the Brazilian researchers intend to use genetic engineering to create a new luciferase whose light emission is comparable to that of the luciferases currently on the market.

"Under ideal evolutionary conditions, this protoluciferase could turn into a luciferase. We are simulating its evolution in the laboratory," Viviani said.

The UFSCar group is one of the few in Brazil dedicated to the study of luciferase enzymes. At the Institute of Chemistry of the University of São Paulo (USP) there is another group, coordinated by Professor Cassius Stevani, which collaborates with them and studies luminescent fungi.

Worldwide, the research groups in this field are based in the United States, Europe and Japan -- the Japanese group collaborates with the Brazilian researchers. And, according to Viviani, none of them has yet managed to clone a protoluminescent enzyme with a light-emitting capacity comparable to that of the Brazilian group's enzyme.

The abstract of the study "Structural evolution of luciferase activity in Zophobas mealworm AMP/CoA-ligase (protoluciferase) through site-directed mutagenesis of the luciferin binding site," by Professor Viviani and co-authors, can be read at pubs.rsc.org/en/Content/ArticleLanding/2011/PP/c0pp00392a

Nanopatch for the Heart, for Heart Attack Victims

ScienceDaily (May 19, 2011) — Engineers at Brown University and in India have a promising new approach to treating heart-attack victims. The researchers created a nanopatch with carbon nanofibers and a polymer. In laboratory tests, natural heart-tissue cell density on the nanoscaffold was six times greater than the control sample, while neuron density had doubled.
Beating heart. Engineers at Brown University have created a nanopatch for the heart that tests show restores areas that have been damaged, such as from a heart attack.
When you suffer a heart attack, a part of your heart dies. Nerve cells in the heart's wall and a special class of cells that spontaneously expand and contract -- keeping the heart beating in perfect synchronicity -- are lost forever. Surgeons can't repair the affected area. It's as if, when confronted with a road riddled with potholes, you abandon what's there and build a new road instead. Needless to say, this is a grossly inefficient way to treat arguably the single most important organ in the human body. The best approach would be to figure out how to resuscitate the deadened area, and in this quest, a group of researchers at Brown University and in India may have an answer.

The scientists turned to nanotechnology. In a lab, they built a scaffold-looking structure consisting of carbon nanofibers and a government-approved polymer. Tests showed the synthetic nanopatch regenerated natural heart tissue cells -- called cardiomyocytes -- as well as neurons. In short, the tests showed that a dead region of the heart can be brought back to life.

"This whole idea is to put something where dead tissue is to help regenerate it, so that you eventually have a healthy heart," said David Stout, a graduate student in the School of Engineering at Brown and the lead author of the paper published in Acta Biomaterialia.

The approach, if successful, would help millions of people. In 2009, some 785,000 Americans suffered a new heart attack linked to weakness caused by the scarred cardiac muscle from a previous heart attack, according to the American Heart Association. Just as ominously, a third of women and a fifth of men who have experienced a heart attack will have another one within six years, the researchers added, citing the American Heart Association.

What is unique about the experiments at Brown and at the Indian Institute of Technology Kanpur is that the engineers employed carbon nanofibers, helical-shaped tubes with diameters between 60 and 200 nanometers. The carbon nanofibers work well because they are excellent conductors of electrons, performing the kind of electrical connections the heart relies upon for keeping a steady beat. The researchers stitched the nanofibers together using a poly lactic-co-glycolic acid polymer to form a mesh about 22 millimeters long and 15 microns thick and resembling "a black Band Aid," Stout said. They laid the mesh on a glass substrate to test whether cardiomyocytes would colonize the surface and grow more cells.

In tests with the 200-nanometer-diameter carbon nanofibers seeded with cardiomyocytes, five times as many heart-tissue cells colonized the surface after four hours than with a control sample consisting of the polymer only. After five days, the density of the surface was six times greater than the control sample, the researchers reported. Neuron density had also doubled after four days, they added.

The scaffold works because it is elastic and durable, and can thus expand and contract much like heart tissue, said Thomas Webster, associate professor in engineering and orthopaedics at Brown and the corresponding author on the paper. It's because of these properties and the carbon nanofibers that cardiomyocytes and neurons congregate on the scaffold and spawn new cells, in effect regenerating the area.

The scientists want to tweak the scaffold pattern to better mimic the electrical current of the heart, as well as build an in-vitro model to test how the material reacts to the heart's voltage and beat regime. They also want to make sure the cardiomyocytes that grow on the scaffolds are endowed with the same abilities as other heart-tissue cells.

Bikramjit Basu at the Indian Institute of Technology Kanpur contributed to the paper. The Indo-U.S. Science and Technology Forum, the Hermann Foundation, the Indian Institute of Technology Kanpur, the government of India and California State University funded the research.

Embryonic Cells: Predicting the Fate of Personalized Cells Next Step Toward New Therapies

ScienceDaily (May 19, 2011) — Discovering the step-by-step details of the path embryonic cells take to develop into their final tissue type is the clinical goal of many stem cell biologists. To that end, Kenneth S. Zaret, PhD, professor of Cell and Developmental Biology at the Perelman School of Medicine at the University of Pennsylvania, and associate director of the Penn Institute for Regenerative Medicine, and Cheng-Ran Xu, PhD, a postdoctoral researcher in the Zaret laboratory, looked at immature cells called progenitors and found a way to potentially predict their fate. They base this on how the protein spools around which DNA winds -- called histones -- are marked by other proteins.
Earliest cells that form the liver (blue) emerging from progenitor cells (yellow) in the early embryo (green).
This study appeared this week in Science.

In the past, researchers grew progenitor cells and waited to see what they differentiated into. Now, they aim to use this marker system, outside of a cell's DNA and genes, to predict the eventual fate. This extra-DNA system of gene expression control is called epigenetics.

"We were surprised that there's a difference in the epigenetic marks in the process for liver versus pancreas before the cell-fate 'decision' is made," says Zaret. "This suggests that we could manipulate the marks to influence fate or look at marks to better guess the fate of cells early in the differentiation process."

"How cells become committed to particular fates is a fundamental question in developmental biology," said Susan Haynes, PhD, program director in the Division of Genetics and Developmental Biology at the National Institutes of Health, which funds this line of research. "This work provides important new insights into the early steps of this process and suggests new approaches for controlling stem-cell fate in regenerative medicine therapies."

A Guiding Path

How the developing embryo starts to apportion different functions to different cell types is a key question for developmental biology and regenerative medicine. Guidance along the correct path is provided by regulatory proteins that attach to chromosomes, marking part of the genome to be turned on or off. But first the two meters of tightly coiled DNA inside the nucleus of every cell must be loosened a bit. Regulatory proteins help with this, exposing a small domain near the target gene.

Chemical signals from neighboring cells in the embryo tell early progenitor cells to activate genes encoding proteins. These, in turn, guide the cells to become liver or pancreas cells, in the case of Zaret's work. Over several years, his lab has unveiled a network of the common signals in the mouse embryo that govern development of these specific cell types.

Zaret likens the complexity of this system to the 26-letter alphabet being able to encode Shakespeare or a menu at a restaurant. Many investigators are now trying to broadly reprogram cells into desired cell fates for potential therapeutic uses.

The researchers had previously shown that a particular growth factor that attaches to the cell surface gives a specific chemical signal for cell-type fate, promoting development along the liver-cell path and suppressing development along the pancreas-cell path. Liver and pancreas cells originate from a common progenitor cell type.

Zaret's group figured out which enzymes -- histone acetyltransferases and methyltransferases, which add acetyl or methyl groups to histones -- are relevant to the pancreas arm of the liver-pancreas fate decision. They used mice in which they knocked out the function of one enzyme type versus the other to induce the development of fewer liver cells and more pancreas cells.

The transferases mark genes for liver and pancreas fates differently before a cell moves into the next intermediate type along the way to becoming a mature liver or pancreas cell.

Investigators want to make embryonic stem cells for liver or pancreatic beta cells for therapies and research. To do this, they mimic the embryonic developmental steps to proceed from an embryonic stem cell to a mature cell, but have no way of knowing if they are on the right track. The hope is that the findings from this study can be applied to assess the epigenetic state of intermediate progenitor cells.

"By better understanding how a cell is normally programmed we will eventually be able to properly reprogram other cells," notes Zaret. In the near term, the team also aims to generate liver and pancreas cells for research and to screen drugs that repair defects or facilitate cell growth.

With regenerated cells, researchers hope to one day fill the acute shortage in pancreatic and liver tissue available for transplantation in cases of type I diabetes and acute liver failure.

The research was funded by the National Institute of General Medical Sciences and the National Institute of Diabetes and Digestive and Kidney Diseases.

Nuclear Magnetic Resonance With No Magnets

ScienceDaily (May 19, 2011) — Nuclear magnetic resonance (NMR), a scientific technique associated with outsized, very low-temperature, superconducting magnets, is one of the principal tools in the chemist's arsenal, used to study everything from alcohols to proteins to such frontiers as quantum computing. In hospitals the machinery of NMR's cousin, magnetic resonance imaging (MRI), is as loud as it is big, but nevertheless a mainstay of diagnosis for a wide range of medical conditions.
Hydrogen molecules consist of two hydrogen atoms that share their electrons in a covalent bond. In an orthohydrogen molecule, both nuclei are spin up. In parahydrogen, one is spin up and the other spin down. The orthohydrogen molecule as a whole has spin one, but the parahydrogen molecule has spin zero.
It sounds like magic, but now two groups of scientists at Berkeley Lab and UC Berkeley, one expert in chemistry and the other in atomic physics, long working together as a multidisciplinary team, have shown that chemical analysis with NMR is practical without using any magnets at all.

Dmitry Budker of Berkeley Lab's Nuclear Science Division, a professor of physics at UC Berkeley, is a protean experimenter who leads a group with interests ranging as far afield as tests of the fundamental theorems of quantum mechanics, biomagnetism in plants, and violations of basic symmetry relations in atomic nuclei. Alex Pines, of the Lab's Materials Sciences Division and UCB's Department of Chemistry, is a modern master of NMR and MRI. He guides the work of a talented, ever-changing cadre of postdocs and grad students known as the "Pinenuts" -- not only in doing basic research in NMR but in increasing its practical applications. Together the groups have extended the reach of NMR by eliminating the use of magnetic fields at different stages of NMR measurements, and have finally done away with external magnetic fields entirely.

Spinning the information

NMR and MRI depend on the fact that many atomic nuclei possess spin (not classical rotation but a quantum number) and -- like miniature planet Earths with north and south magnetic poles -- have their own dipolar magnetic fields. In conventional NMR these nuclei are lined up by a strong external magnetic field, then knocked off axis by a burst of radio waves. The rate at which each kind of nucleus then "wobbles" (precesses) is unique and identifies the element; for example a hydrogen-1 nucleus, a lone proton, precesses four times faster than a carbon-13 nucleus having six protons and seven neutrons.
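The "four times faster" figure follows directly from the nuclei's gyromagnetic ratios, via the Larmor relation f = γB/(2π). A minimal sketch using standard tabulated constants; the 11.7-tesla field is a typical high-field value chosen for illustration, not one from the experiments described:

```python
import math

# Gyromagnetic ratios in rad/s/T (standard tabulated values)
GAMMA = {"1H": 2.675e8, "13C": 6.728e7}

def larmor_frequency_hz(nucleus, b_field_tesla):
    """Precession frequency f = gamma * B / (2 * pi)."""
    return GAMMA[nucleus] * b_field_tesla / (2 * math.pi)

b = 11.7  # tesla -- a typical high-field NMR magnet
f_h = larmor_frequency_hz("1H", b)   # roughly 500 MHz
f_c = larmor_frequency_hz("13C", b)  # roughly 125 MHz
print(f"1H precesses {f_h / f_c:.1f}x faster than 13C")  # 4.0x
```

The ratio is field-independent, which is why the element's identity survives even when the applied field changes.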

Being able to detect these signals depends first of all on being able to detect net spin; if the sample were to have as many spin-up nuclei as spin-down nuclei it would have zero polarization, and signals would cancel. But since the spin-up orientation requires slightly less energy, a population of atomic nuclei usually has a slight excess of spin ups, if only by a few score in a million.
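The "few score in a million" excess is just the thermal (Boltzmann) polarization of spin-1/2 nuclei. A rough sketch, assuming a typical high-field magnet at room temperature; the numbers are illustrative, not taken from the experiments described:

```python
import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
KB = 1.380649e-23          # Boltzmann constant, J/K
GAMMA_1H = 2.6752218744e8  # proton gyromagnetic ratio, rad/s/T

def thermal_polarization(b_field_tesla, temperature_kelvin):
    """Net equilibrium polarization of an ensemble of spin-1/2 nuclei."""
    return math.tanh(GAMMA_1H * HBAR * b_field_tesla / (2 * KB * temperature_kelvin))

# An 11.7-tesla magnet (a 500 MHz proton spectrometer) at room temperature:
p = thermal_polarization(11.7, 298.0)
print(f"{p * 1e6:.0f} excess spins per million")  # about 40 per million
```

Since polarization scales with the field, dropping to Earth's field (~50 microtesla) shrinks this excess by another factor of about 200,000 -- which is the crux of Budker's objection below.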

"Conventional wisdom holds that trying to do NMR in weak or zero magnetic fields is a bad idea," says Budker, "because the polarization is tiny, and the ability to detect signals is proportional to the strength of the applied field."

The lines in a typical NMR spectrum reveal more than just different elements. Electrons near precessing nuclei alter their precession frequencies and cause a "chemical shift" -- moving the signal or splitting it into separate lines in the NMR spectrum. This is the principal goal of conventional NMR, because chemical shifts point to particular chemical species; for example, even when two hydrocarbons contain the same number of hydrogen, carbon, or other atoms, their signatures differ markedly according to how the atoms are arranged. But without a strong magnetic field, chemical shifts are insignificant.

"Low- or zero-field NMR starts with three strikes against it: small polarization, low detection efficiency, and no chemical-shift signature," Budker says.

"So why do it?" asks Micah Ledbetter of Budker's group. It's a rhetorical question. "The main thing is getting rid of the big, expensive magnets needed for conventional NMR. If you can do that, you can make NMR portable and reduce the costs, including the operating costs. The hope is to be able to do chemical analyses in the field -- underwater, down drill holes, up in balloons -- and maybe even medical diagnoses, far from well-equipped medical centers."

"As it happens," Budker says, "there are already methods for overcoming small polarization and low detection efficiency, the first two objections to low- or zero-field NMR. By bringing these separate methods together, we can tackle the third objection -- no chemical shift -- as well. Zero-field NMR may not be such a bad idea after all."

Net spin orientation can be increased in various ways, collectively known as hyperpolarization. One way to hyperpolarize a sample of hydrogen gas is to change the proportions of parahydrogen and orthohydrogen in it. At normal temperature and pressure hydrogen, like most gases, is molecular: each molecule consists of two atoms bound together. If the spins of the proton nuclei point in the same direction, it's orthohydrogen. If the spins point in opposite directions, it's parahydrogen.

By the mathematics of quantum mechanics, the two proton spins in a hydrogen molecule (the electron spins are paired and contribute no net spin) can combine in three ways to give total spin one -- orthohydrogen -- but in only one way to give spin zero, which is parahydrogen. Thus orthohydrogen molecules normally account for three-quarters of hydrogen gas and parahydrogen only one-quarter.

Parahydrogen can be enriched to 50 percent or even 100 percent at very low temperatures, although the right catalyst must be added or the conversion could take days if not weeks. Then, by chemically reacting the spin-zero parahydrogen molecules with a starting compound, the product of the hydrogenation may end up highly polarized. This hyperpolarization can be extended not only to the parts of the molecule directly reacting with the hydrogen, but even to the far corners of large molecules. The Pinenuts, who devised many of the techniques, are masters of parahydrogen production and its hyperpolarization chemistry.

"With a high proportion of parahydrogen you get a terrific degree of polarization," says Ledbetter. "The catch is, it's spin zero. It doesn't have a magnetic moment, so it doesn't give you a signal! But all is not lost…."

And now for the magic

In low magnetic fields, increasing detection efficiency requires a very different approach, using detectors called magnetometers. In early low-field experiments, magnetometers called SQUIDs (superconducting quantum interference devices) were used. Although exquisitely sensitive, SQUIDs, like the big magnets used in high-field NMR, must be cryogenically cooled to low temperatures.

Optical-atomic magnetometers are based on a different principle -- one that, curiously, is something like NMR in reverse, except that optical-atomic magnetometers measure whole atoms, not just nuclei. Here, an external magnetic field is inferred from the spin of the atoms inside the magnetometer's own vapor cell, typically a thin gas of an alkali metal such as potassium or rubidium. The atoms are polarized with laser light; if there's even a weak external field, they begin to precess. A second laser beam probes how much they're precessing and thus how strong the external field is.

Budker's group has brought optical-atomic magnetometry to a high pitch by such techniques as extending the "relaxation time," the time before the polarized vapor loses its polarization. In previous collaborations, the Pines and Budker groups have used magnetometers with NMR and MRI to image the flow of water using only the Earth's magnetic field or no field at all, to detect hyperpolarized xenon gas (but without analyzing chemical states), and in other applications. The next frontier is chemical analysis.

"No matter how sensitive your detector or how polarized your samples, you can't detect chemical shifts in a zero field," Budker says. "But there has always been another signal in NMR that can be used for chemical analysis -- it's just that it is usually so weak compared to chemical shifts, it has been the poor relative in the NMR family. It's called J-coupling."

Discovered in 1950 by the NMR pioneer Erwin Hahn and his graduate student, Donald Maxwell, J-coupling provides an interaction pathway between two protons (or other nuclei with spin), which is mediated by their associated electrons. The signature frequencies of these interactions, appearing in the NMR spectrum, can be used to determine the angle between chemical bonds and distances between the nuclei.

"You can even tell how many bonds separate the two spins," Ledbetter says. "J-coupling reveals all that information."

The resulting signals are highly specific and indicate just what chemical species is being observed. Moreover, as Hahn saw right away, while the signal can be modified by external magnetic fields, it does not vanish in their absence.
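Why a J-coupled pair still produces a signal at zero field can be sketched by diagonalizing the two-spin Hamiltonian H/h = J I·S: the singlet and triplet levels differ by exactly J, independent of any applied field. The 140 Hz coupling below is a typical one-bond 1H-13C value, chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Spin-1/2 operators (in units of hbar)
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])

J = 140.0  # Hz -- a typical one-bond 1H-13C coupling, illustrative

# Zero-field Hamiltonian of two J-coupled spins, in frequency units: H/h = J * (I . S)
H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

evals = np.sort(np.linalg.eigvalsh(H))
# Singlet level at -3J/4, triplet levels at +J/4: the observable splitting equals J
print(round(float(evals[-1] - evals[0]), 6))  # 140.0
```

No magnetic-field term appears in H, which is the quantitative content of Hahn's observation that the J signal does not vanish in zero field.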

With Ledbetter in the lead, the Budker/Pines collaboration built a magnetometer specifically designed to detect J-coupling at zero magnetic field. Thomas Theis, a graduate student in the Pines group, supplied the parahydrogen and the chemical expertise to take advantage of parahydrogen-induced polarization. Beginning with styrene, a simple hydrocarbon, they measured J-coupling on a series of hydrocarbon derivatives including hexane and hexene, phenylpropene, and dimethyl maleate, important constituents of plastics, petroleum products, even perfumes.

"The first step is to introduce the parahydrogen," Budker says. "The top of the set-up is a test tube containing the sample solution, with a tube down to the bottom through which the parahydrogen is bubbled." In the case of styrene, the parahydrogen was taken up to produce ethylbenzene, a specific arrangement of eight carbon atoms and 10 hydrogen atoms.

Immediately below the test tube sits the magnetometer's alkali vapor cell, a device smaller than a fingernail, microfabricated by Svenja Knappe and John Kitching of the National Institute of Standards and Technology. The vapor cell, which sits on top of a heater, contains rubidium and nitrogen gas through which pump and probe laser beams cross at right angles. The mechanism is surrounded by cylinders of "mu metal," a nickel-iron alloy that acts as a shield against external magnetic fields, including Earth's.

Ledbetter's measurements produced signatures in the spectra which unmistakably identified chemical species and exactly where the polarized protons had been taken up. When styrene was hydrogenated to form ethylbenzene, for example, two atoms from a parahydrogen molecule bound to different atoms of carbon-13 (a scarce but naturally occurring isotope whose nucleus has spin, unlike more abundant carbon-12).

J-coupling signatures are completely different for otherwise identical molecules in which carbon-13 atoms reside in different locations. All of this is seen directly in the results. Says Budker, "When Micah goes into the laboratory, J-coupling is king."

Of the present football-sized magnetometer and its lasers, Ledbetter says, "We're already working on a much smaller version of the magnetometer that will be easy to carry into the field."

Although experiments to date have been performed on molecules that are easily hydrogenated, hyperpolarization with parahydrogen can also be extended to other kinds of molecules. Budker says, "We're just beginning to develop zero-field NMR, and it's still too early to say how well we're going to be able to compete with high-field NMR. But we've already shown that we can get clear, highly specific spectra, with a device that has ready potential for doing low-cost, portable chemical analysis."

Neutrons Provide First Sub-Nanoscale Snapshots of Huntington's Disease Protein

ScienceDaily (May 19, 2011) — Researchers at the Department of Energy's Oak Ridge National Laboratory and the University of Tennessee have for the first time successfully characterized the earliest structural formation of the disease type of the protein that causes Huntington's disease. The incurable, hereditary neurological disorder is always fatal and affects one in 10,000 Americans.



Transmission electron microscopy demonstrates the fibrillar nature of huntingtin aggregates.

Huntington's disease is caused by a renegade protein "huntingtin" that destroys neurons in areas of the brain concerned with the emotions, intellect and movement. All humans have the normal huntingtin protein, which is known to be essential to human life, although its true biological functions remain unclear.
Christopher Stanley, a Shull Fellow in the Neutron Scattering Science Division at ORNL, and Valerie Berthelier, a UT Graduate School of Medicine researcher who studies protein folding and misfolding in Huntington's, have used a small-angle neutron scattering instrument, called Bio-SANS, at ORNL's High Flux Isotope Reactor to explore the earliest aggregate species of the protein that are believed to be the most toxic.

Stanley and Berthelier, in research published in Biophysical Journal, were able to determine the size and mass of the mutant protein structures -- from the earliest small, spherical precursor species composed of two (dimers) and three (trimers) peptides -- along the aggregation pathway to the development of the resulting, later-stage fibrils. They were also able to see inside the later-stage fibrils and determine their internal structure, which provides additional insight into how the peptides aggregate.

"Bio-SANS is a great instrument for taking time-resolved snapshots. You can look at how this stuff changes as a function of time and be able to catch the structures at the earliest of times," Stanley said. "When you study several of these types of systems with different glutamines or different conditions, you begin to learn more and more about the nature of these aggregates and how they begin forming."

Normal huntingtin contains a region of 10 to 20 glutamine amino acids in succession. However, the DNA of Huntington's disease patients encodes for 37 or more glutamines, causing instability in huntingtin fragments that contain this abnormally long glutamine repeat. Consequentially, the mutant protein fragment cannot be degraded normally and instead forms deposits of fibrils in neurons.
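The repeat-length criterion described above is simple to state in code. A hypothetical sketch (the function names and the default threshold parameter are illustrative; the 10-20 and 37+ figures come from the article):

```python
def longest_glutamine_run(protein):
    """Length of the longest run of consecutive glutamine residues (Q)."""
    best = run = 0
    for residue in protein:
        run = run + 1 if residue == "Q" else 0
        best = max(best, run)
    return best

# Thresholds from the article: normal huntingtin carries 10 to 20 glutamines
# in succession, while 37 or more is associated with Huntington's disease.
def is_expanded(protein, disease_threshold=37):
    return longest_glutamine_run(protein) >= disease_threshold

print(is_expanded("M" + "Q" * 18 + "P"))  # False -- within the normal range
print(is_expanded("M" + "Q" * 40 + "P"))  # True -- expanded repeat
```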

Those deposits, or clumps, were originally seen as the cause of the devastation that ensues in the brain. More recently researchers think the clumping may actually be a kind of biological housecleaning, an attempt by the brain cells to clean out these toxic proteins from places where they are destructive. Stanley and Berthelier set out to learn through neutron scattering what the toxic proteins were and when and where they occurred.

At the HFIR Bio-SANS instrument, the neutron beam comes through a series of mirrors that focus it on the sample. The neutrons interact with the sample, providing data on its atomic structure, and then the neutrons scatter, to be picked up by a detector. From the data the detector sends of the scattering pattern, researchers can deduce at a scale of less than billionths of a meter the size and shape of the diseased, aggregating protein, at each time-step along its growth pathway.
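The article doesn't specify which analysis the Bio-SANS data went through, but a standard way to deduce particle size from small-angle scattering is a Guinier fit, in which the low-q intensity falls off as exp(-q²Rg²/3). A sketch on synthetic data, with all values illustrative:

```python
import numpy as np

# Synthetic scattering curve for a particle with radius of gyration
# Rg = 2.5 nm (values chosen for illustration, not taken from the study).
rg_true = 2.5                   # nm
q = np.linspace(0.05, 0.4, 50)  # scattering vector, 1/nm (keeps q*Rg <= 1)
intensity = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)

# Guinier law: ln I(q) = ln I(0) - (Rg**2 / 3) * q**2, so a straight-line
# fit of ln I against q**2 recovers Rg from the slope.
slope, _intercept = np.polyfit(q ** 2, np.log(intensity), 1)
rg_fit = float(np.sqrt(-3.0 * slope))
print(f"fitted Rg = {rg_fit:.2f} nm")  # 2.50 nm
```

Repeating such a fit on frames taken at successive times is one way to follow aggregate growth, in the spirit of the time-resolved snapshots Stanley describes.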

SANS was able to distinguish the small peptide aggregates in the sample solution from the rapidly forming and growing larger aggregates that are simultaneously present. In separate experiments, they were able to monitor the disappearance of the single peptides, as well as the formation of the mature fibrils.

Now that they know the structures, the hope is to develop drugs that can counteract the toxic properties in the early stages, or dissuade them from taking the path to toxicity. "The next step would be, let's take drug molecules and see how they can interact and affect these structures," Stanley said.

For now, the researchers believe Bio-SANS will be useful in the further study of Huntington's disease aggregates and applicable to the study of other protein aggregation processes, such as those involved in Alzheimer's and Parkinson's diseases.

"That is the future hope. Right now, we feel like we are making a positive contribution towards that goal," Stanley said.

The research was supported by the National Institutes of Health. HFIR and Bio-SANS are supported by the DOE Office of Science.

Curcumin Compound Improves Effectiveness of Head and Neck Cancer Treatment, Study Finds

ScienceDaily (May 19, 2011) — A primary reason that head and neck cancer treatments fail is the tumor cells become resistant to chemotherapy drugs. Now, researchers at the University of Michigan Comprehensive Cancer Center have found that a compound derived from the Indian spice curcumin can help cells overcome that resistance.
When researchers added a curcumin-based compound, called FLLL32, to head and neck cancer cell lines, they were able to cut the dose of the chemotherapy drug cisplatin fourfold while killing tumor cells as effectively as the higher dose of cisplatin without FLLL32.

The study appears this week in the Archives of Otolaryngology -- Head and Neck Surgery.

"This work opens the possibility of using lower, less toxic doses of cisplatin to achieve an equivalent or enhanced tumor kill. Typically, when cells become resistant to cisplatin, we have to give increasingly higher doses. But this drug is so toxic that patients who survive treatment often experience long-term side effects from the treatment," says senior study author Thomas Carey, Ph.D., professor of otolaryngology and pharmacology at the U-M Medical School and co-director of the Head and Neck Oncology Program at the U-M Comprehensive Cancer Center.

That tumors become resistant to cisplatin is a major reason why head and neck cancer patients frequently see their cancer return or spread. It also plays a big role in why five-year survival for head and neck cancer has not improved in the past three decades.

FLLL32 is designed to sensitize cancer cells at a molecular level to the antitumor effects of cisplatin. It targets a key type of protein called STAT3 that is seen at high levels in about 82 percent of head and neck cancers. High levels of STAT3 are linked to problems with normal cell death processes, which allow cancer cells to survive chemotherapy treatment. STAT3 activation has been associated with cisplatin resistance in head and neck cancer.

Curcumin is known to inhibit STAT3 function, but it is not well-absorbed by the body. FLLL32 was developed by researchers at Ohio State University to be more amenable to use in people. The current study used the compound only in cell lines in the laboratory.

In the current study, researchers compared varying doses of cisplatin alone with varying doses of cisplatin plus FLLL32 against two sets of head and neck cancer cells: one line that was sensitive to cisplatin and one line that was resistant.

They found that FLLL32 decreased the activation levels of STAT3, sensitizing both resistant and sensitive tumor cells to cisplatin. Further, lower doses of cisplatin with FLLL32 were as effective at killing cancer cells as the higher doses of cisplatin alone.

Separate studies suggest FLLL32 may not be well-absorbed by the body, and researchers are developing a next-generation compound that they hope will improve absorption. The U-M team plans to further study this newer compound for its potential as part of head and neck cancer treatment. Clinical trials using this compound are not currently available.

Additional authors include Waleed M. Abuzeid, M.B.B.S.; Samantha Davis, B.S., M.D.; Alice L. Tang, B.A., M.D.; Lindsay Saunders, B.S.; J. Chadwick Brenner, M.S.E.; Emily Light, M.S.; Carol R. Bradford, M.D.; and Mark E.P. Prince, M.D., all from U-M; Jiayuh Lin, Ph.D., from the Nationwide Children's Hospital, Columbus, Ohio; James R. Fuchs, Ph.D., from The Ohio State University.

Funding was provided by the National Institute of Dental and Craniofacial Research, a Head and Neck Specialized Program of Research Excellence (SPORE) grant, the National Cancer Institute, and the American Cancer Society.

Head and neck cancer statistics: 36,540 Americans will be diagnosed with head and neck cancer this year and 7,880 will die from the disease, according to the American Cancer Society.

Animal Results May Pave Way to Treating Rare Mitochondrial Diseases in Children

ScienceDaily (May 19, 2011) — A human drug that both prevents and cures kidney failure in mice sheds light on disabling human mitochondrial disorders, and may represent a potential treatment in people with such illnesses.

Falk and colleagues published their study online May 5 in the journal EMBO Molecular Medicine. "There are no effective cures for mitochondrial diseases, even in animals," said study leader Marni J. Falk, M.D., who cares for children in the Mitochondrial-Genetics Disease Clinic at The Children's Hospital of Philadelphia. "So these striking results in mice may suggest a novel therapy of direct relevance for humans."

Mitochondria are tiny structures that operate as powerhouses within human and animal cells, generating energy from food. As such, they are fundamental to life. Failures of proper mitochondria function impair a wide range of organ systems.

Individually, mitochondrial diseases are very rare. However, because there are hundreds of these disorders, they collectively have a broad impact, affecting at least 1 in 5,000 people, and possibly more. Malfunctioning mitochondria also contribute to complex disorders, including diabetes, epilepsy, Alzheimer's disease and Parkinson's disease.

The current study focused on an inherited genetic deficiency that prevents the production of coenzyme Q, a critical antioxidant and component of the energy-generating respiratory chain. In humans and in the mutant mice used to model this disease, the deficiency results in fatal kidney failure. The current treatment, which consists of providing regular supplements of the missing enzyme product, coenzyme Q10, is often ineffective.

Falk's team fed the mutant mice probucol, an oral drug formerly used to treat people with high cholesterol (since replaced for that purpose by statin drugs). The drug prevented the mice from developing kidney disease, and also reversed kidney disease in mice that had already developed it. It also raised the levels of coenzyme Q10 within the animals' tissues and corrected signaling abnormalities.

"This drug showed remarkable benefits in the mice, especially when compared to directly feeding the mice supplements of the missing co-factor -- coenzyme Q10," said Falk. "If this approach can be safely translated to humans, we may have a more effective treatment for mitochondrial disease than anything currently being used."

Primary coenzyme Q deficiency is vanishingly rare in humans -- only a few dozen people are known to have the disease. However, said Falk, the disease is representative of a more common group of inherited, hard-to-treat mitochondrial diseases called respiratory chain (RC) defects.

RC defects share a common cellular failure to properly consume oxygen for the purposes of generating energy. Such defects, caused by a wide range of genetic disorders that affect mitochondria, constitute a common culprit in human mitochondrial disease. "If using probucol or a similar drug can benefit patients with defects in the respiratory chain, this could be a significant advance in treating mitochondrial diseases," said Falk.

At the very least, added Falk, the current study increases basic understanding of the biology of mitochondrial disease. She noted that continuing research building on her team's findings may set the stage for eventual clinical trials using this approach.

The National Institutes of Health supported this study. Falk's collaborators were from The Children's Hospital of Philadelphia, the University of Pennsylvania School of Medicine, and the University of California Los Angeles.

New Level of Genetic Diversity Discovered in Human RNA Sequences

ScienceDaily (May 19, 2011) — A detailed comparison of DNA and RNA in human cells has uncovered a surprising number of cases where the corresponding sequences are not, as has long been assumed, identical. The RNA-DNA differences generate proteins that do not precisely match the genes that encode them.
Genes have long been considered the genetic blueprints for all of the proteins in a cell. To produce a protein, a gene's DNA sequence is copied, or transcribed, into RNA. That RNA copy specifies which amino acids will be strung together to build the corresponding protein. "The idea that RNA and protein sequences are nearly identical to the corresponding DNA sequences is strongly held and has not been questioned in the past," says Vivian Cheung, the Howard Hughes Medical Institute investigator who led the study, whose lab is at the University of Pennsylvania School of Medicine.

The finding, published May 19, 2011, in Science Express, suggests that unknown cellular processes are acting on RNA to generate a sequence that is not an exact replica of the DNA from which it is copied. Cheung says the RNA-DNA differences, which were found in the 27 individuals whose genetic sequences were analyzed, are a previously unrecognized source of genetic diversity that should be taken into account in future studies.

With recent advances in sequencing technology, however, it has become possible to perform the kind of analysis necessary to test that assumption. In their study, Cheung and her colleagues compared the sequences of DNA and RNA in B cells (a type of white blood cell) from 27 individuals. The DNA sequences they analyzed came from large, ongoing genomics projects, the International HapMap Project and the 1000 Genomes Project. They used high-throughput sequencing technology to sequence the RNA of B cells from the same individuals.

Within the sequences' protein-coding segments, they found 10,210 sites where RNA sequences were not the same as the corresponding DNA. They call these sites RNA-DNA differences, or RDDs. They found at least one RDD site in about 40 percent of genes, and many of these RDDs cause the cell to produce different protein sequences than would be expected based on the DNA. In the cells they studied, the sequences of thousands of proteins may be different from their corresponding DNA, the scientists say. "It is important to note that since these RDDs were found with just 27 individuals, they are common," Cheung points out.
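At its core, finding an RDD site amounts to a position-by-position check of a DNA coding sequence against the RNA transcribed from it, after accounting for the normal T-to-U substitution of transcription. The function and sequences below are a hypothetical toy illustration of that idea, not the study's actual pipeline, which worked on high-throughput sequencing reads.

```python
def find_rdds(dna, rna):
    """Compare an aligned DNA coding sequence with its RNA transcript
    and return the positions where they disagree.

    Transcription normally copies T as U, so that substitution is not
    a difference; any other mismatch is an RNA-DNA difference (RDD).
    Returns a list of (position, expected_rna_base, observed_rna_base).
    """
    rdds = []
    for pos, (d, r) in enumerate(zip(dna, rna)):
        expected = "U" if d == "T" else d
        if r != expected:
            rdds.append((pos, expected, r))
    return rdds

# Toy sequences: position 5 carries an A-to-G difference.
dna = "ATGGCATTC"
rna = "AUGGCGUUC"
print(find_rdds(dna, rna))  # → [(5, 'A', 'G')]
```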

To test whether the phenomenon was specific to B cells, the team also searched for RDDs in DNA and RNA sequences in human skin and brain cells. They found that most of the RDD sites occurred in at least some samples of all three cell types and were present in cells from both infants and adults, indicating that the RNA-DNA differences are not due to aging or specific to certain developmental stages.

Cheung says the particular RNA-DNA discrepancies they found appear systematic. There are four bases, or letters, that make up the DNA code: A, T, G, and C. The RNA equivalents are A, U, G, and C. In individuals who had RNA-DNA differences at a specific site in the genome, the mismatched bases were always the same. In other words, if the team found a C in the RNA sequence where they expected an A, all individuals who had an RDD at this point also had a C in their RNA sequence -- never a G or a U. "Such uniformity makes us believe that there is a 'code' or 'guide' that mediates the RDDs and they are not random events," Cheung explains.
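The uniformity Cheung describes can be expressed as a simple criterion: at a given genomic position, collect the substituted RNA bases seen across individuals, and call the site systematic if everyone who carries an RDD there shows the same base. The sketch below illustrates that check on invented toy sequences; it is not the study's analysis code.

```python
from collections import defaultdict

def systematic_rdd_sites(dna, rna_by_individual):
    """For each position with at least one RDD, collect the mismatched
    RNA bases observed across individuals. A site counts as
    'systematic' if every individual carrying an RDD there shows the
    same substituted base."""
    site_bases = defaultdict(set)
    for rna in rna_by_individual:
        for pos, (d, r) in enumerate(zip(dna, rna)):
            expected = "U" if d == "T" else d
            if r != expected:
                site_bases[pos].add(r)
    return {pos: len(bases) == 1 for pos, bases in site_bases.items()}

# Two of three toy individuals carry a C at position 4 where the DNA
# has an A; nobody shows a different substitution there.
dna = "ATGCA"
rnas = ["AUGCC", "AUGCA", "AUGCC"]
print(systematic_rdd_sites(dna, rnas))  # → {4: True}
```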

In the 1980s, scientists found the first examples of RNA sequences that did not match the corresponding DNA. Today, many genes in humans and other organisms are known to be targets of RNA editing. The known examples of such editing are mediated by enzymes called deaminases, which chemically modify specific As and Cs in the RNA sequence, converting the As to Gs and the Cs to Us. Cheung says abnormal RNA editing of glutamate and serotonin receptors has been associated with psychiatric disorders and resistance to certain drugs, evidence that traditional RNA editing is critical for maintaining normal cellular function.

Nearly half of the RDDs uncovered in the new study cannot be explained by the activity of deaminase enzymes, however, indicating that unknown processes must be modifying the RNA sequence, either during or after transcription. Cheung says there are several possibilities. For example, the DNA might be chemically or structurally modified so that certain bases look different to the enzyme that copies DNA to RNA, causing it to insert a mismatched RNA base during transcription. Alternatively, newly synthesized RNAs might be folded in such a way as to signal enzymes to convert certain bases to others. The biological significance of these modifications remains to be determined, but since they are widespread among individuals and cell types, Cheung and her colleagues expect they have some function.
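Deciding whether a given RDD is deaminase-explainable reduces to checking its substitution type against the two canonical edits mentioned above: A-to-G (adenosine deamination to inosine, read as G) and C-to-U (cytidine deamination). The classifier and the RDD tuples below are invented illustrations, not data or code from the study.

```python
# Canonical deaminase edits: A -> G (ADAR-type) and C -> U (APOBEC-type).
DEAMINASE_EDITS = {("A", "G"), ("C", "U")}

def classify_rdds(rdds):
    """Split a list of (position, dna_base, rna_base) RDDs into those
    consistent with known deaminase editing and those that are not."""
    explainable, unexplained = [], []
    for rdd in rdds:
        _, dna_base, rna_base = rdd
        if (dna_base, rna_base) in DEAMINASE_EDITS:
            explainable.append(rdd)
        else:
            unexplained.append(rdd)
    return explainable, unexplained

# Four toy RDDs: two match canonical editing, two do not.
rdds = [(10, "A", "G"), (42, "C", "U"), (77, "A", "C"), (90, "G", "A")]
known, novel = classify_rdds(rdds)
print(len(known), len(novel))  # → 2 2
```

In the study's data, a classification along these lines is what leaves roughly half of the observed RDDs without a known editing mechanism.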

Although all of the individuals analyzed in the study had a large number of RDDs, there was a great deal of variability in the specific RDDs found in each person's genetic material. This variability likely contributes to differences in disease susceptibility, Cheung says. Scientists have generally searched for DNA sequence differences to explain why some people are more prone to certain diseases, whereas studies of RNA and proteins have considered levels of expression, but not sequences. But major genetic contributors for many diseases remain unknown, and Cheung says it will be valuable to begin to include RNA sequences in disease-association studies.

Cheung notes that her team's analysis would not have been possible without the large-scale genomics projects, which until now have focused on DNA. "Without these large-scale genome projects, we would not have the volume of DNA sequences for comparisons and would not have the technologies that enabled us to sequence our RNA samples," she says.

"Our study provides support for why large-scale data are important. Previously the focus was on DNA, now our results suggest that RNA sequences also need to be examined. Exploration of these data, when founded on fundamental biology, will lead to fruitful scientific discoveries."