Cancer Research 101: 4/1/12 - 4/8/12

Friday, April 6, 2012

Science and Steamrollers: How Research Stories Can Go Off the Rails [UPDATE]

In a blog post earlier today I did a bit of a dissection, from my personal perspective, of a big story that came out of AACR this week (see earlier post here).


No sooner had I hit the "upload" button than I became aware of a wonderfully written piece hitting the 'blogosphere' at the same time, from an author already quoted in my earlier piece - Erika Check Hayden.

From the blog "Last Word on Nothing", she wrote a piece entitled "What the ‘limits of DNA’ story reveals about the challenges of science journalism in the ‘big data’ age".


As I did in my earlier post, I will enumerate her key messages:

  1. Science consists of more and more “big data” studies whose findings depend on statistical methods that few of us reporters can understand on our own.
  2. Challenges in the news business are ratcheting up pressure on all of us. 
  3. We are only as good as our sources. 
  4. It’s becoming more difficult to trust traditional scientific authorities. 
  5. Beware the deceptively simple story line.
  6. Getting the story right matters more than ever. 
As much as I am tempted to do so, good practice prohibits me from just cutting and pasting her whole blog post here, but I do encourage you to go back and read what she says on each of these. Please do; it is worth the click and the read.

As you can see, a number of her points closely parallel my own; but rather than making me feel smug, that inspires me to keep doing this blog. And I aspire to write about my subjects as cogently and eloquently as she does.


Science and Steamrollers: How Research Stories Can Go Off the Rails


Earlier this week I wrote a post about “personal genomics”. Actually, I wrote two posts the same day (parts 1 and 2), but who’s counting <smile>?

Short Recap:


In the second article (read it again, here) I talked about a new study announced at the Annual Meeting of the American Association for Cancer Research (AACR) in Chicago (meeting website here) and published simultaneously in the journal Science Translational Medicine. This study, authored by a team at Johns Hopkins in Baltimore and led by world-famous researcher Dr. Bert Vogelstein, was entitled The Predictive Capacity of Personal Genome Sequencing. The researchers used disease registries from several countries, looked at data from over 50,000 identical twins, and tracked how often one twin or the other developed one of 24 different diseases.

Because identical twins have essentially identical genomes, comparing one twin's outcome with the other's lets you ask how much genes alone predict an increased chance of getting a disease. The authors concluded that most of the twins were at average risk for most of the 24 diseases, pretty much the same as anyone in the general population.
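
To make that logic concrete, here is a minimal sketch in Python of how twin concordance caps what genome sequencing alone could predict. All of the numbers and the cotwin_risk helper are my own invention for illustration; this is not the study's actual data, model or code:

import random

# Hedged sketch, hypothetical numbers: identical twins share their genome,
# so the rate at which the co-twin of an affected person also develops the
# disease approximates the ceiling on genome-only risk prediction.

def cotwin_risk(pairs, population_risk):
    """pairs: list of (twin_a_affected, twin_b_affected) booleans."""
    affected = [(a, b) for a, b in pairs if a]        # pairs where twin A is affected
    concordant = sum(1 for _, b in affected if b)     # ...and twin B is too
    risk = concordant / len(affected)
    return risk, risk / population_risk               # absolute and relative risk

# Simulate 10,000 hypothetical twin pairs for one disease: 20% of pairs
# carry a shared genetic factor that triples their risk (5% -> 15%).
random.seed(42)
pairs = []
for _ in range(10_000):
    p = 0.15 if random.random() < 0.20 else 0.05      # shared, pair-level risk
    pairs.append((random.random() < p, random.random() < p))

baseline = 0.80 * 0.05 + 0.20 * 0.15                  # overall population risk (7%)
risk, rel = cotwin_risk(pairs, baseline)
print(f"co-twin risk: {risk:.1%} vs. baseline {baseline:.1%} "
      f"(relative risk {rel:.1f}x)")

With these invented numbers, knowing your identical twin's outcome (the best possible stand-in for knowing your whole genome) moves the estimated risk only from about 7% to about 9%: the flavour of "mostly average risk" finding being described here.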

In other words, the authors suggested that widespread use of genome sequencing will likely provide very little useful information for predicting future disease.

Because of the fanfare around this paper, and not least because of the reputation of the research team in general and Dr. Vogelstein in particular, the study instantly attracted interest in the mainstream media and in the social media universe, especially on Twitter.

One of the first to pick up on this story was Gina Kolata, writing for the New York Times. Ms. Kolata is, to my mind, a seasoned, well-respected and well-known science journalist, and her article in the Times (Study Says DNA’s Power to Predict Illness Is Limited) offered both a recap and this caveat:

“While sequencing the entire DNA of individuals is proving fantastically useful in understanding diseases and finding new treatments, it is not a method that will, for the most part, predict a person’s medical future.”

Another well-known journalist, Robert Bazell, posted an article on MSNBC entitled "Gene tests: Your DNA blueprint may disappoint, scientists say" that carried much the same cautionary message:
“If everyone has a complete gene profile, a small number can learn they have a great risk for something. But for most, the information is minimally significant.”

I confess that, given the reputations of Dr. Vogelstein and of the mainstream journalists covering this story, I too felt a bit deflated in that moment, and said in my own blog post:
“Bottom line, it seems to me, is that we really have to be more careful than ever about exposing ourselves to privacy, confidentiality, insurability and other legal and ethical dilemmas, especially if the risks might outweigh the gains in many, if not most, cases.”

What Happened Next?


At least I was open-minded (or prescient?) enough to have ended my post with the caveat that:
 “Clearly there is no definitive pronouncement to be made one way or the other yet - it is far too early days for that. But it is good to have these debates with our eyes wide open.”

Indeed, as is so often the case, closer inspection with eyes wide open and sober second thought reveals that there is more to this story than meets the eye.

Actually, perhaps it might be better said that there may be “less” to this story than meets the eye...

Initially, I was rather surprised to see a vigorous but negative reaction from a number of other journalists and scientists, especially in the ‘Twitterverse’, not only to the Vogelstein study but to the media attention it was getting. Some of the critiques explored how the study was flawed, or at least how its conclusions might have been flawed given the study's design.

But the main critique I read loud and clear from several independent sources was essentially that this result was to be 100% anticipated, and that geneticists and other molecular biologists have been saying as much for some time. In other words, ‘there is no news here’. They were perplexed at why the study had been positioned as some brand-new discovery. Worse, they were very concerned that the mainstream media’s rush to judgment, lacking critical perspective, might seriously set back genomic research by unfairly damning this whole area of research without having asked many critical (and contrary) questions.

The Other Side of the Story


I’m sure there must have been many more, but I will highlight three excellent pieces that have appeared since the original story broke and the initial media attention flourished.

One of the very best was written by Erika Check Hayden in a blog post entitled “DNA has limits, but so does study questioning its value, geneticists say” published in Nature’s Newsblog. In that post she writes that:
 “Geneticists don’t dispute the idea that genes aren’t the only factor that determines whether we get sick; many of them agree with that point. The problem, geneticists say, is not that the study ... arrived at a false conclusion, but that it arrived at an old, familiar one via questionable methods and is now being portrayed by the media as a new discovery that undermines the value of genetics.”

She went on to list five main critiques, which I will enumerate here; go back to the original article to read the details:
  1. This study critiques the power of genomic medicine but does not contain any genome data. 
  2. This study is beating a dead horse.
  3. The mathematical model used in the study is unrealistic.
  4. The study doesn’t correct for errors that can affect twin studies.
  5. The media coverage of the study could weaken support for genetic research.
To me, another very well written and compelling “rebuttal” was penned by Luke Jostins on the blog “Genomes Unzipped”, in an article entitled “Identical twins usually do not die from the same thing”.

In his post he ponders why “a not particularly original or particularly well done attempt to answer a question that many other people have answered before, got so much press (including a feature in the [New York Times]).”

He goes on to try to answer his own question, and the insight is commendable:

“But of course, the reason is relatively obvious. All of the papers I linked to there are by statistical geneticists ... and never came with a press release or an attempt to talk to the public about them. The message, to those who can read them, is clear and well established – genetic risk prediction (or any form of risk prediction) will never be able to perfectly predict disease incidence, and will never replace diagnostic tests. But the fact that the results of Bert Vogelstein’s study seems to have come as a surprise to people, when it comes as no surprise to us, shows us that we have failed in one of our primary duties to keep the public informed about the results of our research. The paper’s failure as a work of statistical genetics stands in contrast to its success as a work of public outreach. If we are annoyed that a bad paper got the message across, then we should be annoyed with ourselves that we never communicated our own results properly.”

And finally, a blog post yesterday from Paul Raeburn in the Knight Science Journalism Tracker, entitled “What everyone should know about genome scans”, not only provides a very nice summary of the debate but goes one step further, posing questions about the role of the press and of the journalists who cover science and research that gets exceedingly complex. In some very insightful comments, Mr. Raeburn asks, for example:

“The question here is how reporters might have suspected these criticisms and produced better stories–or how their reporting might have done a better job of uncovering the potential pitfalls of the study. Few reporters are qualified to assess the statistical soundness of the study. But why did they not find out more about this in their reporting? Perhaps some were so interested in the contrarian nature of the story–genomes aren’t all they’re cracked up to be–that they didn’t push hard enough to discover potential problems with the study.

One tip-off was the many stories that have been written questioning the value of commercial genome scans. Reporters should have asked whether the findings were new. That would not necessarily have uncovered the statistical issues, but it might have led reporters to scale back their coverage.”
 

Sober Second Thoughts?



Paul Raeburn, in the article cited above, concluded:

“If I had covered this story, I fear I, too, would have missed the issues that Hayden presents so clearly. The main lesson I can draw from this is that reporters ought to be as skeptical and vigilant as they can be, especially when writing about subjects, such as this one, that they have written about many times before–enough to have formed opinions that might be getting in their way.”

I myself have posted before about the “good, the bad and the ugly” of public engagement in research. Science can never again be an ivory-tower exercise: much of the research, including the study in question, is done at public expense, whether that be taxpayers’ dollars or charitable donations. The public has a right to be informed as to how their dollars are used, and researchers have a responsibility and an accountability to inform them. Very often it is science journalists, health reporters, broadcasters and the like who have a central and trusted role to play smack in the middle, as a critical conduit to the public to make sure that happens.

But they have to get it right if they are to hold that public trust.

Still, as Mr. Raeburn said, how many reporters have the qualifications and expertise to really dissect increasingly specialized science and increasingly complex data sets? I *HAVE* some qualifications, and I certainly can NOT keep up with, nor even always understand, much of the highly complex, jargon-filled science that I try to write about.

So, while it is easy to criticize the reporters who may have rushed to judgment and perhaps over-sensationalized what for many is a non-story, on balance one surely also has to hold accountable the scientists themselves, who may have allowed this steamroller to roll down the hill and who, from what I can surmise, may even have given it quite a push to get it going in the first place, whether intentionally or not.


Thursday, April 5, 2012

Guest Blogger Contribution from the Mesothelioma Center

I am delighted to welcome the following guest contribution in order to help spread awareness of mesothelioma, a very serious disease caused by exposure to asbestos.


The author, Mr. Jensen Whitmer, has been writing for the Mesothelioma Center for more than three years and has an interest in spreading awareness of the hazardous effects of asbestos exposure.

Mr. Whitmer and the Mesothelioma Center prepared the article below; any comments or questions should therefore be directed to them.
  

Findings on the Causes of Mesothelioma


Mesothelioma research is an ongoing effort because doctors have not yet found a cure for this rare cancer. Although it is clear that nearly all mesothelioma cases are caused by asbestos exposure, there is still some debate as to how asbestos actually triggers the development of mesothelioma.


Asbestos is a naturally occurring mineral that has been used in thousands of products for its ability to insulate and fireproof materials. While there are no immediate side effects from asbestos exposure, serious conditions like mesothelioma can develop as much as 50 years after someone’s initial exposure.

In March 2009, the International Agency for Research on Cancer reconfirmed that all forms of asbestos cause mesothelioma.

Mesothelioma occurs in four types: pleural, peritoneal, pericardial and testicular. Pleural mesothelioma makes up about 75 percent of cases and develops in the lining of the lungs. Peritoneal mesothelioma accounts for about 20 percent of cases and arises in the lining of the abdominal cavity. Very few cases of pericardial (heart) and testicular mesothelioma have been reported.

How Does Asbestos Cause Mesothelioma?

Exposure to asbestos can occur by either inhaling or ingesting microscopic asbestos fibers. Inhaling asbestos is the most common route of exposure, which is why there are more cases of pleural mesothelioma than of any other type.

Once asbestos fibers are inhaled or ingested, the body has difficulty expelling them because they are typically jagged in structure. In many cases, the fibers become lodged in the lining of the lungs or abdomen and remain there for many years.

Over time, the irritation caused by these fibers creates enough damage to encourage cancerous development. There are several theories for how mesothelioma actually develops:

  • Asbestos causes mesothelial cells to become irritated and inflamed, which leads to irreversible scarring, cellular damage and cancer.
  • Asbestos fibers enter mesothelial cells and disrupt the natural functions of cellular division, resulting in genetic changes that lead to cancer.
  • Asbestos causes the production of free radicals, which are molecules that damage DNA and cause healthy cells to undergo cancerous mutations.
  • Asbestos can trigger cellular production of oncoproteins, which cause mesothelial cells to ignore normal cell division restraints and become cancerous.

All of these theories hold that asbestos causes cellular damage and disrupts the natural cell cycle, but that such changes take many years before tumors form. Symptoms typically become apparent once the mesothelioma is fully developed and in a later stage of progression, at which point treatment options are limited.




Tuesday, April 3, 2012

Sloppy Science? System Failure? Or...?

This week, something on the order of 15,000 to 20,000 of the world’s top cancer researchers, from PhD students all the way to Nobel laureates, are gathering in Chicago for the Annual Meeting of the American Association for Cancer Research (AACR). Although I am not attending this year, if the current program resembles those of years gone by, you can be sure there will be hundreds, if not thousands, of scientific presentations describing potential new targets for cancer therapy. Some will be in huge plenary sessions, some will be oral talks in mini-symposia, and many will be presented in concurrent poster sessions.

Why? As we learn more and more about what makes cancer cells tick, we are discovering more and more pathways that are implicated in the origin or the continuation of the cancer “state”. Every molecule that gets identified as being part of one of these pathways is, at least potentially, a target for intervention – either to turn it off (if it is implicated in the initiation or progression of cancers), or to turn it on (if it is implicated in some “protective” mechanisms against cancers), and so on.

And every one of these studies could be important in its own right since each one adds to our burgeoning understanding of the molecular basis of cancers.

And some of these might turn out to be even more important if the so-called “target” can be validated as really being involved in cancer causation (as opposed to being an incidental bystander).

But the real “jackpot” comes when one of these targets is not only validated as being important and centrally involved in one cancer or another, but is also considered to be a so-called “druggable” target. By that, we mean that we expect to be able to discover or develop a drug, usually a small molecule or an antibody, that can then interfere with, or in some other way modulate, the cancer state and thus be an effective anti-cancer therapeutic.

The success of this model has been evidenced by drugs like Imatinib (Gleevec), a small molecule drug, or Trastuzumab (Herceptin), a monoclonal antibody. The search for the next “druggable targets” and the subsequent discovery of the next Gleevec or the next Herceptin continues to drive preclinical laboratory-based research the world over, since those avenues are where many of the next cancer therapeutics are expected to originate. 

Undoubtedly, only a small number of these putative targets will actually traverse that magic line from being a preclinical “observation” to actually being of demonstrated clinical utility. This is the realm of so-called “translational research” - to translate, or move forward, research from the lab so it can end up in the clinic for the care and treatment of real patients in the real world.

That path is often a long and arduous one, mind you, fraught with frustration, but every long journey starts with a single step, as they say. Still, there will be palpable excitement as more and more of these potential targets are described, understood and tested for clinical utility.

Last week, however, a bit of a bucket of cold water was thrown on a number of highly touted studies that presumably had shown great promise of such translation into the clinic.

In a Commentary entitled “Drug development: Raise standards for preclinical cancer research”, published in the respected journal Nature on March 29, 2012, authors C. Glenn Begley and Lee M. Ellis reported that, sadly, not only have the vast majority of such studies NOT resulted in translation into the clinic, but worse, reputable scientists working at pharmaceutical and biotech companies have been unable to replicate most of the results that were once lauded as potential “breakthroughs” (italics mine).

In total, they reported that the findings of at least 47 of 53 publications (roughly 89 percent), all from reputable researchers and published in reputable peer-reviewed scientific journals, could not be replicated during the time one of the authors (Dr. Begley) was head of research at the biotech company Amgen.

This rather shocking finding prompted the authors to make specific recommendations to try to ensure that the situation does not persist.

It also prompted an Editorial in the same issue of Nature, entitled “Must Try Harder”, which opined that “too many sloppy mistakes are creeping into scientific papers. Lab heads must look more rigorously at the data — and at themselves”.

The Editorial went on to say:


[This] "Comment article ... exposes one possible impact of such carelessness. Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data."

Please do note that, as the Editorial says, no one is suspecting, suggesting or alleging any fraudulent behaviour, and indeed there are many legitimate reasons why not all results can be reproduced. But the publication of this Commentary and the accompanying Editorial has certainly ignited a firestorm of subsequent comments, newspaper articles, blog posts and Twitter activity.

I found one online response to the Nature Editorial to be particularly telling, especially since it came from a friend and colleague whose opinions I respect immensely. Dr. Jim Woodgett, Director of Research at Toronto’s famed Samuel Lunenfeld Research Institute at Mount Sinai Hospital wrote:

"The issue with inaccuracies in scientific publication seems not to be major fraud (which should be correctable) but a level of irresponsibility. When we publish our studies in mouse models, we are encouraged to extrapolate to human relevance. This is almost a requirement of some funding agencies and certainly a pressure from the press in reporting research progress. When will this enter the clinic? The problem is an obvious one. If the scientific (most notably, biomedical community) does not take ownership of the problem, then we will be held to account. If we break the "contract" with the funders (a.k.a. tax payers), we will lose not only credibility but also funding. There is no easy solution. Penalties are difficult to enforce due to the very nature of research uncertainties. But peer pressure is surely a powerful tool. We know other scientists with poor reputations (largely because their mistakes are cumulative) but we don't challenge them. Until we realize that doing nothing makes us complicit in the poor behaviour of others, the situation will only get worse. Moreover, this is also a strong justification for fundamental research since many of the basic principles upon which our assumptions are based are incomplete, erroneous or have missing data. Building only on solid foundations was a principle understood by the ancient Greeks and Egyptians yet we are building castles on the equivalent of swampland. No wonder clinical translation fails so often."

As someone who ran the research operations of two major Canadian national cancer research funding agencies over the past two decades, I wonder if my own organizations have inadvertently been “complicit” in this. We always tried our very best not to “over-hype” any results from investigators we funded, but there is always a need, especially in a national health charity, to “excite” the public and the prospective donor, and to be accountable to previous donors by showcasing for them any success their generosity has won.

Perhaps we all need to take a closer look at the pressures we place on researchers globally to “publish or perish”. Are our incentives and the way we measure “success” all wrong? 

Perhaps, indeed, it is long overdue that we take a very hard look at how we conceive, fund, undertake, promote and analyse cancer research results, and how and what we value in cancer research and in cancer researchers.


Monday, April 2, 2012

"Personal" Genomics...Part 2

Rather than amend or append my post on "personal genomics" from earlier today, I thought I would provide a new one that expands on the "personal" genomics thread.

But this time it is not about privacy, confidentiality or other ethical concerns per se. Instead, it goes right to the heart of the matter: just how useful will this information be to the average person, meaning someone at ‘average’ or ‘slightly above average’ risk for a disease?

Some details of a very interesting study were released today at the Annual Meeting of the American Association for Cancer Research (AACR) in Chicago (meeting website here), and published simultaneously in the journal Science Translational Medicine. Authored by a team at Johns Hopkins in Baltimore and led by world-famous researcher Dr. Bert Vogelstein, the study, entitled The Predictive Capacity of Personal Genome Sequencing, suggests that the information provided may actually be of limited usefulness.

Rather than try to paraphrase the whole thing myself, I refer you to Gina Kolata’s article in the New York Times today (Study Says DNA’s Power to Predict Illness Is Limited), which offers an excellent recap.

For a second viewpoint, with the same conclusions, you can check out Robert Bazell’s column entitled "Gene tests: Your DNA blueprint may disappoint, scientists say", also online today at msnbc.com.

Bottom line, it seems to me, is that we really have to be more careful than ever about exposing ourselves to privacy, confidentiality, insurability and other legal and ethical dilemmas, especially if the risks might outweigh the gains in many, if not most, cases.

Clearly there is no definitive pronouncement to be made one way or the other yet - it is far too early days for that. But it is good to have these debates with our eyes wide open.

Sure beats the alternative...







More About "Personal" Genomics

No, I didn't make a mistake in the title. I know we talk about genomics and personalized medicine a lot, but in this case I really was referring to "personal genomics". We need more conversations, discussions and debates about how we as a society are going to deal with the inundation of personal gene and DNA information that is coming our way in the brave new world of $1000 genomes.

I have already written some on the subject in a previous post about the promise and the concerns.

Last week I also provided some links to further information on the subject.

This week saw (at least) two new entries into this important discussion that I thought were worthy of passing on.

The first of these was a wonderful show in the series "Nova" from WGBH Boston called "Cracking Your Genetic Code". I thought it was a particularly good treatment of the science behind genomics and personalized medicine. If you missed it, or can't get it locally, the show is available online for now at http://www.pbs.org/wgbh/nova/body/cracking-your-genetic-code.html.

I really encourage you to invest the hour watching this show - it is excellent in my view.

The other is a particularly good article written by Christine S. Moyer of the American Medical Association’s Amednews.com staff. In this piece she writes about the issue of what we are going to do with all of the information we may soon be gleaning. Entitled "Genome sequencing to add new twist to doctor-patient talks", it is written from a health-care, ethical and societal point of view: exactly the debate that I have been insisting we are not having nearly enough of.

I am very glad to see that these discussions are expanding and that the issues are starting to come out on the table. We owe it to ourselves to become much better informed and to steer this brave new world, or else it will swamp us!
