
Wanted – new designs for cardiovascular epidemiology studies that don’t break the bank!

March 23, 2012

Innovation is often stimulated by economic need. How do we design studies that further the science while being cost-conscious?

Many factors contribute to the high cost of epidemiology studies, such as larger sample sizes, new biomarkers, noninvasive imaging, and genomic sequencing.  Three recent commentaries published in the American Journal of Epidemiology (on austerity, innovation, and large studies) underscore the need to think differently, explore new models, and be more efficient.

Longitudinal cohort studies have been successful, but new cost realities are challenging us to explore new models.  What do you think? Are there alternative designs for innovative research? Can we redefine the “playbook” to achieve high value research at a lower cost?  We’re looking for your novel and “out of the box” ideas.

Posted by Epidemiology Branch, NHLBI

18 Comments
  1. fred permalink
    March 25, 2012 2:01 am

    Why do you need “alternative designs” to longitudinal cohort studies? Why not consider smart, cost-efficient things to do with these cohort studies – and their considerable data resources, which we’ve already paid for?

  2. NHLBI Moderator permalink
    April 4, 2012 9:36 am

    The comment from Fred suggests that the NHLBI cohort studies adequately serve the research needs of cardiovascular epidemiology. If you agree with that statement, then we would like to hear from you.

    But we are also envisioning other possible ways to collect epidemiology data, and we need input from the epidemiology community.
    • How can we conduct meaningful research from electronic medical records?
    • Is it possible to use the internet in data collection?
    • Should we build a US biobank along the lines of the UK Biobank?
    • Can these data collection techniques provide valid, comprehensive, and innovative data, or are they doomed to be inadequate for any sophisticated hypotheses?

    Remember that your comments can use a pseudonym and non-identifiable email.

  3. gadfly permalink
    April 4, 2012 11:12 am

    Fred suggests doing more with existing cohorts to take advantage of the investment that has already been made. The trade-off he seems to be proposing: we get more efficiency and value by adding new measures to existing cohorts, leveraging their sunk costs, than by starting a new cohort designed around efficiencies such as using electronic medical records for research, or building virtual cohorts connected by email that allow repeated real-time electronic data collection for highly variable participant characteristics such as diet and physical activity.

    Is the natural end of such thinking a huge single cohort with economies of scale, in which we measure everything on the same people and have continuous surveillance for all morbidity and mortality? Does this approach lead to more innovation or less? How would we replicate findings, or will there be no need if results come from a single huge study? Is the UK Biobank the answer, as suggested in the NHLBI Moderator’s comment? Or is standardized data collection for accepted measures, while keeping smaller, more agile, and innovative cohorts, preferable? Those cohorts could always be combined in a consortium with harmonized data collection for most measures. This would seem to offer the best of both worlds. I don’t think innovation should be sacrificed for large numbers.
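    A consortium with harmonized data collection is mechanically simple; here is a minimal sketch (the cohort contents and variable names below are invented for illustration) in which each cohort maps its local variable names onto a shared vocabulary before pooling:

```python
# Hypothetical harmonization: two cohorts store systolic blood pressure
# and age under different local names; a shared mapping renames them
# before the records are pooled for combined analysis.
cohort_a = [{"sbp_mmhg": 128, "age": 54}, {"sbp_mmhg": 141, "age": 61}]
cohort_b = [{"sys_bp": 133, "age_yrs": 58}]

# Per-cohort dictionaries: local variable name -> harmonized name
mappings = {
    "A": {"sbp_mmhg": "sbp", "age": "age"},
    "B": {"sys_bp": "sbp", "age_yrs": "age"},
}

def harmonize(records, mapping):
    """Rename each record's keys according to the cohort's mapping."""
    return [{mapping[k]: v for k, v in row.items()} for row in records]

pooled = harmonize(cohort_a, mappings["A"]) + harmonize(cohort_b, mappings["B"])
mean_sbp = sum(r["sbp"] for r in pooled) / len(pooled)
print(f"pooled mean SBP: {mean_sbp:.1f} mmHg")  # prints "pooled mean SBP: 134.0 mmHg"
```

    The hard part in practice is not the renaming but agreeing on comparable measurement protocols, which is the argument for harmonizing prospectively rather than retrospectively.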

  4. Bozena permalink
    April 4, 2012 11:52 am

    I don’t know a lot about research, so I might be totally wrong. Maybe we need to focus research on simpler things. For example, a study could take a few groups of people and help each lose weight by a slightly different method, then see if it makes a difference. We know the basic things that promote health: water, rest, exercise, a meaningful job, loving relationships with family, diet. We could also focus more on examining mental causes of disease together with physical ones, because the physical alone doesn’t give you the entire picture. Recently I read that depression slows down wound healing. Why? Due to increased levels of stress hormones. You could also involve university students, who could help with some of the data. Sometimes I feel we focus so much on the physical aspects of our health that we forget our mind and body are connected.

  5. Richey Sharrett permalink
    April 4, 2012 2:28 pm

    For some cardiovascular diseases which occur later in life, early intervention is likely to be particularly important. For coronary disease, observational studies have shown markedly reduced predictive value for a range of exposures or risk factors when they are measured at older rather than at younger ages. Nowhere is this more evident than for vascular cognitive impairments of the elderly and other dementias, where hypertension, smoking, diabetes, cholesterol and obesity all appear to be substantial risk factors when assessed in middle age, and where all of these (except diabetes) have lost much or all of their predictive value when assessed after age 65.

    This should influence study design. Prospective studies need to start at middle age or younger. They need accurate assessment of the vascular conditions of the elderly and detailed assessment of the risk factors in the same individuals at much younger ages (e.g. Framingham, ARIC). Eventually, perhaps, even CARDIA. Savings can be had by limiting the evaluation of exposures or risk factors at older ages.

    A related point: we do not yet know the value of primordial prevention, but to find out it is important to study the predictors of subclinical disease in the very young (e.g., Bogalusa, Muscatine, PDAY, Young Finns Study, CARDIA). In these, obesity and elevated lipids seem of utmost importance.

    • gadfly permalink
      April 4, 2012 5:11 pm

      It appears that you are advocating even longer term cohort studies (start much earlier than current cohorts and continue just as long). How does this relate to value and efficiency? Are you proposing fewer but longer studies? Are you proposing early intense data collection with only surveillance in later years? Are you proposing only minimal exams in later years to assess subclinical disease? Or are you saying you get what you pay for and good studies cannot be done for less cost? Adding another 10-20 years of follow-up may be difficult to offset by fewer exam components. Perhaps rotating exam components over the life of the study would maintain contact, lengthen the time between assessments and answer more scientific questions with a single cohort.

  6. April 10, 2012 11:04 am

    Sitting and listening to talks at the recent AHA Council on Epidemiology and Prevention meetings in San Diego, it struck me that most can be fit into two categories…evaluation (safe and solid) or discovery (new and out there)…and there was very little of the latter. When funding for grants is tight, taking the risk of proposing something new…something that is really in the spirit of discovery…and by that I mean really new and different ideas…is discouraged. A reviewer’s enthusiasm to score something “way out there” highly would also be dampened in tight budget times. This leads to a mindset that it is better to propose (and score highly) something safe and solid. One way to stimulate really innovative discovery of knowledge, or of design methods for that matter, would be to have a funding mechanism solely for discovery epidemiology. Let really new ideas compete against other really new ideas.

    • NHLBI Moderator permalink
      April 10, 2012 11:18 am

      Interesting thought. Do you think we would need a special study section, or a subset of CASE study section? Should the NHLBI set aside funds separately for “discovery epidemiology”?

      • April 10, 2012 2:45 pm

        I think set-aside funds for “discovery epidemiology” would create more interest and produce better outcomes. This is not to say that things proposed and funded now are not new; they of course have to show “innovation” to get by. I just mean “really” new, discovery-level stuff…stuff that we are not quite sure what it means right now…stuff whose application we don’t know exactly right now…stuff that brings a new way of thinking and is by definition “discovery” in nature…and it gets lost amongst the established studies we all know and love. Give these new kids (i.e., new bold ideas) a chance in their own competition.

  7. David Couper permalink
    April 13, 2012 10:39 am

    Related to Richey’s comments, I think the value of long-term cohorts should not be underestimated. If broad phenotyping is done at the start (and at regular or at least occasional follow-up visits) a cohort may become useful for investigating diseases or conditions that were not envisioned for the cohort when it was created. For instance, consider the current work on cognitive function decline with age in the ARIC Study.

    There may be other study designs that are much cheaper in terms of addressing particular questions. But what makes them cheap also reduces the range of questions they can be used to address. So, overall, a few large “expensive” cohorts may actually cost less than many more cheaper, more focused study designs.

  8. Mark permalink
    April 30, 2012 4:13 pm

    In an effort to help investigators ask better questions, I wonder if the NHLBI might require formal systematic reviews at the start of any application for funding. I would think this requirement could lead to more refined research questions that could maximize the NHLBI’s funds for innovative research, particularly within epidemiology.

  9. May 22, 2012 4:10 pm

    It might be helpful to reframe the question. Instead of thinking about a less expensive study design, we might think about how technology will affect the study designs we have, and potentially reduce costs.
    I think it would be very worthwhile to begin cataloguing the early efforts to incorporate social media into epidemiologic research and projects, and think about strengths and weaknesses, as well as implications for future research strategies.
    I see potential applications of social media in a number of areas of epidemiology. The first is patient engagement – developing online communities, responding to patient concerns, fostering commitment (in the private sector, this may be to a product, but in epidemiology it might be commitment to a study, or adherence to a health regimen). So much of this is already taking place, across the board with medications, diets, disease groups, exercise, etc.
    Closely related are the many applications for data collection – food and exercise diaries, physiologic measurements (heart rate, for example), images, and video. These data collection methods should decrease the cost of most studies, although there will be different costs. The applications are wide – cohort studies, interventions, lifestyle changes, surveillance, adverse events, etc.
    Methodologically, we need to understand how the use of technology may introduce selection biases into study samples, and the potential direction of such biases. We also need to consider the rapid changes in technology and the need to revise data collection strategies as technology develops, perhaps every 2 years. This creates a challenge for studies planned for a longer time period, but I think we need to build in an ability to respond to technological change.
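    The selection-bias concern can be made concrete with a toy simulation (every number here is hypothetical, chosen only for illustration): if technology-based recruitment enrolls younger people more readily, any age-related estimate drifts with it.

```python
import random

random.seed(0)

# Hypothetical target population: 100,000 people, ages uniform on 20-80.
population = [random.uniform(20, 80) for _ in range(100_000)]

# Hypothetical technology-based recruitment: enrollment probability
# declines linearly from 1.0 at age 20 to 0.0 at age 80.
def enrolls(age):
    return random.random() < 1.0 - (age - 20) / 60

sample = [age for age in population if enrolls(age)]

pop_mean = sum(population) / len(population)  # around 50
sample_mean = sum(sample) / len(sample)       # around 40
print(f"population mean age: {pop_mean:.1f}")
print(f"enrolled-sample mean age: {sample_mean:.1f}")
```

    The direction of the bias here is obvious by construction; in real internet-recruited samples the enrollment function is unknown, which is exactly why it needs study.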
    I don’t think we have a good grasp of the innovations currently available to modify risk factors (in diet and exercise), or of their efficacy – these could be quick and feasible studies to conduct. There are also many innovations appearing to improve medication adherence and patient self-management – all moving onto the marketplace, adopted by a self-selected group, without much study.
    At a minimum, it would be helpful to have a catalogue of the various efforts and to include efforts occurring in commercial as well as research environments.

  10. May 24, 2012 9:55 am

    To illustrate my point about technology (May 22, 2012), we can look at the example of crowdsourcing as a potential survey mechanism and more.
    I came across a paper by two computer scientists at Stanford: Jeffrey Heer and Michael Bostock, “Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design,” ACM Human Factors in Computing Systems (CHI), 203–212, 2010 (vis.stanford.edu/papers/crowdsourcing-graphical-perception). They used Amazon’s Mechanical Turk, a service that matches “workers” (or “Turkers”) with “requestors” for small tasks, and asked workers to assess different visualization designs. They got 186 respondents to assess the visualizations at a cost of $367.77, and they were also able to compare their results to the published literature and conclude that the method was viable. If Turkers can assess visualizations, then presumably they may be willing to respond to other types of questionnaires or surveys about a range of issues – which paragraph presents medical information more clearly, what did you eat this morning, and on and on.
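    The per-respondent economics implied by those two figures are striking; a quick check of the arithmetic:

```python
# Per-respondent cost of the Heer & Bostock Mechanical Turk study
# cited above: 186 respondents for $367.77 total.
total_cost = 367.77
respondents = 186

cost_per_respondent = total_cost / respondents
print(f"${cost_per_respondent:.2f} per respondent")  # prints "$1.98 per respondent"
```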

    This piqued my curiosity, and before writing this blog post I searched for more information on this issue. One of the first questions of concern to any epidemiologist would be the degree and types of selection biases associated with using crowdsourcing and related approaches to obtain what are essentially survey samples and responses. I was surprised to see a growing body of literature in this arena, the most recent of which is Jennifer Jacquet’s article in the July 7, 2011 issue of Scientific American, “The Pros and Cons of Amazon Mechanical Turk for Scientific Surveys” (isn’t that what we do?).

    Some other studies characterizing the demographics of samples of respondents on Mechanical Turk:

    Panos Ipeirotis (NYU School of Business), “The New Demographics of Mechanical Turk,” March 9, 2010, http://www.behind-the-enemy-lines.com/2010/03/new-demographics-of-mechanical-turk.html

    Ross, Irani, Silberman, Zaldivar and Tomlinson, “Who are the Crowdworkers? Shifting Demographics in Mechanical Turk” CHI 2010, http://www.ics.uci.edu/~jwross/pubs/RossEtAl-WhoAreTheCrowdworkers-altCHI2010.pdf

    Berinsky, Huber, and Lenz, “Using Mechanical Turk as a Subject Recruitment Tool for Experimental Research,” October 7, 2011, huber.research.yale.edu/materials/26_paper.pdf

    Some very bright minds are exploring the potential of this mechanism. I would think that we epidemiologists will have a lot to contribute in this arena, and will see great benefits as well.

  11. June 1, 2012 6:50 pm

    This question must have really struck a chord with me because I find myself continuing to think about these issues. I went to AJE and looked at the article abstracts on austerity, innovation and large studies. What strikes me about the commentaries is that issues are discussed without addressing the social organization of epidemiology in academia (in the US).
    I will leave aside the issue of large studies, for the moment. Our current system of academic medicine and health research (including epidemiology) provides incentives for high spending (the more money an academician brings to the university, the better), and no incentives for an academician who does research frugally. In a hypothetical extreme, an academician who can do research for very little money will not be able to advance in his or her career, and may not even be able to fulfill the terms of his or her contract. There are very clear disincentives to the academician who does low-cost research and brings in smaller amounts of funds.
    Innovation also requires a specific social environment, one in which it is possible to fail. Again, our academic social organization does not protect failure, in any form, partly because austerity creates pressure for higher “return on investment,” and the high cost of large studies creates a risk-averse atmosphere.
    The NIH NHLBI may promote innovation and lower cost studies by funding a larger number and proportion of small grants (such as R03s). This will create a space in which it is possible for researchers to do smaller projects, and to take more risks with ideas that might not work out as expected.
    Of course, some things can only be done in large studies, as discussed in the third of the AJE articles (Manolio et al., 2012). But they note, “A key lesson from UK Biobank and similar studies is that large studies are not simply small studies made large but, rather, require fundamentally different approaches in which “process” expertise is as important as scientific rigor.” Activities such as the UK Biobank might be conceptualized as services, akin, even, to libraries, animal facilities, etc. In this case, the argument for funding something like the UK Biobank would not be posed against the argument for small, innovative studies; it would be contrasted with other important, centralized resources.
    In the end, the question of small vs. large studies may be one of balance, but the discussion needs to address the organizational changes that will provide incentives for austerity and innovation.

  12. June 4, 2012 5:02 pm

    In keeping with the theme of technology and its potential effect on research costs, I noticed this tweet last week (May 30, 7:51 PM).

    Mayo Clinic @MayoClinic We’re gathering data on the use of #sleepingpills onboard flights. Help us by completing this brief survey. Thank you! https://redcap1.mayo.edu/surveys/index.php?hash=6c4b761a28b734fe93831e3fb400ce87

    The tweet then directs the respondent to the survey site and the survey is introduced with the following paragraph.

    Registry of In-Flight Incidents Associated with Sleeping Pills

    We are seeking individuals who have experienced or observed an incident believed to be related to a sleeping pill taken aboard an airplane flight. There are no risks for participants in this survey. The benefits which may reasonably be expected to result from this research study will be developing a better understanding of the factors that may have led up to this situation, as well as its consequences. Our goal is to understand methods to reduce the risk of in-flight incidents related to sleeping pills.

    Please understand that your current and future employment, education, and/or medical care at Mayo Clinic will not be affected by whether or not you participate. Specifically, your care will not be jeopardized if you choose not to participate. Your responses will not include information that would allow me or my colleagues involved with this project to identify you.

    If you have any questions about this research study you can contact me at 480-301-8297. If you have any concerns, complaints, or general questions about research or your rights as a participant, please contact the Mayo Institutional Review Board (IRB) to speak to someone independent of the research team at 507-266-4000 or toll free at 866-273-4681.

    The respondent is then directed to 20 survey questions. Epidemiologists and fellow researchers might have at least 20 questions for them! Some of the questions might relate to sample size, assessment of selection biases, time frame, cost, IRB concerns. I look forward to learning more about this.

    At this point it is unclear what the benefits and costs of this approach will be, but definitely an area to explore.

  13. June 7, 2012 9:05 am

    The County Health Rankings and Roadmaps presentation at Health Datapalooza was very informative, and it looks like they have developed a very robust tool for summarizing county-level data: http://www.countyhealthrankings.org/roadmaps. The data are intended to act as an intervention (providing actionable information), but they can also provide data for prospective studies with counties as the study population. A range of county-level measures is produced, and the number will grow – these data can supplement data collected within a study framework.

    It may also lead to quick and inexpensive ecologic studies (with the limitations of ecologic studies).

  14. Carlton A. Hornung, PhD, MPH permalink
    August 7, 2012 10:27 am

    We need to be creative in designing clinical research: develop trial designs that move beyond what Bradford Hill proposed in the 1960s. New designs are necessary given the high cost of data collection, the need to study multiple markers of disease (i.e., risks, biomarkers, hard endpoints AND important patient-reported outcomes), and the need to evaluate endpoints at multiple intervals (i.e., frailty models). I would suggest that (clinical) epidemiologists work with biostatisticians and clinicians to develop nested trial designs in which all sites/investigators collect primary prediction measures and the primary outcome, while (randomly?) selected sites collect additional prediction data and secondary outcome measures, and maybe a third set of sites/investigators collects still other prediction variables and tertiary outcome data, etc. This is in essence what is done with the NHANES data (e.g., interview, exam, lab), with analyses based on weighted statistics.
    We should also explore with biostatisticians the use of minimization to replace or augment randomization, move beyond the pitfalls of two-tailed testing of one-tailed thinking, and concentrate instead on Bayesian approaches to assessing the impact of risk factors, as well as interventions, on outcomes.
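    The NHANES-style weighting behind such a nested design can be sketched numerically (the sites, values, and sampling scheme below are hypothetical; this is a toy Horvitz-Thompson-style reweighting, not NHANES’s actual weighting methodology): observations from the subset of sites that collect a secondary measure are weighted by the inverse of their selection probability, so the subset estimates the full-design mean.

```python
# Hypothetical nested design: 10 sites all collect the primary outcome,
# but only half (selected systematically here) collect a secondary measure.
sites = list(range(10))
site_means = {s: 100.0 + 5 * s for s in sites}  # hypothetical secondary measure

p = 0.5                # each site's selection probability
selected = sites[::2]  # every other site; inclusion probability 1/2

# Horvitz-Thompson-style estimate: weight each selected site by 1/p.
estimate = sum(site_means[s] / p for s in selected) / len(sites)
full_mean = sum(site_means.values()) / len(sites)
print(f"full-design mean: {full_mean}")  # prints 122.5
print(f"subset estimate:  {estimate}")   # prints 120.0
```

    The half-sample estimate lands near the full-design mean at half the secondary-measure cost; the open design question is how much precision one is willing to trade for that saving.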

