A new article by PhD student Melanie Firestone discusses using root cause analysis during foodborne illness outbreaks and how to communicate the resulting findings to a broad food safety audience.
Associate Professor Matt Simcik developed a process to keep hazardous PFCs — now called PFAS (per- and polyfluoroalkyl substances) — from traveling through aquifers to drinking water sources and ecosystems.
PhD student Joe Servadio and Adjunct Professor Matteo Convertino developed a new method for identifying the most important data to use in creating risk factors and health scores.
PhD student Kimberly Bonner is the lead author of a commentary on developing strategies for providing HPV vaccine to young girls who are not in school.
Regents Professor Michael Osterholm and CIDRAP are working with the WHO to develop R&D roadmaps targeting Ebola/Marburg, Nipah, and Lassa viruses.
A study by Assistant Professor Julian Wolfson tested two popular cardiovascular risk calculators using patient electronic health data and found that they remain accurate at predicting cardiovascular risk when used in a clinical setting.
Nutrition studies raise the quality of our diets and increase our understanding of how our bodies use and respond to food.
The modern age of nutrition research began in the early 1900s with the discovery of essential nutrients, like vitamins, and their importance in preventing crippling diseases. In recent decades, nutrition studies have continued to be fundamentally important in protecting health by showing how we can avoid cancers, heart attacks, and other dangerous conditions by eating well.
Since 1974, the School of Public Health’s Nutrition Coordinating Center (NCC) has been making much of this research possible. The NCC’s flagship product is its Nutrition Data System for Research (NDSR), a software program researchers use to analyze the composition of foods found in recipes and menus, or eaten by study participants. It’s a popular tool used by more than 100 institutions ranging from Johns Hopkins University to NASA.
Stanford University Professor of Medicine Christopher Gardner calls the NDSR “the ultimate tool for nutrition researchers.”
“So many researchers use NDSR because it’s linked to a comprehensive food database — and the program is really easy to use too,” says NCC director and SPH Professor Lisa Harnack.
In a lot of nutrition studies, a study participant describes to a researcher what they ate through a process known as “food recall.” Researchers then use this data to study everything from calorie intake to nutrient consumption.
But food recall research can be tricky — often people forget what and how much they ate. So NDSR intelligently guides interviewers while they ask dietary recall questions, prompting them to gather more information based on a study participant’s answers. For example, a participant may be asked what they ate for breakfast. If they respond that they had toast and peanut butter, the program will prompt the interviewer to further investigate the exact type of bread and peanut butter (regular or reduced-fat? one tablespoon or two?).
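The prompting behavior described above can be sketched in a few lines of code. This is a loose illustration, not NDSR's actual data or interface — the foods and follow-up questions are invented for the example.

```python
# Minimal sketch of prompt-guided dietary recall. The foods and
# follow-up questions below are illustrative assumptions, not
# NDSR's real prompt database.

FOLLOW_UP_PROMPTS = {
    "toast": ["What type of bread? (white, whole wheat, ...)",
              "How many slices?"],
    "peanut butter": ["Regular or reduced-fat?",
                      "How much? (e.g., one tablespoon or two?)"],
}

def prompts_for(reported_foods):
    """Return the extra questions an interviewer should ask for each reported food."""
    questions = {}
    for food in reported_foods:
        # Fall back to a generic probe for foods without specific prompts.
        questions[food] = FOLLOW_UP_PROMPTS.get(
            food, ["Can you describe the brand and portion size?"])
    return questions

# Example: a participant reports toast with peanut butter for breakfast.
for food, qs in prompts_for(["toast", "peanut butter"]).items():
    print(food)
    for q in qs:
        print("  -", q)
```

The key idea is simply that each answer triggers targeted follow-ups, so the interviewer never has to remember which details matter for which food.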
“People think that they can describe what they eat accurately, but the prompts factor in the forgetfulness of human nature,” says Gardner. “It can even anticipate the condiments people would typically eat with certain foods.”
Gardner used the NDSR for a recent NIH weight loss study of 609 participants and says it helps his research be more accurate and credible.
“It raises the validity of my research,” says Gardner. “Any reviewer familiar with the challenges of diet assessment knows I’m choosing the gold standard if I use NDSR.”
Based on Accurate Data
Any software is only as good as the data that drives it, and the strength of the NDSR comes from its Food & Nutrient Database. This database feeds the NDSR software with information for approximately 18,000 foods. The database is also often licensed to software developers and researchers for a variety of purposes, such as supporting diet and nutrition smartphone apps.
The strength of the database is found in how completely it accounts for the nutrients contained in foods. Each food in the system has up to 165 nutrients listed; compare that to the 10 nutrients found on an average food label.
The database is so comprehensive and accurate because it’s kept current through weekly updates by a team of scientists who carefully gather and scrutinize nutrition data.
“The food market is always changing. We check product manufacturer websites or actually go to the supermarket and pull the info right from the box or label,” says Harnack. To gather complete nutrition information, Harnack and her team even pull detailed information from ingredient lists. “That’s where our database is different,” says Harnack. “Other databases just take information from the nutrition label and the rest of the nutrients are missing. We take all of the ingredients listed and figure out all 165 nutrients in the product.”
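The rollup from per-ingredient data to a full nutrient profile can be sketched as a simple aggregation. The foods, nutrient names, and values below are invented for illustration — the actual NCC database tracks up to 165 nutrients per food, far more than this toy example.

```python
# Minimal sketch of summing per-ingredient nutrient data into recipe
# totals. All profiles here are hypothetical per-100g values, not
# figures from the NCC Food & Nutrient Database.

NUTRIENT_DB = {
    "whole wheat bread": {"kcal": 247, "protein_g": 13.0, "fiber_g": 6.8},
    "peanut butter":     {"kcal": 588, "protein_g": 25.1, "fiber_g": 6.0},
}

def recipe_nutrients(ingredients):
    """Sum nutrients over (food, grams) pairs using per-100g profiles."""
    totals = {}
    for food, grams in ingredients:
        for nutrient, per_100g in NUTRIENT_DB[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + per_100g * grams / 100
    return totals

# Two slices of bread (~56 g) with one tablespoon of peanut butter (~16 g).
print(recipe_nutrients([("whole wheat bread", 56), ("peanut butter", 16)]))
```

A recipe is just a weighted sum of its ingredients' profiles, which is why building the profiles out to all 165 nutrients — rather than the handful on a label — pays off across every recipe that uses them.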
Sometimes, researchers want not only the benefits of using the NDSR and its database, but also the NCC’s expertise in conducting nutrition studies. In those instances, the center offers a service to conduct the food recall research on behalf of institutions.
Eric Rimm, a professor of epidemiology and nutrition from Harvard, recently used this service to validate a 25-year-old food frequency questionnaire produced by his school. Rimm was interested in comparing the quality of the questionnaire’s data to that gathered by the NCC using NDSR.
“We knew the NCC would give very good quality data and to do the study [at Harvard] would’ve taken a lot longer and have been much more expensive,” says Rimm. “Minnesota’s Nutrition Coordinating Center was the first place we went to because of its reputation — it was our number one choice.”
HealthNewsReview.org, based at the School of Public Health, launches a podcast to help consumers better understand health news.
Assistant Professor Julian Wolfson creates a smartphone app that allows researchers to understand how people move throughout their day.
A paper published in JAMA offers new guidelines in health care cost-effectiveness analysis (CEA), replacing recommendations published in 1996. Professor Karen Kuntz was a member of the panel charged with drafting the new guidelines and led the writing of an additional section on developing computer models for conducting CEAs.
In the two decades since the first guidelines were written, computer models have become a common tool used in completing a CEA.
“Enough time has passed, and there have been a lot of changes in the cost-effectiveness analysis field — like the increase in the use of decision-analytic models — so we knew an update was needed,” says Kuntz.
A CEA is a decision-making tool in which the costs and effects of tests, therapies, and prevention techniques are calculated. The results of CEAs are used by governments in making health care coverage decisions. Guidelines help ensure that the results from any one analysis can be compared to another.
The computer models used in CEAs, such as decision-analytic models, can take evidence from multiple sources and use it to extend results to different populations or to make projections beyond the time horizon of collected data.
Kuntz sought to provide recommendations for modeling that produce relevant, reliable, and useful projections.
“We emphasized transparency and the importance of users being clear in describing the assumptions they make in creating a model’s structure,” says Kuntz. “We recommended incorporating all the available data, and being clear about why you did or did not include certain sources. We also discussed the importance of adopting a lifetime horizon to capture all of the costs and effectiveness that may be relevant to decision makers.”
Kuntz hopes the new CEA guidelines will be adopted by researchers as the standard method of analysis and will advance best-practice discussions within the field.