Online Program
Statistical considerations in evaluating imaging-based devices

Organizer(s): Jincao Wu, FDA/CDRH; Jingjing Ye, FDA

Chair(s): Jincao Wu, FDA/CDRH; Jingjing Ye, FDA
Imaging devices are valuable technologies for primary diagnosis or as an aid to diagnosis in disease screening, diagnostic work-up, monitoring, quantitative biomarker measurement, and more. They include radiological techniques for identifying abnormalities (e.g., mammography for breast cancer) and digital pathology slides that support pattern recognition and visual search (e.g., tissue slides stained for HER2 in gastric cancer). The evaluation of these devices requires unique analytical and clinical studies. For example, study design and analysis typically need to address variability in image interpretation across readers. Also, when cases are read in two modalities, the second reading of a case can be affected by memory of the case from the first reading. In this session, statistical considerations on study design and analysis will be discussed among academic, industry and FDA researchers. We have invited three speakers (all confirmed): 1. Qin Li, FDA/CDRH, "Statistical considerations in Multi-Reader Multi-Case study for medical imaging devices"; 2. Yuying Jin, FDA/CDRH, "Challenges in Digital Pathology and its recent development"; 3. Nancy Obuchowski, Quantitative Health Sciences, The Cleveland Clinic Foundation, "Comparison of ROC methods in Multi-Reader Multi-Case study".
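As a toy illustration of the reader-variability issue, the sketch below computes each reader's empirical area under the ROC curve (AUC) and the reader-averaged AUC; all data and sizes are simulated and hypothetical, and a real MRMC analysis would additionally model the correlated reader and case variance components (e.g., the Obuchowski-Rockette or DBM methods).

```r
## Minimal sketch (base R, simulated data): per-reader empirical AUC and the
## reader-averaged AUC for one modality.
set.seed(1)
n_readers <- 5; n_dis <- 40; n_non <- 60

# Empirical AUC: P(diseased case scores higher than non-diseased), ties = 1/2.
emp_auc <- function(x_dis, x_non) {
  mean(outer(x_dis, x_non, function(a, b) (a > b) + 0.5 * (a == b)))
}

# Hypothetical continuous scores from each reader on the same case set.
auc_by_reader <- sapply(seq_len(n_readers), function(r) {
  emp_auc(rnorm(n_dis, mean = 1), rnorm(n_non, mean = 0))
})
round(auc_by_reader, 3)   # between-reader variability in accuracy
mean(auc_by_reader)       # reader-averaged AUC
```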
Mon, Sep 22

Short Course 1: Patient-Reported Outcomes: Measurement, Implementation and Interpretation
09/22/14

Organizer(s): Joseph C Cappelleri, Pfizer Inc

Instructor(s): Joseph C Cappelleri, Pfizer Inc
This short course will provide an exposition on health measurement scales, specifically patient-reported outcomes. Some key elements in the development of a patient-reported outcome (PRO) measure will be noted. The core topics of validity and reliability of a PRO measure will be discussed. Exploratory factor analysis and confirmatory factor analysis, mediation modeling, item response theory, longitudinal analysis, and missing data will be among the topics considered. Approaches to interpreting PRO results will be elucidated in order to make results useful and meaningful. Illustrations will be provided mainly through real-life examples, with implementations in SAS.

Biography of Speaker: Joseph C. Cappelleri earned his M.S. in Statistics from the City University of New York, his Ph.D. in Psychometrics from Cornell University, and his M.P.H. in Epidemiology from Harvard University. He is Senior Director of Statistics at Pfizer Inc and a Fellow of the American Statistical Association. He has delivered numerous conference presentations and published extensively on clinical and methodological topics, including regression-discontinuity designs, meta-analysis, and health measurement scales. He is the lead author of the recently published book Patient-Reported Outcomes: Measurement, Implementation and Interpretation.
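As one concrete example of the reliability topic, the sketch below computes Cronbach's alpha for a simulated multi-item scale in base R; the data and item structure are hypothetical.

```r
## Minimal sketch (base R, simulated data): Cronbach's alpha, a standard
## internal-consistency reliability estimate for a multi-item PRO scale.
set.seed(2)
n <- 200; k <- 6
latent <- rnorm(n)                                    # hypothetical true score
items  <- sapply(1:k, function(j) latent + rnorm(n))  # k correlated items

cronbach_alpha <- function(X) {
  k <- ncol(X)
  item_var  <- sum(apply(X, 2, var))   # sum of individual item variances
  total_var <- var(rowSums(X))         # variance of the total score
  k / (k - 1) * (1 - item_var / total_var)
}
cronbach_alpha(items)  # values near 0.7-0.9 are typically considered adequate
```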
Short Course 3: Multiple Testing Procedures with Gatekeeping and Graphical Applications
09/22/14

Organizer(s): Frank Bretz, Novartis; Dong Xi, Novartis

Instructor(s): Ajit C Tamhane, Northwestern University; Dong Xi, Novartis
Modern phase III confirmatory clinical trials often involve multidimensional study objectives which require simultaneous testing of multiple hypotheses with logical relationships among them. Examples of such study objectives include investigation of multiple doses or regimens of a new treatment, multiple endpoints, subgroup analyses, non-inferiority and superiority tests, or any combination of these. This short course will provide practical guidance on how to construct multiple testing procedures (MTPs) for such hypotheses while taking into account the logical relationships among them and controlling the appropriate Type I error rate.

Course Outline:

1: Introduction to multiple testing procedures. In the first part of this short course, we will introduce the basic principles of multiple testing, such as error rates and methods for constructing MTPs [1]. We will then illustrate these principles with some standard p-value based MTPs, such as Bonferroni, Holm, Hochberg, Hommel and fallback. These procedures form the building blocks of the gatekeeping procedures discussed in the second part of the course.

2: Gatekeeping procedures. Gatekeeping procedures will be introduced with simple serial and parallel gatekeeping [2-3]. More complex gatekeeping procedures such as tree-structured [4] and k-out-of-n gatekeeping [5] will be developed from these simple procedures. Implementation of these procedures will be illustrated using the MultXpert package in R.

3: Graphical approaches. In the last part of the course, we will introduce the graphical approach [6-7], with which one can construct and explore different test strategies and thus tailor them to the given study objectives. In this approach the resulting MTPs are represented by directed, weighted graphs, where each node corresponds to an elementary hypothesis, together with a simple algorithm to generate such graphs and sequentially test the individual hypotheses (see the sketch after the reference list). The approach will be illustrated with several weighted Bonferroni tests and common gatekeeping strategies. Case studies will be presented to show how the approach can be used in clinical practice. The methods will be illustrated using a SAS macro and the gMCP package in R.

Key References
[1] Dmitrienko, Tamhane, Bretz (2009). Multiple Testing Problems in Pharmaceutical Statistics. Taylor & Francis/CRC Press: Boca Raton, Florida.
[2] Westfall, Krishen (2001). Optimally weighted, fixed sequence, and gatekeeping multiple testing procedures. Journal of Statistical Planning and Inference; 99:25-40.
[3] Dmitrienko, Offen, Westfall (2003). Gatekeeping strategies for clinical trials that do not require all primary effects to be significant. Statistics in Medicine; 22:2387-2400.
[4] Dmitrienko, Wiens, Tamhane, Wang (2007). Tree-structured gatekeeping tests in clinical trials with hierarchically ordered multiple objectives. Statistics in Medicine; 26:2465-2478.
[5] Xi, Tamhane (2014). A general multistage procedure for k-out-of-n gatekeeping. Statistics in Medicine; 33:1321-1335.
[6] Bretz, Maurer, Brannath, Posch (2009). A graphical approach to sequentially rejective multiple test procedures. Statistics in Medicine; 28:586-604.
[7] Bretz, Maurer, Hommel (2011). Test and power considerations for multiple endpoint analyses using sequentially rejective graphical procedures. Statistics in Medicine; 30:1489-1501.
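The sketch below is a minimal base R implementation of the sequentially rejective graphical algorithm of [6], under the stated conventions for the weights and transition matrix; the gMCP package implements this in full generality, so this is only meant to make the algorithm concrete.

```r
## Minimal sketch (base R) of the graphical procedure of Bretz et al. [6].
## p: raw p-values; w: initial hypothesis weights (sum <= 1);
## G: transition matrix (zero diagonal, rows sum <= 1).
graph_mtp <- function(p, w, G, alpha = 0.025) {
  rejected <- rep(FALSE, length(p))
  repeat {
    idx <- which(!rejected & p <= w * alpha)
    if (length(idx) == 0) break
    j <- idx[1]
    rejected[j] <- TRUE
    # pass the weight of the rejected hypothesis to its neighbors
    for (l in which(!rejected)) w[l] <- w[l] + w[j] * G[j, l]
    # update transition weights among the remaining hypotheses
    Gnew <- G
    for (l in which(!rejected)) for (k in which(!rejected)) {
      if (l == k) next
      denom <- 1 - G[l, j] * G[j, l]
      Gnew[l, k] <- if (denom > 0) (G[l, k] + G[l, j] * G[j, k]) / denom else 0
    }
    G <- Gnew; G[j, ] <- 0; G[, j] <- 0; w[j] <- 0
  }
  rejected
}

# Example: the Holm procedure for two hypotheses as a graph (equal weights,
# all weight passed to the other hypothesis upon rejection); one-sided 0.025.
graph_mtp(p = c(0.01, 0.02), w = c(0.5, 0.5), G = rbind(c(0, 1), c(1, 0)))
```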
Short Course 2: An Overview of Structured Benefit-Risk Analysis
09/22/14

Organizer(s): Weili He, Merck & Co., Inc.; John Scott, Division of Biostatistics, FDA/CBER/OBE

Instructor(s): Telba Irony, FDA/CDRH; Qi Jiang, Amgen; George Quartey, Genentech
It has long been understood that the benefits of a medical product can only be interpreted in the context of the risks or harms associated with that product, and vice versa. Until recently, however, drug development and regulatory decisions were usually based on informal, qualitative weighing of benefits and risks, often leading to opaque decisions. Now, pharmaceutical companies are increasingly using structured benefit-risk assessments, sometimes including sophisticated quantitative methods, as part of their internal decision-making processes. Also, partly in response to a 2006 Institute of Medicine report on drug safety, the 2012 Food and Drug Administration Safety and Innovation Act (FDASIA) calls on the FDA to develop a structured approach to benefit-risk assessment in regulatory decision-making. FDA has already begun to develop such an approach, both for drugs/biologics and for medical devices. In the wake of these initiatives, the field of structured benefit-risk assessment has blossomed, with major advances in methodology and implementation. This short course will introduce attendees to the basic concepts of and emerging work in benefit-risk assessment, along with regulatory perspectives on benefit-risk evaluations in regulatory filings. Specifically, the following topics will be covered:
1. The current status of benefit-risk assessment, including an overview of key approaches and methods, and a global look at the regulatory environment for incorporating benefit-risk assessments into decision-making.
2. Emerging aspects of benefit-risk assessment, including approaches for endpoint selection, visualization, weight selection, quantifying uncertainty, and subgroup analyses (a simple weighted scoring sketch follows below).
3. Benefit-risk considerations in study design, conduct, analysis, and interpretation/presentation. This includes considerations for study design incorporating benefit-risk endpoints, using benefit-risk assessment for regulatory filings and advisory committee presentations, and post-marketing benefit-risk and periodic benefit-risk evaluation reports (PBRERs).
Throughout, case studies will be used to illustrate the use of structured benefit-risk methods in both the pre- and post-market settings. The instructors for this course will include two benefit-risk experts from industry (leaders in the Quantitative Sciences in the Pharmaceutical Industry [QSPI] Benefit-Risk Working Group), as well as a regulatory expert on benefit-risk assessment for medical devices.
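To make the idea of a structured, quantitative weighing concrete, here is a minimal sketch of a weighted multi-criteria benefit-risk score; the criteria, weights and event rates are entirely hypothetical, and real assessments use formally elicited weights and explicit uncertainty analysis.

```r
## Minimal sketch (base R, hypothetical numbers): a simple multi-criteria
## weighted benefit-risk score, one of many structured approaches.
outcomes <- data.frame(
  criterion = c("response", "symptom relief", "serious AE", "discontinuation"),
  weight    = c(0.4, 0.2, 0.3, 0.1),        # elicited importance weights
  drug      = c(0.55, 0.60, 0.08, 0.12),    # event rates on drug
  control   = c(0.35, 0.45, 0.03, 0.10),    # event rates on control
  benefit   = c(TRUE, TRUE, FALSE, FALSE)   # risks enter with a negative sign
)
sgn   <- ifelse(outcomes$benefit, 1, -1)
score <- sum(outcomes$weight * sgn * (outcomes$drug - outcomes$control))
score  # > 0 favors the drug under these (illustrative) weights
```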
Short Course 5: Propensity Score Methods for Estimating Causal Effects in Pharmaceutical Research: The Why, When, and How
09/22/14

Organizer(s): Rima Izem, FDA/CDER/Office of Biostatistics/Division of Biometrics 7

Instructor(s): Elizabeth Stuart, Johns Hopkins Bloomberg School of Public Health
Propensity scores are an increasingly common tool for estimating the effects of interventions in observational ("non-experimental") settings and for answering complex questions in randomized controlled trials. They can be of great use in pharmaceutical and health services research, for example in assessing broad population effects of drugs, devices, or biologics already on the market, especially when investigating post-marketing safety outcomes, or in answering questions about the outcomes of long-term use with claims data. This course will discuss the importance of the careful design of observational studies, and the role of propensity scores in that design, with the main goal of providing practical guidance on the use of propensity scores to estimate causal effects. The short course will cover the primary ways of using propensity scores to adjust for confounders when estimating the effect of a particular "cause" or "intervention," including weighting, subclassification, and matching. Topics covered will include how to specify and estimate the propensity score model, selecting covariates to include in the model, diagnostics, and common challenges and solutions. Software for implementing analyses using propensity scores will also be briefly discussed. The course will also discuss recent advances in the propensity score literature, with a focus on topics particularly relevant for pharmaceutical contexts, including prognostic scores, covariate balancing propensity scores, methods for non-binary treatments (such as dosage levels of a drug or comparisons of multiple drugs, devices, or biologics simultaneously), and approaches to be used when large numbers of covariates are available (as in claims data).
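A minimal sketch of the weighting approach, assuming simulated data and hypothetical covariate names: the propensity score is estimated by logistic regression and used to form inverse-probability-of-treatment weights.

```r
## Minimal sketch (base R, simulated data): propensity score by logistic
## regression, then IPTW weights for the average treatment effect.
set.seed(3)
n <- 2000
age <- rnorm(n, 60, 10)
sev <- rnorm(n)                    # hypothetical baseline severity score
treat <- rbinom(n, 1, plogis(-0.5 + 0.03 * (age - 60) + 0.8 * sev))
y <- rbinom(n, 1, plogis(-1 + 0.5 * treat + 0.05 * (age - 60) + 1.0 * sev))

ps <- fitted(glm(treat ~ age + sev, family = binomial))  # propensity score
w  <- ifelse(treat == 1, 1 / ps, 1 / (1 - ps))           # ATE weights

# Diagnostic: weighted covariate means by arm should be close after weighting.
sapply(list(age = age, sev = sev), function(x)
  c(trt = weighted.mean(x[treat == 1], w[treat == 1]),
    ctl = weighted.mean(x[treat == 0], w[treat == 0])))

# IPTW estimate of the risk difference.
weighted.mean(y[treat == 1], w[treat == 1]) -
  weighted.mean(y[treat == 0], w[treat == 0])
```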
Short Course 4: Subgroup Analysis in Clinical Trials
09/22/14

Organizer(s): Alex Dmitrienko, Quintiles

Instructor(s): Alex Dmitrienko, Quintiles; Ilya Lipkovich, Quintiles
This half-day course focuses on a broad class of statistical problems arising in subgroup analysis.

The course begins with a discussion of issues related to confirmatory subgroup analysis, i.e., analysis of pre-specified subgroups in the context of confirmatory Phase III clinical trials. This will include a summary of general principles of confirmatory subgroup analysis (interpretation of findings in several patient populations based on the influence and interaction conditions) introduced in Millen et al. (2012) and Dmitrienko, Millen and Lipkovich (2014). In addition, a review and comparison of multiplicity adjustment methods used in confirmatory subgroup analysis will be provided, including non-parametric and parametric procedures, and gatekeeping procedures used in the analysis of complex multiplicity problems involving subgroups (Dmitrienko et al., 2009; Dmitrienko, D'Agostino and Huque, 2013).

The second part of the course deals with exploratory subgroup analysis, i.e., subgroup search/identification methods that can be applied in late-phase clinical trials. The discussion of exploratory subgroup analysis methods begins with a review of common approaches to subgroup identification in the context of personalized medicine and then focuses on the SIDES method (Subgroup Identification based on Differential Effect Search) introduced in Lipkovich et al. (2011) and Lipkovich and Dmitrienko (2014a, 2014b). SIDES is based on recursive partitioning and can be used in prospective and retrospective subgroup analysis (a minimal sketch of the underlying split criterion follows this course description). Key elements of SIDES will be discussed, including (1) generation of multiple promising subgroups based on different splitting criteria, (2) choice of optimal values of complexity parameters via cross-validation, (3) evaluation of variable importance and use of variable importance indices for pre-screening covariates, and (4) addressing Type I error rate inflation using a resampling-based method. Multiple case studies will be used to illustrate the principles and statistical methods introduced in this course, including design and analysis of Phase III trials with target subgroups and biomarker discovery in Phase III development programs. Software tools for implementing the subgroup analysis methods in clinical trials will be presented, including the SIDES package developed by the authors.

Outline
Confirmatory subgroup analysis (Alex Dmitrienko)
1. Subgroup analysis in clinical trials: exploratory versus confirmatory analysis.
2. Confirmatory subgroup analysis in multi-population clinical trials: general considerations and principles; multiplicity problems in subgroup analysis; review and comparison of commonly used multiple testing methods.
3. Decision-making process in multi-population trials (influence and interaction conditions).
4. Case studies and software tools.
Exploratory subgroup analysis (Ilya Lipkovich)
1. Introduction: recent approaches to subgroup search and biomarker discovery in the context of personalized medicine.
2. SIDES method: subgroup identification based on recursive partitioning; direct identification of predictive variables vs. indirect evaluation of prognostic variables and potential covariate-by-treatment interactions; resampling-based multiplicity control and complexity control; variable importance and its use in efficient subgroup identification procedures (SIDEScreen procedures).
3. Case studies and software demonstration.

References
Dmitrienko, A., Bretz, F., Westfall, P.H., Troendle, J., Wiens, B.L., Tamhane, A.C., Hsu, J.C. (2009). Multiple testing methodology. Multiple Testing Problems in Pharmaceutical Statistics. Dmitrienko, A., Tamhane, A.C., Bretz, F. (editors). Chapman and Hall/CRC Press, New York.
Dmitrienko, A., D'Agostino, R.B., Huque, M.F. (2013). Key multiplicity issues in clinical drug development. Statistics in Medicine. 32, 1079-1111.
Dmitrienko, A., Millen, B., Lipkovich, I. (2014). Statistical considerations in subgroup analysis. Statistics in Medicine. To appear.
Lipkovich, I., Dmitrienko, A., Denne, J., Enas, G. (2011). Subgroup identification based on differential effect search (SIDES): A recursive partitioning method for establishing response to treatment in patient subpopulations. Statistics in Medicine. 30, 2601-2621.
Lipkovich, I., Dmitrienko, A. (2014a). Strategies for identifying predictive biomarkers and subgroups with enhanced treatment effect in clinical trials using SIDES. Journal of Biopharmaceutical Statistics. To appear.
Lipkovich, I., Dmitrienko, A. (2014b). Biomarker identification in clinical trials. Clinical and Statistical Considerations in Personalized Medicine. Carini, C., Menon, S., Chang, M. (editors). Chapman and Hall/CRC Press, New York. To appear.
Millen, B., Dmitrienko, A., Ruberg, S., Shen, L. (2012). A statistical framework for decision making in confirmatory multipopulation tailoring clinical trials. Drug Information Journal. 46, 647-656.
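A minimal sketch of the differential-effect idea at the heart of SIDES, on simulated data with a single hypothetical biomarker: each candidate split is scored by the discrepancy between the treatment-effect z-statistics in the two child subgroups. The actual SIDES method adds multiple splitting criteria, recursive partitioning over many covariates, and resampling-based multiplicity control.

```r
## Minimal sketch (base R, simulated data) of a differential-effect split.
set.seed(4)
n   <- 400
trt <- rbinom(n, 1, 0.5)
bmk <- runif(n)                                    # hypothetical biomarker
y   <- rnorm(n, mean = 0.8 * trt * (bmk > 0.6))    # benefit only when bmk > 0.6

split_z <- function(cut) {
  z_in_group <- function(idx) {                    # treatment-effect z-statistic
    d  <- mean(y[idx & trt == 1]) - mean(y[idx & trt == 0])
    se <- sqrt(var(y[idx & trt == 1]) / sum(idx & trt == 1) +
               var(y[idx & trt == 0]) / sum(idx & trt == 0))
    d / se
  }
  abs(z_in_group(bmk > cut) - z_in_group(bmk <= cut))  # differential effect
}
cuts <- quantile(bmk, seq(0.2, 0.8, by = 0.05))
cuts[which.max(sapply(cuts, split_z))]  # promising split; significance must be
                                        # assessed by resampling, not face value
```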
Short Course 6: Group Sequential Design and Sample Size Re-estimation in R
09/22/14

Organizer(s): Keaven Martin Anderson, Merck Research Laboratories

Instructor(s): Keaven Martin Anderson, Merck Research Laboratories
Group sequential design is the most widely used and well-accepted form of adaptive design for confirmatory clinical trials. It controls Type I error across multiple analyses of a primary endpoint during the course of a clinical trial and allows early, well-controlled evaluation of stopping for strong efficacy results or for futility. This course will review the basics of group sequential theory and demonstrate common applications of the method (a small simulation sketch of the underlying multiplicity problem follows the topic list below). The R package gsDesign and its graphical user interface will be demonstrated to provide the user with an easy-to-use, open source option for designing group sequential clinical trials. Participants should leave the course able to propose effective group sequential design solutions for confirmatory clinical trials. Topics covered include:
• application of spending functions for selection of appropriate timing and levels of evidence for early stopping
• confidence intervals
• conditional power, predictive power and prediction intervals
• time-to-event endpoints, including stratified populations and power for meta-analyses
• binomial endpoints
• superiority and non-inferiority designs
• information-based sample size re-estimation and conditional power designs for sample size re-estimation
• generation of publication-quality tables, figures and documents describing designs
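A minimal base R simulation of the core multiplicity problem, assuming three equally spaced looks: repeatedly testing at the unadjusted level inflates the one-sided Type I error well above 0.025, while a Pocock-type constant boundary restores control. gsDesign computes such boundaries (and spending-function versions) analytically; this sketch only illustrates why they are needed.

```r
## Minimal sketch (base R): type I error with and without a group sequential
## boundary, for K = 3 equally spaced looks under the null hypothesis.
set.seed(5)
K <- 3; nsim <- 1e5
incr <- matrix(rnorm(nsim * K), nsim, K)               # stage-wise increments
# Standardized cumulative z-statistics at information fractions 1/K, ..., 1:
z <- t(apply(incr, 1, cumsum)) / matrix(sqrt(1:K), nsim, K, byrow = TRUE)

naive  <- mean(apply(z, 1, max) > qnorm(0.975))  # unadjusted test each look
pocock <- mean(apply(z, 1, max) > 2.289)         # Pocock constant for K = 3
c(naive = naive, pocock = pocock)                # roughly 0.05 vs roughly 0.025
```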
Tue, Sep 23

Plenary Session 1 - Statistics in the pharmaceutical industry and regulatory sciences: we learn from the past, celebrate today, and invigorate our tomorrow
09/23/14

Organizer(s): Shiowjen Lee, FDA; Cristiana Mayer, JNJ
Celebrate Our Past and Energize Our Future

Statistics and Drug Regulation in the US: Where We've Been and Where We Are Today
Plenary Session 2 - Modeling and Simulations in Adaptive Designs for the Development of Drugs and Devices
09/23/14

Organizer(s): Shiowjen Lee, FDA; Cristiana Mayer, JNJ

Panelist(s): Frank Bretz, Novartis; Greg Campbell, FDA CDRH; Alex Dmitrienko, Quintiles; James Hung, FDA; Jose Pinheiro, Johnson & Johnson; Martin Posch, Medical University of Vienna; Bob Temple, FDA
The panelists are Bob Temple (FDA), Greg Campbell (FDA), James Hung (FDA), José Pinheiro (Johnson & Johnson), Frank Bretz (Novartis), and Martin Posch (Medical University of Vienna), with Alex Dmitrienko (Quintiles) as moderator.
Adaptive trial designs: Complexity versus Efficiency

Roundtable Discussions
09/23/14
TL1: Best Practices for Adaptive Trial Designs
TL2: Use of Adaptive or Group Sequential Designs in Phase 1 Studies
TL3: High placebo response: what is your story?
TL4: Considerations for clinical development plans of oncology drugs with a companion diagnostic
TL5: Strategies to Mitigate Operational Bias in Adaptive Designs
TL6: Recent Advances in Turning Adaptive Design Theory for Phase I Oncology Trials into Practice
TL7: Adaptive Designs in Medical Device Studies
TL8: Utility-Weighted Endpoints in Clinical Trials
TL9: Benefit-Risk Assessment via Responders in the Absence of an Established Definition
TL10: Statistical Issues in Balancing Pre-market and Post-market Studies for Medical Devices
TL11: Statistical methods in benefit-risk assessment
TL12: Biomarkers for Rare Events
TL13: Surrogate Endpoints in Clinical Trials: A Statistical Perspective
TL14: Sample Size and Subgroup Analyses
TL15: Evaluation of prognostic biomarkers
TL16: Comparative effectiveness in off-label indications
TL17: Repeated Measures in Diagnostic Tests
TL18: Development and implementation of objective performance criteria
TL19: Measuring Interval
TL20: Avoiding Bias by Outsourcing
TL21: Safety Data Meta-Analysis
TL22: A Comprehensive Review of Two-Sample Independent or Paired Binary Data: with or without Stratum Effects, along with Homogeneity Testing and Estimation of Common Risk Difference
TL23: Clinical Trial Design with Presence of Long-Term Survivors
TL24: Challenges in Sample Size Planning for Randomized Clinical Trials
TL25: Recent Developments in Dynamic Treatment Regimes: Theory and Implementation
TL26: Missing data in medical device studies
TL27: Bayesian Missing Data Analysis – Methods and Case Studies
TL28: Multi-regional Clinical Trials (MRCT): Challenges and Opportunities
TL29: Confounder adjustment for emergent treatment comparisons and safety assessment: the use of propensity scores and disease risk scores
TL30: Interval Censoring in Time-to-Event Data
TL31: Design of MTD Trials
TL32: Randomization needs in oncology trials
TL33: Real-Time Data Analysis for Early Hematology and Oncology Trials
TL34: Evaluating the Effect of Intrinsic Factors on Pharmacokinetics
TL35: Data poolability
TL36: Cross-Industry Safety Analysis Recommendations for Clinical Trials and Submissions Utilizing a Platform for Sharing Code
TL37: Data Transparency/Stewardship: The Why's, Where's and How's of the Secondary Use of Patient Data
TL38: Challenges faced by oncology statisticians in the area of innovation
TL39: How to promote/advocate statistics leadership in pharmaceutical development in the chemistry, manufacturing and control (CMC) area?
TL40: Drug approvals for narrow but critical unmet medical needs and diagnostic performance studies: how to harmonize and expedite both
TL41: Extracting information from observational electronic health and claims data to enhance post-approval medical product safety surveillance
TL42: Revisiting PSAPs: sharing experience and how they are evolving
TL43: Safety Monitoring of Events of Interest and Alert Rules
TL44: Modeling the dissimilarities of Test-Reference concentration curves in post-marketing safety/surveillance studies of generic drugs
TL45: Continuous Safety Signal Monitoring with Blinded Data
TL46: Ensuring Success in Mood Disorder Trials
TL47: Analyzing subjective measurements of pain
TL48: Experiences with BICR (Blinded Independent Central Review) of Progression-Free Survival (PFS) type endpoints
TL49: Celebrate Our Past, Energize Our Future
Parallel Session: Bayesian Missing Data Analysis – Methods and Case Studies from DIA Bayesian Working Group
09/23/14

Organizer(s): Shiowjen Lee, FDA; Frank Liu, Merck & Co. Inc.; Cristiana Mayer, JNJ

Chair(s): Cristiana Mayer, JNJ
Since the FDA-commissioned National Research Council (NRC) panel issued its report on the prevention and treatment of missing data in clinical trials, missing data issues have attracted a new wave of considerable interest in the pharmaceutical industry. Missing data are often inevitable in longitudinal clinical trials. While there is no "best" method for handling missing data, the NRC report recommends conducting sensitivity analyses to check the robustness of the analysis results. Considering the advantages of the Bayesian approach in incorporating the uncertainty of missing data, the DIA Bayesian Scientific Working Group (BSWG) established a team to look into Bayesian methods for the analysis of trials with missing data. This session will provide an overview of Bayesian methods and present case studies with real clinical trial data to illustrate the application of Bayesian methods in various sensitivity analysis models. A discussant from FDA will also share regulatory perspectives on Bayesian methods for the analysis of missing data in clinical trials. Potential speakers include Joe Hogan (Brown University), G. Frank Liu (Merck Sharp & Dohme), and Greg Soon (FDA, discussant).
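A minimal sketch of one such sensitivity analysis, a delta-adjusted ("tipping point") multiple imputation on simulated data: missing active-arm outcomes are imputed from a distribution shifted by delta, and one asks how unfavorable delta must be before the treatment effect is overturned. All data here are simulated; a full Bayesian analysis would model both arms with proper posterior draws and combine results via Rubin's rules.

```r
## Minimal sketch (base R, simulated data): delta-adjusted tipping-point
## sensitivity analysis for a continuous endpoint with active-arm dropout.
set.seed(6)
n <- 100
y_trt <- rnorm(n, 0.5); y_ctl <- rnorm(n, 0)
miss  <- rbinom(n, 1, 0.2) == 1          # ~20% dropout in the active arm
y_trt[miss] <- NA
obs <- y_trt[!miss]

pval_at_delta <- function(delta, n_imp = 200) {
  p <- replicate(n_imp, {
    y_imp <- y_trt
    # MNAR imputation: dropouts assumed worse than completers by |delta|
    y_imp[miss] <- rnorm(sum(miss), mean(obs) + delta, sd(obs))
    t.test(y_imp, y_ctl)$p.value
  })
  median(p)   # crude summary across imputations; Rubin's rules are preferred
}
sapply(c(0, -0.25, -0.5, -1), pval_at_delta)  # which delta tips p past 0.05?
```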
Bayesian Inference and Sensitivity Analysis with Missing Data

Bayesian Approach for Missing Data Analysis – a Case Study

Discussant(s): Guoxing (Greg) Soon, FDA
Parallel Session: Statistics in the Race Against Alzheimer's Disease
09/23/14

Organizer(s): Steven Edland, UCSD; David Li, Pfizer; Hong Liu-Seifert, Eli Lilly and Company; Tristan Massie, FDA; Stephen E. Wilson, FDA/CDER/OTS/OB/DBIII

Chair(s): Nandini Raghavan, Janssen R&D
Alzheimer's disease (AD) is an emerging global public health crisis. Over 30 million individuals worldwide suffer from AD, and this number is projected to quadruple by 2050. In 2010, the global cost of treating dementia, including dementia due to AD and other causes, was greater than $600 billion. Despite the looming crisis, there are currently no approved drugs that affect the underlying pathology and slow the progression of AD. One of the exciting new opportunities is the pursuit of clinical trials in the early stages of AD. The field increasingly believes that patients in earlier stages of AD may be more likely to benefit from potential disease-modifying treatments. In fact, recently initiated clinical trials in Alzheimer's disease will test treatments on subjects at the earliest pre-clinical stages of disease. At this stage of disease, clinically meaningful functional effects of treatment are difficult or impossible to measure. However, as recently outlined in a draft FDA guidance document, positive findings on strictly cognitive endpoints or composite cognitive/functional endpoints may be used to support accelerated approval under 21 CFR subpart H provisions, with subsequent post-marketing surveillance to confirm clinically meaningful functional effects of treatment. The presentations in this session will span the spectrum of the disease, from the early preclinical stages to later stages, to: (i) survey a robust statistical literature that has evolved to develop instruments sensitive to change in these early stages of disease; (ii) describe the composite cognitive endpoint to be used in the first-ever prevention trial of cognitively normal subjects with biomarker indications of Alzheimer's disease; (iii) provide a better understanding of the relationship between cognition and function in earlier stages of AD based on recently published large phase 3 clinical trials; and (iv) present data-driven approaches to characterizing the clinical meaningfulness of proposed composite endpoints. The session will consist of three presentations and a panel discussion with industry and FDA representatives. There are still many challenges ahead of us in the fight against AD. But this also presents an excellent opportunity for statisticians at FDA, in academia and in industry to work together and join forces to conduct groundbreaking research and advance the field of AD.
Cognitive Impairment Precedes and Predicts Functional Impairment in Mild Alzheimer's Disease

The Preclinical Alzheimer Cognitive Composite: measuring amyloid-related decline

Practical relevance of slowing decline on composite cognitive scales

Discussant(s): Nick Kozauer, FDA
Parallel Session: Seamless Adaptive Designs – Success Stories and Challenges Faced
09/23/14

Organizer(s): Freda Cooner, FDA/CDER; Weili He, Merck & Co., Inc.; Inna Perevozskaya, Pfizer; Jack Zhou, FDA

Chair(s): Eva Miller, Senior Director, Biostatistics, inVentiv Health Clinical
One of the most commonly considered adaptive designs for clinical trials is a two-stage seamless adaptive design that combines two separate studies into a single one. Although seamless adaptive designs were not characterized as adequate and well-controlled (A&WC) designs in the FDA draft guidance on adaptive designs, much experience with seamless designs has been gained in the few years since the release of the draft guidance. When designed and executed properly, seamless adaptive designs can significantly shorten product development time and bring products to market faster. In this session, we plan to share a few examples of seamless adaptive designs that were successfully conducted. Speakers and discussants will also discuss challenges faced with these designs as well as related regulatory issues.
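One standard tool for combining the two stages of a seamless design while controlling the Type I error is the inverse normal combination test; a minimal sketch with pre-specified equal stage weights follows (the weights and alpha are illustrative).

```r
## Minimal sketch (base R): inverse normal combination of stage-wise p-values
## from a two-stage seamless design; w1, w2 must be pre-specified.
inv_normal_combination <- function(p1, p2, w1 = sqrt(0.5), w2 = sqrt(0.5),
                                   alpha = 0.025) {
  z <- (w1 * qnorm(1 - p1) + w2 * qnorm(1 - p2)) / sqrt(w1^2 + w2^2)
  list(z = z, reject = z > qnorm(1 - alpha))
}
# Example: a modest stage 1 result combined with a strong stage 2 result.
inv_normal_combination(p1 = 0.10, p2 = 0.01)
```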
ADVENT: An Adaptive Phase 3 Trial Resulting in FDA Approval of Crofelemer

Adaptive Design Studies: Operational and Regulatory Challenges

A seamless Phase IIB/III adaptive outcome trial: design rationale and implementation challenges

Discussant(s): Sue-Jane Wang, FDA
Parallel Session: Considerations after stopping a trial early for overwhelming efficacy based on the primary outcome
09/23/14

Organizer(s): Joshua Chen, Merck; Shanti Gomatam, FDA; Jeffrey Joseph, Theorem Clinical Research; Yun Wang, FDA/CDER

Chair(s): Jeffrey Joseph, Theorem Clinical Research
The decision to stop a trial early for overwhelming efficacy observed on the primary outcome entails a multi-disciplinary discussion among the IDMC members, designated sponsor personnel, and the relevant regulatory agencies. In a sequentially designed trial, the stopping boundary is usually specified for the primary outcome. Because of the ethical and public health implications, the criterion for stopping early is typically stringent (i.e., it requires very strong evidence of a treatment effect, indicated by a small p-value). Nonetheless, even when the prespecified boundary is crossed, whether actually to stop early poses challenges to the decision-makers, because the implications for the drug are not limited to the primary outcome. When a trial is stopped early, statistical issues revolve around the relevance of the secondary endpoints, since they are often important but underpowered even for the final analysis. Testing of secondary outcomes at the full alpha level generally follows the stopping of the trial. However, when a trial is conducted in a group sequential setting, this traditional testing strategy may not control the overall Type I error rate in the strong sense [Hung et al. 2007, Glimm et al. 2010, Tamhane et al. 2010]. Some other related topics will also be discussed in the session, e.g., the sufficiency of relatively short safety/efficacy follow-up based on the interim results, the necessity and appropriateness of longer-term follow-up after the interim analysis, and estimation of the treatment effect after early termination.
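A minimal simulation sketch of the secondary-endpoint issue discussed in the references above: with two looks, an O'Brien-Fleming-type primary boundary, a large primary effect, and a correlated secondary endpoint whose own null is true, testing the secondary at the full alpha after the primary succeeds can exceed the nominal 0.025 level. The effect size and correlation here are hypothetical, chosen to make the inflation visible.

```r
## Minimal sketch (base R, simulated data): type I error for a secondary
## endpoint tested at full alpha after a group sequential primary success.
set.seed(7)
nsim <- 1e5; rho <- 0.9
c1 <- 2.797; c2 <- 1.977   # two-look O'Brien-Fleming bounds, one-sided 0.025
drift <- 3                 # hypothetical primary effect (expected final z)

e1 <- rnorm(nsim); e2 <- rnorm(nsim)             # primary stage increments
s1 <- rho * e1 + sqrt(1 - rho^2) * rnorm(nsim)   # secondary increments,
s2 <- rho * e2 + sqrt(1 - rho^2) * rnorm(nsim)   # null true for secondary

zp1 <- e1 + drift * sqrt(0.5); zp2 <- (e1 + e2) / sqrt(2) + drift
zs1 <- s1;                     zs2 <- (s1 + s2) / sqrt(2)

stop1   <- zp1 > c1                              # primary wins at look 1
rej_sec <- ifelse(stop1, zs1 > qnorm(0.975),
                         zp2 > c2 & zs2 > qnorm(0.975))
mean(rej_sec)   # estimated secondary type I error; exceeds 0.025 here
```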
Statistical Considerations on Secondary Endpoints after Trial is Stopped at an Interim Analysis

Testing Secondary Endpoints in Group Sequential Trials

Revenge of the alpha-Police (or close but no cigar)
Parallel Session: Innovative Designs for Cardiovascular Outcome Safety Trials in Type 2 Diabetes
09/23/14

Organizer(s): Aloka Chakravarty, FDA; Janelle Charles, FDA, CDER/OTS/OB/DB7; Brenda Gaydos, Eli Lilly and Company

Chair(s): Mary-Jane Geiger, Regeneron Pharmaceuticals Inc
To ensure that a new therapy does not increase cardiovascular risk to an unacceptable extent, the FDA issued a Guidance to Industry in 2008 subtitled "Diabetes Mellitus -- Evaluating Cardiovascular Safety in New Antidiabetic Therapies to Treat Type 2 Diabetes". Under this guidance, sponsors are required to rule out an excess amount of cardiovascular risk: an 80% relative increase in risk pre-marketing and a 30% relative increase in risk post-marketing, in terms of time to major adverse cardiac event (MACE). A number of strategies have emerged for designing studies that comply with the FDA guidance. These include meta-analysis of the CV events obtained from ongoing phase 2 and 3 trials, possibly combined with events from a separate CV outcome trial (CVOT); events gathered exclusively from a CVOT; and events from two CVOTs in series. Due to relatively low event rates, CVOTs typically require very large sample sizes (on the order of 5000 patients) and long study durations (at least 5 years). Efficiency gains are, however, possible through the implementation of group sequential boundaries for early stopping. Additionally, adaptive methodologies can be incorporated into the design of a CVOT. For example, if an interim analysis shows promise, the targeted number of CV events may be increased, and the objective of the trial may be switched from ruling out a 30% relative increase in CV risk to actually demonstrating CV benefit. This approach reduces the risk associated with making an up-front commitment to a large (10,000+ patient) superiority trial, given that no CVOT has demonstrated CV benefit to date. The Cardiac Safety Research Consortium (CSRC) (www.cardiac-safety.org), a public-private partnership developed to advance scientific knowledge on cardiac safety based on the principles of the FDA's Critical Path Initiative, has prepared a white paper discussing the above approaches and evaluating the merits and drawbacks of each. This session presents key concepts from the white paper through three presentations and a discussion.
Presentation 1: Development Approaches and Statistical Considerations to Assess the CV Risk of New Therapies for Type 2 Diabetes. Mary Jane Geiger, MD, PhD, Regeneron Pharmaceuticals, Inc.
Presentation 2: Meta-analysis Approach to Establish Cardiovascular Safety. Stefan Hantel, Ph.D., Boehringer Ingelheim Pharma, Germany.
Presentation 3: Adaptive Designs to Demonstrate Risk Reduction in CV Outcome Trials: Case Study of EXAMINE. Cyrus Mehta, Ph.D., Cytel Inc. (member of the EXAMINE steering committee).
Discussant: Mat Soukup, Ph.D., Team Leader, Division of Biometrics VII, FDA.
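For a sense of scale, the sketch below computes the approximate number of MACE events needed to rule out the 1.8 and 1.3 hazard-ratio margins (the 80% and 30% excess-risk bounds) with 90% power, using the Schoenfeld approximation under 1:1 randomization and a true hazard ratio of 1. These are rough planning numbers, not requirements stated in the guidance.

```r
## Minimal sketch (base R): events needed so the upper one-sided 97.5% CI
## for the hazard ratio excludes the margin (Schoenfeld approximation).
events_needed <- function(margin, power = 0.9, alpha = 0.025, true_hr = 1) {
  4 * (qnorm(1 - alpha) + qnorm(power))^2 / (log(margin) - log(true_hr))^2
}
ceiling(events_needed(1.8))   # pre-marketing margin: ~122 events
ceiling(events_needed(1.3))   # post-marketing margin: ~611 events
```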
Meta-analysis Approach to Establish Cardiovascular Safety

Adaptive Designs to Demonstrate Risk Reduction in CV Outcome Trials

Development Approaches and Statistical Considerations to Assess the CV Risk of New Therapies for Type 2 Diabetes

Discussant(s): Mat Soukup, FDA/CDER
Parallel Session: Design and Analysis Challenges for Cancer Clinical Trials with Non-Proportional Hazards
09/23/14

Organizer(s): Chia-Wen Ko, FDA; Kyung Lee, DBV/OB/LRS/CDER/FDA; Sharon Eileen McDermott, PPD, Inc; Steven Sun, Janssen Research & Development

Chair(s): Sudhakar Rao, Janssen Research & Development
Statisticians often face unique challenges arising from cancer clinical trials. Designs based on the classical proportional hazards assumption may not be adequate or efficient for certain cancer types or for some types of new drugs. Interpreting the hazard ratio as a measure of treatment benefit can be very difficult when the proportional hazards assumption is no longer valid. Disease assessment frequency and clinical cutoff selection can have a huge impact on characterizing the treatment benefit. Many of these issues need to be thoughtfully considered during the design stage to avoid costly mistakes. In this session, the invited speakers will provide different views from academic, industrial and regulatory perspectives. Real clinical trial data will be used to illustrate the issues and solutions. We believe the session will be useful and inspiring to oncology biostatisticians who seek a better understanding of the practical challenges in the field.
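One commonly proposed alternative to the hazard ratio under non-proportional hazards is the restricted mean survival time (RMST); a minimal base R sketch on simulated, censored data follows, computing the area under a hand-rolled Kaplan-Meier curve up to a truncation time tau.

```r
## Minimal sketch (base R, simulated data): RMST up to tau, i.e., the area
## under the Kaplan-Meier curve, for one arm; compare arms via the difference.
set.seed(8)
time  <- rexp(200, 0.1); cens <- runif(200, 0, 15)
obs   <- pmin(time, cens); event <- as.numeric(time <= cens)

rmst <- function(obs, event, tau) {
  t_sort <- sort(unique(obs[event == 1 & obs <= tau]))   # event times <= tau
  surv <- cumprod(sapply(t_sort, function(t)             # Kaplan-Meier steps
    1 - sum(obs == t & event == 1) / sum(obs >= t)))
  # integrate the survival step function from 0 to tau
  steps <- diff(c(0, t_sort, tau))
  sum(steps * c(1, surv))
}
rmst(obs, event, tau = 10)   # expected survival time within the first 10 units
```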
Moving beyond the hazard ratio in quantifying the between-group difference in survival analysis

Cure Rate Survival Data: Practical Issues and Recommendations

Issues in analyzing time-to-event endpoints involving non-proportional hazards
Parallel Session: Emerging Statistical and Data Issues in Expedited Review for Breakthrough Therapies
09/23/14

Organizer(s): Xiaoyun (Nicole) Li, Merck & Co.; David L Raunig, ICON Medical Imaging; Vivian Yuan, FDA

Chair(s): C V Damaraju, Janssen Research & Development, LLC
On June 25, 2013, the FDA issued draft guidance on expedited programs for serious conditions, including breakthrough therapy designation, accelerated approval and priority review. Special statistical and data issues are encountered in this series of expedited review programs. For example, most trials in the expedited review programs are single-arm studies with surrogate endpoints. At the same time, due to the much shortened review period, it is more critical for sponsors to provide data and statistical review aids in the most compliant way, to help FDA expedite the review process. In this session, we invite speakers from FDA to share perspectives on statistical considerations in expedited review of breakthrough therapies and on experience and issues encountered during BLA/NDA review for breakthrough therapies. We will also invite speakers from industry to share the statistical and logistical challenges and opportunities on this topic.
The first oncology drug receiving three breakthrough designations – The IMBRUVICA story

Review of Breakthrough Therapy Designations

Cost-effective design strategies for development of breakthrough personalized medicines

Discussant(s): Xiaoming Li, Gilead Sciences, Inc.
Parallel Session: Evolution of the Role of Safety in Regulatory Science and Current Thoughts and Developments in Safety Analyses
09/23/14

Organizer(s): Brent Burger, AstraZeneca; Yahui Hsueh, FDA/CDER; Caiyan Li, Takeda Pharmaceuticals; Melvin Munsaka, Takeda Pharmaceuticals; Rongmei Zhang, FDA/CDER

Chair(s): Melvin Munsaka, Takeda Pharmaceuticals
A comprehensive characterization of the safety profile of a drug has become an increasingly important consideration in drug development. Drug recalls and safety warnings in drug labels have become more common with the increased scrutiny of safety data, both pre- and post-approval. Additionally, questions have been raised in both older and recent literature regarding completeness and inadequacies in the analysis and reporting of safety data. It is widely acknowledged that there is room for improvement in the analysis and reporting of safety data from clinical trials, and that safety data need to be given rigorous treatment similar to that given to efficacy data. This evolution in safety data analysis and reporting needs has resulted in a shift in roles within regulatory science and sponsor companies, with both parties needing to allocate more resources to examining safety data in a systematic way in order to provide a comprehensive assessment of the safety profile of a drug. The objective of this session is to discuss the evolution of the role of safety data in drug development and its impact on regulatory science from both regulatory and industry perspectives. Important landmark drug safety events will be highlighted, along with methodological and regulatory developments within safety analysis and current thinking.
New Developments in Premarketing Signal Detection and Evaluation

Developments in Quantitative Drug Safety Evaluation, Perspectives from a Regulator

The Role of Quantitative Science in Medicine Safety and Pharmacovigilance: A focus on Post-authorization challenges
Parallel Session: Big Data - Statistical Analysis and its Computation Environment
09/23/14

Organizer(s): Steven Bai, FDA; Vijay Chauhan, Alpha Stats Inc; George Chu, FDA

Chair(s): Grace Liu, Janssen Research & Development
The opportunities for creating value through analytics are growing, in large part due to technological innovation. In clinical trials, from a regulatory perspective, information drawn from integrating existing data across many studies can help in making reasonable decisions that consider both benefit and risk. But the time required to perform statistical analyses on such large datasets is a persistent concern. Web-based sharing and cloud computing technology now provide the opportunity to bring large datasets into statistical analysis. For example, in a time-dependent Cox model analysis, the martingale calculation associated with the counting process is time-consuming, even for a single study of reasonable size. For meta-analyses involving multiple studies, standard computing capacity is insufficient. New cloud computing technology makes this calculation possible for statistical analyses incorporating large databases. The objective of this session is to discuss meta-analysis that handles large amounts of data ("Big Data") and how this can be done through web-based sharing and cloud computing. A real case example of utilizing cloud computing will be presented. Speaker 1: Nanxiang Ge, Senior Director, Biostatistics, Daiichi Sankyo, Inc.; Speaker 2: Sudhakar Rao, Senior Director, Janssen Research & Development; Speaker 3: Fei Chen, Associate Director, Janssen Research & Development.
Understanding Big Data

Big Data – Applying Genomics and Metagenomics to Food Safety

Big Data Simulation for Better Clinical Trial Design

Discussant(s): Ron Wasserstein, American Statistical Association
Parallel Session: Modern Approaches for Rare Disease and Pediatric Drug Trials
09/23/14

Organizer(s): Steve Bird, Merck; Brad P Carlin, University of Minnesota; Gene Pennello, FDA; Laura Thompson, FDA/CDRH

Chair(s): Gene Pennello, FDA
Conventional statistical approaches to evaluating drug efficacy and safety require relatively large numbers of subjects, with each drug requiring a sequence of costly clinical trials. Attaining adequate sample size for studies involving rare disorders or pediatric diseases is often impossible, blocking the development of new therapies for particularly vulnerable populations. Using Bayesian methods to adaptively change an ongoing study or leverage historical information would facilitate learning about the utility of experimental therapies, reducing trial sample size while vastly improving efficiency. However, regulators have historically been reluctant to permit the use of Bayesian methods in late-phase confirmatory trials, largely due to worries that the uncritical use of information external to the trial will inflate the trial's Type I error rate, the control of which is a fundamental tenet of regulatory science. This session will enlist three speakers to discuss both Bayesian and non-Bayesian approaches in this challenging area, with an eye toward methods and software useful in orphan drug development. Speakers from industry (such as Huyuan Yang, Takeda Pharmaceuticals), government (such as Lisa LaVange, FDA) and academia (such as the organizer or his coauthors) will be invited to participate.
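A minimal sketch of one borrowing approach, a power prior with conjugate beta-binomial updating on hypothetical counts: the discount parameter a0 controls how much historical information is leveraged, which is exactly the lever regulators scrutinize for its Type I error implications.

```r
## Minimal sketch (base R, hypothetical numbers): power-prior borrowing of a
## historical response rate. a0 = 0: no borrowing; a0 = 1: full pooling.
posterior_with_borrowing <- function(y, n, y_hist, n_hist, a0,
                                     prior_a = 1, prior_b = 1) {
  a <- prior_a + a0 * y_hist + y
  b <- prior_b + a0 * (n_hist - y_hist) + (n - y)
  c(post_mean      = a / (a + b),
    prob_gt_30pct  = 1 - pbeta(0.30, a, b))  # e.g., P(response rate > 30%)
}
# Current trial: 8/20 responders; historical study: 30/100 responders.
rbind(no_borrow   = posterior_with_borrowing(8, 20, 30, 100, a0 = 0),
      half_borrow = posterior_with_borrowing(8, 20, 30, 100, a0 = 0.5),
      full_borrow = posterior_with_borrowing(8, 20, 30, 100, a0 = 1))
```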
Modern Bayesian Adaptive Methods for Clinical Trials, with Application to Orphan Diseases

Considerations of Practical Study Designs for Rare Disease Drug Development

Discussant(s): Lisa LaVange, FDA/CDER
Parallel Session: Subgroup Identification and Analysis in Clinical Trials
09/23/14

Organizer(s): Chul H Ahn, FDA-CDRH; Alex Dmitrienko, Quintiles; Volha Tryputsen, Janssen R&D; Lilly Yue, FDA/CDRH

Chair(s): Alex Dmitrienko, Quintiles
Analysis of clinical trials with multiple subpopulations defined by a variety of markers (demographic, clinical and genetic variables) has attracted much attention in the clinical trial community. The U.S. and European regulatory agencies are working on guidance documents on subgroup analysis in clinical trials. This session will serve as a forum for discussing key statistical topics in subgroup analysis. The topics will include: the presentation of new methodologies for exploratory subgroup analysis (subgroup identification, signal detection in large databases, benefit-risk analysis); and current common practices in industry for conducting subgroup analyses, based on the survey developed by the Subgroup Analysis Working Group sponsored by QSPI (Quantitative Sciences in the Pharmaceutical Industry) operating under the auspices of the Society for Clinical Trials. A discussion of regulatory considerations on subgroup analysis in clinical trials, with a perspective on the challenges regulators and industry share, will conclude the session. The confirmed speakers (Ilya Lipkovich, Quintiles; Cristiana Mayer, Johnson & Johnson; Sue-Jane Wang, FDA) will engage the audience in the different facets of subgroup analysis in clinical trials.
Common Practices for Subgroup Identification and Analysis: The Results of a Survey Conducted Within the Pharmaceutical Industry

Exploratory subgroup analysis: Subgroup identification approaches in clinical trials

Some New Approaches to Design and Analysis of Subgroups in Randomized Controlled Trials
Parallel Session: Benefit-Risk considerations for drug and therapeutic device/diagnostic combinations
09/23/14

Organizer(s): Gerry W Gray, FDA, CDRH; Telba Irony, FDA/CDRH; Camille Orman, JNJ; Richard Zink, JMP

Chair(s): Gerry W Gray, FDA, CDRH
In addition to the usual issues of uncertainty, patient preferences, availability of treatments for the disease (or lack thereof), and so on, some unique considerations arise when evaluating the risks and benefits of a drug or therapeutic device used in combination with a companion diagnostic device. The overall risk/benefit of the therapeutic/diagnostic combination can depend on:
• The intended population of the therapeutic/diagnostic combination.
• The nature of the companion diagnostic test (predictive vs. selective, continuous vs. discrete).
• Cutpoints or algorithms used in the diagnostic to define positive and negative categories (a simple cutpoint-selection sketch follows below).
• Whether the patient subgroup defined by the companion diagnostic shows improved safety or increased effectiveness or both.
• The differing risks and benefits of the therapy in the diagnostic-positive vs. diagnostic-negative groups.
• Potential interactions between the performance characteristics of the companion diagnostic and the risk/benefit profile of the drug or therapeutic device.
In this session we will explore these issues in detail and provide specific examples of how risk/benefit determinations can be formulated for therapeutic/diagnostic combinations. Potential speakers: Rebecca Noel, Eli Lilly; Norberto Pantoja-Galicia, FDA, CDRH; Stuart Walker, CIRS; an FDA CDER representative.
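A minimal sketch of the cutpoint question on simulated data: choosing a threshold by maximizing the Youden index. The marker distribution and the accuracy-only criterion are illustrative; as the session emphasizes, a real companion-diagnostic cutpoint should weigh the benefit-risk profile in the resulting test-positive and test-negative groups, not accuracy alone.

```r
## Minimal sketch (base R, simulated data): cutpoint by the Youden index
## (sensitivity + specificity - 1) for a continuous companion-diagnostic marker.
set.seed(9)
marker    <- c(rnorm(100, 1), rnorm(200, 0))   # responders tend to score higher
responder <- rep(c(1, 0), c(100, 200))

youden <- function(cut) {
  sens <- mean(marker[responder == 1] > cut)
  spec <- mean(marker[responder == 0] <= cut)
  sens + spec - 1
}
cuts <- seq(min(marker), max(marker), length.out = 200)
cuts[which.max(sapply(cuts, youden))]  # candidate positive/negative cutpoint
```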
Assessment of the Benefit Risk tradeoff for diagnostic devices

FDA's Benefit-Risk Framework for Human Drug Review

Use of a Structured Framework for Evaluating Benefits and Risks of a Drug-Companion Diagnostic Combination
Parallel Session: Adaptive design trials: Can best practices get us over “the hump”?
09/23/14

Organizer(s): Yeh-Fong Chen, Food and Drug Administration; Weili He, Merck & Co., Inc.; Eva Miller, Senior Director, Biostatistics, inVentiv Health Clinical; John Scott, Division of Biostatistics, FDA/CBER/OBE

Chair(s): Weili He, Merck & Co., Inc.; John Scott, Division of Biostatistics, FDA/CBER/OBE

Panelist(s): Greg Campbell, FDA CDRH; Paul Gallo, Novartis; Kenneth Getz, Tufts CSDD; David Michael Moriarty, Janssen Research & Development; Marc Walton, FDA CDER
Why have there been relatively few major successful adaptive design trials, even though adaptive trial design is thought to save costs and time and ultimately get drugs to patients sooner? According to a survey conducted by Tufts CSDD, the adoption of adaptive designs in clinical development has been only approximately 20% in recent years. The chief reason may be the complexity of adaptive design trials compared to traditional trials and the lack of consensus best practices for planning and documenting these trials. Barriers, some perceived and some real, to the use of clinical trials with adaptive features still exist. These include, but are not limited to, concerns about the integrity of study design and conduct, uncertainty about regulatory acceptance, the need for an advanced infrastructure for complex randomization and clinical supply scenarios, change management for process and behavior modifications, extensive resource requirements for the planning and design of adaptive trials, and the potential to relegate key decision-making to outside entities. Invited speakers and panelists from FDA, industry, and academia will share their views and solutions on some key issues in the implementation of adaptive design trials.
ADSWG Best Practice Sub Team: Objectives to support increased use of Adaptive Trial Designs

Getting the most from adaptive design: Pruning the Thorns
Parallel Session: Statistical Methods and Roles in an Evolving Field of Risk Management in Clinical Research Promoting Innovation and Cost Savings
09/23/14

Organizer(s): Ruth Grillo, Theorem Clinical Research; Rakhi Kilaru, PPD; Changhong Song, FDA; Ying Yang, FDA

Chair(s): Theodore Lystig, Medtronic, Inc.
The 2013 FDA guidance on risk-based monitoring (RBM) and the Clinical Trials Transformation Initiative's (CTTI) proposed recommendations on centralized monitoring provide an opportunity for a more holistic and proactive approach through off-site and central monitoring and a targeted approach to on-site monitoring. A similar RBM guidance was released by the EMA in May 2013. Companies and organizations such as TransCelerate BioPharma Inc., the International Drug Development Institute (IDDI) and ECRIN (European Clinical Research Infrastructures Network) continue to develop and build on a methodology that shifts monitoring processes from an excessive concentration on source data verification to comprehensive risk-driven monitoring to further support data quality. Expert panel recommendations and results from large cardiovascular trial simulation experiments published by Eisenstein et al. in 2005 and 2008 suggest that it is possible to significantly reduce the costs of clinical trials without adversely impacting their scientific objectives. The statistician is now an even more integral player in developing risk-based monitoring strategies and plans for clinical trials. These strategies and plans must take into account the reporting of unusual patterns in the data, such as variation of selected key variables within a subject and between trial centers, missing critical data, protocol deviations/violations, adverse events, and inclusion/exclusion criteria. The statistician employs the varying methods at his or her disposal to detect abnormal trends and unusual patterns in the data by comparing centers in terms of individual variables (univariate approach) or combinations of variables (multivariate approach); a minimal univariate sketch follows below. Recent publications in centralized statistical monitoring have also suggested the use of risk models to further advance risk management through statistical approaches. The objective of this session is to provide multiple vantage points on approaches to risk-based monitoring. The session will emphasize the use of statistical methods in risk management and monitoring in order to reduce the impact of rising clinical trial costs, encourage innovation and improve trial conduct.
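A minimal sketch of the univariate central-monitoring idea on simulated data: standardize each center's mean of a key variable against the overall mean and flag extreme centers. The threshold and data are hypothetical; production systems combine many such tests with multivariate and model-based checks.

```r
## Minimal sketch (base R, simulated data): flag centers whose mean of a key
## variable deviates unusually from the overall mean.
set.seed(10)
center <- rep(1:20, each = 30)
x <- rnorm(600)
x[center == 7] <- x[center == 7] + 1.5   # one aberrant center, for illustration

center_z <- sapply(split(x, center), function(v)
  (mean(v) - mean(x)) / (sd(v) / sqrt(length(v))))  # standardized center mean
which(abs(center_z) > 3)   # centers flagged for targeted on-site review
```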
A statistical approach to central monitoring of clinical trial data

Risk-Based Monitoring: An FDA Statistical Reviewer's Perspective

Central Statistical Monitoring: Modelling Foibles to Fraud
Advancing Oncology Drug Development using Innovative Designs and Methods
09/23/14

Organizer(s): Rong (Rachel) Chu, Agensys, Inc; Kun He, FDA; Vivian Yuan, FDA

Chair(s): Rong (Rachel) Chu, Agensys, Inc
Oncology drug development is a costly and lengthy process. The high failure rate of cancer clinical trials [1] warrants more robust clinical development strategies to improve the drug development decision-making process. A paradigm shift from traditional cytotoxic agents to molecularly targeted therapies has occurred. In recent years, a rich body of statistical literature has focused on improving the efficiency and flexibility of oncology trials while maintaining the scientific validity and ethical standards of studies [2]. Despite the superior statistical properties of these methods, many investigators are reluctant to employ recent innovations in oncology drug development. This session will focus on a few methodological innovations in oncology drug development, in particular reviewing their current applications and discussing their challenges and practicality in the real world. A case study of determining a target expression cut-point based on phase 2 pancreatic cancer trial data will also be presented.
References.
1. Sutter S, Lamotta L. Cancer drugs have worst phase III track record. Internal Medicine News. http://www.internalmedicinenews.com/specialty-focus/oncology-hematology/single-article-page/cancerdrugs-have-worst-phase-iii-track-record.html. Published February 16, 2011.
2. Ivanova, A., Rosner, G. L., Marchenko, O., Parke, T., Perevozskaya, I., & Wang, Y. (2013). Advances in Statistical Approaches to Oncology Drug Development. Therapeutic Innovation & Regulatory Science, 2168479013501309.
Recent developments in oncology Phase 1 and Phase 2 studies

A Case Study for Determining a Target Expression Cut-Point

Improving Oncology Clinical Programs by Use of Innovative Designs

Discussant(s): Somesh Chattopadhyay, FDA
Parallel Session: Bayesian methods in drug development: an era of synthesizing evidence
09/23/14

Organizer(s): Freda Cooner, FDA/CDER; Meg Gamalo, Office of Biostatistics, CDER/FDA; Satrajit Roychoudhury, Novartis Pharmaceutical; Xia Xu, Merck

Chair(s): Satrajit Roychoudhury, Novartis Pharmaceutical
In recent years, Bayesian design and analysis have generated extensive discussion in the clinical trial literature. When developing new drugs or medical devices, where data are often scanty, access to large historical global databases can increase power and decrease sample size. Bayesian methods have emerged as particularly helpful in combining disparate sources of information while maintaining reasonable traditional frequentist characteristics. Bayesian methods have been very useful in several areas of clinical trials, e.g., the use of historical data, non-inferiority trials, safety analysis, and safety signal detection. The Bayesian framework often provides a natural way to respond to questions raised in different phases of a clinical trial. It helps to borrow information strategically under different kinds of heterogeneity (e.g., different disease subgroups, different regions). This is particularly useful in planning and executing successful global clinical trials. Bayesian statistical methods make it possible to combine data and mechanistic knowledge from previous studies with data collected in a current trial (a minimal borrowing sketch follows below). The combined information may provide sufficient justification for smaller or shorter clinical studies without sacrificing the goal of evidence-based medicine. Until recently, wide-scale use of Bayesian methods was infeasible because of the intractable mathematics. However, modern computing power and algorithms now make it possible to take advantage of Bayesian continuous knowledge building. A recent FDA guidance (Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials, February 5, 2010) also reflects Bayesian methods as a scientifically rigorous and safe experimental approach to clinical trials, plus a statistically sound way of incorporating prior knowledge to make better decisions. This session will focus on different areas of application of Bayesian statistics in clinical trials, e.g., use of historical data, subgroup analysis, benefit-risk assessment, and non-inferiority. Speakers: Dr. Beat Neuenschwander (Novartis), Dr. Kert Viele (Berry Consultants), Dr. Ram Tiwari (FDA). Discussant: Dr. Sujit Ghosh (NCSU & NSF).
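A minimal sketch of robust borrowing from historical data, using a two-component mixture prior (informative plus vague) for a binomial response rate; everything stays in closed form, and the numbers are hypothetical. The posterior weight on the informative component shows how borrowing automatically attenuates under prior-data conflict.

```r
## Minimal sketch (base R, hypothetical numbers): robust mixture prior
## w * Beta(inf) + (1 - w) * Beta(vague), updated with y successes in n.
post_mixture <- function(y, n, w = 0.8, inf = c(30, 70), vague = c(1, 1)) {
  marg <- function(ab)                 # beta-binomial marginal likelihood
    exp(lchoose(n, y) + lbeta(ab[1] + y, ab[2] + n - y) - lbeta(ab[1], ab[2]))
  w_post <- w * marg(inf) / (w * marg(inf) + (1 - w) * marg(vague))
  list(weight_informative = w_post,    # how much borrowing survived
       post_mean = w_post * (inf[1] + y) / (sum(inf) + n) +
                   (1 - w_post) * (vague[1] + y) / (sum(vague) + n))
}
post_mixture(y = 10, n = 40)   # data roughly consistent with the prior
post_mixture(y = 30, n = 40)   # conflict: weight shifts to the vague component
```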
Bayesian Approach for Benefit-Risk Analysis

Tailored Multi-Strata Designs for Early Phase Clinical Trials

Properties of Historical Borrowing in Clinical Trials

Discussion: Bayesian Methods in Drug Development: An Era of Synthesizing Evidence

Discussant(s): Sujit Ghosh, NCSU & NSF
Parallel Session: Big Data: Challenges and Opportunities in Drug Development and Patient Care
09/23/14

Organizer(s): C V Damaraju, Janssen Research & Development, LLC; Jane Fridlyand, Genentech; Qin Li, FDA/CDRH; Elena Rantou, FDA/CDER

Chair(s): Jane Fridlyand, Genentech
The rapidly growing volume of real-world healthcare data (Big Data) poses a number of challenges to statisticians, health scientists, and policy makers today. Establishing appropriate data structures and developing robust analytical tools are essential when learning from real-world data. The emerging body of real-world data includes FDA’s JANUS and SENTINEL data surveillance projects aimed at detecting safety signals, claims databases from health insurance companies, and clinical trial and registry data from academia and major pharmaceutical companies. Recently, an ambitious effort called ‘CancerLinQ’ was unveiled at ASCO 2013, aimed at building a learning health system to transform cancer care by collecting data on the care of hundreds of thousands of cancer patients and using it to help guide treatment of other patients across the health-care system. Statisticians are increasingly tasked with formulating valid requirements for the structure and analytics of big data in data mining efforts. In this session, we will explore some of the challenges and opportunities presented by big data in structure, analysis, and integration, in the context of drug development, approval, and post-approval long-term risk/benefit surveillance. Invited speakers representing academia, industry, and the FDA will share their views on big data challenges in drug development and patient care and on statistical methods for dealing with real-world evidence. The first talk will discuss approaches to designing observational studies so as to minimize possible interpretation pitfalls, by prospectively specifying the questions and analysis methods at the study design stage. The second talk will focus on post-marketing drug safety monitoring in the age of big data. The final talk will present statistical methods for consideration in the analysis of observational data from a causal inference perspective. Time permitting, the talks will be followed by a panel discussion by the speakers to address audience questions, potentially including discussion of minimal evidentiary standards for real-world data and of regulatory and academic views on the future of evidence-driven treatment pathways. |
||
Design and analysis approaches with focus on long-term observational studies in oncology
|
||
Post-market Drug Safety Monitoring in the Age of Big Data
|
||
Methods consideration in the analysis of observational data: a causal inference perspective
|
||
Town Hall Session: Diagnostics Town Hall Meeting |
09/23/14 |
|
Organizer(s): Hope Knuckles, Abbott; Estelle Russek-Cohen, US FDA CBER |
||
Chair(s): Shanti Gomatam, FDA |
||
This session will comprise a panel of both FDA and industry members with diagnostics experience. Companion diagnostics, IVD products, biologics, imaging, and some other diagnostic products will be discussed. The audience will get to ask questions in an open-mic forum, and the panelists will give very short presentations to get the discussion going. |
||
Wed, Sep 24 |
||
Parallel Session: Within-trial and between-trial predictive inference in drug development |
09/24/14 |
|
Organizer(s): Annie Lin, FDA/CBER; Feng Liu, GlaxoSmithKline |
||
Chair(s): Wei Zhao, MedImmune LLC; Boguang Zhen, FDA/CBER |
||
Predictive inference has gained much attention as pharmaceutical researchers are increasingly interested in inference about observables rather than parameters (Geisser, 1993). In other words, prediction helps drug developers make better decisions at the study design stage. Predictive inference emphasizes the prediction of future observations based on past data; it can be used to make predictive statements about unobserved data, incorporating what has already been learned, and to quantify the probability of achieving various outcomes. In drug development it is of great interest, given all the data observed so far, to determine whether to continue, stop, or modify the clinical development, whether to manufacture more drug for clinical research, etc. There are at least two types of prediction: within-trial prediction and between-trial prediction. Within-trial prediction, such as in a seamless phase II/III adaptive design (e.g., Korn et al., 2012) or a Bayesian two-stage design (e.g., Dong et al., 2012), makes probability statements about future data at an interim analysis based on the data accrued while the trial is ongoing. Between-trial prediction incorporates trial-to-trial variability and quantitatively summarizes the development program to provide a predictive probability of success in a later phase using early clinical trial data. The development of predictive inference methodology will not only help drug developers make go/no-go decisions but also facilitate an efficient regulatory review process. This session will focus on building and evaluating predictive inference methodologies and their applications in clinical trial design. We intend to address the challenging issues from regulatory, industry, and academic perspectives. Potential speakers are Dr. Susan Halabi at Duke University, Dr. Telba Irony at the FDA, and Dr. Dave Burt at GSK; one discussant is Dr. Estelle Russek-Cohen at the FDA. References: Geisser, Seymour (1993). Predictive Inference: An Introduction. CRC Press. Korn EL, Freidlin B, Abrams JS, Halabi S. Design issues in randomized phase II/III trials. J Clin Oncol. 2012 Feb 20;30(6):667-71. Dong G, Shih WJ, Moore D, Quan H, Marcella S. A Bayesian-frequentist two-stage single-arm phase II clinical trial design. Stat Med. 2012 Aug 30;31(19):2055-67. |
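As a small illustration of within-trial prediction, here is a minimal sketch in base R, with hypothetical numbers, of the Bayesian predictive probability that a single-arm binomial trial will end in success, evaluated at an interim look. The beta-binomial predictive distribution for the remaining patients follows from a conjugate beta prior.

```r
# Predictive probability of success at an interim analysis for a
# single-arm trial with a binary endpoint (hypothetical numbers).
a <- 1; b <- 1     # Beta(1,1) prior on the response rate
x <- 12; n <- 30   # interim: 12 responders among 30 patients
m <- 30            # patients still to be enrolled
s <- 25            # success criterion: >= 25 responders among all 60

# Beta-binomial pmf for the number of future responders y in 0..m,
# given the Beta(a + x, b + n - x) posterior at the interim
bb <- function(y) choose(m, y) * beta(a + x + y, b + n - x + m - y) /
                  beta(a + x, b + n - x)
y  <- 0:m
pp <- sum(bb(y)[x + y >= s])   # predictive probability of trial success
pp
```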
||
Using Historic and/or Current Trial Results to Predict the Success of a Clinical Program
|
||
Making predictive inference using Bayesian methods – two case studies
|
||
Discussant(s): Estelle Russek-Cohen, US FDA CBER |
||
Parallel Session: Statistical Evaluation of Cardiovascular Biomarker Tests |
09/24/14 |
|
Organizer(s): Bipasa Biswas, FDA/CDRH; Kristen Meier, Statistical Consultant; Vicki Petrides, Abbott Diagnostics; Yuqing Tang, FDA |
||
Chair(s): Bipasa Biswas, FDA/CDRH |
||
Biomarkers that assess cardiovascular-related conditions or function encompass a wide spectrum of biochemical and physiological measurements, from protein biomarkers to anatomical images or video of physiological processes. Medical tests that measure these biomarkers are typically evaluated for measurement validation and clinical performance in the context of their intended use. While the types of tests and how they are used clinically are diverse, there are many similarities in statistically evaluating these diverse tests. This session will provide a survey of statistical considerations for evaluating cardiovascular biomarker tests in general, highlighting the different clinical uses. It will also explore in greater detail statistical issues associated with reference ranges and risk prediction for cardiac biomarkers. Please join us in discussing these issues as we hear the perspectives of FDA, industry, and academia. |
||
An Overview of Statistical Considerations for Cardiovascular Biomarker Tests
|
||
A comparison of methods for determining reference values for quantitative assays
|
||
Combined measures of diagnostic model performance - misleading or helpful?
|
||
Parallel Session: Dynamic randomization applied in clinical trials: controversies, current practice, and future perspectives |
09/24/14 |
|
Organizer(s): Chunrong Cheng, CBER/FDA; David Li, Pfizer; Grace Liu, Janssen Research & Development |
||
Chair(s): Zhen Jiang, DB/OBE/CBER/FDA |
||
Dynamic randomization, in which patients are randomized to a treatment group by checking the allocation of similar patients already randomized, can help ensure balance on important prognostic factors among treatment groups. Much literature on the methodological development and analysis of dynamic randomization procedures has appeared in recent years. The use of these methods may, however, affect trial inference, credibility, and even the validity of the trial analysis, and has caused substantial debate. A mixture of opinions has been expressed by regulatory agencies such as FDA and EMEA regarding the usefulness and validity of the method. In this session, we aim to provide a general overview of dynamic randomization methodologies, the statistical controversy surrounding these procedures, and potential issues in the interpretation of clinical trials. One speaker will focus on an overview of dynamic randomization applied to clinical trials, including strengths and weaknesses, scenarios where it can and cannot be applied, points to consider in analysis, regulatory standpoints, etc. Another speaker will provide a literature review of the methodologies and introduce an innovative method in detail. The third speaker will present real clinical trial examples in which dynamic randomization was adopted and the lessons learned from those trials. This session will help the audience better understand dynamic randomization. |
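For readers unfamiliar with how dynamic allocation works mechanically, here is a minimal sketch in base R of Pocock-Simon-style minimization: the new patient is assigned, with high probability, to the arm that would minimize total imbalance across prognostic-factor margins. The factors (site, sex) and all data are hypothetical; this is an illustration of the general technique, not any specific speaker's method.

```r
# Minimal sketch of Pocock-Simon minimization with two arms and a
# biased coin (hypothetical prognostic factors: site and sex).
minimize_assign <- function(history, newpt, p_best = 0.8) {
  arms <- c("A", "B")
  imbalance <- sapply(arms, function(arm) {
    trial <- rbind(history, cbind(newpt, arm = arm))
    # sum over factors of |n_A - n_B| among patients matching the new
    # patient's level of that factor
    sum(sapply(c("site", "sex"), function(f) {
      same <- trial[trial[[f]] == newpt[[f]], ]
      abs(sum(same$arm == "A") - sum(same$arm == "B"))
    }))
  })
  best <- arms[which.min(imbalance)]
  if (runif(1) < p_best) best else setdiff(arms, best)  # biased coin
}

# Usage with a hypothetical enrollment history:
history <- data.frame(site = c("1", "1", "2"), sex = c("F", "M", "F"),
                      arm  = c("A", "B", "A"), stringsAsFactors = FALSE)
newpt <- data.frame(site = "1", sex = "F", stringsAsFactors = FALSE)
minimize_assign(history, newpt)
```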
||
On the use of Minimization Design
|
||
Rerandomization in Clinical Trials
|
||
Covariate-adaptive randomization -- a preferred randomization method in clinical trials
|
||
Parallel Session: DMCs Beyond Borders: Statistical Challenges for DMCs in the Era of Globalization |
09/24/14 |
|
Organizer(s): Yoko Adachi, FDA; Joshua Chen, Merck; Yeh-Fong Chen, Food and Drug Administration; Stephine Keeton, FDA/CDER |
||
Chair(s): Yoko Tanaka, Eli Lilly and Company |
||
Panelist(s): Susan Ellenberg, University of Pennsylvania; Steven Snapinn, Amgen; Masahiro Takeuchi, Kitasato University; Bob Temple, FDA |
||
It is now very common to conduct clinical trials for medical products (drugs, devices, biologics) worldwide, including in both developing and developed countries. Consideration needs to be given to multiregional clinical trials, with particular emphasis on how cultural and other ethnic differences between countries may affect safety and efficacy data and study conduct; hence the importance of the monitoring of such data by a DMC (Data Monitoring Committee). This session will discuss guidance related to the role and logistics of DMCs, particularly the constraints on the site and sponsor under a DMC and the decision-making process, while protecting subject safety, study conduct, and the validity of the trial. Recommendations for DMCs to consider in monitoring trials conducted in the developing or developed world will also be discussed. Speakers: Susan Ellenberg, University of Pennsylvania; Steve Snapinn, Amgen. Panelists: Robert Temple, Food and Drug Administration; Masahiro Takeuchi, Kitasato University. Organizers: Yeh-Fong Chen, FDA; Yoko Adachi, FDA; Joshua Chen, Merck; Stephine Keeton, PPD. Chair: Yoko Tanaka, Eli Lilly |
||
Data Monitoring of Global Trials: Some General Issues and a Case Study
|
||
Update from the Multi-Regional Clinical Trials (MRCT) Center at Harvard
|
||
Parallel Session: Sensitivity analysis for clinical trials with missing outcome data |
09/24/14 |
|
Organizer(s): Xiaohui (Ed) Luo, PTC Therapeutics; Camille Orman, JNJ; Ying Yang, FDA; Yu Zhao, FDA |
||
Chair(s): Yun-ling Xu, FDA |
||
It is unavoidable that some outcome data will be missing from confirmatory clinical trials, and ignoring them when planning, conducting, or interpreting the analysis of a confirmatory clinical trial is not an acceptable option. It is possible to reduce the amount of missing data by careful planning, and it is also important to specify a plausible approach to handling missing data in the statistical analysis plan. There is no universal approach that adjusts the analysis to take the missing data into account, and different approaches may lead to different conclusions. Whatever the approach, all rely on unverifiable assumptions when there are missing data. It is therefore necessary to investigate the robustness of study results through a range of sensitivity analyses based on different assumptions. |
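One widely used sensitivity analysis of this kind is the delta-adjustment tipping point. Below is a minimal sketch in base R, on simulated hypothetical data, in which missing outcomes in the treatment arm are imputed under a reference assumption and then shifted by progressively worse offsets delta until the treatment effect is no longer significant. Single imputation is used here only to keep the sketch short; in practice multiple imputation with Rubin's rules would be used.

```r
# Delta-adjustment tipping-point sketch for a continuous endpoint
# (simulated, hypothetical data).
set.seed(2)
n   <- 100
trt <- rep(0:1, each = n / 2)
y   <- 1.5 * trt + rnorm(n)           # true treatment effect of 1.5
y[sample(which(trt == 1), 10)] <- NA  # 10 missing outcomes on treatment

for (delta in seq(0, -3, by = -0.5)) {
  yi <- y
  # impute treated missing values at the treated mean, shifted by delta
  yi[is.na(yi)] <- mean(y[trt == 1], na.rm = TRUE) + delta
  p <- t.test(yi[trt == 1], yi[trt == 0])$p.value
  cat(sprintf("delta = %4.1f  p = %.4f\n", delta, p))
}
# The "tipping point" is the delta at which p first exceeds 0.05; its
# clinical plausibility is then debated.
```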
||
Handling of Missing Data for Composite Endpoints in Clinical Trials
|
||
Sensitivity analysis in medical device clinical trials: Case Studies
|
||
An Intermittent Missing Data Analysis Strategy for Clinical Trials with Death-Truncated Data
|
||
Parallel Session: Novel Cross-pharmaceutical Collaboration Models |
09/24/14 |
|
Organizer(s): Meg Gamalo, Office of Biostatistics, CDER/FDA; Nandini Raghavan, Janssen R&D; Jinping Wang, Eisai; Peng Yu, Eli Lilly |
||
Chair(s): Scott Andersen, Eli Lilly |
||
In this session we present novel models for statisticians across pharma and academia to collaborate pre-competitively on important scientific questions. We explore the rich opportunities and potential challenges that can arise in such collaborations. We use as a case study the Clinical Endpoints Working Group (CEWG) efforts in the ADNI Private Partner Scientific Board (ADNI-PPSB). The ADNI-PPSB is an independent, pre-competitive forum, convened by the Foundation for the National Institutes of Health, that provides the opportunity for scientific exchange among industry partners related to the Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. Therapeutic trials in Alzheimer’s disease are actively pursuing trial designs in patients with earlier stages of the disease. Some of the biggest challenges in designing such trials are identifying the right population and determining the right efficacy endpoint to capture change. Several novel endpoints have recently been proposed independently by various companies. Many of these companies came together in a pre-competitive fashion to share their internal work within the ADNI-PPSB. The CEWG was subsequently formed to move forward collaboratively to validate the various proposed endpoints on a number of company-proprietary datasets. Presentations will discuss both the data-sharing and the statistical collaboration models developed in this working group. The session will consist of two presentations and a discussion by Dr. Stephen Wilson (FDA). |
||
A Collaborative Cross-Pharma Validation of Novel Composite Outcome Measures for Pre-Dementia Alzheimer’s Disease
|
||
Statistical Evaluation of Novel Composite Endpoints in Early Alzheimer’s Disease
|
||
Discussant(s): Stephen E. Wilson, FDA/CDER/OTS/OB/DBIII |
||
Parallel Session: Recent Issues and Statistical Approaches in Vaccine Clinical Trials |
09/24/14 |
|
Organizer(s): Stephine Keeton, FDA/CDER; Deepak B. Khatry, MedImmune; Barbara Krasnicka, FDA/CBER |
||
Chair(s): Zhixue Maggie Wang, UCB Biosciences, Inc; Lihan Yan, FDA/CBER |
||
Evaluation of pre-market vaccine efficacy and safety poses unique challenges in addition to those seen in usual clinical trials. The uniqueness arises from many characteristics of preventive vaccine studies, including: the intended use in a broadly covered population, typically healthy subjects; the small incidence rates associated with the clinical and safety endpoints; and the induced immunological effect, which may mediate the vaccine’s effect on reducing the risk of the targeted disease. Consequently, consideration of more innovative designs, including adaptive designs, cluster randomized trials, large simple safety trials, and/or designs that provide for an evaluation of a possible correlate of protection, may help address some of these unique challenges. This session aims at information sharing and discussion among industry, academic, and regulatory participants on new statistical developments in evaluating vaccine effectiveness and safety via both design and analytical methods. We anticipate three speakers [e.g., Dean Follmann of NIAID/NIH; Larry Moulton of Johns Hopkins University; and someone from industry (e.g., Andrew Dunning or Ivan Chan)], followed by a panel and floor discussion with an additional representative from the Vaccine Evaluation Branch of CBER/FDA. |
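Because endpoint case counts are small, exact methods are common in this setting. Here is a minimal sketch in base R, with hypothetical counts and assuming equal person-time in the two arms, of the classical conditional-binomial analysis of vaccine efficacy: conditional on the total number of cases, the number in the vaccine arm is binomial, and an exact CI for that proportion translates into an exact CI for VE = 1 - relative risk.

```r
# Exact conditional analysis of vaccine efficacy (hypothetical counts,
# assuming equal person-time in the two arms).
x_v <- 8; x_c <- 40                # cases: vaccine arm, control arm
bt  <- binom.test(x_v, x_v + x_c)  # exact CI for theta = x_v / total

theta_ci <- rev(bt$conf.int)       # upper theta maps to lower VE
ve_point <- 1 - (x_v / x_c)        # VE = 1 - relative risk
ve_ci    <- 1 - theta_ci / (1 - theta_ci)
round(c(VE = ve_point, lower = ve_ci[1], upper = ve_ci[2]), 3)
```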
||
Incorporating Founder Virus Information in Vaccine Field Trials
|
||
Topics in the Design of Multivalent Vaccine Studies
|
||
Statistical and Non-Statistical Challenges in a Meta-Analysis of Ad5-vector HIV Vaccine Efficacy Trials
|
||
CMC Session 1: Bridging Studies for Clinical Assays and CMC |
09/24/14 |
|
Organizer(s): Tsai-Lien Lin, FDA; Chuck Miller, Merck; Charles Tan, Pfizer; Harry Yang, MedImmune; Jingjing Ye, FDA |
||
Chair(s): David Christopher, Merck |
||
With rapidly evolving science and technology in the pharmaceutical industry, the need for bridging studies is increasing. Method replacement may be due to new technology or to better process understanding, both of which may require different analytical methods. This session will explore CMC statistical considerations for designing and analyzing such studies. The speakers and the session organizers will engage the audience in Q&A after the presentations. |
||
Introduction / Overview to Bridging Studies
|
||
Bridging Studies for Assays used in Vaccine Clinical Trials
|
||
Bridging Studies for WHO Pn ELISA
|
||
Parallel Session: New Hope: The Development of Disease-modifying Drugs for Neurological Diseases |
09/24/14 |
|
Organizer(s): Min Chen, Celgene; Julia Luan, FDA; Tristan Massie, FDA; Kelvin Shi, Astra Zeneca |
||
Chair(s): Julia Luan, FDA; Tristan Massie, FDA |
||
Panelist(s): Ralph D'Agostino, Boston University; Vladimir Dragalin, Aptiv Solutions; James Hung, FDA; Nick Kozauer, FDA; Richard Kryscio, University of Kentucky |
||
Session Abstract: Since the vast majority of approved neurological drugs to date provide only symptomatic benefit, developing drugs that modify the course of disease has the potential to be extremely rewarding but is also quite challenging. Traditional study designs and analyses do not differentiate between a symptomatic drug effect and a disease-modifying drug effect. It is therefore imperative to resolve the issues in the design, implementation, analysis, and interpretation of disease-modification clinical trials. This session will include two presentations followed by a panel discussion. Speakers and panelists from the agency, industry, and academia will present and discuss novel statistical approaches in the area of disease modification. Speakers: Chengjie Xiong, Washington University (confirmed); Qing Liu, Quantitative & Regulatory Medical Science, LLC (confirmed). Panelists: H.M. James Hung, FDA (confirmed); Ralph D'Agostino, Boston University (confirmed); Nicholas Kozauer, FDA (confirmed); Richard Kryscio, University of Kentucky (confirmed); Vladimir Dragalin, Aptiv Solutions (confirmed). This session includes the following two presentations: Presentation 1: Title: Disease-Modifying Trials on Alzheimer Disease: What They Really Mean and How They Can Be Designed; Speaker: Chengjie Xiong, Washington University (confirmed); Abstract: Despite major scientific advances in understanding the clinical and neuropathological characteristics of Alzheimer disease (AD), its therapeutic intervention has been a major disappointment for more than a decade. Although disease-modifying (DM) therapies have been a popular topic in the AD research community for at least as long, it remains unclear what they really mean and how clinical trials to test DM efficacy can be optimally designed. This talk will discuss some of the most fundamental issues in AD research and their statistical consequences for designing DM trials. It will first review recent developments in AD research, including the clinical-neuropathological correlation, the amyloid hypotheses, the concept of preclinical AD, and the role of imaging and fluid biomarkers. It will then discuss statistical issues in designing clinical trials to test the DM efficacy of a therapy, including the time window for intervention, the efficacy endpoints, optimum design parameters, and statistical power. Presentation 2: Title: Doubly Randomized Matched-Control Design for Disease Modification Drug Development; Speaker: Qing Liu, Quantitative & Regulatory Medical Science, LLC (confirmed); Abstract: Development of disease-modification drugs is extremely challenging for neurodegenerative diseases. Often it is necessary to show clinical evidence of disease modification via a randomized delayed-start design, especially when there are no established biomarkers available to directly measure the biological or physiological progression of the disease. There has been broad regulatory support for this design in Alzheimer’s disease and Parkinson’s disease. Under this design, a trial consists of two periods: in the first period patients are randomized to a new drug or a placebo, and in the second period placebo patients switch to treatment with the new drug. The objectives of the trial are to establish that the new drug slows disease progression in the first period and that patients with a delayed start on the new drug do not achieve the same level of clinical response as those who started the new drug early. While this design is conceptually sound, it is extremely difficult to implement due to high dropout rates. At a recent advisory committee meeting on a Parkinson’s disease drug, it was concluded that the second period is essentially an observational study and that any analysis intended to establish disease modification is not interpretable. To resolve this problem, we propose a randomized matched-control design. In addition to the two periods of the randomized delayed-start design, the new design includes a prospective run-in period in which both static and treatment-related outcomes are used to classify patients into matched cohorts. Upon completion of the delayed-start period, a matched-cohort analysis is performed to establish disease modification. |
||
Disease-Modifying Trials on Alzheimer Disease: What They Really Mean and How They Can Be Designed
|
||
Randomized Matched-Control Design for Disease Modification Drugs
|
||
Parallel Session: The future is here: modeling and simulation for designing clinical studies for drugs and devices |
09/24/14 |
|
Organizer(s): Shiowjen Lee, FDA; Xuefeng Li, FDA; Cristiana Mayer, JNJ; Richard Zink, JMP |
||
Chair(s): Cristiana Mayer, JNJ |
||
Clinical trial modeling and simulation (M&S) methodologies are becoming more widely used in the pharmaceutical industry, as they play an increasingly important role in designing studies and in modeling the clinical outcome of an intervention more efficiently. Simulations help visualize potential clinical trial results, explore the design space more fully, and compare scenarios more accurately, to reduce time and costs in drug development and maximize the probability of success of clinical trials. This session will address case studies of success and examples of collaboration between industry and regulators at the FDA on the appropriate use of M&S. Future challenges and opportunities for innovative and collaborative approaches will be touched upon. Additional attention will be paid to regulatory perspectives on the M&S aspects to be included in a submission, as well as FDA opinions on the device-related assessment of M&S credibility and an example of the “CHMP Qualification Opinion” for a model-based methodology in designing dose-finding studies. Confirmed speakers: Kyle Wathen (Johnson & Johnson), Frank Bretz (Novartis), Telba Irony (FDA). |
||
MCP-Mod: A statistical approach to design and analyze Phase II dose finding studies
|
||
Simulations for Adaptive Designs in the Regulatory Setting
|
||
Utilizing Simulation to Guide Clinical Trial Design
|
||
Parallel Session: Issues in the Design and Analysis of Studies with Composite Endpoints |
09/24/14 |
|
Organizer(s): Rafia Bhore, Novartis; Feng Liu, GlaxoSmithKline; Estelle Russek-Cohen, US FDA CBER; Guoying Sun, FDA/CDER |
||
Chair(s): Estelle Russek-Cohen, US FDA CBER |
||
Many therapeutic studies in which the endpoint is survival would be underpowered, so it is not uncommon to create a composite endpoint that captures multiple serious events, e.g., stroke, myocardial infarction, or death. However, if the study is successful, it is important to capture what contributed to the success at the end of the trial. This session will include considerations for developing composite endpoints and alternative analysis strategies, as well as how best to capture the contribution of each component to the whole. Possible speakers could include John Scott (CBER), Stuart Pocock (UK), and an industry speaker (TBD). |
||
Advantages and Challenges in Using a Composite Endpoint in Cardiovascular Outcome Trials
|
||
On primary composite endpoints in cardiovascular clinical trials
|
||
Discussant(s): Mo Huque, FDA |
||
Parallel Session: Practical problems in design, analysis and interpretation of Enrichment clinical trials |
09/24/14 |
|
Organizer(s): George Kordzakhia, Food and Drug Administration; Virginia Recta, FDA; Anthony Rodgers, Merck; Lanju Zhang, Abbvie Inc |
||
Chair(s): Lanju Zhang, Abbvie Inc |
||
Enrichment designs are very appealing with the advancement of biology and the identification of molecularly targeted agents. They provide opportunities to increase the probability of success in drug development by testing drugs in appropriate, highly responsive subgroups of patient populations. However, this approach challenges the classical statistical paradigm, which usually requires a random sample from a homogeneous population. The recent FDA draft guidance, Enrichment Strategies for Clinical Trials to Support Approval of Human Drugs and Biological Products, provides some clarification on enrichment strategies, including study designs and result generalizability, illustrated by examples and case studies. It leaves much room for ambiguity, however, such as the role of the marker-negative subgroup in labeling, the division of alpha spending between subgroups, marker validation, and so on. In this invited session, key opinion leaders with first-hand experience of enrichment designs from industry and regulatory agencies will share lessons learned from their practice. |
||
Enrichment Designs in Oncology
|
||
Unique challenges posed by Enrichment Strategies in Oncology combination drug development
|
||
Enrichment design with patient population augmentation
|
||
Discussant(s): Bob Temple, FDA |
||
Parallel Session: Analysis of Longitudinal Count Data |
09/24/14 |
|
Organizer(s): Shiowjen Lee, FDA; Xiaobai Li, Medimmune; Cristiana Mayer, JNJ; Jianliang Zhang, Medimmune |
||
Chair(s): Shiowjen Lee, FDA; Xiaobai Li, Medimmune |
||
Longitudinal count data are common in clinical research. Statistical analysis approaches such as Generalized Estimating Equations (GEE) or Generalized Linear Mixed Models (GLMM) are readily available in some statistical software packages. However, such availability does not guarantee their proper use: too often statisticians use these programs without sufficient knowledge of the procedures' underlying assumptions. Modeling this type of data typically relies on assumptions about the latent process from which the counts arise. In this session, the three speakers will (1) share their experiences in modeling longitudinal symptom count data, (2) illustrate the application of Bayesian hierarchical Poisson regression models, and (3) compare different methods for modeling data with small counts of recurrent events. |
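As a concrete point of reference, here is a minimal sketch on simulated, hypothetical data (assuming the lme4 package) of one of the models in this family: a random-intercept Poisson GLMM for longitudinal counts. The overdispersion and latent-process assumptions flagged in the abstract are exactly what such a fit must be checked against.

```r
# Random-intercept Poisson GLMM for longitudinal counts
# (simulated, hypothetical data; assumes the lme4 package is installed).
library(lme4)

set.seed(3)
n_id <- 60; n_vis <- 4
d <- data.frame(
  id   = factor(rep(1:n_id, each = n_vis)),
  time = rep(0:(n_vis - 1), times = n_id),
  trt  = rep(rbinom(n_id, 1, 0.5), each = n_vis)
)
u <- rnorm(n_id, sd = 0.5)   # latent subject-level heterogeneity
d$y <- rpois(nrow(d), exp(0.5 + 0.2 * d$time - 0.4 * d$trt + u[d$id]))

fit <- glmer(y ~ time + trt + (1 | id), data = d, family = poisson)
summary(fit)
# Before trusting the Poisson fit, check for residual overdispersion,
# e.g., by comparing the Pearson chi-square to its degrees of freedom.
```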
||
Comparison of models for recurrent events when multiple events can occur simultaneously
|
||
Statistical Challenges in the Analysis of Longitudinal Symptom Count Data
|
||
Bayesian Hierarchical Poisson Regression Models: An Application to a Driving Study with Kinematic Events
|
||
Parallel Session: Considerations and challenges in using simulations for regulatory decision making |
09/24/14 |
|
Organizer(s): Nelson Lu, FDA/CDRH; Rajesh Nair, FDA/CDRH; Brian Renley, Abbott Diagnostics; Kefei Zhou, Amgen |
||
Chair(s): Xin Fang, FDA/CDRH |
||
Panelist(s): Scott Berry, Berry Consultants; Gerry W Gray, FDA, CDRH; Nitin R Patel, Cytel, Inc and MIT; Yi Tsong, FDA |
||
Computationally intensive simulations are being increasingly used to support regulatory decision making. Designing and evaluating simulations can be challenging due to the absence of widely available standard software. Practical issues range from difficulties in assessing the validity of the model to interpreting the simulation code in a short timeframe. From a regulatory perspective it can be challenging to evaluate the simulations without a detailed protocol describing how the model was developed and analyzed. This session will explore ways to improve the quality of simulations and their reporting. Topics for discussion will include documentation that should be provided so models can be easily understood and validated by regulators. Session speakers: 1) Nitin Patel 2) Scott Berry Panelists: Gerry Gray, Yi Tsong, Nitin Patel & Scott Berry |
||
Developing, Testing and Documenting Simulation Software
|
||
Trial Simulation for Adaptive Designs
|
||
Parallel Session: Statistical Challenges in Personalized Medicine Bridging Studies |
09/24/14 |
|
Organizer(s): Mat D Davis, Theorem Clinical Research; Paul DeLucca, Merck; Zhiheng Xu, FDA/CDRH; Tinghui Yu, FDA |
||
Chair(s): Zhiheng Xu, FDA/CDRH |
||
Applications of personalized medicine are becoming increasingly prominent. A key component of personalized medicine is the development of companion diagnostics that measure biomarkers, e.g., protein expression, gene amplification, or specific mutations. A well-characterized companion diagnostic device (CDx) is often desired for patient enrollment in the device-drug pivotal clinical trial(s), to assure FDA that appropriate clinical and analytical validation studies are planned and carried out for the CDx. However, such a requirement may be difficult or impractical to satisfy because the CDx may not be available at the time of the device-drug pivotal trial. A clinical trial assay (CTA) may instead be used for patient enrollment in the pivotal clinical trial(s). A bridging study is then required to assess the concordance between the CDx and the CTA and to evaluate drug efficacy in the CDx intended-use population by bridging the clinical data from the CTA to the CDx. Often, however, a subset of samples is not available for retesting by the CDx, for many reasons. In addition, the CDx and CTA may not agree perfectly: patients who are eligible for enrollment based on the CDx may not be enrolled based on the CTA, and likewise patients who are not eligible based on the CDx may be enrolled based on the CTA. This is particularly a problem for pivotal trials in which only a subset of marker-positive patients is enrolled. These issues in bridging study design and data analysis affect both drug and device clinical validation. In this session, speakers from FDA, industry, and academia will discuss statistical challenges in study design and data analysis for personalized medicine bridging studies. This session may interest statisticians in fields such as pharmaceuticals, diagnostic medicine, and methodology research. Speakers: 1) Meijuan Li, Ph.D., Mathematical Statistician, Team Leader, Diagnostic Devices Branch, Division of Biostatistics, OSB/CDRH/FDA (confirmed) 2) Walter Offen, Ph.D., Global Head of Statistical Innovation and Safety Statistics, Abbvie (potential) 3) Stefan K. Grebe, M.D., Ph.D., Department of Laboratory Medicine and Pathology, Mayo Clinic (potential) |
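For orientation, here is a minimal sketch in base R, with hypothetical 2x2 counts, of the basic concordance summaries in a CDx-CTA bridging study: positive and negative percent agreement of the CTA relative to the CDx, with exact CIs. The missing-retest and discordant-enrollment complications described above are what make the real analyses hard; this sketch covers only the complete, retested samples.

```r
# Concordance between a clinical trial assay (CTA) and a companion
# diagnostic (CDx) on retested samples (hypothetical 2x2 counts).
#                CDx+   CDx-
cta_pos <- c(180,  12)   # counts among CTA-positive samples
cta_neg <- c( 15, 290)   # counts among CTA-negative samples

ppa <- cta_pos[1] / (cta_pos[1] + cta_neg[1])  # P(CTA+ | CDx+)
npa <- cta_neg[2] / (cta_pos[2] + cta_neg[2])  # P(CTA- | CDx-)
ppa_ci <- binom.test(cta_pos[1], cta_pos[1] + cta_neg[1])$conf.int
npa_ci <- binom.test(cta_neg[2], cta_pos[2] + cta_neg[2])$conf.int
round(c(PPA = ppa, NPA = npa), 3)
```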
||
Survival Analysis With Mixed Population
|
||
Statistical consideration and challenges in bridging study of personalized medicine
|
||
Statistical Challenges in Personalized Medicine Bridging Studies
|
||
CMC Session 3: Beyond Batch Manufacturing: Statistical Considerations for Continuous Manufacturing Processes |
09/24/14 |
|
Organizer(s): David Christopher, Merck; Yulan Li, Novartis; Yuqun Abigail Luo, FDA/CBER; Helen Strickland, GSK; Lanju Zhang, Abbvie Inc |
||
This session will explore important considerations to determine what different statistical approaches may be necessary to address product quality management for continuous manufacturing processes versus single-batch processes. The speakers and the session organizers will engage the audience in Q&A after the presentations. It is expected that this will be the first year of a series of CMC sessions at this workshop for this evolving topic. |
||
Regulatory considerations for continuous manufacturing: how can statisticians help?
|
||
Continuous Manufacturing – Accomplishments, Opportunities, and Challenges
|
||
Parallel Session: Assessing safety in drug development utilizing Bayesian methods |
09/24/14 |
|
Organizer(s): Rong (Rachel) Chu, Agensys, Inc; Meg Gamalo, Office of Biostatistics, CDER/FDA; Judy X Li, FDA/CBER |
||
Chair(s): Judy X Li, FDA/CBER |
||
Assessing the safety of a medical product is an ongoing learning process. At times information from other trials is used to inform the analysis of data in a future trial, while at other times we would like the data in that trial to be self-evident. In either case, a posterior distribution can be constructed to provide the relevant probabilities for making development decisions. In this session, we will discuss three methods and situations in which applied Bayesian methods can be employed for assessing safety in drug development. In pre-clinical and clinical trials, the assessment of safety is so encompassing that ensuring omnibus power with a fixed sample size is often impossible. Hence, Bayesian methods that utilize prior distributions derived from existing scientific data can be used to help alert trialists to potential safety concerns, which can then be incorporated into the design, monitoring, and analysis of future trials. Our first presentation will discuss various strategies for incorporating historical information, as a simple conjugate prior or through meta-analytic methods. In addition, a robust prior will be discussed which incorporates a weakly informative component in a mixture distribution, and hence discounts the historical information more quickly with increasing prior-data conflict. The assessment of safety also involves multiplicity issues, which can lead to an inflated type I error. Bayesian multi-level models provide an alternative way of addressing the type I error: unlike traditional adjustment approaches, which only increase the length of the confidence intervals, multi-level models borrow strength across safety-related issues, shrinking the point estimates towards each other. Our second presentation will discuss a multivariate Bayesian logistic regression model which allows a combined analysis of multiple studies to search for vulnerable subgroups based on covariates in the model. Assessing safety continues even after a drug has successfully received marketing approval from the FDA; this assessment includes utilization of large databases, such as FDA’s Adverse Event Reporting System, in conjunction with appropriate Bayesian methods. In the last presentation, well-established Bayesian methods and some recently developed methods will be presented for the purposes of post-marketing safety surveillance. The performance of these methods will be evaluated through simulation and their relative merits will be discussed. Speakers include: Jerry Weaver (Novartis), William DuMouchel (Oracle), and Ram Tiwari (FDA). |
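As a concrete companion to the first presentation, here is a minimal sketch in base R, with hypothetical numbers, of a robust mixture prior of the kind it describes: an informative beta component from historical data mixed with a weakly informative component, where the posterior mixture weight is updated by the marginal likelihood so that the historical component is discounted under prior-data conflict.

```r
# Robust mixture prior for an adverse-event rate (hypothetical numbers).
# Prior: w * Beta(a1, b1) [historical] + (1 - w) * Beta(a2, b2) [vague].
a1 <- 8; b1 <- 92   # historical AE rate around 8%
a2 <- 1; b2 <- 1    # weakly informative component
w  <- 0.8           # prior weight on the historical component

x <- 9; n <- 40     # current data in conflict with history (~22%)

# Marginal (beta-binomial) likelihood of the data under each component
marg <- function(a, b) choose(n, x) * beta(a + x, b + n - x) / beta(a, b)
w_post <- w * marg(a1, b1) /
          (w * marg(a1, b1) + (1 - w) * marg(a2, b2))

w_post   # the weight shifts toward the vague component under conflict
# Posterior is a mixture of updated beta components; its mean:
post_mean <- w_post * (a1 + x) / (a1 + b1 + n) +
  (1 - w_post) * (a2 + x) / (a2 + b2 + n)
post_mean
```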
||
Bayesian approaches for data-mining for spontaneous adverse events
|
||
Strategies for using prior information in assessing safety
|
||
Bayesian Modelling of Drug Safety Data
|
||
Parallel Session: Recent FDA review experiences, advice, and industry stakeholder efforts in adaptive clinical trial designs |
09/24/14 |
|
Organizer(s): Annie Lin, FDA/CBER; Stan Lin, FDA; Feng Liu, GlaxoSmithKline; Wei Zhao, MedImmune LLC |
||
Chair(s): Annie Lin, FDA/CBER; Feng Liu, GlaxoSmithKline |
||
Depending on their particular purposes, various shapes and forms of adaptive design have been proposed and adopted in clinical trials conducted for product registration purposes. Guidance has been written based on considerable practical and scientific consideration of particular trial designs, but experience continues to accumulate and innovation continues. This session is proposed to provide a forum for product (drug, device, biologics, etc.) development stakeholders (FDA and industry) to share practical experiences with adaptive designs. |
||
Statistical Challenges with Emerging Adaptive Designs of Cardiovascular Device Studies: A Case Study and Beyond
|
||
Multiplicity adjustment in seamless phase II/III trials using a biomarker for dose selection
|
||
Applications of Adaptive Designs in Antiviral Clinical Trials
|
||
Adaptive Designs at CBER: Opportunities, Challenges and Lessons Learned
|
||
Parallel Session: Assessing benefit-risk: Novel Approaches |
09/24/14 |
|
Organizer(s): Yunfan Deng, FDA/CDER; Xiaohui (Ed) Luo, PTC Therapeutics; Theodore Lystig, Medtronic, Inc.; Dongliang Zhuang, FDA/CDER/OTS/OB/DB4 |
||
Chair(s): Yunfan Deng, FDA/CDER |
||
Balancing benefit and risk is never an easy task, since the process depends heavily on the disease and drug being studied and the targeted patient population. Assessing benefit-risk usually involves various tools and methods, and a universally accepted method or framework does not seem feasible. In recent years, novel benefit-risk assessment tools and methods have emerged rapidly. Some of these approaches are motivated by the uniqueness of a certain disease, product, or patient population, and they may have broader application to other diseases and products. Invited speakers from FDA and industry will discuss novel benefit-risk assessment tools and methods they utilize during the drug development process, and they will share their views of the benefit-risk profile based on these novel approaches. |
||
Rethinking Assessment of Benefit-Risk
|
||
Treatment-Associated Second Primary Malignancy in Oncology Clinical Trials
|
||
Longitudinal benefit-risk assessment in clinical trials
|
||
CMC Session 2: Issues and Considerations for Statistical Approaches for Assessing Dissolution Similarity |
09/24/14 |
|
Organizer(s): David LeBlond, Consultant, CMC Statistics; Yi Tsong, FDA; Harry Yang, MedImmune |
||
This session will address issues and considerations for statistical approaches for assessing dissolution similarity, including a specific focus on a Bayesian approach. |
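For context, here is a minimal sketch in base R, on hypothetical dissolution profiles, of the conventional frequentist benchmark, the f2 similarity factor, to which the Bayesian posterior-probability approach in the talk offers an alternative. By convention, f2 of at least 50 is taken to indicate similarity.

```r
# f2 similarity factor for two dissolution profiles (hypothetical mean
# percent-dissolved values at common time points).
ref  <- c(22, 45, 65, 82, 91)   # reference product
test <- c(19, 41, 62, 80, 90)   # test product

# f2 = 50 * log10( 100 / sqrt(1 + mean squared difference) )
f2 <- 50 * log10(100 / sqrt(1 + mean((ref - test)^2)))
f2                               # f2 >= 50 is conventionally "similar"
```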
||
The posterior probability of dissolution similarity
|
||
Parallel Session: Modeling and Simulation in Clinical Trials |
09/24/14 |
|
Organizer(s): Scott Berry, Berry Consultants; Jingyee Kou, FDA; Hui Quan, Sanofi; Sutan Wu, CDER/FDA |
||
Chair(s): Xiaoyu (Cassie) Dong, CDER/FDA; Jingyee Kou, FDA |
||
Modeling and simulation (M&S) methodologies are gaining in popularity in the clinical trial community and are part of FDA’s strategic plans. The advantages of using M&S include support of pharmaceutical decision making, development and assessment of more cost-effective clinical trial designs, comparison and assessment of different statistical analysis methods, and identification of responder subpopulations, either for benefit or for increased risk of adverse events. There will be three speakers in this session, representing views and experiences on M&S from industry, academia, and regulatory perspectives. Dr. Zhaoling Meng (Sanofi) will demonstrate a PK/PD M&S approach to assessing drug efficacy for a simplified dosing regimen, based on data from a phase 3 program, for final commercial dose/regimen selection. Dr. Gary Rosner (JHU) will discuss clinical trial designs incorporating Bayesian concepts, which often require M&S to help determine the best design or to evaluate each study’s frequentist characteristics, along with examples of designs that have benefited from M&S. Dr. Paul Schuette (FDA) will outline guidelines for the credibility of M&S in a regulatory environment, with a focus on establishing pre-specified analysis plans and an adequate number of simulations to support those analyses. |
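On the "adequate number of simulations" point raised in the third talk, here is a minimal sketch in base R: estimating a design's power by Monte Carlo and attaching the binomial simulation standard error, which makes explicit whether the number of simulated trials supports the decision at hand. The trial and its parameters are hypothetical.

```r
# Monte Carlo estimate of power for a simple two-arm trial, with the
# simulation standard error used to judge whether nsim is adequate.
set.seed(4)
nsim <- 10000; n <- 50; effect <- 0.5   # hypothetical design parameters

rejected <- replicate(nsim, {
  t.test(rnorm(n, effect), rnorm(n))$p.value < 0.05
})
power <- mean(rejected)
mc_se <- sqrt(power * (1 - power) / nsim)  # binomial SE of the estimate
c(power = power, mc_se = mc_se)
```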
||
Modeling and simulation for regulatory decision making
|
||
Efficacy assessment for dosing regimen simplification via Phase 3 PK/PD modeling and simulation
|
||
Modeling and Simulation in Bayesian Clinical Trial Design
|
||
Parallel Session: CBER oncology: unique products and special statistical considerations |
09/24/14 |
|
Organizer(s): Rakhi Kilaru, PPD; David Li, Pfizer; Xue (Mary) Lin, FDA/CBER; Yuqun (Abigail) Luo, FDA/CBER |
||
Chair(s): Xue (Mary) Lin, FDA/CBER |
||
CBER’s oncology products differ from CDER’s in terms of mechanism of action and manufacturing processes. These unique products pose challenges for study design and statistical analysis. The objective of the session is to share experience in the development and review of such products. The products covered include stem cell transplantation and cancer vaccines. Presenters from both industry and the regulatory agency will share their perspectives. |
||
Considerations for the development of a human autologous anti-tumor vaccine
|
||
Statistical challenges for clinical development of cell and gene therapies for oncology indications
|
||
Considerations for demonstrating product comparability and process consistency with cell therapy products
|
||
Parallel Session: Recent Advances in Research, Applications, and Regulatory Issues of Survival Analysis in Risk Prediction and Companion Diagnostic Devices of Personalized Medicine |
09/24/14 |
|
Organizer(s): Rafia Bhore, Novartis; Eva Miller, Senior Director, Biostatistics, inVentiv Health Clinical; Haiwen Shi, FDA/CDRH; Zhiheng Xu, FDA/CDRH |
||
Chair(s): Haiwen Shi, FDA/CDRH |
||
Survival analysis is widely encountered in FDA review work across various centers. While survival analysis has always been an interesting topic in statistics, the new issues faced in time-to-event analyses for risk prediction and for companion diagnostic devices are practically challenging. In this session, we will explore some recent advances, challenges, and issues in survival analysis, such as competing risks and survival analysis in risk prediction and companion diagnostic devices of personalized medicine. We hope to provide a stage for academic and industry researchers and FDA reviewers to share knowledge of survival analysis and to benefit from one another in a direct dialogue. |
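As a pointer for the competing-risks topic, here is a minimal sketch on simulated, hypothetical data (assuming the cmprsk package) of estimating cause-specific cumulative incidence, which replaces 1 minus Kaplan-Meier when a competing event precludes the event of interest.

```r
# Cumulative incidence with competing risks (simulated, hypothetical
# data; assumes the cmprsk package is installed).
library(cmprsk)

set.seed(5)
n    <- 200
t1   <- rexp(n, 0.10)   # latent time to the event of interest
t2   <- rexp(n, 0.05)   # latent time to the competing event
cens <- runif(n, 0, 15) # administrative censoring
ftime   <- pmin(t1, t2, cens)
fstatus <- ifelse(cens < pmin(t1, t2), 0, ifelse(t1 < t2, 1, 2))

ci <- cuminc(ftime, fstatus)      # cause-specific cumulative incidence
timepoints(ci, c(2, 5, 10))       # estimates at selected times
```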
||
Practical considerations with survival analysis in studies with biomarker evaluations and implications on Clinical diagnostic devices
|
||
Statistical Issues in Survival Analysis for the Clinical Validation of Diagnostic Devices
|
||
Assessing diagnostic accuracy improvement for survival or competing-risk censored outcomes
|
||
Parallel Session: Utilization of Imaging Biomarkers in Clinical Trial Design |
09/24/14 |
|
Organizer(s): Steven Edland, UCSD; Lan Huang, FDA; Norberto Pantoja-Galicia, FDA; Volha Tryputsen, Janssen R&D |
||
Chair(s): Gerald Novak, Janssen R&D |
||
Biomarkers are very important in studying age-related cognitive decline, such as Alzheimer’s Disease (AD) and vascular cognitive impairment (VCI), for numerous reasons. A diagnosis of Alzheimer’s Disease is based on cognitive performance and may be inaccurate because of the subjective nature of the assessment; biomarkers can improve the accuracy of diagnosis. In clinical trials of disease-modifying drugs for Alzheimer’s Disease and VCI, biomarkers can be used for patient selection or serve as an additional measure of disease severity. Biomarkers can also be utilized as an inclusion criterion for a clinical trial or as a measure for patient stratification. Imaging biomarkers are of especially high interest in both degenerative and vascular disease because changes in the brain can occur long before noticeable cognitive decline takes place and a subject is diagnosed with dementia. Thus, on top of their diagnostic characteristics, imaging biomarkers possess great prognostic potential. In this session we will review imaging biomarkers such as volumetric magnetic resonance imaging and positron emission tomography. The session will consist of two presentations, followed by discussion. Presentations will cover current utilization of imaging biomarkers and potential applications of these techniques to clinical trials, covering both cross-sectional and longitudinal aspects of the data. |
||
Imaging Biomarkers for Clinical Trials: Magnetic Resonance Imaging White Matter Hyperintensity Progression
|
||
Optimizing Region-of-Interest Composites for Capturing Treatment Effects on Brain Amyloid in Clinical Trials
|
||
Discussant(s): Thomas E. Gwise, FDA |
||