Scientific Programme

Conference Courses

Please find abstracts of the conference courses below.

In follow-up studies, different types of outcomes are typically collected for each subject. These include longitudinally measured responses (e.g., biomarkers) and the time until an event of interest occurs (e.g., death, dropout). Often these outcomes are analyzed separately, but on many occasions it is of scientific interest to study their association. This type of research question has given rise to the class of joint models for longitudinal and time-to-event data. These models constitute an attractive paradigm for the analysis of follow-up data that is mainly applicable in two settings: first, when the focus is on a survival outcome and we wish to account for the effect of endogenous time-dependent covariates measured with error, and second, when the focus is on the longitudinal outcome and we wish to correct for non-random dropout.

This course is aimed at applied researchers and graduate students, and will provide a comprehensive introduction to this modeling framework. We will explain when these models should be used in practice, what the key assumptions behind them are, and how they can be utilized to extract relevant information from the data. Emphasis is placed on applications, and by the end of the course participants will be able to define appropriate joint models to answer their questions of interest.

Necessary background for the course: This course assumes knowledge of basic statistical concepts, such as standard statistical inference using maximum likelihood, and regression models. In addition, basic knowledge of R would be beneficial but is not required.

Participants are required to bring a laptop with R preinstalled.
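
As an illustration only (the abstract does not prescribe specific software), a minimal R sketch of the kind of model covered in the course might use the JM package and its built-in pbc2 example data; the package, data set, and model choices below are our own assumptions, not the course's:

library(JM)   # also loads nlme and survival
# Longitudinal submodel: linear mixed model for the biomarker trajectory
fitLME <- lme(log(serBilir) ~ year, random = ~ year | id, data = pbc2)
# Survival submodel: Cox model on the baseline data (x = TRUE keeps the design matrix)
fitCOX <- coxph(Surv(years, status2) ~ drug, data = pbc2.id, x = TRUE)
# Joint model linking the event risk to the current value of the longitudinal marker
fitJM <- jointModel(fitLME, fitCOX, timeVar = "year")
summary(fitJM)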

Machine learning is becoming more appealing for a number of reasons. First, the properties of some learning machines are better understood. Second, traditional statistical approaches often fail with high-throughput molecular biology technologies. Third, several machines have been extended to operate beyond the standard classification problem for dichotomous endpoints. Many researchers are, however, not familiar with recently developed machine learning approaches, such as gradient boosting, random forests or support vector machines and their extensions. This 1-day course therefore aims at providing an intuitive introduction to some of the most important machine learning approaches currently used. We show that problems ranging from generalized linear models to survival endpoints can be tackled with machine learning. The focus of the theoretical sessions is the non-technical but intuitive explanation of the algorithms (instructor: Andreas Ziegler), and the focus of the hands-on laptop sessions is to see the machines operating using R (instructor: Marvin Wright). The combination of simple descriptions in a language familiar to biostatisticians together with the use of standard statistical software should help to demystify machine learning.
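
As a flavour of the hands-on sessions, a short R sketch fitting random forests for a classification and a survival problem could look as follows; the use of the ranger package (mentioned in the speaker biography) and the example data sets (iris and the veteran data from the survival package) are our own choices for illustration:

library(ranger)
library(survival)
# Classification: random forest on the iris data
rf_class <- ranger(Species ~ ., data = iris, num.trees = 500)
rf_class$prediction.error   # out-of-bag misclassification rate
# Survival endpoint: random survival forest on the veteran lung cancer data
rf_surv <- ranger(Surv(time, status) ~ ., data = veteran, num.trees = 500)
rf_surv$prediction.error    # out-of-bag prediction error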

Literature

The topic is new, so there is no standard textbook covering the whole content of the course. To give one example, Hastie et al. consider survival analysis only in the context of principal components, but not in connection with nearest neighbors, random forests or support vector machines.

About the Speakers

Andreas Ziegler holds a Ph.D. in Statistics and is a board-certified trial biostatistician and epidemiologist. Since 2001 he has been professor and head of the Institute of Medical Biometry and Statistics at the University of Lübeck, Germany. He is honorary professor in the School of Mathematics, Statistics and Computer Science of the University of KwaZulu-Natal, South Africa. He was president of the German Region of the International Biometric Society and of the International Genetic Epidemiology Society, and he was a member of the presidium of the gmds. He has been working on regression models and machine learning approaches for more than 20 years.

Marvin N. Wright holds a B.Sc. in Computer Science & Engineering and an M.Sc. in Computational Life Science. For his master's thesis, he was awarded the Bernd-Streitberg Preis of the German Region of the International Biometric Society. Since 2014 he has been a research associate at the Institute of Medical Biometry and Statistics at the University of Lübeck, Germany. His research focuses on machine learning methods in genetic epidemiology, efficient statistical computing and survival analysis. He is the author of several statistical software packages, such as ranger for random forests and bnnSurvival for bagged nearest neighbors with survival endpoints.

This two-part, half-day tutorial first introduces the general-purpose MCMC simulation procedure in SAS, then presents a number of pharma-related data analysis examples and case studies in detail. The objective is to equip attendees with useful Bayesian computational tools through a series of worked-out examples drawn from situations often encountered in the pharmaceutical industry.

The MCMC procedure is a general-purpose Markov chain Monte Carlo simulation tool designed to fit a wide range of Bayesian models, including linear and nonlinear models, multilevel hierarchical models, models with a nonstandard likelihood function or prior distributions, and missing data problems. The first part of the tutorial briefly introduces PROC MCMC and demonstrates its use with simple applications, such as Monte Carlo simulation, regression models, and random-effects models.

The second part of the tutorial takes a topic-driven approach to explore a number of case studies in the pharmaceutical field. Topics include posterior predictions, use of historical information, hierarchical modeling, analysis of missing data, and topics in Bayesian design and simulation.

This tutorial is intended for statisticians who are interested in Bayesian computation. Attendees should have a basic understanding of Bayesian methods (the tutorial does not cover basic concepts of Bayesian inference) and experience using the SAS language. The tutorial is based on SAS/STAT® 14.1.

About the Speakers

Fang Chen is a senior manager of Bayesian statistical modeling in the Advanced Analytics Division at SAS Institute Inc. Among his responsibilities are development of Bayesian analysis software and the MCMC procedure. Before joining SAS Institute, he received his PhD in statistics from Carnegie Mellon University in 2004.

G. Frank Liu is a distinguished scientist at Merck Sharp & Dohme Corp. For the past 20 years, he has worked in a variety of therapeutic areas, including neuroscience, psychiatry, infectious disease, and vaccines. His research interests include methods for longitudinal trials, missing data, safety analysis, and non-inferiority trials. He co-leads a Bayesian missing data analysis team in the DIA Bayesian Working Groups. He received a PhD in statistics from UCLA in 1994.

This tutorial provides a thorough presentation of statistical analyses of interval-censored failure time data with detailed illustrations using real data arising from clinical studies and biopharmaceutical applications. Specifically, we will start with a basic review of commonly used concepts and of the problems of common interest to practitioners. Commonly used statistical procedures will then be discussed and illustrated, as well as some recent developments in the literature. In addition, some available software functions and packages in R and SAS for the problems considered will be discussed and illustrated (a brief R sketch follows the topic list below). The specific topics to be discussed include:

Biases inherent in the common practice of imputing interval-censored time-to-event data
Nonparametric estimation of a survival function: Three basic and commonly used procedures for nonparametric estimation of a survival function will be discussed and compared.
Nonparametric treatment comparisons: We will start with generalized log-rank tests and then introduce several other recently developed nonparametric test procedures. A couple of R packages are available for the problem considered and will be discussed.
Semiparametric regression analysis: For this part, we will first introduce several commonly used regression models, including the proportional hazards model and the linear transformation model. The corresponding inference procedures are then introduced and illustrated using real data.
Analysis of multivariate interval-censored failure time data: This part will discuss nonparametric estimation of joint survival functions and regression analysis of multivariate interval-censored failure time data. For the former, the focus will be on bivariate failure time data.
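
For orientation, the R sketch below illustrates two of the topics above on simulated interval-censored data; the package (icenReg) and the simulation set-up are our own assumptions, and the tutorial itself will cover the actual software tools in detail:

library(icenReg)
set.seed(42)
n     <- 200
grp   <- factor(rbinom(n, 1, 0.5))
event <- rexp(n, rate = 0.1 * exp(0.5 * (grp == "1")))  # true, unobserved event times
left  <- floor(event / 3) * 3    # the event is only known to fall within
right <- left + 3                # a 3-unit-wide inspection interval
dat <- data.frame(left, right, grp)
# Nonparametric (Turnbull-type) estimate of the survival function in each group
fit_np <- ic_np(cbind(left, right) ~ grp, data = dat)
plot(fit_np)
# Semiparametric proportional hazards regression for interval-censored data
fit_sp <- ic_sp(Surv(left, right, type = "interval2") ~ grp, model = "ph",
                bs_samples = 50,   # bootstrap for standard errors
                data = dat)
summary(fit_sp)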

About the Speakers

Jianguo (Tony) Sun received his Ph.D. in Statistics from the University of Waterloo in 1992 and is currently a professor at the University of Missouri. Professor Sun has been working on failure time data analysis and longitudinal data analysis for over 20 years, especially on various statistical problems in AIDS studies. He has published many papers and, in particular, wrote the book Statistical Analysis of Interval-censored Failure Time Data, published by Springer in 2006. In addition, he has given many invited presentations of his work in both academia and industry, and he co-edited, with Professors Ding-Geng (Din) Chen and Karl E. Peace, the book "Interval-Censored Time-to-event Data: Methods and Applications", published by CRC in 2012.

Ding-Geng (Din) Chen received his Ph.D. in Statistics from the University of Guelph (Canada) in 1995 and is now the Wallace Kuralt Distinguished Professor at the University of North Carolina at Chapel Hill. He was previously a professor in biostatistics at the University of Rochester Medical Center and held the Karl E. Peace endowed eminent scholar chair in biostatistics at the Jiann-Ping Hsu College of Public Health at Georgia Southern University. Professor Chen is also a senior biostatistics consultant for biopharmaceuticals and government agencies with extensive expertise in clinical trials and bioinformatics. He has more than 100 refereed professional publications and has co-authored/co-edited 10 books on clinical trials, meta-analysis, causal inference and big data science. The tutorial is partially based on his co-edited book "Interval-Censored Time-to-event Data: Methods and Applications" with Professors Jianguo (Tony) Sun and Karl E. Peace, published by CRC in 2012.

The tutorial will focus on the use of surrogate endpoints in drug development. Efficacy of new drugs is assessed using clinical endpoints. Often, the most clinically relevant endpoint is difficult to use in a trial. This happens if the measurement of this clinical endpoint requires, for instance, a large sample size (because of low incidence of the event of interest) or a long follow-up time. A potential strategy in these cases is to look for a surrogate endpoint or a surrogate (bio)marker that can be measured more cheaply, more conveniently, more frequently, or earlier than the clinical endpoint.

From a regulatory perspective, an endpoint is considered acceptable for efficacy determination only after its establishment as a valid indicator of clinical benefit, i.e., after its evaluation as a surrogate endpoint for the clinical endpoint of interest. In the tutorial we will formalize the concept of surrogate endpoints and present the main issues related to their application. The major part of the tutorial will be devoted to a review of the statistical methods of evaluation of surrogate endpoints. In particular, we will focus on the so-called meta-analytic approach; however, other approaches will be briefly mentioned as well. All discussed methods and concepts will be illustrated by using real-life examples from clinical oncology.
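
To fix ideas, the R sketch below illustrates a deliberately simplified, two-stage version of the trial-level part of the meta-analytic approach on simulated data; the full methodology presented in the tutorial is based on joint (mixed) models for both endpoints, and the data, effect sizes and code below are illustrative assumptions only:

set.seed(1)
n_trials <- 20; n_per_trial <- 100
simulate_trial <- function(id) {
  trt   <- rbinom(n_per_trial, 1, 0.5)
  alpha <- rnorm(1, mean = 0.5, sd = 0.3)           # trial-specific effect on the surrogate
  beta  <- 0.2 + 0.8 * alpha + rnorm(1, sd = 0.1)   # related effect on the true endpoint
  data.frame(trial = id, trt = trt,
             surrogate     = alpha * trt + rnorm(n_per_trial),
             true_endpoint = beta  * trt + rnorm(n_per_trial))
}
dat <- do.call(rbind, lapply(seq_len(n_trials), simulate_trial))
# Stage 1: estimate the treatment effect on each endpoint within every trial
effects <- do.call(rbind, lapply(split(dat, dat$trial), function(d)
  data.frame(alpha_hat = coef(lm(surrogate ~ trt, data = d))["trt"],
             beta_hat  = coef(lm(true_endpoint ~ trt, data = d))["trt"])))
# Stage 2: trial-level surrogacy -- how well do effects on the surrogate
# predict effects on the true endpoint? (an R-squared near 1 is desirable)
summary(lm(beta_hat ~ alpha_hat, data = effects))$r.squared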

About the Speaker

Tomasz Burzykowski received the M.Sc. degree in applied mathematics (1990) from Warsaw University in Poland, and the M.Sc. (1991) and Ph.D. (2001) degrees from Hasselt University in Belgium. He is appointed Professor of Biostatistics and Bioinformatics at Hasselt University and is Vice-President of Research of the International Drug Development Institute (IDDI) in Louvain-la-Neuve, Belgium. His main interest is the application of statistics in biology and medicine. Tomasz is a co-author of two monographs and of multiple papers on the methodology and practical applications of the evaluation of surrogate endpoints.

A programme for developing therapies for common diseases will typically involve dozens of trials and thousands of patients. Such an approach cannot work for rare diseases, where a conventional drug development programme might require the recruitment of all patients suffering from the disease over several decades and would be completely unrealistic. Thus, in many cases, another model altogether is needed. This course will consider possible statistical solutions to the challenges that studying treatments for rare diseases raises.

Amongst the matters that will be covered are: appropriate standards of evidence, alternative clinical trial designs, exploiting covariate information and using non-interventional studies. Real examples from the presenters’ own experiences will be used throughout. The course will be given by two experienced statisticians, well known for their thought-provoking writings on statistics in drug development but also for their attitude to planning and analysis: the way to judge the value of a statistical method is to ask if it helps to find useful treatments for patients.

About the Speakers

Simon Day has spent 30 years working in clinical trials, mostly in the pharmaceutical industry but also including five years at the UK and European regulatory agencies. He now works as a statistical and regulatory consultant to pharmaceutical and biotechnology companies around the world. He is particularly well known for his work in the area of developing treatments for rare diseases and has given numerous lectures and courses on statistics and clinical trials, including courses at the FDA on development and regulatory assessment of orphan drugs.
He is a former president of the International Society for Clinical Biostatistics. He is joint editor of Statistics in Medicine, and on the editorial board of Translational Sciences of Rare Diseases. In 2012 he was elected a Fellow of the Society for Clinical Trials.
He has published widely in statistical and medical journals, is the author of the book “Dictionary for Clinical Trials” and is joint editor of the “Textbook of Clinical Trials”, both published by Wiley.

Stephen Senn has worked as a statistician but also as an academic in various positions in Switzerland, Scotland, England and Luxembourg. Since 2011 he has been head of the Competence Center for Methodology and Statistics at the Luxembourg Institute of Health in Luxembourg. He is the author of Cross-over Trials in Clinical Research (1993, 2002), Statistical Issues in Drug Development (1997, 2007), Dicing with Death (2003) and in 2009 was awarded the Bradford Hill Medal of the Royal Statistical Society. He is an honorary life member of PSI and ISCB.

Confirmatory adaptive designs are a generalization of group sequential designs. With these designs, interim analyses are performed in order to stop the trial prematurely while controlling the Type I error rate. In adaptive designs, it is additionally permissible to perform a data-driven change of relevant aspects of the study design at interim stages. This includes, for example, a sample-size reassessment, a treatment-arm selection or a selection of a pre-specified sub-population. This adaptive methodology was introduced in the 1990s (Bauer et al., 2016). Since then, it has become popular, has been the object of intense discussion, and still represents a rapidly growing field of statistical research.

This short course provides an introduction to the confirmatory adaptive design methodology. We start with a short introduction to group sequential methods, which is necessary for understanding and applying the adaptive design methodology presented in the second part of the course. Specifically, the combination testing principle and the conditional Type I error approach are described in detail. We consider design and planning issues as well as methods for analyzing an adaptively planned trial, including estimation methods and methods for the determination of an overall p-value. An overview of software for group sequential and confirmatory adaptive designs is also provided.
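
As a small illustration of the combination testing principle, the R sketch below applies an inverse normal combination of two stage-wise p-values to the boundaries of a classical two-stage group sequential design; the use of the gsDesign package, the O'Brien-Fleming-type boundaries and the numerical p-values are our own assumptions for illustration only:

library(gsDesign)
# Two-stage, one-sided group sequential design with O'Brien-Fleming-type boundaries
gsd  <- gsDesign(k = 2, test.type = 1, alpha = 0.025, sfu = "OF")
crit <- gsd$upper$bound   # stage-wise critical values on the z scale
# Inverse normal combination of independent stage-wise one-sided p-values
inv_normal <- function(p1, p2, w1 = sqrt(0.5), w2 = sqrt(0.5)) {
  (w1 * qnorm(1 - p1) + w2 * qnorm(1 - p2)) / sqrt(w1^2 + w2^2)
}
# Hypothetical stage-wise p-values
p1 <- 0.08; p2 <- 0.02
if (qnorm(1 - p1) >= crit[1]) {
  cat("Reject H0 at the interim analysis\n")
} else if (inv_normal(p1, p2) >= crit[2]) {
  cat("Reject H0 at the final analysis\n")   # stage-2 design could have been adapted after the interim
} else {
  cat("Do not reject H0\n")
}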

Literature

Bauer, P., Bretz, F., Dragalin, V., König, F., Wassmer, G.: 25 years of confirmatory adaptive designs: opportunities and pitfalls. Statistics in Medicine 2016, 35: 325-347.
Wassmer, G., Brannath, W.: Group Sequential and Confirmatory Adaptive Designs in Clinical Trials. Springer Science and Business Media, 2016.

About the Speakers

Werner Brannath is full professor of applied statistics and biometry and head of the Biometry Section of the Competence Center for Clinical Trials Bremen at the University of Bremen. He was trained as a mathematician and has been working in the field of biostatistics for almost 20 years. He has written several methodological papers on adaptive and group sequential designs, in particular on flexible multi-stage designs, point and interval estimation, and adaptive enrichment designs with survival endpoints. Further research interests are multiple testing and simultaneous confidence intervals. He is principal investigator of a number of research projects (e.g. on early phase adaptive clinical trials) and is involved in the planning, conduct and analysis of clinical trials. He also has experience as a member of independent data monitoring committees.

Gernot Wassmer is adjunct Professor for Biostatistics at the Institute of Medical Statistics, University of Cologne, Germany. Currently, he is visiting professor at the Institute of Medical Statistics at the Medical University of Vienna, Austria. He received his PhD in 1993 from the University of Munich, Germany, and was a Research Fellow at the Institute of Statistics, University of Munich, at the Institute for Epidemiology, GSF Neuherberg, and at the Institute of Medical Statistics, University of Cologne. His major research interest is in the field of statistical procedures for group sequential and adaptive plans in clinical trials. He is the author of a number of methodological papers on group sequential and adaptive designs. He has been a member of independent data monitoring committees for international, multi-center trials in different therapeutic areas and also serves as a consultant for the pharmaceutical industry.
