Biomedical Engineering - Clinical Technology Assessment

LEZIONE 1

Technology is the application of scientific knowledge. In health care, technology can be grouped into two categories:
• Physical nature: drugs, devices, equipment, medical and surgical procedures, support systems, organizational and administrative systems;
• Purpose or application: prevention (intended to protect against disease by preventing it from occurring, reducing the risk of its occurrence or limiting its sequelae), screening (intended to detect a disease, abnormality or associated risk factors in asymptomatic people), diagnosis (intended to identify the cause and the nature or extent of disease in a person with clinical signs or symptoms), treatment (designed to improve or maintain health status, avoid further deterioration or provide palliation), rehabilitation (intended to restore, maintain or improve a physically or mentally disabled person's function and well-being).

Health technology encompasses medical devices ranging from simple wooden tongue depressors and assistive devices to the most sophisticated implants, medical and surgical procedures, and the organizational and supportive systems within which such care is provided. It is used to solve a health problem and improve quality of life. A medical device is used in the prevention, diagnosis or treatment of illness or disease, or for detecting, measuring, restoring, correcting or modifying the structure or function of the body for some health purpose. Medical equipment is used for the specific purposes of diagnosis and treatment of disease, or rehabilitation following disease or injury.

The risk class of a medical device is determined by factors such as:
• Level of invasiveness;
• Duration of use in the body (transient versus longer-term use, with 30 days as a common threshold).

The classification rules for medical devices other than IVDs (in vitro diagnostic devices) depend on the features of the device, such as whether it:
• Is life supporting or sustaining;
• Is invasive and, if so, to what extent and for how long;
• Incorporates medicinal products;
• Incorporates human or animal tissues or cells;
• Is an active medical device;
• Delivers medicinal products, energy or radiation;
• Could modify blood or other body fluids;
• Is used in combination with another medical device.

The general Essential Principles of safety and performance for medical devices include the following:
• The processes for design and production should ensure that a medical device, when used according to its intended purpose and under the conditions of technical knowledge and training of the user, is safe;
• The manufacturer should perform a risk assessment to identify known and foreseeable risks and to mitigate these risks in the design, production and use of the medical device;
• Performance and safety should not be affected during the lifetime of the medical device;
• Performance and safety should not be affected by transport, packaging or storage;
• Known and foreseeable risks should be weighed against the benefits of the intended purpose.

The term evidence-based medicine refers to the use of current best evidence from scientific and medical research, together with clinical experience and observation, in making decisions about the care of individual patients. The ultimate aim of HTA is to inform the decision-making process (improving quality, providing standards and guidelines). The underlying questions which HTA needs to address are at three levels:
1. What is the clinical impact of the intervention?
2. What is the magnitude of the clinical impact of the intervention?
3. What is the clinical impact of the intervention weighed against its cost?

In some instances, ethical and societal consequences are sufficiently important to require inclusion. HTA is "the systematic evaluation of properties, effects, and/or impacts of health-care technology". It may address the direct, intended consequences of technologies as well as their indirect, unintended consequences. Its main purpose is to inform technology-related policy-making in health care. HTA is conducted by interdisciplinary groups using explicit analytical frameworks drawing from a variety of methods. It is concerned with evaluating the safety, effectiveness and cost-effectiveness and (where appropriate) the social, ethical and legal impact of a technology.

Health care technologies may be described as being:
• Future: in a conceptual stage, anticipated, or in the earliest stages of development;
• Experimental: undergoing bench or laboratory testing using animals or other models;
• Investigational: undergoing initial clinical evaluation for a particular condition or indication;
• Established: considered by clinicians to be a standard approach to a particular condition or indication and diffused into general use;
• Obsolete/outmoded/abandoned: superseded by other technologies or demonstrated to be ineffective or harmful.

LEZIONE 2

The basic HTA framework is as follows:
• Identify assessment topics;
• Specify the assessment problem or questions;
• Determine the organizational locus or responsibility for assessment;
• Retrieve available relevant evidence;
• Generate or collect new evidence;
• Appraise/interpret the quality of the evidence;
• Integrate/synthesize the evidence;
• Formulate findings and recommendations;
• Disseminate findings and recommendations;
• Monitor impact.

HTA uses three types of measures:
1. Structure: administrative, organizational and physical facilities for health care delivery;
2. Process: content or nature of the health care delivered;
3. Outcome: health status and well-being of patients, especially as affected by health care.

Data can be divided into five categories:
1. Dichotomous data, where each individual's outcome is one of only two possible categorical responses;
2. Continuous data, where each individual's outcome is a measurement of a numerical quantity;
3. Ordinal data, where the outcome is one of several ordered categories, or is generated by scoring and summing categorical responses;
4. Counts and rates, calculated from counting the number of events that each individual experiences;
5. Time-to-event data, which analyse the time until an event occurs, but where not all individuals in the study experience the event.

By effect measures, we refer to statistical constructs that compare outcome data between two intervention groups. Dichotomous outcome data arise when the outcome for every participant is one of two possibilities, for example dead or alive, or clinical improvement or no clinical improvement. The most commonly encountered effect measures used in clinical trials with dichotomous data are:
1. The risk ratio (RR);
2. The odds ratio (OR);
3. The risk difference (RD, also called the absolute risk reduction);
4. The number needed to treat for an additional beneficial or harmful outcome (NNT).

Risk is the concept most familiar to patients and health professionals. Risk describes the probability with which a health outcome (usually an adverse event) will occur.
In research, risk is commonly expressed as a decimal number between 0 and 1, although it is occasionally converted into a percentage.

$$\text{Absolute risk (AR)} = \frac{\text{number of events}}{\text{number of events} + \text{number of non-events}}$$

Odds is often known as the ratio of money that may be won versus the amount of money bet. In statistics, the odds of an event is the ratio of the probability that the event will occur to the probability that the event will not occur. In simpler terms, the odds of an event can be calculated as the number of events divided by the number of non-events. So the odds of having the disease is the ratio of the probability that the disease will occur to the probability that the disease will not occur; equivalently, the odds of having the disease can be calculated as the number of people with the disease divided by the number of people without the disease. It is commonly expressed as a ratio of two integers. Odds can be converted to risks, and risks to odds, using the formulae:

$$\text{risk} = \frac{\text{odds}}{1 + \text{odds}}, \qquad \text{odds} = \frac{\text{risk}}{1 - \text{risk}}$$

The absolute risk reduction (ARR, or risk difference) and the relative risk reduction (RRR, the ARR divided by the risk in the control group) are measures of treatment effect that compare the probability of an outcome in a control group with that in a treatment group. The number needed to treat (NNT) is a measure of treatment effect that gives the number of patients who need to be treated to prevent one additional outcome event. It is the inverse of the absolute risk reduction.
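As a quick illustration of these effect measures, here is a minimal Python sketch (the counts and helper names are hypothetical, not from the course material) that computes RR, OR, RD and NNT from a 2x2 summary of a trial and converts between odds and risk:

```python
# Minimal sketch: effect measures for dichotomous outcomes from a 2x2 table.
# Counts below are illustrative, not from any real trial.
events_treat, total_treat = 15, 100      # events / participants in the intervention group
events_ctrl, total_ctrl = 30, 100        # events / participants in the control group

risk_treat = events_treat / total_treat
risk_ctrl = events_ctrl / total_ctrl

risk_ratio = risk_treat / risk_ctrl                      # RR
odds_treat = events_treat / (total_treat - events_treat)
odds_ctrl = events_ctrl / (total_ctrl - events_ctrl)
odds_ratio = odds_treat / odds_ctrl                      # OR
risk_difference = risk_treat - risk_ctrl                 # RD (its magnitude is the ARR)
nnt = 1 / abs(risk_difference)                           # number needed to treat

# Converting between odds and risk (hypothetical helper names)
def odds_to_risk(odds: float) -> float:
    return odds / (1 + odds)

def risk_to_odds(risk: float) -> float:
    return risk / (1 - risk)

print(f"RR={risk_ratio:.2f}  OR={odds_ratio:.2f}  RD={risk_difference:.2f}  NNT={nnt:.1f}")
```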
A biomarker is an objectively measured variable or trait that is used as an indicator of a normal biological process, a disease state, or the effect of a treatment. An intermediate endpoint is a non-ultimate endpoint that may be associated with disease status or progression toward an ultimate endpoint such as mortality or morbidity. A surrogate endpoint is a measure that is used as a substitute for a clinical endpoint of interest, such as morbidity or mortality. Surrogate endpoints are used in clinical trials when it is impractical to measure the primary endpoint during the course of the trial, such as when observation of the clinical endpoint would require years of follow-up. A surrogate endpoint is assumed, based on scientific evidence, to be a valid and reliable predictor of a clinical endpoint of interest. As such, changes in a surrogate endpoint should be highly correlated with changes in the clinical endpoint.

Continuous data can take any value in a specified range. The mean difference is a standard statistic that measures the absolute difference between the mean values in the two groups of a clinical trial. The standardized mean difference (SMD) expresses the size of the intervention effect in each study relative to the variability observed in that study.

Quality of life (QoL) measures or indices are increasingly used along with more traditional outcome measures to assess efficiency and effectiveness, providing a more complete picture of the ways in which health care affects patients. QoL measures capture dimensions such as physical function, social function, cognitive function, anxiety, bodily pain, sleep/rest, energy/fatigue and general health perception. These measures may be generic or disease-specific. They may provide a single aggregate score or yield a set of scores, each for a particular dimension.

HRQL measures are divided into two categories:
• Primary measures, such as physical functioning, social functioning, psychological functioning, global quality of life, perceptions of health status;
• Additional measures, such as sleep disturbance, neuropsychological functioning, sexual functioning, work-related impacts.

The category of measures known as health-adjusted life years (HALYs) recognizes that changes in an individual's health status or in the burden of population health should reflect not only the dimension of life expectancy but also a dimension of QoL or functional status. The QALY is a unit of health care outcome that combines gains in length of life with quality of life. QALYs are usually used to represent years of life subsequent to a health care intervention that are weighted or adjusted for the quality of life experienced by the patient during those years. The QoL scale is typically standardized to a range from 0.0 (death) to 1.0 (perfect health); a scale may allow ratings below 0.0 for states of disability and distress that some patients consider to be worse than death. QALYs are used primarily to adjust a person's life expectancy by the levels of health-related quality of life that the person is predicted to experience during the remainder of life or some interval of it.

DALYs are primarily used to measure population disease burden; they are a measure of something "lost" rather than something "gained". The disability weights represent the level of loss of functioning caused by mental or physical disability due to disease or injury. The burden of disability in calculating DALYs depends on one's age: DALYs incorporate an age-weighting function that assigns different weights to life years lived at different ages. Debated issues include whether the QALY captures the full range of health benefits, whether the QALY accounts for social concerns about equity, whether the QALY is the most appropriate generic preference-based measure of utility, whether a QALY is the same regardless of who experiences it, and what the appropriate perspective is for valuing health states: that of patients with the particular disease or that of the general public.

Screening is conducted in asymptomatic patients; diagnosis is conducted in symptomatic patients. Whether a particular test is used for screening or for diagnosis can have a great effect on the probability that the test result truly indicates whether or not a patient has a given disease or other health condition. During a screening test, we can have four different types of outcome:
• True positive: the test detects the marker when the disease is present;
• True negative: the test does not detect the marker when the disease is absent;
• False positive: the test detects the marker when the disease is absent;
• False negative: the test does not detect the marker when the disease is present.

$$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$

$$\text{Accuracy} = \text{Sensitivity} \times \text{Prevalence} + \text{Specificity} \times (1 - \text{Prevalence})$$
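A minimal sketch (hypothetical counts, not from the notes) showing how these quantities are computed from a confusion matrix, including the two equivalent expressions for accuracy:

```python
# Minimal sketch: screening-test metrics from hypothetical confusion-matrix counts.
TP, FP, TN, FN = 90, 40, 860, 10   # hypothetical counts from a screening programme

sensitivity = TP / (TP + FN)                      # true positives among the diseased
specificity = TN / (TN + FP)                      # true negatives among the healthy
prevalence  = (TP + FN) / (TP + FP + TN + FN)     # diseased fraction of those tested
accuracy    = (TP + TN) / (TP + FP + TN + FN)

# Equivalent expression for accuracy, as in the second formula above:
accuracy_alt = sensitivity * prevalence + specificity * (1 - prevalence)

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  "
      f"prevalence={prevalence:.2f}  accuracy={accuracy:.3f} (= {accuracy_alt:.3f})")
```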
The biomarker for a certain disease or condition is typically defined by a certain cutoff level of one or more variables. Clinically useful cutoff points may vary among different population subgroups. A cutoff point that is set to detect more true positives will also yield more false positives; a cutoff point that is set to detect more true negatives will also yield more false negatives. The selection of a cutoff point should therefore consider the acceptable risks of false positives versus false negatives. Given the different purposes of screening and diagnosis, and the associated penalties of false positives and false negatives, cutoff points may be set differently for screening and for diagnosis of the same disease.

A single cut-point of a diagnostic test defines a single point in ROC space; the different possible cut-points of a diagnostic test determine a curve in ROC space, called the ROC curve.

$$TPR = \frac{TP}{TP + FN}, \qquad FPR = \frac{FP}{FP + TN}$$

The receiver operating characteristic (ROC) curve plots the relationship between the true positive ratio (sensitivity) and the false positive ratio (1 - specificity) for all cutoff points of a disease or condition marker. The area under the ROC curve (AUC) provides a way to measure the accuracy of a diagnostic test: the larger the area, the more accurate the diagnostic test. The AUC can be written as

$$AUC = \int_0^1 ROC(t)\,dt$$

where t = 1 - specificity and ROC(t) is the sensitivity. For a perfect test, the area under the curve would be 1.0. ROC curves help to demonstrate how raising or lowering the cutoff point selected for defining a positive test result affects the tradeoff between correctly identifying people with a disease and incorrectly labeling as positive people who do not have the disease.

The evaluation of a diagnostic test in relation to health outcomes can be considered at the following levels:
• Technical capacity: does the technology perform reliably and deliver accurate information?
• Diagnostic accuracy: does the technology contribute to making an accurate diagnosis?
• Diagnostic impact: do the diagnostic results influence the use of other diagnostic technologies?
• Therapeutic impact: do the diagnostic findings influence the selection and delivery of treatment?
• Patient outcome: does use of the diagnostic technology contribute to improved health of the patient?
• Cost-effectiveness: does use of the diagnostic technology improve the cost-effectiveness of health care compared to alternative interventions?

LEZIONE 3

Primary data methods involve the collection of original data, ranging from more scientifically rigorous approaches for determining the causal effect of health technologies, such as randomized controlled trials (RCTs), to less rigorous ones, such as case series. Integrative methods involve combining data or information from existing sources, including primary data studies; these range from quantitative, structured approaches such as meta-analyses or systematic literature reviews to informal, unstructured literature reviews.

A clinical trial is a prospective study comparing the effects and value of one or more interventions against a control in human beings. Each participant is followed from a well-defined point in time, which becomes time zero or baseline for that person in the study. There are three types of trial:
1. Trials of drugs, which may involve healthy subjects and patients with specific health conditions, testing efficacy/effectiveness and testing safety;
2. Trials of devices;
3. Trials of procedures.

A trial can also be classified according to:
1. How the researchers behave: observational study or interventional study;
2. Trial purpose: prevention, screening, diagnostic, treatment, quality of life and compassionate use trials.

A clinical trial is defined as a research study in which one or more human subjects are prospectively assigned to one or more interventions to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.
A question in a clinical trial can be:
• A primary question: testing the hypothesis that the intervention has a particular outcome which, on average, will differ from the outcome in a control group;
• A secondary question regarding benefit;
• Questions regarding harm.

Primary methods include clinical trials, epidemiological or observational studies, and other means of collecting original data. Primary methods also include many types of technical study involving laboratory or "bench" testing of technical attributes of health technologies, such as technical safety, reliability, ease of maintenance, ergonomic factors and biocompatibility. Six general principles rank the strength of primary methods:
1. Prospective studies are superior to retrospective ones;
2. Controlled studies are superior to uncontrolled ones;
3. Contemporaneous controls are superior to historical ones;
4. Randomized studies are superior to non-randomized ones;
5. Large studies are superior to small ones;
6. Blinded studies are superior to unblinded ones.

Did the investigator assign the exposures?
• YES -> Experimental study: used to test a medical device for safety and effect. RCTs are the best way to produce evidence; this is true for drugs, but they might not be the best option for a medical device to obtain the CE mark;
• NO -> Observational study: used to formulate or consolidate hypotheses.

Prospective studies are planned and implemented by investigators using real-time data collection. In retrospective studies, investigators collect samples of data from past interventions and outcomes involving one or more patient groups. Patients' interventions and outcomes have already transpired and been recorded, raising opportunities for intentional or unintentional selection bias on the part of investigators. Retrospective studies are far less expensive and can include large volumes of data over extended time periods.

A cohort study can be:
• Prospective: a group of people who do not have the outcome of interest are observed over time;
• Retrospective: uses data already collected for other purposes.

Cohort studies are used to study incidence, causes and prognosis. Because they measure events in chronological order, they can be used to distinguish between cause and effect. Cross-sectional studies are used to determine prevalence. They are relatively quick and easy but do not permit distinction between cause and effect. At one point in time the subjects are assessed to determine whether they were exposed to the relevant agent and whether they have the outcome of interest. Some of the subjects will not have been exposed nor have the outcome of interest.

Case-control studies compare groups retrospectively. People with the outcome of interest are matched with a control group who do not have it. Retrospectively, the researcher determines which individuals were exposed to the agent or treatment, or the prevalence of a variable in each of the study groups. Case-control studies seek to identify possible predictors of outcome and are useful for studying rare diseases or outcomes. They are often used to generate hypotheses that can then be studied via prospective cohort or other studies.

Confounding occurs when any factor that is associated with an intervention has an impact on an outcome that is independent of the impact of the intervention. Control groups enable the impact of an intervention of interest on patient outcomes to be isolated from the impact of any extraneous factors. A confounding factor differs between the intervention and control groups and has an impact on the treatment effect that is independent of the intervention of interest.
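As an informal illustration of the point made next about random allocation, the following sketch (entirely simulated, hypothetical data) shows how a binary confounder that is unevenly distributed under self-selection tends to be balanced under randomization:

```python
# Illustrative sketch (hypothetical data): random allocation tends to balance a
# confounder across groups, whereas self-selection does not.
import random

random.seed(0)
n = 2000
# Each patient has a binary confounder (e.g. "high baseline risk"), present in 40% of patients.
patients = [{"high_risk": random.random() < 0.4} for _ in range(n)]

# Self-selection: high-risk patients are more likely to end up in the treatment group.
self_selected_treat = [p for p in patients
                       if random.random() < (0.7 if p["high_risk"] else 0.3)]

# Random allocation: a coin flip, independent of the confounder.
randomized_treat = [p for p in patients if random.random() < 0.5]

def share_high_risk(group):
    return sum(p["high_risk"] for p in group) / len(group)

print("overall high-risk share:       ", round(share_high_risk(patients), 2))
print("treated (self-selected) share: ", round(share_high_risk(self_selected_treat), 2))
print("treated (randomized) share:    ", round(share_high_risk(randomized_treat), 2))
```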
Random allocation of patients diminishes the impact of any known or unrecognized confounding factors by tending to distribute those factors evenly across the groups being compared. The methods for generating primary data are:
• Large randomized controlled trial;
• Small RCT;
• Non-randomized trial with contemporaneous controls;
• Non-randomized trial with historical controls;
• Cohort study;
• Case-control study;
• Cross-sectional study;
• Surveillance;
• Series of consecutive cases;
• Single case report.

Currently, some phase 2 and most phase 3 drug trials are designed as:
• Randomized: each study subject is randomly assigned to receive either the study treatment or a placebo;
• Blind: the subjects involved in the study do not know which treatment they receive. If the study is double-blind, the researchers also do not know which treatment a subject receives; the intent is to prevent researchers from treating the two groups differently. A form of double-blind study called a "double-dummy" design allows additional insurance against bias: in this kind of study, all patients are given both placebo and active doses in alternating periods;
• Placebo-controlled: the use of a placebo allows the researchers to isolate the effect of the study treatment from the placebo effect.

Random error is a source of non-systematic deviation of an observed treatment effect or other outcome from the true one. Random error results from chance variation in the sample of data collected in the study. The main approach to reducing random error is to establish sample sizes large enough to detect the true treatment effect at acceptable levels of statistical significance.

The starting point of an unbiased intervention study is the use of a mechanism that ensures that the same sorts of participants receive each intervention. Why does alternation not guarantee an unbiased study? Because it does not conceal the allocation sequence: concealment is the use of a mechanism to prevent foreknowledge of the next assignment. Randomization protects against two kinds of bias:
• Accidental bias: it makes the groups comparable with respect to known and unknown risk factors;
• Allocation bias: it removes investigator bias in the allocation of participants.

Randomization can be simple, blocked or stratified, to control and balance the influence of covariates. For stratified randomization, each risk factor is divided into discrete categories and randomization is performed within each stratum. One can also keep a running count of the total number of patients on each treatment for each level of each stratification factor and, for a new patient, consider only the counts corresponding to that patient's stratification levels. Two possible criteria for assigning the new patient are:
• Count only the direction (sign) of the difference in each category: if treatment A is "ahead" in two categories out of three, assign the patient to treatment B;
• Add the differences over all categories: since treatment A is "ahead" overall, assign the patient to treatment B.
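A minimal sketch of this kind of covariate-balancing assignment (the running counts, factor names and the helper `assign` are hypothetical; the notes do not name the procedure, which is commonly known as minimization):

```python
# Sketch of minimization-style assignment under hypothetical running counts.
# counts[factor][level][treatment] = number of patients already assigned.
counts = {
    "sex":   {"F": {"A": 10, "B": 8},  "M": {"A": 7,  "B": 9}},
    "stage": {"I": {"A": 6,  "B": 5},  "II": {"A": 11, "B": 12}},
    "site":  {"1": {"A": 9,  "B": 7},  "2": {"A": 8,  "B": 10}},
}

def assign(new_patient_levels, criterion="sign"):
    """Assign the treatment that is currently 'behind' for this patient's levels
    (ties default to A in this sketch)."""
    diffs = [counts[f][lvl]["A"] - counts[f][lvl]["B"]
             for f, lvl in new_patient_levels.items()]
    if criterion == "sign":
        score = sum((d > 0) - (d < 0) for d in diffs)  # count directions only
    else:                                              # "total": add the differences
        score = sum(diffs)
    return "B" if score > 0 else "A"   # A is 'ahead' -> give the new patient B

new_patient = {"sex": "F", "stage": "II", "site": "1"}
print(assign(new_patient, "sign"), assign(new_patient, "total"))
```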
Blinding can be: unblinded; single-blind (active placebo; asking participants at the end of the trial); or double-blind (participants, personnel and outcome assessors). Issues in the appropriate use of placebo controls include:
• When an effective therapy already exists;
• Statistical requirements;
• When the new treatment is not available in a given setting;
• When and how to use the placebo effect to the patient's advantage.

Registration of clinical trials and their results helps to diminish reporting bias and publication bias, improving the quality of the evidence available for HTA. In the US, the FDA mandates that certain clinical trials of drugs, biologics and medical devices that are subject to FDA regulation, for any disease or condition, be registered with ClinicalTrials.gov.

A study protocol is a written agreement between the investigator, the participant and the scientific community. Its contents provide the background, specify the objectives, and describe the design and organization of the trial. There are three basic kinds of outcome:
• Dichotomous response variables, such as success and failure: the event rates in the intervention group and the control group are compared;
• Continuous response variables, such as blood pressure level or a change in blood pressure: the true, but unknown, mean level in the intervention group is compared with the mean level in the control group;
• Time to failure: for survival data, a hazard rate is often compared between the two study groups, or is at least used for sample size estimation.

LEZIONE 4

The validity of a study can be:
• External: is the study asking the appropriate research question? External validity concerns the generalizability or applicability of a study's findings: the extent to which findings obtained under particular circumstances can be generalized to other circumstances;
• Internal: does the study answer the research question correctly? Internal validity concerns whether the study demonstrates a causal relationship between an intervention and the outcome: the extent to which the findings accurately represent that causal relationship in the particular circumstances of the investigation.

The design characteristics of any study method affect the validity of its results. To assess the quality of primary data studies, for internal validity we consider:
• Prospective rather than retrospective design;
• Experimental/interventional rather than observational design;
• Controlled, with one or more comparison groups, rather than uncontrolled;
• Contemporaneous control groups rather than historical ones;
• Internal control groups rather than external ones;
• Allocation concealment of patients to intervention and control groups;
• Randomized assignment of patients to intervention and control groups;
• Blinding of patients, clinicians and investigators as to patient assignment to intervention and control groups;
• A sample size large enough to detect true treatment effects with statistical significance;
• Minimal patient drop-outs or loss to follow-up for the duration of the study;
• Consistency of the pre-specified study protocol and outcome measures with the reported protocol and outcome measures.

For external validity:
• Flexible entry criteria to identify a patient population that is representative of the diversity of patients likely to be offered the intervention in practice, including demographic characteristics, risk factors, disease stage and comorbidities;
• A patient population large enough to conduct meaningful subgroup analyses;
• A comparator that is the standard of care or another relevant, clinically acceptable intervention, with dosing, regimen or other forms of delivering the comparator consistent with standard care;
• Patient monitoring and efforts to maintain patient adherence comparable to those in practice;
• Selection of outcome measures relevant to those experienced by, and important to, the intended patient groups;
• Multiple study sites representative of the type/level of health care settings and of the patient and clinician experience anticipated in practice.

Randomized controlled trials are designed to maximize internal validity and are generally regarded as the "gold standard" study design for demonstrating the causal impact of a technology on health care outcomes.
However, some attributes that strengthen the internal validity of RCTs tend to diminish their external validity.

Bias is a tendency of an estimate to deviate in one direction from the true value; it refers to any systematic deviation of an observation from the true nature of an event. In clinical trials, bias may arise from any factor that systematically distorts the observed magnitude of an outcome relative to its true magnitude. The main types are:
• Selection bias (sequence generation, allocation concealment);
• Performance bias (blinding of study participants and personnel);
• Detection bias (blinding of outcome assessors);
• Attrition bias (incomplete outcome data: exclusion/attrition);
• Reporting bias.

In the risk-of-bias table:
• Within each entry, the first part of the tool describes what was reported to have happened in the study, in sufficient detail to support a judgement about the risk of bias;
• The second part of the tool assigns a judgement relating to the risk of bias for that entry: 'low risk', 'high risk' or 'unclear risk' of bias.

For dichotomous outcomes, the higher the ratio of participants with missing data to participants with events, the greater the potential for bias. For continuous outcomes, the potential impact of missing data increases with the proportion of participants with missing data.

LEZIONE 5

On log scales, we express intervention effects with the OR (lowest value = 0, no intervention effect = 1, highest value = infinity). For each study i, with counts a_i, b_i (events and non-events in group 1) and c_i, d_i (events and non-events in group 2), and group sizes n_{1i} = a_i + b_i and n_{2i} = c_i + d_i, the odds ratio is

$$OR_i = \frac{a_i d_i}{b_i c_i}$$

with the standard error of the log odds ratio being

$$SE\{\ln(OR_i)\} = \sqrt{\frac{1}{a_i} + \frac{1}{b_i} + \frac{1}{c_i} + \frac{1}{d_i}}$$

The RR for each study is given by

$$RR_i = \frac{a_i / n_{1i}}{c_i / n_{2i}}$$

with the standard error of the log risk ratio being

$$SE\{\ln(RR_i)\} = \sqrt{\frac{1}{a_i} + \frac{1}{c_i} - \frac{1}{n_{1i}} - \frac{1}{n_{2i}}}$$

The RD for each study is given by

$$RD_i = \frac{a_i}{n_{1i}} - \frac{c_i}{n_{2i}}$$

with standard error

$$SE\{RD_i\} = \sqrt{\frac{a_i b_i}{n_{1i}^3} + \frac{c_i d_i}{n_{2i}^3}}$$

For continuous outcomes, the individual study estimate is the difference in means,

$$MD_i = m_{1i} - m_{2i}$$

with the pooled standard deviation across the two groups

$$s_i = \sqrt{\frac{(n_{1i}-1)s_{1i}^2 + (n_{2i}-1)s_{2i}^2}{n_{1i} + n_{2i} - 2}}$$

In the general meta-analysis formula the intervention effect estimate is denoted by $\hat{\theta}_i$ (the study's log odds ratio, mean difference, standardized mean difference, and so on). The individual effect sizes are weighted by the reciprocal of their variance,

$$w_i = \frac{1}{SE\{\hat{\theta}_i\}^2}$$

giving the pooled estimate $\hat{\theta} = \sum_i w_i \hat{\theta}_i / \sum_i w_i$ and the heterogeneity statistic $Q = \sum_i w_i (\hat{\theta}_i - \hat{\theta})^2$. Under the null hypothesis that there are no differences in intervention effect among studies, Q follows a chi-squared distribution with k - 1 degrees of freedom (where k is the number of studies contributing to the meta-analysis). A large chi-squared value provides evidence of heterogeneity of intervention effects. I² quantifies inconsistency: it describes the percentage of variability in effect estimates that is due to heterogeneity rather than to sampling error (chance).
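A minimal sketch of a fixed-effect, inverse-variance meta-analysis of odds ratios using the formulas above, with the heterogeneity statistic (Cochran's Q) and I²; the study counts are hypothetical:

```python
# Minimal sketch: fixed-effect inverse-variance meta-analysis of odds ratios,
# with Cochran's Q and I^2. The 2x2 counts below are hypothetical.
import math

# (a, b, c, d) = events/non-events in group 1, events/non-events in group 2
studies = [(12, 88, 20, 80), (30, 170, 45, 155), (8, 42, 15, 35)]

log_ors, weights = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of the log odds ratio
    log_ors.append(log_or)
    weights.append(1 / se**2)                   # inverse-variance weight

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
q = sum(w * (y - pooled)**2 for w, y in zip(weights, log_ors))   # heterogeneity statistic
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f}, Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```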
LEZIONE 6

The objectives of survival analysis are:
• Analysis of patterns of event times: estimating the time-to-event for a group of individuals;
• Comparison of the distributions of survival times in different groups of individuals;
• Examining whether, and by how much, some factors affect the risk of an event of interest.

We are trying to estimate a curve in which the outcome can be any binary event, not just death. Survival analysis is the analysis of time-to-event data; such data describe the length of time from a time origin to an endpoint of interest. The time origin must be specified such that individuals are, as much as possible, on an equal footing. The endpoint or event of interest should be appropriately specified. There are different types of event, including:
• Relapse;
• Progression;
• Death.

The time from "response to treatment" (complete remission) to the occurrence of the event of interest is commonly called the survival time. The most important measures in cancer studies include overall survival, disease-free survival time and event-free survival time.

Survival analysis methods are important in trials where participants are entered over a period of time and have various lengths of follow-up or exposure time. These methods permit comparison of the entire survival experience during the follow-up. The graphical presentation of the total survival experience during the period of observation is called the survival curve, and the tabular presentation is called the life table. In a clinical trial, all participants are observed for T years (follow-up or exposure time). If all the participants are entered as a single cohort at the same time, the actual period of follow-up is the same for all the participants. If the entry of participants is staggered over some recruitment period, then equal periods of follow-up may occur at different calendar times for each participant.

The data are times between two distinct events, repeated among many subjects/objects/organisms. The first event is predefined, while the second is typically some specific kind of transition; the time between the two events is called the transition time. It can happen that subjects exit the study for reasons other than the event of interest (they stopped participating, or the study terminated). This is called censored data: the transition time is longer than what we have registered, but we do not know by how much. Some participants may not experience an event before the end of observation; the follow-up time or exposure time for these participants is said to be censored, that is, the investigator does not know what happened to these participants after they stopped participating in the trial.

Assume that the times-to-event for individuals in the dataset follow a continuous probability distribution (which we may or may not be able to pin down mathematically). The probability density f(t) describes the probability density of the transition times, i.e. the probability of the failure time occurring at exactly time t (out of the whole range of possible t's):

$$f(t) = \lim_{\delta t \to 0} \frac{P(t < T < t + \delta t)}{\delta t}$$

The survival curve S(t) describes the probability of not going through the transition before a given time t, i.e. the probability that the individual survives beyond time t. In more mathematical terms, it is the integral of the probability density over all times larger than t:

$$S(t) = \int_t^{\infty} f(u)\,du = 1 - F(t), \qquad \text{therefore } f(t) = -S'(t)$$

The hazard rate h(t) describes the chance of going through the transition in the next time interval, given that the subject has not done so earlier. One possible estimate of the hazard rate is the number of observed events divided by the total exposure time of the persons at risk of the event. H(t) is the cumulative hazard function. The higher the hazard, the lower the survival.

$$h(t) = \lim_{\delta t \to 0} \frac{P(t < T < t + \delta t \mid T > t)}{\delta t} = \frac{f(t)}{S(t)} = -\big(\log S(t)\big)'$$

This is subtle, but the idea is that, when you are born, you have a certain probability of dying at any age: that is the probability density. However, as you survive for a while, your probabilities keep changing: that is the conditional probability.
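The notes refer to the cumulative hazard H(t) without writing it out; the standard relations linking H(t), h(t) and S(t) (general results, not specific to these notes) are:

$$H(t) = \int_0^t h(u)\,du = -\log S(t), \qquad S(t) = e^{-H(t)} = \exp\left\{-\int_0^t h(u)\,du\right\}$$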
In parametric models we make assumptions about the pattern of the survival times, such as:
• Exponential: the underlying hazard function is constant in time, i.e. the occurrence of the events in time is totally random;
• Weibull: allows a monotonic hazard;
• Gompertz: the hazard increases monotonically, with an inherent proportional hazards assumption.

The Weibull distribution has a scale parameter λ > 0 and an additional shape parameter k > 0 that allows an increasing, constant or decreasing hazard. The bigger λ is, the more quickly the survival function falls. The parameter k defines how the hazard changes over time: if k = 1 the risk is constant, and if k > 1 the risk increases over time. The survival function of the Weibull distribution is

$$S(t) = e^{-(\lambda t)^k}$$

and the hazard function is

$$h(t) = k\lambda(\lambda t)^{k-1}$$

The Gompertz distribution is used to fit human mortality and tumour growth data. Like the Weibull distribution, the Gompertz distribution has two parameters (shape and scale), and its hazard increases or decreases monotonically with an inherent proportional hazards assumption. Its survival function is

$$S(t) = \exp\left\{\frac{\lambda}{\alpha}\left(1 - e^{\alpha t}\right)\right\}$$

the hazard function is

$$h(t) = \lambda e^{\alpha t}$$

and the cumulative hazard is

$$H(t) = -\frac{\lambda}{\alpha}\left(1 - e^{\alpha t}\right)$$

In the Kaplan-Meier estimate, the product of conditional probabilities leads to the survival estimate; the curve is used to estimate and plot the survival function. Let t1
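A minimal sketch of the Kaplan-Meier product-limit idea just described, applied to hypothetical data (times in months; event = 1, censored = 0):

```python
# Minimal Kaplan-Meier sketch on hypothetical data: times in months,
# event = 1 if the event occurred, 0 if the observation was censored.
data = [(2, 1), (3, 0), (5, 1), (5, 1), (7, 0), (9, 1), (12, 0)]

def kaplan_meier(data):
    """Return (time, S(t)) pairs: product of conditional survival probabilities."""
    data = sorted(data)
    survival = 1.0
    curve = []
    event_times = sorted({t for t, e in data if e == 1})
    for t in event_times:
        deaths = sum(1 for time, e in data if time == t and e == 1)
        at_risk = sum(1 for time, _ in data if time >= t)   # still at risk just before t
        survival *= (1 - deaths / at_risk)                  # conditional probability
        curve.append((t, survival))
    return curve

for t, s in kaplan_meier(data):
    print(f"t = {t:>2}: S(t) = {s:.3f}")
```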