Drug Safety Matters
Drug Safety Matters brings you the best stories from the world of pharmacovigilance. Through in-depth interviews with our guests, we cover new research and trends, and explore the most pressing issues in medicines safety today. Produced by Uppsala Monitoring Centre, the WHO Collaborating Centre for International Drug Monitoring.
The views and opinions expressed in the podcast are those of the hosts and guests respectively and, unless otherwise stated, do not represent the position of any institution to which they are affiliated.
Drug Safety Matters
#45 How to perform better disproportionality analyses – Michele Fusaroli & Eugene van Puijenbroek
For all its ease and speed, disproportionality analysis can be distorted by many biases, making it easy to misuse and misinterpret. Michele Fusaroli from Uppsala Monitoring Centre and Eugene van Puijenbroek from the Netherlands pharmacovigilance centre Lareb explain why we shouldn’t abuse this powerful but fragile tool.
Tune in to find out:
- Why we should never treat disproportionality signals as verdicts
- How poorly performed analyses affect scientists, regulators and patients
- How to avoid the most common sources of bias
Want to know more?
- Michele and Eugene’s paper in Drug Safety is a concrete guide to charting and sidestepping the pitfalls of disproportionality analysis.
- In another Drug Safety paper, Michele and colleagues show how directed acyclic graphs (DAGs) can help map and address biases in disproportionality analysis.
- Michele also reviewed the method’s limitations in Uppsala Reports, where he argues that ‘pharmacovigilance must move past crude disproportionality’.
- Last year, Retraction Watch covered the spike in pharmacovigilance studies in the literature and why some journals decided to ban drug safety database papers.
- Previously on Drug Safety Matters, Michele and Daniele Sartori discussed the READUS-PV guidelines for reporting disproportionality analyses.
- In 2016, the IMI PROTECT project published recommendations to improve signal detection practices, especially for quantitative methods like disproportionality analysis.
- UMC’s guidebook on signal detection in small datasets offers step-by-step advice for qualitative methods and manual case review.
Got a story to share?
We’re always looking for new topics and interesting voices. If you have an idea or any other feedback for the show, get in touch!
About UMC
Uppsala Monitoring Centre promotes safer use of medicines and vaccines for everyone everywhere. Follow us on Facebook, LinkedIn, X, and Bluesky.
Welcome & introduction
Federica Santoro: Disproportionality analysis has become the go-to method for many in pharmacovigilance. It's easy, it's fast, but it can also be distorted by many biases, making it easy to misuse and misinterpret. This isn't just a problem for scientists and regulators. It can have serious consequences for patients too. So how do we use the method more responsibly? My name is Federica Santoro and this is Drug Safety Matters, a podcast by Uppsala Monitoring Centre, where we explore current issues in pharmacovigilance and patient safety. Joining me today are Michele Fusaroli, Senior Pharmacovigilance Scientist at Uppsala Monitoring Centre, and Eugene van Puijenbroek, clinical pharmacologist and physician at the Netherlands pharmacovigilance centre Lareb. We spoke about the common pitfalls of disproportionality, how to avoid them, and why we shouldn't abuse this powerful but fragile tool. I hope you enjoy listening. It's such a pleasure to have you both back on Drug Safety Matters. Today we're talking about one of the core methods in pharmacovigilance: disproportionality analysis. A method that is widely used and can be really powerful, but it also poses problems, because it can be misused and its results misinterpreted. We're basing this very interesting discussion on a paper that you recently co-authored, 'Charting and sidestepping the pitfalls of disproportionality analysis'. And I think that title really captures the mood, because we're not gonna kill disproportionality, that's not our aim, but we do want to offer pointers on how to use it wisely. So why don't we start by reviewing first of all what disproportionality is? How does it work, Eugene, and why is it so popular?
Eugene van Puijenbroek: Well, disproportionality analysis basically asks a very simple question: is a particular drug–event pair showing up more often than we would have expected in our dataset? And the idea is not new. In fact, it has been around for decades. One of the earliest forms was carried out by the Canadian pharmacovigilance expert Ed Napke somewhere in the sixties of the previous century. He developed a cabinet with open drawers, which he called pigeonholes, in which he could store notification reports that he had colour-coded. And the more notifications with a certain colour stood out, the more interesting it became for him. And that's the moment when he decided to take a closer look. But as you can imagine, as the number of reports grew, other approaches were needed. So that's when automated statistical screening approaches were developed, like the IC, the ROR, the PRR, etcetera. They are all based on contingency tables, and they rest on the same principle: are reports on a certain drug–adverse event combination more reported than expected? And it's a very popular approach because it's fast, it's scalable, and when you've got hundreds of thousands of reports, sometimes millions of reports, you can't read everything case by case. So you'll have to rely on something else. Especially for those large databases, the approach became a necessity, because disproportionality actually lets you scan your database in a very quick way. And it also lets you prioritise which drug–ADR combinations need human clinical follow-up.
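To make the contingency-table idea concrete, here is a minimal, illustrative sketch of two of the measures Eugene mentions, the PRR and the ROR. All counts are invented for demonstration; real screening tools (and shrinkage measures like the IC) are considerably more sophisticated.

```python
# Illustrative sketch only: PRR and ROR from a 2x2 contingency table.
# All counts below are invented, not taken from any real database.

import math

def prr_ror(a, b, c, d):
    """a = reports with drug AND event, b = drug without event,
       c = event with other drugs, d = all other reports."""
    prr = (a / (a + b)) / (c / (c + d))
    ror = (a * d) / (b * c)
    # 95% CI for the ROR on the log scale (standard delta-method formula)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = (math.exp(math.log(ror) - 1.96 * se),
          math.exp(math.log(ror) + 1.96 * se))
    return prr, ror, ci

# Hypothetical counts: 20 reports pair the drug with the event,
# 980 mention the drug without it, 200 the event with other drugs,
# and 98,800 mention neither.
prr, ror, ci = prr_ror(20, 980, 200, 98800)
print(f"PRR = {prr:.2f}, ROR = {ror:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

A value well above 1 with a confidence interval excluding 1 is exactly the kind of statistical alert the rest of this conversation is about: a trigger for assessment, not a verdict.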
Easy to use – and misuse
Federica Santoro: So an excellent tool to sort through large amounts of data and really find the needle in the haystack, so to say. It's also easy to run, but does that mean that it's also easy to misuse or misread, Michele?
Michele Fusaroli: Yeah, for sure. Easy to run also means easy to misread. We can imagine it as a kind of fire alarm: when it rings, it tells us that something is happening, not necessarily that there is a fire. There are many triggers that can set off a fire alarm, and a fire is just one of them. And the same is true of disproportionality. It is our alarm, and when it rings, there's not always a reaction behind it. There can be confounding, there can be misclassification, there can be a different reporting habit, and we will discuss these problems later in the episode. But it doesn't necessarily mean that there is an adverse drug reaction. So what should we do when our alarm goes off? Well, we should try to understand why. What was the trigger in our specific case? So, exactly like in the case of Ed Napke, we have to look at what comes out of our disproportionality analysis, go through the cases, and try to understand exactly what was the reason for this pile of reports, for this disproportionate reporting.
Federica Santoro: I like the fire alarm analogy. That's a very useful way of putting it. And so what you're saying is disproportionality is only the starting point really for investigations, it's not the end point. Would you agree, Eugene?
False alarms create fear and fatigue
Eugene van Puijenbroek: Yes, exactly. I think we should keep in mind that in pharmacovigilance the word signal refers to information that suggests a new potentially causal association, or a new aspect of an association that's already known, but that only warrants further investigation. It's not a sign that something is wrong for sure. And this also applies to a signal of disproportionate reporting. It only indicates that an adverse event is reported more frequently with a particular drug compared to the other reports in your dataset. It just represents an early signal that there might be a causal association, but it doesn't constitute proof that there is a problem. So you need this additional validation, or meticulous assessment, before you can actually draw a conclusion that something is wrong. And I notice that this is sometimes very difficult to convey, because with formal epidemiological studies, calculations and statistical outcomes are the final step of the scientific process. You carefully design a study for a specific problem and then you do the statistical analysis. But in disproportionality analysis on spontaneously reported data, especially when you're screening only on the combination of an adverse event and a drug, it's always the starting point. You have identified a disproportionate number of reports between a drug and an event, but now it requires validation. And then hopefully the reason for the signal will be revealed. So the beginning is there, but don't forget that the real work still needs to be done, and not the other way around.
Federica Santoro: And we will discuss that work in detail, what needs to be done after the initial alert comes up. But let's discuss the consequences of misusing disproportionality first. On one hand, you can end up missing signals altogether, and that's obviously dangerous, ultimately, for patients, when safety information is not discovered. On the other side, though, when we flag signals that aren't there, that aren't real, what consequences does that have? What does that mean for patients, for regulators, but also for the credibility of the pharmacovigilance system as a whole? Michele, do you want to answer that?
Michele Fusaroli: Yeah, sure. I think it's not just research waste, it's not just a waste of pharmacovigilance resources. It is instead a potential cause of real harm for patients and for public health in general. To understand it better, we can use once again the metaphor of the fire alarm. Because, as we know, a fire alarm that never rings, even when there is a fire, is extremely problematic. It can be dangerous: people are not going to exit the building when there is an actual fire. But a fire alarm that always rings, even in the absence of a fire, can also be problematic, for at least three reasons. The first one is that it's gonna scare people without a real reason. In our case, disproportionality could scare people away from medication. They could take less medication, or not take the medication at all, because of fear. Also, doctors could decline to prescribe a medication because of something that came out in the literature without being sufficiently substantiated. This is a general problem related to the fact that when a drug is on the market, it already has a lot of evidence behind its benefits, while something put out in the literature concerning a disproportionality in the reporting is just a hypothesis of potential harm. But these two very different weights of evidence are not well perceived by the patient, or by the doctor who has to make decisions in the clinical setting. The second reason is that many false alerts can actually overload the system. Because, as we said before, the disproportionate reporting is just the first step. We need to do a lot of assessment of the reports that are the object of the disproportionality. We have to perform case-by-case analysis, triangulate with external evidence sources, and so on. All this work takes a lot of effort, a lot of energy, a lot of resources, and pharmacovigilance resources are not unlimited.
So the problem is that sometimes we have so many false alerts, so much noise, that we delay our detection of important safety concerns, or sometimes miss them completely. And the third reason is that too many false alerts also erode our trust in pharmacovigilance. It is like the story they told us when we were children about the boy who cried wolf. The boy cried wolf many times and the wolf didn't come, and then at some point the wolf actually came, but no one believed the child anymore. Because we become tolerant when an alarm has rung too many times without actual consequences. So, to sum up, the two main problems with too many false alarms are, on one side, that patients and doctors become scared of the medication without reason, and on the other side, that they can make us and the pharmacovigilance system deaf to actual threats that would instead require attention and management.
Should we ditch the method?
Federica Santoro: Right. So there are very real and potentially very serious consequences of this misuse. But if we stick with your clever fire alarm metaphor: when an alarm keeps going off and there is no fire, as you say, there's a bit of fatigue, people just stop listening to it. But the instinct is also to reach up to the fire alarm and just tear it out of the ceiling. So are we looking at a similar situation with disproportionality analysis? Should we just stop using it if it generates so much noise? And do we even have alternative methods?
Eugene van Puijenbroek: I see what you mean, and I totally get that instinct. And you do see people who react in that way. But a noisy alarm isn't a reason to rip it off the ceiling and throw the entire alarm system away. It is a reason to understand how it works, to fix how we use it, and to explain which steps we should take once the alarm actually goes off. So I guess here's the trade-off: if we remove the adverse event reports and disproportionality entirely, we also lose one of the best ways we have to spot the rare, unexpected and early signals. And that's the kind of harm that doesn't show up in trials and usually takes a long time to emerge in other data sources. So we need a system. Personally, I like to think of an adverse event reporting system as a particular lens on reality. It's a distorted lens, absolutely, but it shows you things that other lenses don't. And if we throw it away, we don't get a clearer picture; we get a poorer picture, or no picture at all. So the answer isn't ditch the alarm. It is calibrate it, interpret it properly, and have a good protocol for what to do when it rings. So once again, once the alarm goes off, the real work still has to be done.
A practical guide to disproportionality
Federica Santoro: Now I think we've given our listeners a great overview of the method and the main issues at hand. I'd like to dive into the specifics of your new publication now. One thing that struck me as I was reading your paper is that this issue is not that new, right? As we've said, the fact that disproportionality is misused, misinterpreted and misreported by the scientific community has been known for years. And for years, other scientists have been warning about this tendency in the literature to, on one hand, publish crude disproportionality analyses that haven't been properly validated or reviewed, and on the other hand, to overstate results, something that is often referred to in the community as spin. So we've known about the problem for a while. What does your new paper add to this body of knowledge, Michele? What is different about your contribution?
Michele Fusaroli: We are trying to propose a shift in mindset, because a lot of work up to now has been focused on a mantra: be careful, disproportionality analysis can mislead you. And this is definitely true, so we completely agree with that statement. But this mantra is also not very actionable. So what we are trying to build is a different way of thinking. When our disproportionality alarm rings, we would like pharmacovigilance professionals not to come out with the frustrated statement that it rings so often, it's not really reliable, it rang again, what are we going to do now? Instead, we are trying to promote another kind of thinking, which is posing ourselves the question: why did it ring this time? What was the trigger for our disproportionality alarm to ring this time? So, what we did in the paper is collect the different mechanisms that can result in seeing a disproportionality, in hearing our alert, in the absence of an adverse drug reaction. And we also tried to collect some practical recommendations, some practical ways of thinking and of looking into the data to spot these kinds of problems, these alternative triggers of the alarm. In general, our goal is not to criticise disproportionality analysis; it's actually to help interpret it intelligently. Not to stop, as Eugene said, at the idea that we got a disproportionality and then we reached the end of our process. No, actually, it's: we got a disproportionality. Why did we get it this time?
How to avoid common mistakes
Federica Santoro: So, your new publication is really an encouragement to think critically about the underlying reasons that set off that alarm, so that you can look at it carefully and proceed in that sense. But give us a few real examples so we understand what we're talking about more concretely. What are these pitfalls of disproportionality? And also, why do people fall into them?
Michele Fusaroli: So, pharmacovigilance is interested in a causal question. The main question that we are asking every day in our job is: does the drug cause the event? But one common situation in which we find a disproportionality in the absence of a reaction is when things are actually the other way around: it's not the drug causing the event, but the event that is the reason for taking the drug. This is also called reverse causality, and it happens for amyotrophic lateral sclerosis and edaravone, a drug that is prescribed specifically for amyotrophic lateral sclerosis. When we search for a disproportionality for this drug–event combination, we find a huge one. But it doesn't really mean there is a reaction, right? It means that something is happening, that we are seeing a lot of reports of edaravone with amyotrophic lateral sclerosis. But that is expected, even if the disproportionality analysis cannot recognise it as such, because edaravone is used to treat amyotrophic lateral sclerosis.
Federica Santoro: So you need that clinical understanding, obviously, to spot that.
Michele Fusaroli: Exactly. A second example is confounding, which happens when the exposed population is per se more susceptible to the event. For example, when we have vaccines that are preferentially administered to children and we are investigating child-specific outcomes like growth retardation. In this case, we will find a disproportionality. Why? Because we are calculating the expected count on a background that includes mainly adults, who are not susceptible at all to growth retardation. So for sure we will see a disproportionate reporting of a vaccine used in children and growth retardation, but it doesn't mean anything. When we actually restrict our background to children, the disproportionality goes away. So there is even something we can do to address these mechanisms of distortion. A third example is amplified media attention. This can result in increased reporting even when the underlying risk doesn't change. An example is what happened in the late 1990s with vaccines and autism: a peak in reporting that followed a publication we now know was fraudulent, and that resulted in a huge disproportionality, even though subsequent evidence has amply disproven any causal relation between the two. All these mechanisms are examples, not an exhaustive list, of how our fire alarm can sound for many different causes, for many different reasons. And the sound is always really similar. Our job as signal assessors is to go into these alerts and discriminate between the triggers, trying to identify which alerts actually point to an actual adverse drug reaction and which do not.
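Michele's confounding example can be illustrated with a toy calculation: pooling adults into the background inflates the PRR for a child-specific event, while restricting the background to children makes the apparent signal disappear. The numbers below are entirely invented for demonstration.

```python
# Toy illustration (invented numbers) of confounding by age:
# a crude analysis over the whole database shows a strong disproportionality,
# but restricting the background to children makes it vanish.

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical reports per stratum: (with_event, without_event).
# The event (growth retardation) occurs only in children, and the
# vaccine is given almost exclusively to children.
strata = {
    "children": {"vaccine": (10, 990), "other": (20, 1980)},
    "adults":   {"vaccine": (0, 100),  "other": (0, 50000)},
}

# Crude analysis: pool children and adults together.
a = sum(s["vaccine"][0] for s in strata.values())
b = sum(s["vaccine"][1] for s in strata.values())
c = sum(s["other"][0] for s in strata.values())
d = sum(s["other"][1] for s in strata.values())
print(f"Crude PRR: {prr(a, b, c, d):.1f}")  # large: looks like a signal

# Restricted analysis: background limited to children.
ch = strata["children"]
print(f"Children-only PRR: {prr(*ch['vaccine'], *ch['other']):.1f}")  # ~1
```

With these invented counts the crude PRR is above 20 purely because the adult background dilutes the expected count, while within the children stratum reporting is exactly proportional.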
Why journal bans won't work
Federica Santoro: That's very interesting. And thanks for giving us those concrete and easy-to-understand examples. There are many more sources of bias that you mention in the paper, so we will refer interested listeners to the original publication; we'll have a link in the show notes for you to dive deeper. Let's talk about another aspect of the problem now: how easy it is to access and analyse pharmacovigilance data these days. I'm thinking about open databases like the US Food and Drug Administration's FAERS, for example. Via one of those databases, anyone really, the average citizen, can run an analysis in a few clicks and spot alleged signals. And in fact, the literature has been swamped by those types of papers, what journals call drug safety database papers. And some journals have even started banning them altogether. Have the bans helped, Michele, and what else can we do?
Michele Fusaroli: Yeah, this is definitely a trend that we observed, and it's a completely understandable response, because there really is a huge volume of disproportionality publications of low quality coming out, not really well contextualised, not really useful to the regulator. But while this is an understandable response to the problem, it cannot be a long-term solution. Because on one side, this is a blanket ban that covers any disproportionality publication, any publication that is about adverse event reports. And as we said, adverse event reports offer an alternative lens into reality. Maybe it's not as clear as the lens that we would get using epidemiological data, for sure, but it gives us a richness that we cannot obtain with other kinds of data sources. So it is losing an opportunity, throwing away an entire evidence source. And on the other side, if papers are simply blocked from reaching the literature because of this ban, that is not going to work for long, because there are many other journals that will publish these papers. They will just stream into other journals, reach the literature, and have the adverse consequences that we discussed before. So what would be the way out? Plausibly, not banning these studies but raising the standards: for example, requiring, every time there is a disproportionality in a paper, also a case-by-case assessment, triangulation with external data, contextualisation within the existing literature, consideration of biological and pharmacological plausibility, and so on. So trying to qualify the disproportionality, not stopping at just the alert, but actually trying to understand what was the trigger of the disproportionality. Yes, open data have contributed for sure to the volume of publications that came out and that resulted in this ban.
But open data is not bad. Transparency is always good. It is like putting the lever that activates the fire alarm where everyone can pull it. It can be a problem, and it will become a problem for sure, because many people don't have the training, or don't feel the responsibility, when activating the alarm. There will be many false alerts. There will be panic, there will be noise, there will be fatigue, tolerance to the alert. So there will be problems. But the answer, as Eugene already said, is not to remove the alarm, not to remove the lever. It's actually to agree on rules for when to pull the lever, when to activate the alarm, and what we should do just after the alarm rings.
Designing effective training
Federica Santoro: I'm glad you brought up both training and guidance, because we will dive into this now. I'm really interested in hearing your thoughts on how to guide the community towards more responsible and better use of disproportionality analysis. So, Eugene, you've been quiet for a while, so I'll throw the next question at you. Not everyone has the resources, or skills for that matter, to assess signals thoroughly. And these people are probably most at risk of misusing disproportionality, but also of feeling overwhelmed by all these false alarms that are raised around them. As a former professor, would you say that more training and education is the answer?
Eugene van Puijenbroek: Of course, I agree fully with that. Training would help, but to be honest, I think there should be a very dedicated training on this problem. Disproportionality analysis is like finding raw ore. It might contain something valuable, but you can't really use the product until you've actually refined it. And I guess a common mistake is treating the statistical result like the finished product. So if we are going to develop training, it shouldn't be a training in how to press the disproportionality button and be finished. It should in fact be a training in signal assessment as such. That means how to review the actual case narratives and how to judge the timing, the plausibility, the alternative explanations mentioned in the reports. Because, as you remember, the trigger itself is based only on the number of reports on a drug and event combination. That's the only information taken into account at that stage. But it doesn't take into account the rich information in the reports themselves. So that's why we need the second step. But, like Michele said, you also need to recognise the classic distortions: confounding by indication, masking, the notoriety bias that may occur when there is a lot of attention for a certain signal, data quality issues. You might also focus your training on how to use simple checks to find these distortions: stratifying by age, sex, time, maybe country; looking for reporting spikes if you want to study notoriety bias; checking for concomitant drugs and indication if you want to know something about confounding by indication.
And last but not least, not mentioned in the reports but present in the minds of the pharmacovigilance assessors, is how to triangulate the findings of the disproportionality analysis, combined with the analysis of the cases, with other evidence like literature, pharmacology, clinical knowledge, regulatory information, etcetera. And one more important point: don't forget that both case-by-case analysis and DPA, disproportionality analysis, are in fact triggers for further investigation, for further assessment of the signals. So we need a common causal-thinking framework across both methods: the case-by-case assessment, which is rich in clinical detail, and disproportionality analysis, which is strong when you have big numbers and need to understand the patterns, the potential confounding, and the dynamics behind the reporting. Both approaches actually answer different parts of the same question, and training should teach people how to connect them in a responsible way.
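One of the simple checks Eugene describes, looking for reporting spikes over time, can be sketched as a leave-one-out outlier test on yearly report counts. The counts and the threshold rule below are invented for illustration; they are not the procedure any particular centre uses.

```python
# Illustrative sketch of a reporting-spike check (a hint of notoriety bias):
# flag years whose report count lies far above the mean of the other years.
# Yearly counts and the z-score threshold are invented for demonstration.

from statistics import mean, stdev

def flag_spikes(counts_by_year, z_threshold=3.0):
    """Return the years whose count exceeds the mean of the remaining
    years by more than z_threshold standard deviations."""
    flagged = []
    for year, n in counts_by_year.items():
        others = [v for y, v in counts_by_year.items() if y != year]
        mu, sigma = mean(others), stdev(others)
        if sigma > 0 and (n - mu) / sigma > z_threshold:
            flagged.append(year)
    return flagged

# Hypothetical yearly report counts for one drug-event pair:
# a media story in the last year might explain the sudden jump.
reports = {2016: 12, 2017: 15, 2018: 11, 2019: 14, 2020: 13, 2021: 95}
print(flag_spikes(reports))  # the spike year stands out
```

A flagged year is, once again, only a trigger: the next step is reading the reports from that period to see whether media attention, a regulatory action, or a genuine change in risk explains the jump.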
Why guidelines alone do not fix misuse
Federica Santoro: Excellent. You're giving us and our listeners plenty of ideas to design new pharmacovigilance courses. On to the guidelines and instructions aspect. Some kind of guidelines do exist. Back in 2016, a project called IMI PROTECT published a collection of good signal detection practices, and that included guidance on disproportionality analysis. And yet here we are, ten years down the line, still struggling, still seeing the same mistakes being repeated. So, Eugene, why didn't those guidelines solve the problem?
Eugene van Puijenbroek: I guess IMI PROTECT did something really important. It showed that disproportionality analysis is more complex than it looks at first sight. For instance, results can change depending on the database, depending on the time frame, and depending on the choices you make for the analysis. In its simplest form, it is just one quick comparison. It's easy to run, easy to publish maybe, but if you really want to do it well, you need judgment about bias, confounding, reporting behaviour and the database context, next to the clinical analysis I just mentioned. So, in other words, guidelines can't replace expertise, and they also can't stop misuse if the culture rewards quick signals over careful assessment and validation. So the fix isn't more rules. The fix is raising awareness about the risks, pairing the rules with training and better defaults in the tools, and, last but not least, stronger expectations that signals must be clinically validated and assessed before they're treated as conclusions.
Federica Santoro: I understand what you mean. Guidelines cannot live in a vacuum; they need to be supported by other tools so that people are aware of them and know exactly how to apply them. But we do have guidelines for how to report disproportionality, and that's the READUS-PV statement that we covered in a past episode. So is it time, then, to update the IMI PROTECT guidelines and come up with clear instructions for how to perform disproportionality analyses, Michele?
Michele Fusaroli: I would love that, but I think we are still not there. I think much work is still needed on which choices matter most in which context, because the best study design, the best analysis, is going to be different for different databases, in the presence of different confounding, different biases, and so on. Our paper is a small step in this direction. We are trying to give more practical, more pragmatic insights for researchers who have to deal with disproportionate reporting. What we do is collect and organise the mechanisms that can result in seeing a disproportionality in the absence of an adverse drug reaction. And we are trying to turn this map that we created into a practical way of thinking: of discriminating between the different triggers that can result in the alarm ringing. So, this is our work. As I said, it's just a small piece of the jigsaw. There are many other pieces, and many have already been done. IMI PROTECT is part of this huge jigsaw: it was studying, as Eugene was saying, the statistical complexity of disproportionality analysis. With READUS-PV, we worked on the transparent reporting of disproportionality. With the pitfalls paper, we are trying to show how important it is to build clinical and epidemiological thinking into our disproportionality analysis, so that we can recognise exactly why the alarm went off. And more recently, we published another paper on directed acyclic graphs that tries to link READUS-PV and the pitfalls paper by providing an alphabet to translate this epidemiological thinking into pharmacovigilance: to drive both the reporting of our causal assumptions, of our expected biases, and the design of our study. But the general motif of this jigsaw, which still has many missing pieces, is that disproportionality analysis should not just be a statistical exercise. A good disproportionality analysis is statistical and clinical and epidemiological at once.
Listener question: database size
Federica Santoro: And so putting together these two parts is what we have to do now. Data analysis depends critically on the type of data you're looking at. And in pharmacovigilance, that means that because databases are different and contain different types of information, context really matters, and what's best practice for a certain database can be very different from what's best practice for another. So on that note, our listener Marcus asks two questions. The first one, I think, Eugene, I'll direct to you: how large and diverse does a safety database need to be for disproportionality to work properly? And can you quantify this?
Eugene van Puijenbroek: I think the short answer is that it's not the number of reports that counts, but the skills one has to analyse the data. Like you said, Federica, databases differ in the number and composition of their reports. That's something the IMI PROTECT project actually taught us. So signal detection cannot be based on DPA alone; it should always be accompanied by the additional assessment steps we discussed. It's always the full package. And as long as you keep that in mind, the trigger for signal detection can be the number of reports on an association. It can be a disproportionality signal, but there is no minimum number of reports, because it should always be accompanied by this additional analysis. In our own database in the Netherlands, the signal assessment process can sometimes be triggered by disproportionality analysis, sometimes by case-by-case analysis, sometimes simply by a number of striking reports, and sometimes by a single well-documented case. And sometimes it can even be other triggers, such as a publication in the scientific literature or a regulatory decision somewhere in the world. And although we know the ins and outs of DPA at our centre, and we know the characteristics of our database inside out, a signal of disproportionate reporting, a statistical signal, is for us just a trigger for that full signal assessment package. It's never alone. So I would like to encourage listeners to check the UMC publication called 'Signal detection for PV centres with a small data set', freely available on the UMC website, which provides excellent background information and guidance on how to optimise that process: not only the DPA, but also the subsequent steps.
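Eugene's point that there is no meaningful minimum number of reports can be seen in the statistics of a common disproportionality measure, the reporting odds ratio (ROR). The sketch below uses made-up counts in a 2×2 table of spontaneous reports: the same point estimate built on very few reports comes with a far wider confidence interval, which is why a statistical signal on sparse data needs the "full package" of clinical assessment.

```python
import math

def ror_with_ci(a, b, c, d):
    """Reporting odds ratio with a 95% confidence interval.

    2x2 contingency table of spontaneous reports:
      a: reports with the drug of interest AND the event of interest
      b: reports with the drug, other events
      c: reports with other drugs AND the event
      d: reports with other drugs, other events
    Returns (ROR, lower bound, upper bound).
    """
    ror = (a / b) / (c / d)
    # Standard error of log(ROR) via the usual Woolf approximation
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Same ROR of 11, but the interval built on ten times fewer
# reports is dramatically wider (counts are illustrative only).
print(ror_with_ci(10, 90, 100, 9900))
print(ror_with_ci(1, 9, 10, 990))
```

This is only a textbook formula on invented numbers, not a reproduction of any centre's actual screening thresholds, but it makes the trade-off concrete: sparse counts do not forbid computing a ROR, they just leave it too uncertain to act on by itself.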
Listener question: method sensitivity
Federica Santoro: Thank you for reminding us of that resource, and we'll link to that too. Marcus' next question is about our practices at Uppsala Monitoring Centre, so that question is for you, Michele. He asks: since disproportionality has low sensitivity, what method does UMC use to detect sudden changes in adverse events in VigiBase, WHO's global database? Is it cluster analysis or something else?
Michele Fusaroli: Yes, today we spoke mainly of the problem of specificity, right? Of the positive predictive value, which can be quite low: many times the alarm rings without a fire actually being there. But this is the other face of the problem: disproportionality analysis does not always ring when an actual reaction is occurring. And that is exactly what Eugene was saying: we cannot rely on it alone. There are other ways to inquire into the same kind of data, into adverse event reports. There is case-by-case analysis, but we can also look for an unexpected distribution in the time to onset, a strange lag between the first exposure to the drug and the manifestation of the event. Or sometimes we can see a peak in the reporting at a specific time or in a specific region, and this can point, for example, to problems with batches of medication that may be substandard. There are also cluster analyses, like vigiGroup, that allow us to look at how different events tend to occur together in a sort of syndromic reaction. But adverse event reports are also not the only source of signals, of new hypotheses. We can generate new hypotheses from the literature, because, for example, not all published case reports reach our database, reach VigiBase, and many times when they do, they are much less granular in detail. Or we can generate new hypotheses based on biological plausibility or even preclinical data. So there are many ways to perform signal detection, and disproportionality analysis is just one of them. If we want to be sensitive, we have to use many different strategies.
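One of the complementary strategies Michele mentions, spotting a reporting peak at a specific time that might point to a substandard batch, can be illustrated with a deliberately crude sketch. The z-score rule and the monthly counts below are invented for illustration; this is not the method UMC actually applies to VigiBase.

```python
import statistics

def flag_reporting_spikes(monthly_counts, z_threshold=3.0):
    """Flag months whose report count is unusually high.

    Crude rule: a month is flagged when its count exceeds the mean
    of all months by more than z_threshold population standard
    deviations. Returns the indices of the flagged months.
    """
    mean = statistics.mean(monthly_counts)
    sd = statistics.pstdev(monthly_counts)
    if sd == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, n in enumerate(monthly_counts)
            if (n - mean) / sd > z_threshold]

# Eleven quiet months and one sharp peak at month index 6
counts = [12, 15, 11, 14, 13, 12, 80, 14, 12, 13, 15, 11]
print(flag_reporting_spikes(counts))
```

Even this toy version shows why temporal screening complements disproportionality: a batch problem raises the absolute count in a narrow window without necessarily changing which events are reported relative to other drugs.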
Concrete advice for stakeholders
Federica Santoro: Excellent. And Marcus, if you're listening, thank you for contributing to the show, and I hope your questions were answered. Well, thank you both. This was such a useful discussion. I'd like to wrap up on a positive note and talk about the way forward before I let you go. The problems we discussed with disproportionality analysis have important consequences for many different stakeholders, from scientists to national pharmacovigilance centre staff, journals, and so on. So if you could give one concrete piece of advice to each of these key players in the field, what would it be? What would you ask them to do to make disproportionality less noisy and more useful? Michele, why don't we start with you? You're a researcher, so what's your take-home message for researchers?
Michele Fusaroli: So, researchers can actually help in mapping the mechanisms, the different triggers of the disproportionality alert, and can help scaffold the causal thinking that allows us to start from data we know have many limitations and still try to answer causal questions: is this event a reaction to the drug? But also, at which dose does it happen, or in which specific subpopulation of the exposed? These are all insights that will be useful to regulators when they have to take specific action to reduce the harm due to adverse drug reactions. As for authors of publications, I would say that we have to try to raise the signal and lower the noise. We should try to publish only disproportionality analyses that are qualified, that are contextualised in the external literature, and that have already been validated, in the sense that we have already tried to understand whether triggers other than a reaction could have been responsible for the disproportionality. And for journals, I would say that a ban is an understandable response to the huge volume of publications, but it's not necessarily the best way forward, because throwing away the lens means losing an opportunity for a specific perspective on drug-related safety issues. So instead of banning the lens, we should fix the standards and make sure that only qualified disproportionality analyses reach the literature.
Federica Santoro: Thank you. Eugene, what about providers of pharmacovigilance tools and services that use disproportionality? What's your recommendation for them?
Eugene van Puijenbroek: I think what's really important is to build clinical thinking into the workflow of the programs: better defaults, better diagnostics for potential pitfalls, interfaces that nudge users towards stratification, time-trend checks, checks for confounding, and of course the case review itself, and not a product that simply says, well, here's the signal. And maybe there will be a role for artificial intelligence in the future, to summarise cases and to prioritise, but in my opinion it will never be able to replace human judgment. And to national centres and pharmacovigilance teams, I would say: match the methods to your data and build mixed teams. Disproportionality is strongest as a first filter on a larger data set, but in smaller settings case review may be more informative. Either way, I guess the best results come from multidisciplinary teams: a clinical view, an epidemiological view, and data science. And from treating disproportionality as a prioritisation step that feeds into validation, not as an end point. So in the end, I think the goal isn't fewer signals; the goal should be better signals and, of course, better decisions.
Federica Santoro: And I couldn't have wrapped it up better. Thank you both very much for this in-depth discussion of disproportionality. I hope our listeners feel like they now know what to look at and what to address when they're analyzing their data. Thank you very much for your time once again.
Michele Fusaroli: Thank you so much, Federica. It was really a pleasure to be here. Till next time.
Eugene van Puijenbroek: Yes, likewise. I really enjoyed this program, and I think we were able to bring forward an important message. I hope the listeners have gained some more insight into the ins and outs of disproportionality analysis.
Federica Santoro: That's all for now. But if you want to know more about disproportionality analysis, we've collected some useful links in the show notes. If you enjoyed this podcast, don't forget to subscribe in your favourite player. And if you have an idea or any other feedback for the show, you can now send us a text by clicking the top link in the show notes. It's anonymous, and we'd love to hear from you. We'll be back soon with another episode, but until then, visit our new site Uppsala Reports at uppsalareports.org for more pharmacovigilance stories. And if you want to learn more about Uppsala Monitoring Centre and how we promote the safer use of medicines and vaccines globally, visit our website or follow us on social media. You'll find us on Facebook, LinkedIn, X and Bluesky. For Drug Safety Matters, I'm Federica Santoro. I'd like to thank our listener Marcus for submitting questions, my colleagues Matthew Barwick and Alexandra Coutinho for production support, and of course you for tuning in. Till next time.