Drug Safety Matters

#31 A guide to reporting disproportionality analyses – Michele Fusaroli and Daniele Sartori

Uppsala Monitoring Centre

Disproportionality analyses are a mainstay of pharmacovigilance research, but without clear guidelines, they often lead to confusion and misinterpretation. Enter the READUS-PV statement: the first-ever guide for reporting disproportionality analyses that are replicable, reliable, and reproducible.  

Tune in to find out: 

  • The history of reporting guidelines in pharmacovigilance and why the READUS-PV guidelines were created 
  • Why there has been a spike in the publication of disproportionality analyses in recent years and what this means for their reliability 
  • What it means to publish “good” pharmacovigilance science  


Want to know more? 

Join the conversation on social media
Follow us on X, LinkedIn, or Facebook and share your thoughts about the show with the hashtag #DrugSafetyMatters.

Got a story to share?
We’re always looking for new content and interesting people to interview. If you have a great idea for a show, get in touch!

About UMC
Read more about Uppsala Monitoring Centre and how we work to advance medicines safety.

Alexandra Coutinho:

Disproportionality analyses are the so-called bread and butter of pharmacovigilance research, but there is a lack of specific guidelines on how to report them. As a result, disproportionality analysis reports are often ambiguous, hard to interpret and can lead to incorrect conclusions when not put into the correct context. Thus the READUS-PV statement was created, the first guide to reporting disproportionality analyses.

Alexandra Coutinho:

My name is Alexandra Coutinho and this is Drug Safety Matters, a podcast by Uppsala Monitoring Centre, where we explore current issues in pharmacovigilance and patient safety. Joining me today are Daniele Sartori, a pharmacovigilance scientist at UMC and doctoral researcher at the University of Oxford, and Michele Fusaroli, a PhD student at the pharmacology unit of the Department of Medical and Surgical Sciences at the University of Bologna. While our discussion focused on the READUS-PV guidelines, it led to some pretty interesting reflections on the efficacy of reporting guidelines, publishing good science and the importance of transparency, replicability and reproducibility in pharmacovigilance. I hope you enjoy listening. Hi, Daniele and Michele, and welcome to Drug Safety Matters. I'm really glad that we were able to get both of you in here to speak about the project that you've been co-authoring.

Alexandra Coutinho:

So how are you both?

Michele Fusaroli:

Fine, thank you.

Daniele Sartori:

I think I'm okay. I could use an extra coffee maybe, so overall I'm fine.

Alexandra Coutinho:

Great, good to hear. So you're both here today to talk about a recent paper you have co-authored, along with other researchers, on guidelines for reporting disproportionality analyses. For people like me who may not be familiar with this term, what is disproportionality analysis and what is its role in pharmacovigilance?

Michele Fusaroli:

Well, pharmacovigilance databases collect individual case reports of suspected adverse drug reactions from all over the world, and we can use these reports to identify unexpected safety issues. The gold standard would be to do a case-by-case analysis, but as these databases get bigger and bigger, relying only on case-by-case analysis becomes practically unfeasible, and therefore we have to find other methods. And data mining enters the stage here, because when we have so many data, what we want to do is to perform some statistical analysis, such as, for example, disproportionality analysis, to identify those drug-event combinations that occur more often than expected. And that is how disproportionality analysis helps us deal with big, big databases.
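The observed-versus-expected comparison Michele describes is often operationalised with measures such as the reporting odds ratio (ROR). As a minimal sketch, using entirely hypothetical counts rather than real pharmacovigilance data, it could look like this:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio (ROR) from a 2x2 table of spontaneous reports:
    a: reports mentioning both the drug and the event
    b: reports with the drug but not the event
    c: reports with the event but not the drug
    d: reports with neither
    Returns the ROR with an approximate 95% confidence interval
    (Woolf method on the log scale)."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts: the drug-event pair is co-reported roughly
# ten times more often than independence would predict.
ror, lower, upper = reporting_odds_ratio(a=20, b=980, c=200, d=98800)
print(f"ROR = {ror:.1f} (95% CI {lower:.1f}-{upper:.1f})")
```

A ROR well above 1, with a confidence interval excluding 1, flags a drug-event combination reported more often than expected; as the guests stress later in the episode, that is a statistical signal for further assessment, not evidence of causation.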

Daniele Sartori:

Yeah, and it might be helpful to know that disproportionality as a method is not necessarily new. Disproportionality analyses have been around since the late 90s at least, but the underlying concept had been in place since at least the mid-60s, with the work of Finney or Patwary, who were trying to apply observed-versus-expected analyses in the WHO database at the time.

Alexandra Coutinho:

So, coming to your project in particular, the READUS-PV project. These are guidelines on how to report disproportionality analyses, right? These guidelines were created to answer a long-standing problem with reporting in pharmacovigilance science and research. Can you tell us a little bit more about the problem with reporting in healthcare and its history, Daniele?

Daniele Sartori:

Yes, around 1978, Freiman and colleagues first surveyed randomized controlled trials with negative results. These are randomized controlled trials that suggest there is no difference between the intervention and the placebo, and they found improper reporting of important aspects of these trials. For example, the allocation concealment strategy was not well reported, and there were inaccuracies in how the sample sizes were calculated and estimated. And they suggested that had these trials been better conducted, the results could have improved as well, which means that in some cases the finding of no difference between intervention and placebo might not have held had the trial been well reported and well conducted. Fast forward to the 90s: the SORT group, the Standards of Reporting Trials, compiled a checklist for properly reporting randomized controlled trials, and in parallel the Asilomar Working Group did the same, but also recommended that their checklist should be part of the submission process for published research. So effectively, journals should have asked authors who wished to submit publications of randomized controlled trials to follow this checklist.

Daniele Sartori:

Eventually, the SORT group and the Asilomar Working Group joined forces and created the CONSORT statement, which is quite well known today for randomized controlled trials. In the wake of the CONSORT statement, the UK National Health Service started funding the EQUATOR Network, which is nowadays a very prominent organization when it comes to reporting checklists. Essentially, they collect all the checklists that are around, but they also facilitate their spread among journal editors and peer reviewers. They also try to compile what the difficulties in putting together checklists might be, and nowadays you can go on the EQUATOR website, verify that there is a reporting checklist for the specific study design you wish to use, and then apply it when you submit a publication. So now we're in the early 2000s. When it came out, the CONSORT statement had only one bullet point for harms, which essentially said you should report, as part of your clinical trial, the adverse drug reactions that took place in the placebo and intervention arms. This was perceived as insufficient, and so in 2004 the checklist was expanded to accommodate another 10 points on adverse drug reactions and harms. Much later, in 2022, the CONSORT harms statement was further rephrased and updated. So nowadays you could say there is a solid basis in checklists for reporting randomized controlled trials, specific to harms and adverse drug reactions. This is to say that, you know, randomized controlled trials are part of pharmacovigilance.

Daniele Sartori:

Pharmacovigilance starts at very early stages, much earlier than just post-marketing or case reports. But nowadays we've got guidelines for many different study designs. It's not just randomized controlled trials; it's also systematic reviews and scoping reviews, with the PRISMA checklist, and you have guidelines for their protocols, PRISMA-P, for protocols of scoping and systematic reviews. You have checklists for case reports, the CARE guidelines, or surgical case reports, the SCARE guidelines. And for designs a bit closer to pharmacovigilance in the sense of post-marketing pharmacovigilance, you have guidelines for pharmacoepidemiological studies, the RECORD-PE guidelines. I think this whole very slow progression naturally evolved into the READUS checklist for reporting disproportionality analyses, which in its very early stages was effectively based on the RECORD-PE guidelines for pharmacoepidemiology studies.

Alexandra Coutinho:

So maybe let's delve a little bit deeper, then, into the READUS-PV project. Michele, what is this project and what problem has it been trying to solve with disproportionality reporting?

Michele Fusaroli:

Yes, the READUS-PV project arose from a pervasive acknowledgement of the problems of reporting in disproportionality analysis, which are in fact the same problems as in scientific research in general, but particularly accentuated when dealing with disproportionality analysis. These problems can be clustered into three domains: completeness and transparency, justification of the methodological choices, and correct interpretation of the results. Completeness and transparency are a necessary step to allow for reproducibility, replicability, assessment and interpretation of a study, and therefore a crucial part of the reporting. It should involve transparently reporting the preprocessing of the data, the analysis performed and the interpretation of the results, and all the results have to be shown in the report. There may be many motivations for not providing a complete report of a study. Some of these are, for example, the perception that the subjectivity in data preprocessing does not influence the results; the idea, which may even be true but is still problematic, that the reader is not interested in all the details of the study and just wants a take-home message at the end of the article; the fear of showing the limitations of your study and thereby being exposed to the judgment of other researchers; and finally, the jealousy that someone else may copy your methods and data and benefit from them. But these motivations are not justified, and completeness and transparency should be a key point of the reporting.

Michele Fusaroli:

And then we have justification of methodological choices. In fact, a study by Currie et al. showed that you can play with the population studied, with the threshold, with the definition of the object of study, and then obtain almost any result you want from disproportionality analysis. This implies that every methodological choice we make should be justified by expected biases, by considerations about why one definition of our event, for example, is better than another, and so on. We cannot just provide some methods and some results and be happy with that. And finally, correct interpretation, because it has been shown by Mouffak et al. that spin is widespread in pharmacovigilance. Spin means there is a tendency to overstate results in disproportionality articles and, in particular, to not take into account the limitations of the data we are dealing with.

Michele Fusaroli:

There are, in fact, some motivations that are also shared with other kinds of studies, in particular the desire to publish stronger results, which is always a conflict of interest for every researcher, and, on the other side, the necessity to navigate an editorial system that often relegates weak, limited or negative results to the grey literature. So, summing up: there was, and still is, a problem when publishing disproportionality analyses, in that the reporting is not complete, the choices are not justified, and the interpretation often does not take into account the limitations of the data and the methods. That's why we started the READUS project. Underlying the READUS-PV project was the idea to gather experts from all over the world, experts in pharmacovigilance and in disproportionality analysis, to provide a framework for guiding the reporting of disproportionality analyses, and sometimes also a regulation, to ensure that what is out there in the literature is reported in an accurate and useful way.

Alexandra Coutinho:

Right, I feel like a lot of what you said in your answer really applies to scientific publishing in general, having worked in academia myself for a time. Coming to the paper specifically: when I was reading it, I found one particular finding really interesting, that there was a bit of a spike in disproportionality assessments since 2017. What has led to this spike in reporting?

Daniele Sartori:

It's challenging to find the root cause of this in a single element in the pharmacovigilance space, but I can perhaps think of a couple of possible explanations. For one, around 2004-2005, there was an increase in the number of publicly available data sets of spontaneous reports, and progressively the number of disproportionality analyses started to increase from then on. But it was in 2010 that Poluzzi and colleagues noted that the number of disproportionality analyses was exponentially increasing. So, if you will, we already had signals that disproportionality analyses were on the rise almost 15 years ago. They also, incidentally, called for a minimum set of requirements for reporting these analyses before publication. And it was in 2013, I think, that you started to see a few disproportionality analyses, nowadays well cited, that used publicly available data sets to show what disproportionality was capable of and how you could implement it on your data set effectively. And part of this is also because disproportionality analysis is quite easy to implement.

Michele Fusaroli:

And from 2017, indeed, there was a further spike and acceleration in publishing disproportionality analyses, and we can speculate on some factors that may have promoted it. Between 2015 and 2017, many public dashboards were made available. These websites allowed anyone, with a few clicks, to access pharmacovigilance data and perform simple disproportionality analyses. This surely had some impact. Another important factor may have been the publication in 2016 of the good signal detection practices by the IMI project, giving some rules not for reporting, but for performing a disproportionality analysis.

Alexandra Coutinho:

Generally, to my mind, it sounds great to make data available to so many people so they can conduct analyses, but it seems to have had a bit of a detrimental effect on publishing good science and finding true signals. I find that really interesting, and a bit sad as well, because you want people to look into this data, to mine this data, so that we can get possibly true signals. That being said, what effect has the spike in disproportionality analyses specifically had on pharmacovigilance efforts and healthcare in general?

Michele Fusaroli:

Particularly if our speculation about the influence of the public dashboards is true. This possibility, this opportunity to obtain access and perform a disproportionality analysis with a few clicks, sort of took away from the researcher the responsibility of designing the best study, knowing your data and correctly interpreting the results. And this is, in fact, not so dissimilar from what is happening today with generative AI: with a few clicks you obtain results, and you maybe don't have the tools to interpret them. So if this is true, then even if the stronger participation of the pharmacovigilance community is really a good thing, because it brought a lot of signals to be recognized, we expect that the signal-to-noise ratio was reduced. There were many more signals published, but at the same time they were diluted in a sea of published disproportionality analyses that were poorly performed, poorly documented and poorly interpreted, because the researchers were no longer in control of the study design and of the disproportionality analysis, and they only knew the data they could access through the public dashboards in a superficial way.

Daniele Sartori:

So even before the public dashboards and even before the READUS, prior to 2013 or during that time, there were examples of well-reported and well-conducted disproportionality analyses. But I think what the READUS introduces is this: if, prior to the READUS, the responsibility to report a disproportionality analysis well fell almost entirely on the author, the READUS now says that the responsibility is shared among the authors, the editors and the peer reviewers. So I think we're going to see a positive impact from this.

Alexandra Coutinho:

So in the papers that I read to prepare for this interview, one was a scoping review of reviews looking at guidelines on reporting analyses and at their effectiveness, and these reviews had found that, despite the guidelines' existence, inadequate reporting still exists. What has been preventing more widespread support and integration of previous guidelines for transparent reporting in healthcare and pharmacovigilance?

Michele Fusaroli:

If these guidelines are not endorsed by journals, then they are not actually adopted by the scientific community. So this is the main problem. But then there are also other problems, for example that a guideline may be too difficult or too complex, or take too much time, or that there is some kind of inertia, in that it is difficult for a researcher to change the way they are doing and reporting research. And finally, guidelines are a manifestation of the current culture in the scientific community, and therefore, if they do not evolve with the scientific community, with its knowledge and with its perception of what is needed in the reporting of a study, they are going to become obsolete.

Alexandra Coutinho:

Some reviews have also found that inadequate reporting was common in specialty journals and journals published in languages other than English, for example. What other patterns have you seen in your assessment of the disproportionality analyses published so far?

Daniele Sartori:

I think it's beyond my anecdotal experience and beyond the study by Currie that Michele cited earlier. I'm not aware of studies that have specifically looked into the quality of reporting in journals that use languages other than English. When I was carrying out my scoping review of signals, which came out a few years back, I did struggle to follow some disproportionality analyses from journals that were not in English. But I also had the opposite experience: I found some reviews written in Spanish or French that were quite well done. So it's been hit or miss for me. This is just my experience, really.

Michele Fusaroli:

Another thing that I have experienced is that it is difficult to convey the importance of completeness in a study, even when the article and the disproportionality analysis are directed to a clinical journal. We want to be complete, particularly because the reader may not have the tools and knowledge to understand and assess our study, so we want to be as complete as possible.

Alexandra Coutinho:

So, moving to a slightly different subject, then: the READUS-PV project was carried out using something called the Delphi method. Again, not being a PV scientist myself, what is the Delphi method, and what are its strengths and weaknesses with regard to projects such as this one?

Daniele Sartori:

The Delphi is a method used to arrive at a consensus in a panel of experts. Now, the way in which the Delphi gets to this consensus is through an iterative process. It moves through rounds of questions from the researcher to the panel of experts, and the Delphi says that if all of these experts agree on something, then they have reached consensus. Well, it's not really all of the experts, it's not unanimity. The Delphi method requires the researchers to set a threshold of consensus, which typically hovers around 70 to 80 percent of people saying "yes, we agree on this specific topic", but other Delphi studies have used lower thresholds. So there is no gold-standard consensus threshold for Delphi studies. And it proceeds iteratively. You could start with a question like "should we define humans as featherless bipeds?", and then if 70% of people say yes, then we all agree that humans are featherless bipeds. Or rather, this panel of experts says, you know, humans are featherless bipeds.

Daniele Sartori:

Now, it proceeds iteratively in the sense that these questions can be amended in subsequent iterations, which are called rounds, and they're modified based on the feedback from the participants in the study. So, for example, if we ask the same question and the answer is overwhelmingly no, or a group of people has said "well, you should perhaps rephrase it in a different way", then subsequent rounds of the Delphi will account for these modifications to the question until everybody can agree, or not. It's a perfectly fine outcome of a Delphi to say "we did not agree on this". It's also important to note that the feedback is not just one way, from the participants to the research group, but also from the research group to the participants. So at the end of each iteration of a Delphi, the participants are given the outcome of the previous iteration. They are aware that either the majority or a small minority of participants have said yes or no to a given question, or that this question should be amended as follows. Another important aspect of the Delphi is that the panellists, the people who take part in it, are anonymous. So if Michele and I were in the same Delphi, speaking purely theoretically, he and I would not be allowed to know that we were both taking part in it. The communication is from the researchers to one participant at a time, and only aggregate feedback is given to the whole panel.
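The round-and-threshold mechanics Daniele describes can be sketched in a few lines. This is a toy illustration with a made-up 80% threshold and invented checklist items, not the actual READUS-PV procedure:

```python
def tally_delphi_round(votes, threshold=0.8):
    """Tally one Delphi round. `votes` maps a candidate item to the list of
    anonymous agree/disagree votes (True/False) from the panel. An item
    reaches consensus when the share of agreement meets the threshold;
    only these aggregate shares are fed back to the panellists."""
    results = {}
    for item, ballot in votes.items():
        share = sum(ballot) / len(ballot)
        results[item] = {"agreement": share, "consensus": share >= threshold}
    return results

# Hypothetical round with ten panellists and two candidate items:
round_1 = tally_delphi_round({
    "report the database version used": [True] * 9 + [False],
    "report all preprocessing steps":   [True] * 6 + [False] * 4,
})
# The first item reaches the 80% threshold; the second would be
# rephrased based on feedback and put to the panel again next round.
```

The key design point the sketch captures is that disagreement is not discarded: items below the threshold are revised and re-voted in later rounds, and only aggregate results circulate, preserving anonymity.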

Michele Fusaroli:

There are some strengths to this kind of study, in particular that it allows you to reach this expert consensus while giving voice to everyone. There are no predominant voices that may drown out others; instead, it's possible to capture every opinion. What we did, in particular, was start from the RECORD-PE structure. We asked all the experts involved to propose the items they thought had to be included in the reporting checklist, and then, through these rounds, we selected the ones considered by the majority of the experts, in our case 80%, to be important enough to include. There were also some weaknesses in this process, in particular that it took a lot of time and a lot of effort. Also, even if we tried to contact experts from all over the world, the majority of them were, in fact, from Europe. But, yeah, this is a weakness that, plausibly, we will manage to solve in the next revision of the guidelines.

Daniele Sartori:

Yeah, and perhaps another limitation is that, just because you ask a group of experts to answer a specific question, even if they agree, they can still be wrong. And another important thing to bear in mind when one designs a Delphi is how you define an expert. I don't know what an expert in pharmacovigilance is. I think you could get around that, for example, by setting a number of years of experience. And, as Michele said, the representativeness of the panel is crucially important.

Michele Fusaroli:

What we decided to do to gather experts was actually a bibliometric analysis, so we looked at the researchers who had published the most disproportionality analyses. As with every practical, operative definition, it has its problems, but at the same time we thought it was also a way to involve people who are publishing a lot of disproportionality analyses and are therefore going to benefit a lot from the reporting checklist.

Alexandra Coutinho:

The limitations that you both were talking about with regard to the Delphi method and the READUS-PV project, I guess you can kind of apply them to co-authors on a paper, right? You say the experts could agree on a finding or a specific hypothesis but still be wrong. Of course, that applies to normal scientific papers as well.

Michele Fusaroli:

Exactly. The only thing is that here you want everyone in the world to follow a checklist. So if the expert board is wrong, then it's more of a problem, because you are trying to implement a new standard. But yes, that was necessary, as we said before, because reporting in disproportionality analysis really had a lot of problems, and that was potentially impacting, we cannot really know, but speculatively it could have impacted patient safety, because of the overburden of signals and noise, and because you don't really know what to do with so many published disproportionality analyses that are not easily assessable. So it was important to do. It's not definitive; we will have to see how these guidelines are adopted by the pharmacovigilance community, and why it is sometimes difficult to adopt some of their items, and therefore also change them as we gather new evidence on how they can be refined.

Alexandra Coutinho:

And then it makes even more sense to my mind that, if it's going to be used by the community, the community should all have a say in how to do this properly. You both being pharmacovigilance scientists, what are your thoughts on transparency, replicability and reproducibility in pharmacovigilance science and signal detection in general, and what kind of impact will the READUS-PV guidelines have on these characteristics in disproportionality analyses?

Michele Fusaroli:

We hope that READUS-PV will have an important impact on improving the reporting of disproportionality analyses. The READUS-PV checklist can also simply be a tool for authors to write a complete, transparent and high-quality report of a disproportionality analysis. At the same time, we hope that it will be endorsed by journals, and that's what we are spending our energy on now. As Daniele said before, we need to make high-quality reporting of disproportionality analyses a responsibility that is shared not only by the authors but also by peer reviewers and editors. In fact, we think that everyone could benefit from the READUS-PV checklist: peer reviewers, for example, could use it to see whether a report is complete or not, but they would also be exposed to higher-quality reports.

Michele Fusaroli:

What happens to me now is that I spend a lot of time reviewing articles, because most of the time it's difficult to understand exactly what was performed in the study and why some choices were made. If the READUS-PV checklist is adopted, this problem will hopefully be reduced. This would also bring a lot of benefit to journals and editors, because on the one hand they ensure that higher-quality reports are published in their journals, and this should be a priority of scientific journals, and on the other hand it's going to be easier to find people available to do a peer review. Ultimately, better reporting will also benefit regulatory agencies and even the readers of disproportionality analyses, who will be better able to interpret and assess the results of a study.

Daniele Sartori:

I think you've hinted at this previously, like how authors and peer reviewers and editors will hopefully use the READUS checklist.

Daniele Sartori:

But I think, much like other guidelines that were developed through consensus processes, the READUS-PV will also be updated as people use it, because it will create an awareness of its essential items, and people will have the chance to reflect on them and further build on the checklist itself.

Daniele Sartori:

So hopefully we will see a READUS-PV extension, and I don't think it's entirely going to be up to us or the READUS group. The beauty of it, I think, is that there will be, as I said, an awareness among people who regularly publish disproportionality analyses, and so they might come up with additional items for the checklist that they feel will improve the reporting of these analyses. Ultimately, the Delphi method is a starting point. It's what you do when you have very little, and the starting point is now there and, much like everything else in science, it needs to be built upon. Also, I think, at UMC, whenever we publish works that use, for example, vigiRank, which is based in part on disproportionality analysis, we will have to report the disproportionality part of the method in accordance with the READUS. And if we communicate signals that have been detected in VigiBase using disproportionality, the READUS will make things a lot clearer for whoever reads the signals.

Alexandra Coutinho:

Yeah, it's clear to see that there are many advantages to integrating and practicing the READUS-PV guidelines throughout the PV community, but it will only really work if everyone agrees to use them together. So, having said this, what are the next steps to ensure their uptake and overall integration into the PV community?

Michele Fusaroli:

Yes, well, endorsement by journals and scientific societies will be an important part of it. Also, letting authors know about these guidelines and how they can be useful to their practice, as a structure, a skeleton, as we said before, is going to be important. And finally, another important thing, just as we said before, is to gather feedback and to actively monitor how the guidelines are adopted: what difficulties people meet when they try to adopt them, and how they can be refined. And I absolutely agree with Daniele that, yeah, the READUS-PV group drove the initial compilation of this checklist, and that was necessary because there wasn't anything. But this should be seen more as a responsibility, and an opportunity, of the whole pharmacovigilance community, to create better reporting of disproportionality and better pharmacovigilance in general.

Alexandra Coutinho:

So before I let you both go, we have a question from one of our listeners, one of our colleagues at the Uppsala Monitoring Centre, Magnus Ekelo. He asks how this discussion applies when we perform disproportionality analyses on medication errors specifically.

Michele Fusaroli:

Everything we said before applies here too. Completeness and transparency are going to be important, as are correct interpretation and justification of the choices made. Disproportionality analysis compares the observed number of reports in which a drug and an event are co-reported with the number of reports that would be expected if the drug and the event were independent. With medication errors, we always know that the drug is related to the event. What we can say is that disproportionality analysis can still be used, for example, to prioritize the medication errors of one drug relative to the medication errors of another drug, and this is why it is important to justify your choice. Why are you doing a disproportionality analysis? It's not that you cannot do it, but explain what your aim is, perform the analysis accordingly, and interpret the results accordingly.
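The prioritisation Michele mentions, comparing one drug's medication-error reporting against another's, can be sketched as a PRR-style ratio restricted to medication-error reports. The function name and all counts below are invented for illustration:

```python
def medication_error_ratio(errors_a, total_a, errors_b, total_b):
    """Ratio of the proportion of medication-error reports for drug A
    to the same proportion for drug B. A ratio above 1 suggests drug A's
    medication errors are over-represented and may deserve closer review,
    provided the aim of the analysis has been stated and justified."""
    return (errors_a / total_a) / (errors_b / total_b)

# Hypothetical: drug A has 150 medication-error reports out of 3,000 total;
# drug B has 40 out of 4,000.
ratio = medication_error_ratio(150, 3000, 40, 4000)
print(f"ratio = {ratio:.1f}")
```

As in the episode, the number itself answers a prioritisation question, not a causal one: the justification for running the comparison belongs in the report alongside the result.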

Alexandra Coutinho:

So yes, I guess, regardless of the fact that with medication errors there is a known link between the event and the drug, whereas disproportionality analyses usually treat these as separate entities, the guidelines still apply. In fact, it maybe makes it even more important that you apply the guidelines in this specific case.

Michele Fusaroli:

Yeah, exactly.

Alexandra Coutinho:

So we're nearly finished with the interview. Before we go, do you have anything more that either of you wanted to say on reporting disproportionality analyses and their review in pharmacovigilance?

Daniele Sartori:

So I think disproportionality analysis is a valuable tool that we have in pharmacovigilance. It's a starting point when you analyze a large data set, and in the READUS we do state that there should preferably be a case-by-case assessment as well, so that your measures of disproportionality don't appear alone. So I think it would be helpful to have some form of guideline that also tells you how to report the case-by-case assessment. At the moment we don't really have one, so hopefully there is some room for development there. And the second thing, with a massive conflict of interest, of course: you should use the READUS-PV guideline and read it. Read the explanatory document as well.

Michele Fusaroli:

Yeah, I think another general message that is particularly important, and we spoke about it before, is that the quality of the reporting shouldn't just be a responsibility of the authors. It should be a shared responsibility, and therefore it's important that everyone is involved in adopting the READUS-PV guidelines, as a framework for writing better disproportionality analyses and as a framework for selecting better disproportionality analyses for publication. And it should also be everyone's responsibility to find better ways to report, for an extension of the READUS-PV guidelines when the time comes.

Alexandra Coutinho:

Yeah, the community was definitely my key takeaway from this discussion. Thank you both for a very, very interesting discussion on these guidelines and your paper, and thank you for your time. That's all for now, but we'll be back soon with more conversations on medicine safety. If you'd like to know more about transparent reporting in pharmacovigilance, check out the episode show notes for useful links. If you like our podcast, subscribe to it in your favorite player so you won't miss an episode, and spread the word on social media so other listeners can find us.

Alexandra Coutinho:

Apart from these in-depth conversations with experts, we host a series called Uppsala Reports Long Reads, a selection of audio stories from UMC's pharmacovigilance news site, so do check that out too. Uppsala Monitoring Centre is on Facebook, Linkedin and X, and we'd love to hear from you. Send us comments or suggestions for the show or send in questions for our guests next time we open up for that. For Drug Safety Matters, I'm Alexandra Coutinho. I'd like to thank Daniele and Michele for their time, our listener Magnus Ekelo for submitting questions, Fredrik Brouneus for production and post-production support and, of course, you for tuning in. Till next time.
