Matches in SemOpenAlex for { <https://semopenalex.org/work/W1837426067> ?p ?o ?g. }
Showing items 1 to 55 of 55, with 100 items per page.
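The listing below was produced by matching the triple pattern shown in the header. A minimal sketch of reproducing it programmatically, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the Python SPARQLWrapper package (endpoint URL is an assumption; check the service documentation):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed public SemOpenAlex SPARQL endpoint; adjust if the service URL differs.
ENDPOINT = "https://semopenalex.org/sparql"

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
# Same triple pattern as in the header above, restricted to predicate/object
# (the original pattern additionally binds the named graph ?g).
sparql.setQuery("""
    SELECT ?p ?o WHERE {
      <https://semopenalex.org/work/W1837426067> ?p ?o .
    }
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```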
- W1837426067 endingPage "S18" @default.
- W1837426067 startingPage "S16" @default.
- W1837426067 abstract "In medicine today, there are disparities between the clinical research community and the basic science researchers. There is a perception that if better clinical trials were conducted the efficacy of medications would be established and both false positive and false negative results would be minimized. Since the mid-1940s when randomized controlled trials first began, there has been a naive idea that the only determinant of the effect size observed in a clinical trial is the treatment. We must first understand that effect size can be defined as the standardized difference between the active treatment and the control, which is usually a placebo. The effect size can be variable. If a strong treatment is given, a big difference between active and placebo is seen. If a weak treatment is given, a smaller difference is seen and the treatment is often deemed ineffective. Recently it has become clear that a variety of factors other than the strength of the treatment impact effect sizes, including: trial structure, dosing regimen, the use of concomitant analgesics, the use of rescue medications, the primary endpoint of the trial, the statistical analysis plan that is being used, the pain threshold for entry, and the pain disorder being studied. Additional features of the study design for pain trials (by which I mean what is written in the protocol) include: the mode of administration of the primary endpoint, study duration, participants' baseline general health, any associated psychopathology, pain duration, baseline pain intensity assessment, number of patients with prior treatment failure, number of treatment groups, active comparators, and the sequence of administration of any questionnaires (Katz, 2005; Turk et al., 2012; Dworkin et al., 2013). It is not only how the study is designed that impacts outcome, it is how the study is conducted at the sites where the patients are seen. Some study conduct factors that can either increase or decrease the observed effect size include diary compliance, accurate pain reporting, placebo response, number of sites, number of subjects per site, medication adherence, location of sites, patient referral sources, type of research site (academic vs. non-academic), protocol concealment, baseline pain variability, patient retention, and decreased or missing data (Katz, 2005; Turk et al., 2012; Dworkin et al., 2013). These factors were validated in a review of the methodologic features of 29 clinical trials of opioid analgesics for chronic pain. These trials frequently fail to distinguish the analgesic effect from placebo, despite known efficacy of the drug. The intent of the study was to identify the factors that may be associated with a risk of failure. The studies were randomized, placebo-controlled trials of opioids with at least 1 week of continuous treatment. The methodologic features that appeared to predict success in these trials were slow titration of medications, flexible dosing, minimizing concomitant and rescue analgesics, homogeneous samples, particularly in terms of opioid use upon entry, fewer study sites (for the same sample size), and including as much data as possible in the statistical analyses. The research suggests that opioid analgesics should be studied in a clinically relevant manner that supports internal validity (Katz, 2005). Such factors may have a different degree of impact in studies of different types of medications. 
For example, it appears that the decreased drug effect observed with an increased number of research sites is more dramatic in opioid studies than in studies of neuropathic pain treatments in general. A more in-depth examination of some of the specific problems that characterize these studies reveals that the placebo response is a concern. First of all, pain scores go down in every group in chronic pain studies, with rare exceptions, perhaps among cancer patients who experience worsening pain. By and large, pain gets better in chronic pain studies regardless of the treatment group the patient is assigned to. Why? There are a number of possible reasons for this. One is that the natural history of disease is cyclic: patients often enter clinical trials when they are experiencing increased amounts of pain, and they sign up for a clinical trial when they are motivated to do something different. Another possible reason is a statistical phenomenon called regression to the mean. A third possible reason is the placebo effect: the expectation that the pill will improve the patient's pain causes an improvement in the subjective response. How much of this decline in pain intensity in people assigned to the placebo group is accounted for by the actual placebo effect is not really known. Attempts have been made to determine whether training programs that “neutralize” patients' expectation of treatment response can decrease the response in the placebo group. For example, in a study of 601 asthma patients, each patient received either a placebo inhaler or a real inhaler. Regardless of which group they were assigned to, they received either enhanced messages or neutral messages. An example of an enhanced message is "This is the latest and most potent drug available for your condition." An example of a neutral message is: "You're in a clinical trial. You're not here to get patient care. This is an experiment. There's a 50/50 chance you'll get placebo, and even if you get the active drug, it doesn't work for everybody. Maybe it will work for you. Maybe it won't." In this particular study, the enhanced messaging enhanced the effect of placebo on subjective asthma symptoms, but did not enhance the effect of the active drug. There was a larger observed difference between active drug and placebo in the neutral messaging group. It should be noted that this effect was seen for subjective symptoms but not for pulmonary function tests and other objective tests. This is consistent with what we know about the placebo effect as having more of an impact on subjective phenomena than on objectively measured phenomena. These data suggest that one may be able to enlarge the observed effect size of treatment, or discriminate effective treatments better, by promoting this kind of neutral expectation rather than high expectation (Wise et al., 2009). Other studies in the literature demonstrate this effect, and still others present conflicting data. To overcome these barriers, we have designed a Placebo Response Training Program that utilizes live role-playing to train investigators. The investigators take the role of a patient, an investigator, or an observer and spend 20 to 30 minutes answering mock questions from patients. There are web-based components and patient training options that help to instill neutral expectations in the patients. This program has been validated in a small study of 40 patients with painful diabetic neuropathy.
The data from this pilot study showed that this kind of relatively brief training intervention can lower patients' expectations compared with patients in a control condition who did not receive the training. A larger multicenter study is underway; however, the data will not be available for a while. Another problem that we are attempting to overcome is the variability in accurate pain reporting by patients. To address this issue, we have developed an algorithm using the Medoc thermal sensory testing device. We have also developed a Pain Reporting Training Program that can be conducted at Investigator Meetings. Similar in design to the Placebo Response Workshop, this program has both investigator and patient education components. We also have an active trial that is designed to train patients to discriminate when they have really had pain relief and when they have not. In this study, painful diabetic neuropathy patients receive a single dose of either a known analgesic or a placebo in our clinic once a week. They are observed for a few hours and then asked: "Why do you think you received the active drug or the placebo?" We then unblind the medication they received and discuss their responses. This is a very novel and interesting protocol with lots of pros and cons. Patient phenotyping has been suggested as a strategy that allows researchers to assess a patient at baseline in a manner that may identify a subgroup in which the medication may be more effective. Questionnaires are one approach that may identify subgroups predicted to have preferentially good responses to various kinds of analgesics. Sensory testing approaches have also been tried. Some retrospective and open-label studies have demonstrated the predictive validity of phenotyping. The final point I would like to cover is the importance of monitoring. A retrospective study of three completed clinical trials examined several different variables to see whether they were predictive of the ability to discriminate an active treatment from a placebo. The data showed that there are a few variables that could be monitored that actually do predict data quality. Specifically, the study noted that different measures of the same thing should correlate with each other if the patient is reporting their subjective symptoms in an appropriate way using the scales; if those measures are discordant, that is a sign of poor-quality data. In conclusion, there are a number of factors related to study design and conduct that can affect the quality of the data. Interventions to mitigate these factors are available and to some extent validated, and more are coming." @default.
- W1837426067 created "2016-06-24" @default.
- W1837426067 creator A5001265546 @default.
- W1837426067 date "2014-09-30" @default.
- W1837426067 modified "2023-09-26" @default.
- W1837426067 title "Novel phase II clinical trial design approaches" @default.
- W1837426067 cites W1966615063 @default.
- W1837426067 cites W1967148732 @default.
- W1837426067 cites W2013578490 @default.
- W1837426067 cites W2103793045 @default.
- W1837426067 doi "https://doi.org/10.1111/jns.12081_2" @default.
- W1837426067 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/25269732" @default.
- W1837426067 hasPublicationYear "2014" @default.
- W1837426067 type Work @default.
- W1837426067 sameAs 1837426067 @default.
- W1837426067 citedByCount "0" @default.
- W1837426067 crossrefType "journal-article" @default.
- W1837426067 hasAuthorship W1837426067A5001265546 @default.
- W1837426067 hasBestOaLocation W18374260671 @default.
- W1837426067 hasConcept C126322002 @default.
- W1837426067 hasConcept C178790620 @default.
- W1837426067 hasConcept C185592680 @default.
- W1837426067 hasConcept C41008148 @default.
- W1837426067 hasConcept C44280652 @default.
- W1837426067 hasConcept C535046627 @default.
- W1837426067 hasConcept C71924100 @default.
- W1837426067 hasConceptScore W1837426067C126322002 @default.
- W1837426067 hasConceptScore W1837426067C178790620 @default.
- W1837426067 hasConceptScore W1837426067C185592680 @default.
- W1837426067 hasConceptScore W1837426067C41008148 @default.
- W1837426067 hasConceptScore W1837426067C44280652 @default.
- W1837426067 hasConceptScore W1837426067C535046627 @default.
- W1837426067 hasConceptScore W1837426067C71924100 @default.
- W1837426067 hasIssue "S2" @default.
- W1837426067 hasLocation W18374260671 @default.
- W1837426067 hasLocation W18374260672 @default.
- W1837426067 hasOpenAccess W1837426067 @default.
- W1837426067 hasPrimaryLocation W18374260671 @default.
- W1837426067 hasRelatedWork W1506200166 @default.
- W1837426067 hasRelatedWork W2039318446 @default.
- W1837426067 hasRelatedWork W2048182022 @default.
- W1837426067 hasRelatedWork W2080531066 @default.
- W1837426067 hasRelatedWork W2604872355 @default.
- W1837426067 hasRelatedWork W2748952813 @default.
- W1837426067 hasRelatedWork W2899084033 @default.
- W1837426067 hasRelatedWork W3026805679 @default.
- W1837426067 hasRelatedWork W3032375762 @default.
- W1837426067 hasRelatedWork W3108674512 @default.
- W1837426067 hasVolume "19" @default.
- W1837426067 isParatext "false" @default.
- W1837426067 isRetracted "false" @default.
- W1837426067 magId "1837426067" @default.
- W1837426067 workType "article" @default.
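The abstract quoted above defines effect size as the standardized difference between active treatment and placebo, attributes part of the placebo-arm improvement to regression to the mean, and suggests that discordance between different measures of the same construct flags poor-quality data. Below is a minimal numerical sketch of those three points, using entirely made-up pain scores (none of the numbers come from the trials cited in the abstract), assuming Python with NumPy:

```python
import numpy as np

def cohens_d(active, placebo):
    """Standardized mean difference between two groups: the effect-size
    definition given in the abstract (active treatment vs. placebo control)."""
    active, placebo = np.asarray(active, float), np.asarray(placebo, float)
    n1, n2 = len(active), len(placebo)
    pooled_sd = np.sqrt(((n1 - 1) * active.var(ddof=1) +
                         (n2 - 1) * placebo.var(ddof=1)) / (n1 + n2 - 2))
    return (active.mean() - placebo.mean()) / pooled_sd

rng = np.random.default_rng(0)

# 1) Effect size: hypothetical pain-reduction scores (positive = improvement).
active_arm  = rng.normal(loc=2.5, scale=2.0, size=100)
placebo_arm = rng.normal(loc=1.5, scale=2.0, size=100)
print(f"effect size (Cohen's d): {cohens_d(active_arm, placebo_arm):.2f}")

# 2) Regression to the mean: patients enrol when a noisy screening score
#    exceeds the entry threshold, so follow-up scores drop with no treatment.
true_pain = rng.normal(5.0, 1.0, size=10_000)
screening = true_pain + rng.normal(0.0, 1.5, size=10_000)
enrolled  = screening >= 7.0
follow_up = true_pain[enrolled] + rng.normal(0.0, 1.5, size=enrolled.sum())
print(f"screening mean: {screening[enrolled].mean():.2f}, "
      f"follow-up mean: {follow_up.mean():.2f}")

# 3) Monitoring check: two measures of the same construct (e.g., diary and
#    in-clinic pain scores) should correlate; discordance flags suspect data.
diary  = rng.normal(5.0, 2.0, size=100)
clinic = diary + rng.normal(0.0, 1.0, size=100)
print(f"cross-measure correlation: {np.corrcoef(diary, clinic)[0, 1]:.2f}")
```

The variable names, thresholds, and distributions here are illustrative assumptions; the sketch only demonstrates the arithmetic behind the abstract's definitions, not any method from the cited studies.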