Galcanezumab in Chronic Migraine

Migraine is one of the most common neurological conditions. Chronic migraine, while less common than episodic migraine, is nevertheless a major cause of lost quality of life in otherwise well individuals.

Once medication overuse (analgesic) headache has been effectively treated, and tension type headache excluded, chronic migraine is treated with migraine preventative medications, often very effectively. However, a proportion of patients remain resistant to single or combination preventative treatments.

A novel target for migraine treatment is the calcitonin gene related peptide (CGRP) receptor on the smooth muscle of blood vessels in the head. CGRP is released from trigeminal ganglion efferents to the blood vessels to cause potent vasodilation as part of the trigeminovascular response (analogous to the “triple response” of pain, redness and swelling of skin inflammation). Blocking CGRP signalling may therefore block this response. Monoclonal antibodies raised against the receptor, or against CGRP itself, have been explored as migraine treatments.

This study describes a double-blind trial of galcanezumab, one such monoclonal antibody targeting CGRP. The paper does not discuss the hypothetical or actual benefits relative to other monoclonal antibody migraine therapies already marketed or in development.

Study Design

Around 270 patients were assigned to each of two doses of galcanezumab, given by monthly subcutaneous injection, and 560 were given normal saline placebo. To be enrolled in the study, patients had to have 15+ headache days per month, at least 8 of which had to be migraine days, and they needed at least 1 headache-free day per month. Patients who had failed more than three other preventatives were excluded. Patients had to stop all their existing migraine preventatives, except propranolol or topiramate, at least 30 days before study start.

Migraine days were defined as >30 minutes of migraine or probable migraine according to ICHD-3 beta criteria (even though the duration criterion of the latter is 4+ hours). If a patient thought a headache was a migraine and it did not satisfy the criteria but responded to a triptan, that also counted as a migraine day.

Over 90% of patients completed the study. Only 15% of patients were on topiramate or propranolol (it was not specified whether this proportion was the same across the three treatment groups).

The primary outcome measure was migraine days per month, which at the start of treatment was around 19 days. Placebo reduced this by 2.7 days per month, low dose galcanezumab by 4.8 days and high dose by 4.6 days; compared to placebo, therefore, the drug on average reduced migraine by about 2 days per month. There were only about 2 additional non-migraine headache days per month on average.

There were many secondary measures. Of note, 4.5% of placebo patients had a 75% reduction in migraine days, compared with 7% of low dose and 8.8% of high dose patients, while 0.5% of placebo patients had a 100% response, compared with 0.7% of low dose and 1.3% of high dose patients (not significantly different).

There was no overall quality of life measure, but there was a migraine-related quality of life measure that showed significantly more improvement than placebo, by about 25%. There was also a 7-point patient global disease severity scale, which improved by 0.6 points on placebo, 0.8 on low dose and 0.9 on high dose, only the latter reaching significance.

The side effect profiles were similar between placebo and drug, notably common in both groups! However, there were no concerning side effects, nor indeed any characteristic enough to tend to unblind the patients or investigators.

Opinion

The Journal Club thought it strange that the study would exclude the very patients in whom the drug would mainly be used, namely those who had failed more than three conventional treatments. The focus was clearly on maximising the benefit as measured by the study. By the same token, patients had to stop any preventatives before the study, even if they were partially beneficial, apart from topiramate and propranolol.

It was furthermore strange that only 15% of the recruited patients were on the two most common treatments for chronic migraine. Had they only been tried on the others, or had they had side effects? In real practice, there are usually at least some marginal benefits from preventatives and patients often remain on them.

It is therefore possible that many patients were treatment-naïve as far as preventatives were concerned. This makes the 2 fewer migraine days per month versus placebo (from an initial 19 days per month) an all the more modest benefit.

It is difficult to reconcile the cost of the drug with the fact that patients will, on average, still have 15 migraine days a month. Most patients would not consider this a treatment success, and certainly not one after which a patient would happily be discharged from specialist care. In terms of patients achieving a 75%+ reduction in migraine days, generally the minimum level of meaningful benefit in a pain study, the excess over placebo was only 3-4% of patients.

The lack of a general quality of life measure means that cost-benefit analysis cannot be performed. The quality of life measure used was specific to migraine and likely to show much larger differences: a cured migraine sufferer might swing from near 0% to 100% on this scale, whereas the same individual, considering the full range from death through total disability to perfect health, might assign curing migraine only a swing from 90% to 100%.

A major aspect of migraine care is what happens when treatment is stopped. Patients do not want lifelong medication, let alone lifelong monthly injections. Fortunately, we find that traditional preventatives can often be withdrawn after six months of treatment. Although the study mentioned an open-label period followed by a washout period, we do not know any of these results; presumably they are being held back for another publication. Is there rebound migraine on treatment withdrawal? Any funding body would want to know whether patients are likely to need the treatment for 3-6 months or for many years.

As a final point, it was queried whether the definition of migraine is sufficiently specific; perhaps this limits the observed benefit in this and similar studies. Some headaches recorded as migraine may be tension type headache and therefore not responsive to specific anti-migraine treatment. The table below shows the relevant criteria.

ICHD-3 Headache Diagnostic Criteria

| | Probable Migraine | Probable Tension Type Headache | Definite Tension Type Headache |
|---|---|---|---|
| Rule | 2+ of the rows below | 2+ of the rows below | All of the rows below |
| Duration | 4-72 hours | 30 min to 7 days | 30 min to 7 days |
| Character (2+ of) | Unilateral; pulsing; moderate+ severity; avoidance of routine physical activity | Bilateral; pressing or tightening; moderate- severity; not aggravated by routine activity | Bilateral; pressing or tightening; moderate- severity; not aggravated by routine activity |
| Associated features | Nausea, or photophobia plus phonophobia | No nausea; not both photophobia and phonophobia | No nausea; not both photophobia and phonophobia |

A headache is diagnosed as a migraine if it fits probable migraine and is not a better fit with another headache diagnosis, which presumably means definite rather than probable tension type headache. The severity and duration criteria overlap, so they cannot distinguish the two; having only one of photophobia or phonophobia is also compatible with either. So a unilateral, pressing headache with avoidance of routine activity, with no nausea, no photophobia and no phonophobia, is classified as migraine as long as it lasts 4 hours; yet it seemed that some of the migraine days comprised only half an hour of headache. Also, a headache not satisfying these criteria counts as a migraine if it responds to triptans, but we have already seen the large placebo response in the main data. In general practice a tension type headache might be unilateral, and might interfere with routine activity at the more severe end of the scale; certainly a neck ache or jaw ache (including temporalis muscle) from which a tension headache may arise can have these features.

The paper on which this Journal Club article is based was presented by Dr Piriyankan Ananthavarathan, Specialist Registrar in Neurology at Barking, Havering and Redbridge University Hospitals Trust.


Disease Modifying Therapies in Multiple Sclerosis: Background for General Readers

Multiple sclerosis (MS) is a presumed autoimmune condition of demyelination and often inflammation of the central nervous system. Its evolution is very variable; some patients suffer episodes lasting weeks to months with complete or near complete recovery in between, and the periods between episodes may span months to decades (relapsing remitting MS). Other patients accumulate progressive disability as a result of or between episodes (secondary progressive MS). Still other patients, around 10% in total, do not suffer episodes but instead undergo a gradually progressive course with variable rapidity, but usually noticeable over the course of months to years (primary progressive MS). Patients with MS can evolve from one category to another; some in fact at a certain point remain clinically stable indefinitely.

For many decades, its immune basis has prompted trials of various immunomodulatory agents to try to reverse, or at least arrest, the progression of multiple sclerosis. Some have been shown not to work, e.g. corticosteroids and immunoglobulin. Some work but have largely been overtaken by newer, more expensive therapies. For example, azathioprine, a traditional and commonly used immunosuppressant, was found in a Cochrane review to reduce relapses by around 20% in each of three years of therapy, and to reduce disease progression in secondary progressive disease by 44% (though with a wide confidence interval of 7-64%). There were the expected side effects but no increased risk of malignancy, though it remains possible that there is a cumulative risk of malignancy for treatment durations above ten years. In the 1990s, beta-interferon became widely used but was never compared directly with azathioprine. With the 21st century came the introduction of “biological therapies”, typically monoclonal antibodies against specific immune system antigen targets, and the reintroduction of non-biological therapies originally used to treat haematological malignancy or to prevent organ transplant rejection.

These new therapies, called disease modifying therapies (DMTs) as opposed to symptomatic treatments or short courses of steroids for relapses, are now conceptually, though not biochemically or mechanistically, divided into two groups: those better tolerated, with lower risks of causing malignancy or infection, but less effective; and those with greater efficacy but more risk of cancer and serious infection, including reactivation of the JC virus to cause fatal progressive multifocal leukoencephalopathy (PML).

The former group includes the beta-interferons, glatiramer acetate and fingolimod. Fingolimod is derived, like ciclosporin, from a fungus that parasitises insects and has the convenience of oral administration, but is now not routinely recommended because of severe relapses on withdrawal, and cardiac and infection risks. The latter group includes the biological agents natalizumab (which targets a cell adhesion molecule on lymphocytes), rituximab and ocrelizumab (which target CD20 to deplete B cells) and alemtuzumab (which targets CD52, expressed on more mature B and T cells), as well as the oral non-biological anti-tumour agent cladribine, which is activated by deoxycytidine kinase and interferes with DNA synthesis. Another non-biological oral agent, dimethyl fumarate, acts as an immunomodulatory rather than immunosuppressive agent and sits somewhere between the two groups, having the convenience of oral administration and better efficacy than the first group, but also the increased PML and Fanconi renal syndrome risks of the second group.

Recent studies indicate that higher strength DMTs may slow disability progression in secondary progressive MS, as well as reduce the number of relapses. There have also been trials in primary progressive MS but these, most notably using rituximab, were not clearly positive. For a study of ocrelizumab in primary progressive MS, see the accompanying Journal Club review.

Cost of Disease Modifying Therapies

The disease modifying therapies are extremely expensive and, given MS is unfortunately not a rare disease, have a significant impact upon the health economy.

For example, in relation to the accompanying paper review of ocrelizumab for primary progressive MS, this drug is not especially expensive compared with similar medications, having a list price of £4,790 per 300 mg vial, with four infusions a year. There are many further costs associated with imaging, screening, monitoring and admission for infusions.

Normally, cost effectiveness is justified at around £35,000 per quality-adjusted life year (QALY). This means the cost would be justified at £35,000 a year if each year it gave 100% quality of life to patients who would otherwise die or have zero quality of life. Clearly ocrelizumab does not do that; it appears to preserve 0.5 or 1 point out of 10 on a disability scale in 6% of patients on an ongoing basis, giving a quality of life benefit per patient of very roughly 0.6% and a cost per QALY of over £3 million. Of course, there are other considerations, such as the wider health economy costs of disability, the fact that some patients might have been prevented from deteriorating by more than 1 point on the EDSS, and the potential costs of monitoring for and treating cancer and PML complications in a relatively young patient population even after treatment is stopped. Note that there was actually no significant difference in this study in the SF-36, with both groups remaining surprisingly little changed after about 2 years, which probably fits with the 0.6% mean improvement figure calculated above.
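
As a rough sketch of that arithmetic (using the list price above and the approximate benefit figures; administration, monitoring and wider costs are ignored):

$$\text{annual drug cost} \approx 4 \times \pounds 4790 \approx \pounds 19{,}200$$

$$\text{QALY gain per patient-year} \approx 0.06 \times 0.1 = 0.006$$

$$\text{cost per QALY} \approx \frac{\pounds 19{,}200}{0.006} \approx \pounds 3.2 \text{ million}$$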

Unless the NHS, or the health economies of other countries, restrict treatment to a tighter subset of primary progressive patients who might respond better, it is difficult to balance this cost against other medical, or indeed social care, needs that require resourcing.


Ocrelizumab versus Placebo in Primary Progressive Multiple Sclerosis

Recent studies indicate that higher strength disease modifying therapies (DMTs) may slow disability progression in secondary progressive multiple sclerosis (MS), as well as reduce the number of relapses. There have also been trials in primary progressive MS but these, most notably using rituximab, were not clearly positive. For a more general review, please see the post Disease modifying therapies in multiple sclerosis.

The study being reviewed in this post, by Montalban et al., 2019, is on rituximab’s sister compound, ocrelizumab, and targets younger patients with more active disease, the subgroup that seemed most likely to have responded to rituximab.

Study Design

There were 732 patients randomly assigned to ocrelizumab or placebo in a 2:1 ratio. Inclusion criteria were a diagnosis of primary progressive MS according to established criteria and age 18 to 55 years. Disability had to range from moderate but with no walking impairment (EDSS 3.0) to impaired walking but still able to walk 20 m, perhaps with crutches (EDSS 6.5). The disease duration had to be within 10-15 years, and patients should never have had any relapses.

Pairs of ocrelizumab or placebo infusions were given every 24 weeks for at least five courses. The main end point was the percentage of patients with disability progression, defined as an increase of at least 1 point on the EDSS sustained for 12 weeks, or 0.5 points at the more disabled end of the scale.
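
To make the end point concrete, here is a minimal sketch of how such a rule might be applied to scheduled EDSS assessments. This is our illustration, not the trial’s actual algorithm, and the 5.5 baseline cutoff for the smaller 0.5-point criterion is an assumption based on common trial convention:

```python
# Sketch of a "12-week confirmed disability progression" rule (illustrative
# only; the 5.5 EDSS cutoff for the 0.5-point criterion is an assumption).

def confirmed_progression(visits, baseline_edss, confirm_weeks=12):
    """visits: chronological list of (week, EDSS) assessments.
    Returns True if a qualifying EDSS rise is still present at every
    visit up to and including one at least `confirm_weeks` later."""
    threshold = 0.5 if baseline_edss > 5.5 else 1.0
    for week, edss in visits:
        if edss - baseline_edss < threshold:
            continue                      # no qualifying rise at this visit
        confirming = [(w, e) for w, e in visits if w >= week + confirm_weeks]
        if not confirming:
            return False                  # follow-up too short to confirm
        window = [e for w, e in visits if week < w <= confirming[0][0]]
        if all(e - baseline_edss >= threshold for e in window):
            return True                   # rise held through confirmation
    return False

# Example: baseline EDSS 4.0; rise to 5.0 at week 24 still present at week 36
print(confirmed_progression([(12, 4.0), (24, 5.0), (36, 5.0)], 4.0))  # True
```

The lag this rule introduces, since a rise cannot be confirmed until 12 weeks after it first appears, becomes relevant when interpreting the trial’s graphs (see Opinion below).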

Only if this primary end point was reached would the study go on to test secondary end points such as 24-week sustained disability progression, timed walk at week 120, change in volume of MRI brain lesions, and change in quality of life on the SF-36 score.

Results

Patients had a mean disease duration of around 6 years, and 3% more patients receiving ocrelizumab had gadolinium-enhancing lesions on MRI (27% versus 24%).

Increased disability sustained for a period of 12 weeks occurred in 39.3% of placebo patients, but only 32.9% of ocrelizumab patients (p=0.03, relative risk reduction 24%). The result was similar when sustained disability was confirmed over 24 weeks.

On the timed walk, performance after 120 weeks was a mean of 39% slower in patients on ocrelizumab and 55% slower in patients on placebo (p=0.04). There was no difference in quality of life (SF-36 physical component: a 0.7 out of 100 deterioration on ocrelizumab and 1.1 out of 100 on placebo).

There were three potentially relevant deaths in the ocrelizumab group (out of 486 patients), two from pneumonia and one from cancer, and none in the placebo group, but the overall rate of serious infections was not really different. The cancer rate was 2.3% versus 0.8%, but obviously this would have to be monitored over further decades; even during one year of open-label extension there were two further cancers in the ocrelizumab group. The overall rate of neoplasms to date is 0.4 per 100 patient-years, double the baseline rate, but this reflects a short time in a large number of patients.

In summary, a modest reduction in disability progression was seen on ocrelizumab, namely protection against a 0.5-to-1-point loss on the EDSS in 6% of patients.

Opinion

We focused mainly on the figure (see below), where it seems that ocrelizumab stopped about 5% of patients from deteriorating in the first 12 to 24 weeks, from about 9% down to 4%, and this difference was then maintained until the end of the trial, by which time about 60% of patients still had not deteriorated. The plateau at 3-4 years probably reflects the end of the trial (see below), not a stable MS population.

[Figure: confirmed disability progression over time on ocrelizumab versus placebo]

The journal club were surprised at the apparent focus on a 12-week primary end point: patients had progressed from zero to 3-6 out of 10 on the EDSS over a mean period of 6 years, yet progression of 0.5 to 1 point seemed to be measured over just three months. This arose from some confusion over the paper’s description of the primary end point as the “percentage of patients with disability progression confirmed at 12 weeks”, and then in the results the “percentage of patients with 12-week confirmed disability progression (primary end point) was 32.9% with ocrelizumab versus 39.3% with placebo.” It might seem that the primary end point was recorded at 12 weeks after treatment initiation. In fact, the primary end point was recorded at the end of the study, which was stopped after over 2 years once a predefined proportion of patients had deteriorated. It means that over 2+ years, 32.9% of patients had a deterioration that was sustained for at least 12 weeks, i.e. not a relapse.

The graph shows the numbers of patients remaining without disability progression at different times, starting at 487 and dropping to 462 at 12 weeks for ocrelizumab (a fall of 5.1%), and from 244 to 232 for placebo (4.9%). At 24 weeks the falls were 7.6% versus 13.1%. Some of the dropouts might be due to stopping for tolerability, but this was a small number, possibly accounting for the small numbers of drop-outs between the 12-weekly assessments. For a 12-week confirmed disability progression, there will clearly be a lag in identifying patients whose increase in disability is sustained for 12 weeks; yet the time points do not appear to add this 12 weeks, because there is a first jump at 12 weeks in both groups. Moreover, these numbers drop down to zero, not to the roughly 60% of patients who appear never to have deteriorated. This is likely to be because patients who entered the study later had the study terminated for them before 216 weeks. Nevertheless, factors such as drop-outs due to tolerability and the end of the study probably explain the difference between the figures in the results and the plateau levels on the graphs.

What is interesting is that the difference between ocrelizumab and placebo diverged very early on the graph, and hardly widened further over 2 years. While the 12-week sustained disability criterion was designed to eliminate the possibility that the study was scoring relapses in previously primary progressive disease, or some other temporary factor such as injury from a fall or intercurrent infection, there is nevertheless a suspicion that ocrelizumab was mainly working well in a small subset with more active disease. The extra 3% with gadolinium-enhancing lesions – a proportional difference of about 12% – unfortunately suggests a potential issue with randomisation; this might be precisely the group who could respond better.

It is noteworthy, therefore, that in its most recent NICE appraisal the criteria for considering ocrelizumab are not those of this study, but a subset of primary progressive patients with enhancing disease on MRI.

The journal club article described in this post was kindly presented by Dr Bina Patel, Specialist Registrar in Neurology.


Detection of Brain Activation in Vegetative State by Standard Electroencephalography

This paper by Claassen et al., 2019 looks at EEG pattern changes in response to verbally given movement commands, to see if there is a subset of vegetative state patients who are cognitively responsive and yet have no motor response. The hope is that this might predict eventual outcome.

The study took 104 patients who had had an acute brain injury. Most (85%) had non-traumatic brain injury, which in general carries a more predictably bad prognosis. These patients were either in a vegetative state or in a somewhat better, minimally responsive state, e.g. localising to pain but not obeying commands.

The EEG testing was performed within a few days of initial ITU referral.

In each trial, a patient was asked eight times to open and close their hand repeatedly for 10 seconds and then relax their hand for 10 seconds, while EEG activity was recorded throughout. Two-second time blocks were analysed in the frequency domain by calculating the power spectral density (PSD), looking at the relative strength of signal in each EEG lead in four frequency ranges (delta, theta, alpha and beta).
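
A minimal sketch of this kind of feature extraction is below. This is our illustration, not the paper’s published pipeline; the sampling rate, channel count and use of Welch’s method are assumptions:

```python
# Turn one 2-second EEG epoch into relative band-power features
# (illustrative parameters, not the paper's actual pipeline).
import numpy as np
from scipy.signal import welch

FS = 250                                   # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """epoch: (n_channels, n_samples) array for one 2-second block.
    Returns an (n_channels, 4) matrix of relative band power."""
    freqs, psd = welch(epoch, fs=FS, nperseg=epoch.shape[1])
    feats = np.column_stack([psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                             for lo, hi in BANDS.values()])
    return feats / feats.sum(axis=1, keepdims=True)

# e.g. 21 scalp channels of (here random) signal -> classifier features
features = band_powers(np.random.randn(21, 2 * FS))
```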

A “machine learning algorithm” was used to distinguish the “move” PSDs from the “stop moving” PSDs.

Patients were considered to show EEG activation if the algorithm consistently performed significantly better than chance (an area under the curve of 0.5; see below) at distinguishing the “move” command from the “stop moving” command.

Outcome was determined by the standard Glasgow Outcome Scale after 12 months, with values of 4 or more (able to be left alone for up to 8 hours) defined as a good outcome.

Ultimately, patients who had at least one record showing EEG activation had a 44% chance of a good outcome as defined above, while only 14% of patients without EEG activation had a good outcome (with 5% missing data).

Discussion

Some of the patients were under some sedation for safety reasons. This could reversibly influence their responsiveness, independently of their brain injury, and also affect their EEG, although it would be unlikely to affect the change in EEG pattern over several seconds other than through the patient’s genuine level of response.

It might have been worthwhile to record surface EMG of the forearm flexors, just to confirm there was no difference in EMG activity between “EEG activation” patients and those with no EEG change. In a patient with critical illness neuromyopathy, a small movement or muscle activation might not easily be seen.

Because patients were simply taken consecutively, rather than being matched according to their coma severity, there could be poor matching, and this was indeed present: the patients who were subsequently found to be “EEG responsive”, and eventually to have a better outcome, were less likely to be in the worst comatose category at initial enrollment (50% vs 55%) and more likely to be in the best category (31% versus 23%). Although the odds ratios were not statistically significant, this does not mean there was positive evidence, with any degree of confidence, of no difference in initial severity between the groups.

In fact, if one stratified patients according to the initial three clinical severity categories, would that have more powerfully predicted better outcome than “EEG responsive” or not, making the test redundant?

On technical appraisal of the methodology, it seems that the power spectral densities were individual 2-second blocks, with all the comparisons and averaging being done subsequently by the machine learning pattern recognition algorithm.

Statistically, the paper used the single value of the area under the curve (AUC) of the receiver operating characteristic (see below). This means that across a range of sensitivities (or true positive rates, where the algorithm correctly decides that there is enough of a difference between the “move” and “stop moving” patterns), there is an opposing range of false positive rates. How convex the curve describing this trade-off is relates to how good the test is. A value of 1 means perfect classification, 0.5 is just random (the straight diagonal in the figure below), and 0 means the pattern change is actually reliably identifying the stop pattern when it was supposed to identify the move pattern.

[Figure: example receiver operating characteristic curves, with the diagonal marking chance performance (Wikipedia)]
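
A toy illustration of the AUC as a single summary number (hypothetical classifier scores, not the study’s data): the AUC is the probability that a randomly chosen “move” epoch is scored higher than a randomly chosen “stop moving” epoch.

```python
# Toy AUC demonstration with made-up classifier scores (not study data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.r_[np.ones(40), np.zeros(40)]         # 1 = "move", 0 = "stop moving"
scores = rng.normal(loc=0.8 * labels, scale=1.0)  # imperfectly separated outputs

print(roc_auc_score(labels, scores))  # about 0.7; 0.5 would be chance level
```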

This is shown in their fig. 3 (below), which seems to show the AUC values for each of the 5 “move” 2-second samples (hence the varying level across each peak and trough) followed by each of the 5 “stop moving” samples, with the whole thing repeated 8 times. However, they say that the graph is shown “for descriptive purposes only”, so we do not know how it relates to the real data! We do not know if these are actual averages for all the controls, all the EEG-responsive patients (whom they call cognitive motor dissociation, CMD) and all the non-EEG-responsive patients. If they are averages, they would have to be across all the first 2-second epochs, then all the second 2-second epochs, and so on.

[Figure: fig. 3 from the paper, showing AUC values across the “move” and “stop moving” sample sequence]

Where this is important is that although the algorithm provides a discrete yes-no answer, the confidence of this answer is a continuous variable, and there is a suspicion that this confidence level may fall on a continuum, with healthy volunteers at one end and the most unresponsive patients at the other, rather than there being three discrete modal peaks of normal, EEG responsive and EEG unresponsive. If the former, the inevitable variability about a single mode makes the test far less useful as a predictor of outcome in individual patients. At best, it could be an independent predictor that, combined with other predictors, could build up a reasonably confident prognosis.

A major issue with patients in a vegetative state is when to withdraw support. In the UK, in patients with non-traumatic acute brain injury, a persistent vegetative state is defined as such around 3 months after injury, and this is the time when conversations along these lines may be had, on the basis that if the patient has not “woken” by this time, the chance that they eventually will, with a reasonable quality of life, becomes remotely slim. No-one is ever going to think about withdrawing support at 6 days post-injury on the basis of an “EEG unresponsive” result.

This Journal Club post was presented by Dr Rubika Balendra, Specialist Registrar in Neurology at Barking Havering and Redbridge University Hospitals NHS Trust.

Double-Blind Double-Dummy Randomised Study of Continuous Intrajejunal infusion of Levodopa-Carbidopa Intestinal Gel in Advanced Parkinson’s Disease

Background

Levodopa, a pro-drug of dopamine, has been used successfully to treat the symptoms of Parkinson’s disease for fifty years and remains the mainstay of medical management. However, after years of treatment, with increasing loss of dopaminergic presynaptic terminals, symptomatic control may become more brittle, with sudden and unpredictable “on” and “off” treatment times during the day, or with involuntary movements called dyskinesia. There are theoretical reasons, and some animal model and clinical evidence, why intermittent oral delivery of levodopa may increase susceptibility to these problems through unphysiologically wide fluctuations in synaptic dopamine; unfortunately, the plasma half-life of levodopa after an oral dose is as little as an hour. As a result, other long acting medicines have been introduced, but they may come with other side effects and are simply not as powerful as levodopa.

Relatively steady state levels of levodopa can be achieved by direct intrajejunal delivery. Unfortunately, levodopa is not stable in solution, and the gel used to keep levodopa in suspension in a deliverable form is very expensive to produce; a year’s treatment in the UK was estimated by NHS England in 2015 to cost around £28,000. As a result, despite there now being substantial evidence of the treatment’s effectiveness, there has been a debate about its cost effectiveness. Calculations of cost effectiveness in terms of cost per quality-adjusted life year (QALY) gained vary considerably. The calculations depend not only on the cost of treatment versus standard treatment and the difference in quality of life, but also on carer costs and other costs. So if a treatment is less effective, the patient may be more disabled and cost more. It is unclear, however, how figures on the cost of disability can be applied to an estimate of how much less effective the treatment is at all points of the severity scale. As far as I am aware, there is no actual study showing how much is saved in non-medication costs in patients on levodopa-carbidopa intestinal gel (LCIG); the information is instead extrapolated.

In one sense, the QALY gain might be counted twice: once for the intrinsic value of the gain in quality of life, and again for the reduction in disability cost that resulted in the improved quality of life. In another sense, this might be a fair way to handle such analysis compared with a treatment that improved quality of life without reducing the cost of disability.

It is important in such calculations to use reliable data on the magnitude of benefit gained, rather than just to show that there is a gain. This is most likely to be achieved by a randomised controlled study with a control arm, exemplified by the study of Olanow et al., the subject of this journal club.

Study Design

Sixty-six of sixty-eight candidate patients underwent the trial. Patients were selected on the basis of having idiopathic Parkinson’s disease for five or more years, being on optimised therapy (meaning a trial of levodopa, a dopamine agonist and one other type of anti-parkinsonian therapy), having at least three hours of “off” time daily, and having no clinically significant psychiatric abnormalities.

At first, we assumed that the trial was a cross-over design; in fact it was not. Patients all had jejunostomy procedures but were randomised to LCIG plus placebo oral levodopa, or placebo LCIG plus oral levodopa. They were assessed after a four-week stabilisation period before intervention, and then 12 weeks afterwards, and the two groups were compared.

Patients who were on controlled-release preparations or COMT inhibitors were switched to equivalent immediate-release preparations. The LCIG dose was the same as the total daily levodopa dose, delivered over 16 hours of the waking day in the normal fashion for jejunal delivery.

Study Findings

On looking at the graph, labelled figure 2B in the manuscript, it is immediately obvious that both LCIG and oral patients improved very dramatically and then levelled off, despite previously being “optimised” on oral therapy. Our possible suspicions about what “optimised” means are confirmed. As explained by the authors, the doctors had the opportunity to increase the LCIG or oral levodopa during the study, and this was done in a number of cases after the 4-week stabilisation period. In fact, the oral medication patients had their dose increased more (a mean of 250 mg daily versus 100 mg daily). Despite this, neither group had an increase in “on” time with troublesome dyskinesia.

[Figure: figure 2B from the paper]

The main message of the study is that after the 12 weeks, the improvement was greater with LCIG, with a mean of around 1.9 hours less “off” time and 1.8 hours more “on” time without troublesome dyskinesia. I suppose that if there is no change in “on” time with dyskinesia, it is obvious that the two values will be similar, as one state is replaced by the other.

Regarding quality of life, there was an 11-point versus 4-point improvement in the PDQ-39 (a PD quality of life measure). This seems quite important.

Strangely, on the UPDRS there was an improvement in part II (activities of daily living) on LCIG and a worsening on oral therapy, but actually twice as much improvement in part III (motor examination measured in the “on” state) on oral therapy. Possibly this means that there is a subtle side effect of oral therapy, increased during the trial, that adversely affects wellbeing, but that the increased “hit” of levodopa made the best “on” state better than with LCIG.

Comments

It is not clear how far the withdrawal of COMT inhibitors left patients in either treatment arm suboptimally treated and therefore needing increased treatment during the study. It would be important to ascertain whether, by chance, the oral arm had had more COMT inhibitors withdrawn.

The main advantage of this study is that having the control arm at least allows us to appreciate that optimised does not really mean optimised. The patients were clearly underdosed; one has to wonder how much better the oral patients could have been had there been the opportunity to optimise them properly, by adjusting top-up dopamine agonists, adjusting dose frequency rather than just dose quantity, and introducing, reintroducing or optimising COMT inhibition. After all, studies on COMT inhibitors show a reduction in “off” time of about an hour compared to baseline “optimised” therapy.

A parsimonious interpretation of the data is that LCIG simply has better bioavailability than oral therapy: the patients were underdosed, LCIG is simply stronger, and its effect could be replicated by giving more oral treatment. In fact this may well have been the case, explaining the 150 mg more levodopa per day given to the oral patients, but the facility for changing doses meant that this effect would be minimised in this study.

While the power of the study was easily enough to demonstrate a clinically meaningful difference, I wonder if a cross-over design might have allowed intra-patient comparisons and a clearer effect, and eliminated or elucidated the improvement seen on oral therapy. In this design, each patient would have placebo LCIG for half the time, and placebo oral for the other half. The direction of change at the cross-over point would be the key parameter. The patients’ doses would be matched at this cross-over point, and then not changed over the second half. This design would be confounded by a bioavailability effect, but that could at least be measured by the increase in oral dosing during the first half, and there might be an overdose effect on switching from oral to LCIG during the second half of the trial.

Studies looking at the cost effectiveness of LCIG should primarily take data from studies like this one, rather than from those using an open-label design showing an improvement over baseline “optimised” therapy of four hours of “off” time reduction. The increased benefit in the PDQ-39 shown in this study is nevertheless quite persuasive that there is some real helpful feature of continuous intrajejunal delivery, at least in the short term.

There are other studies showing long-term benefits of LCIG, but they have not had the same design. Obviously, this design conducted over too long a period would not be ethical; presumably the principle is that all patients after 12 weeks would be offered LCIG, having already had their PEJ tubes inserted. On the other hand, in a longer-term study, one would hope that every ongoing effort would be made to optimise therapy in the oral therapy group.

In practice, one must balance benefit against side effects. Not all patients will want a PEJ tube, or to carry a large cartridge and pump. Virtually all patients had side effects, with more serious ones in 13-20%. In 3% the treatment was discontinued as a result of surgical complications; 24% had tube dislocations, 21% insertion complications, 10% stoma complications, 8% pump malfunctions and 7% peritoneal problems. There are reports of neuropathy from LCIG, but in this study there were three possible cases in the placebo group and only one in the treatment group.

Finally, LCIG is not the only advanced therapy available. There are no direct comparisons between LCIG and deep brain stimulation or apomorphine pump therapy to guide which treatment to select in individual patients, although the different inclusion and exclusion criteria do provide some help in choosing which therapy is appropriate for which patient. For example, age over 70 and a history of depression exclude deep brain stimulation but not LCIG.


Mechanical Thrombectomy for Ischaemic Stroke

Introduction

Stroke is the most common cause of disability in Western countries, and its lifetime risk is 1 in 6 for men and 1 in 5 for women. While managing acute stroke patients in hyperacute stroke units has overall modest benefits for short and long term outcome (e.g. 51% versus 47% independence and 29% versus 33% mortality), specific therapeutic options are limited. The first major option for treatment of ischaemic stroke was intravenous thrombolysis, paralleling its earlier development in acute myocardial infarction.

However, while its use in myocardial infarction was widespread in the 1990s, it has only been widely used to treat acute stroke in the last ten years. This is probably because of the narrower therapeutic window and the more severe consequences of haemorrhagic complications in the brain. In addition, its benefits are actually relatively modest. In the first main randomised clinical trial of its use within three hours (NINDS), and bearing in mind that in the first hour a stroke often spontaneously recovers (termed a TIA), a good outcome (grades 0 to 1 on the modified Rankin scale) was achieved in 39% of patients versus 26% receiving placebo, but with a symptomatic brain haemorrhage risk 6% greater than in the placebo group.

When delivered between 3 and 4.5 hours after stroke onset (ECASS III), the benefits on the same scale were 52% versus 45%, giving a relative risk confidence interval of 1.01 to 1.34 (p=0.04). In other words, this was only just statistically significant in a study of 821 patients. The risk of causing intracranial haemorrhage was 27% versus 17.6% (p=0.001), and thrombolysis caused major symptomatic brain haemorrhage in 2.4% versus 0.3% of placebo patients (p=0.008).

So it is not surprising that there has been a move, just like in cardiology a decade or two earlier, away from relying solely on intravenous thrombolysis and towards direct intra-arterial catheter treatment. The paper, Revolution in acute ischaemic stroke care: a practical guide to mechanical thrombectomy, summarises recent evidence in favour of this treatment and the infrastructure required to manage patients in this way. This Journal Club review discusses issues around acute stroke treatment and the ramifications for delivery of such a service.

 

The Published Review

The first mechanical thrombectomy devices were approved for use in 2004, but it was only technical developments, and probably the improved expertise that comes with experience, that led to positive results as shown by a spate of studies published after 2010 employing a new generation of devices.

The HERMES collaboration meta-analysis revealed that 46% of patients had a good outcome with functional independence (grades 0-2 on the modified Rankin scale) compared with 26.5% on best medical treatment. Most of the patients in both groups received intravenous (iv) thrombolysis, since in most study protocols patients had iv thrombolysis before going on to have thrombectomy an hour or so later. Mortality and the risk of brain haemorrhage did not differ between the two groups. The benefit seemed still to be present in patients over 80, and when patients did not receive iv thrombolysis, though the numbers to test the latter were small. While the window for thrombectomy was within 6 hours, there may still be improved outcomes up to 7.3 hours after symptom onset, but in general faster intervention leads to greater benefit. At a cost of £2,500 per quality-adjusted life year (QALY), the procedure would be considered cost-effective by any political criteria.

The thrombectomy technique has a number of variations depending on the neuroradiologist and on the particular nature and location of the thrombus. It may be done under general anaesthesia, or under local anaesthesia with sedation and anaesthetic support. A large-gauge catheter is directed to the internal carotid artery via a femoral puncture, and an intermediate catheter inside it is directed to the circle of Willis. A microcatheter inside the intermediate one then serves as a guide wire to the actual clot. The microcatheter is removed and a stent retriever is placed within the clot and pulled back to draw the clot to the intermediate catheter, to which suction is applied to remove the clot entirely. Some techniques involve directly removing the clot by suction on the intermediate catheter. A balloon may be placed at the distal end of the clot to prevent forward movement (which a clinician would describe as embolism, an undesirable occurrence). When removing the clot reveals a tight lumen, there is the further option of performing angioplasty or stenting to open the vessel. The same can apply to a carotid stenosis occurring in tandem with a more distal thrombus.

The main complications are technical, including vessel perforation (1.6%), other symptomatic intracranial haemorrhage (3-9%), subarachnoid haemorrhage (0.6-5%), arterial dissection (0.6-3.9%) and distal emboli (1-9%). In addition, there can be vasospasm or issues related to the puncture site. While the total incidence is 15%, there is not always any actual adverse clinical consequence.

While the 6 hour time window for thrombectomy is wider than for intravenous treatment, there are other selection criteria that are more strict:

  • There should be a documented anterior circulation large vessel occlusion of the middle cerebral or carotid artery. (There is only limited evidence for efficacy in basilar occlusion.)
  • There should be good collateral cerebral circulation.
  • There should be relatively normal extracranial arterial anatomy from the technical viewpoint regarding passing the catheter.
  • There should be significant clinical deficit at the time of treatment (but this parallels the criteria that should be applied also to intravenous thrombolysis), while acknowledging that a large vessel occlusion with minimal clinical deficit nevertheless incurs a significant risk of clinical deterioration.
  • There should be a lack of extensive early ischaemic change on CT (a threshold of 5 on the ASPECTS score). The role of more advanced imaging, e.g. CT perfusion, to establish salvageable brain is yet to be clarified.
  • Consideration should be given to pre-stroke functional status and the potential of benefit.
  • Patients should have had iv thrombolysis within 4.5 hours of symptom onset.

The authors report that there is little evidence on managing blood pressure around the time of the procedure. It is probably best to avoid lowering blood pressure unless it is greater than 220 mmHg systolic, or 200 mmHg systolic if there is evidence of clinical complications of hypertension.

Usually no specific anticoagulation is given around the procedure. Some interventionalists use a peri-procedure dose of heparin. Aspirin is avoided beforehand but patients can have their usual 300 mg aspirin dose starting 24 hours after their stroke. If a stent has been implanted, aspirin and clopidogrel are given together for the first 3-6 months.

Authors’ Conclusions

The authors emphasise the great benefits to be had in selected patients, and comment that the selection criteria may be broadened with future experience. In particular, cases of milder stroke with large vessel occlusions may prove to be good candidates, or the time window may broaden, and perhaps be ignored altogether if advanced imaging reveals a reversible penumbra.

They highlight that the significant technical complication rate means that the procedure should be concentrated in centres that deal with a large number of cases to gain and maintain expertise. They describe two models: “drip and ship” where the patient is thrombolysed at a local HASU (or A&E resuscitation unit?) and ambulanced to the thrombectomy centre, versus “mothership”, where the patient is transferred straight to the thrombectomy unit.

Journal Club Comments

The 20% increased good outcome arising from mechanical thrombectomy on top of that from iv thrombolysis is impressive compared to the 13% reported for thrombolysis versus placebo.
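
A back-of-envelope way to compare these magnitudes (our arithmetic from the percentages quoted above, not figures from the review) is the number needed to treat for one additional good outcome:

$$\mathrm{NNT}_{\text{thrombectomy}} = \frac{1}{0.46 - 0.265} \approx 5, \qquad \mathrm{NNT}_{\text{thrombolysis}} = \frac{1}{0.39 - 0.26} \approx 8$$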

While the selection criteria are more stringent, they are not very much more stringent than for thrombolysis alone; a middle cerebral artery occlusion is a common presentation of acute stroke, especially a more severe one. The review estimated that 10% of acute stroke patients would be candidates. We suspect at most half that amount, given that in practice thrombolysis rates are 10%, and 5% in some centres.

The most striking issues for us were the very high degree of technical expertise required acutely for decision-making and performing the procedure, and the high technical complication rates that parallel the high levels of benefit. The Neuroradiologist appears to decide both before and during the procedure between a number of different technical options and items of equipment. The suspicion is that the complications, unlike the haemorrhage rates for iv thrombolysis, depend much less on blind luck than on user expertise.

We wondered about circumstances where there might be a contraindication to intravenous thrombolysis and yet not to thrombectomy; it does not appear that thrombolysis, or even anticoagulation or antiplatelet therapy, is actually required for the procedure, and intravenous thrombolysis is so short acting that it would not be protecting against new emboli resulting from the procedure. The trials were conducted according to a protocol of having received thrombolysis mainly for ethical reasons, to avoid denying patients a proven beneficial treatment.

However, for practical purposes, a poor candidate for thrombolysis is probably in general going to be a poor candidate for thrombectomy. It would nevertheless be interesting to see if the 20% benefit from thrombectomy overlaps with that from thrombolysis, or adds to it. In other words, could patients get a 20% benefit from thrombectomy alone, and not face the 6% risk of thrombolysis-induced brain haemorrhage?

As an aside to the discussion on benefits of stroke treatment, we noted the different slants that can be put on data. This has great practical consequences for the patient. So, returning for a moment to intravenous thrombolysis, at 3 to 4.5 hours after stroke, a clinician may explain to a patient (if they are not too dysphasic at the time), that they can deliver a treatment with an odds ratio of good outcome of 1.34. Or the clinician might more likely say there would be 34% better chance of recovery, or a third as much again better chance of recovery. Right?

Wrong! The odds ratio is the ratio of good versus bad outcome in the treated group over the ratio of good to bad outcome in the untreated group. What layperson would describe things in those terms – terms that deliberately magnify the benefit? The relative risk, i.e. the ratio of a good outcome in the treated group versus that in the untreated group, is what most laypeople would understand, and that figure is 1.16. Even then, this does not mean that 16% more patients have a good outcome. From the actual figures, 52% versus 45%, 7% more patients get a good outcome, which is considerably different from 34%, and not so favourable when at the same time 10% more patients are getting brain haemorrhages (or should we say are 53.4% more likely to?!), though only 2.5% (700% more likely!!) of these haemorrhages give them a much bigger stroke than they would otherwise have had.
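
Worked through with the percentages quoted above (rounding explains any slight difference from the trial’s reported figures):

$$\mathrm{OR} = \frac{0.52/0.48}{0.45/0.55} \approx 1.3, \qquad \mathrm{RR} = \frac{0.52}{0.45} \approx 1.16, \qquad \mathrm{ARR} = 0.52 - 0.45 = 0.07 \ (7\%)$$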

What I would say at 3 to 4.5 hours after stroke onset is:

“We have a treatment available to dissolve clots in the brain that when given at this time after a stroke probably overall improves the chances of a good recovery, but which has risks of causing bleeding, including a brain haemorrhage that may make your stroke worse not better. Overall out of 100 people, on average 7 extra patients will get a good recovery from their stroke when they have the treatment, about 90 will be no different and 3 will be significantly worsened.”

And if the stroke is relatively mild, or one of those where one suspects the patient might be significantly better come the following morning regardless, one really wonders how much the patient stands to gain and whether to take that 2.5% risk of a much worse stroke instead.

The point about dysphasia is a serious one; can one ethically obtain proper consent to deliver a treatment that is definitely going to result in some people suffering additional permanent disability if not death? Even without dysphasia, lying semi-paralysed under a ticking clock is probably a situation, both for the patient and relatives, where choice, let alone informed consent, is an illusion. When consenting for emergency surgery, one generally has at least the impression that the benefits are an order of magnitude greater than the risks, or that a poor outcome without intervention is inevitable.

Another example of statistics and the all-important magical 0.05 p-value relates to the original comment about acute stroke units. The differences from general ward care are surprisingly modest, but it is always quoted from the Stroke Unit Trialists’ Collaboration Cochrane review of 2009 that stroke units significantly reduce mortality. A group, Sun et al. (2013), did their own analysis and actually looked at the data. There was a discrepancy in the number of deaths in the control group of the largest study, the Athens trial: 121 deaths versus 127. On contacting the Cochrane review author, they were told that there was an “error which will be corrected in the next update”; on doing the sums to correct the “error”, Sun found that the p value for reduction in mortality shifted across the magical 0.05 threshold, from 0.03 to 0.06. So there is no clear evidence that stroke units reduce mortality…

If one looks objectively at the data:

  • Thrombectomy leads to 20% more good outcomes, which may replace rather than add to those from intravenous thrombolysis, and with no higher risk of brain haemorrhage.
  • Thrombolysis alone leads to 13% more good outcomes, if given within a very restricted window of 3 hours after stroke onset, but with a significant risk of brain haemorrhage and other complications.
  • Stroke units, which also treat the other 90% of strokes, lead to 4% better outcomes, a figure of uncertain clinical significance.

Regarding stroke units, it is possible that it is the 10% who are candidates for intervention who contribute most of that 4% improvement, along with those with haemorrhagic stroke getting surgical input or neurological stroke mimics getting fast-tracked to more appropriate acute care. And if general wards treating the other 90% had more focus on early swallow assessments and on actually feeding nil-by-mouth patients nasogastrically within 48 hours, would that single measure not improve outcome?

The initial decision to perform thrombectomy is highly technical and requires a neurointerventional radiologist, and the procedure itself obviously requires one too; therefore the consent should probably be taken by the neuroradiologist, as should the post-procedure ward round and early outpatient follow-up. The neuroradiologist requires the support of an anaesthetist during the procedure, and perhaps around the procedure as an intensivist. The technical skill required to write a thrombolysis prescription is negligible; that required to perform a highly challenging emergency procedure, to minimise technical complications arising from mistakes, and to deal with those complications when they do arise, will make or break the success of thrombectomy and of the stroke service. Does it not seem that acute stroke care has shifted from a medical to a “surgical” specialty? Instead of a “mothership”, could we have a Neuroemergency Unit – a Neuro ITU next to a catheter lab – centred around the neuroradiologist managing the acute stroke patients who are going to benefit from intervention, as well as patients with subarachnoid haemorrhage? They would have support from anaesthetists, stroke physicians/neurologists and neurosurgeons, with stroke physicians and allied health professionals taking on the subsequent rehabilitation role.


Thymectomy for Myasthenia Gravis

Introduction

While thymectomy has long been considered an option for treating myasthenia even when there is simply thymic tissue present rather than thymoma or thymic carcinoma, it has been uncertain how much benefit is achieved by undergoing this major surgical procedure. While there have been a number of retrospective reports of benefit, observational studies where the patients were also on modern immunosuppression did not show benefit, and some studies have indicated that any benefit that does exist is only present in the first 5 years after surgery. There has therefore been a call for a randomised study of thymectomy in non-thymoma patients (in thymoma there is an indication to operate anyway) combined with standard immunosuppressive treatment, versus standard immunosuppressive treatment alone. Just recently, the results of a long-awaited trial on this topic were published in the New England Journal of Medicine.

Study Design

From 2006 to 2012, a total of 126 patients were randomised to the two arms described above. Eligible patients were adults under 65 with positive anti-acetylcholine receptor antibodies and non-ocular (i.e. at least mild generalised) myasthenia. Assessors of myasthenic severity were blinded (patients wore high-necked clothing during assessment!). Patients did not have to have visible thymic tissue on imaging with CT or MRI; in fact, visible thymoma was an exclusion criterion. The surgery removed any mediastinal tissue that could contain macroscopic or microscopic thymic tissue.

The primary measures of severity were the time-weighted average quantitative myasthenia gravis score, measuring fatigability in key muscle groups, and the steroid dose required to maintain minimally symptomatic disease. Assessment was over a three-year period.

Findings

The study found a 2.8-point lower (i.e. better) average quantitative myasthenia gravis score in the thymectomy group and also a lower requirement for steroids (44 mg versus 60 mg). Fewer patients required azathioprine or hospitalisation for exacerbations (9% versus 37%). There was no difference in treatment-associated complications, but there were fewer treatment-associated symptoms, presumably reflecting the lower average doses of immunosuppression. The study performed subgroup analysis by sex and found no difference in myasthenia score for men, but still a reduction in steroid requirement. There was no stratification by age.

Authors’ conclusions

Thymectomy improves outcome in the first three years after surgery, even compared with modern immunosuppressive therapy regimes. The lower score was probably clinically significant, given that physicians determined clinical improvement at changes as small as 2.3 points. The study falls short of making any clear recommendation to treat with thymectomy everyone for whom surgery is not otherwise excluded.

Journal Club’s Conclusions and Comments

We wondered whether there might be variability in how hard one looks for thymic tissue on imaging, which in this study would trigger exclusion from the trial on the basis of thymic hyperplasia. The less sensitive the investigation, the greater the chance of entering patients who were then operated upon with hyperplasia, and therefore the greater the expected benefit from surgery.

One of the key questions was the duration of benefit of surgery, which a three-year trial obviously cannot answer. Will patients want surgery if the benefit is only three years of a 15 mg lower prednisolone dose (the error bars are missing from the relevant figure) and fewer hospitalisations (the latter was not a primary outcome measure, and the patients, who themselves decided whether to attend hospital, were obviously not blinded)? Probably further updates on the same study will crop up in the NEJM at intervals.

The lack of clear management guidance resulting from this trial, probably the only such trial ever likely to be performed, is a little frustrating. Perhaps the authors are waiting for longer-term follow-up. Our group discussed that current practice is to be selective in offering thymectomy. A young woman who wants to have children and who has already proved resistant to, or dependent on, high-dose steroids is clearly a better candidate for thymectomy than a 65-year-old man with mild, easily controlled disease. What we need more guidance on is the tipping point between those two extremes. Nevertheless the study confirms that at least some patients without thymic tissue on imaging do benefit over the first three years when compared with modern immunosuppressive regimes.

The journal club meeting upon which this report is based was presented by Dr Peter Arthur-Ferraj, Specialist Registrar in Neurology.

 

 


Safinamide in Parkinson’s Disease

 

Background

The rather specific dopaminergic deficit in Parkinson’s disease (PD) has meant that dopaminergic replacement medications have proven to be an effective mainstay of treatment of the condition. However, later on in the course of the disease, such treatment may have increasing limitations resulting from decreasing efficacy and increasing complications such as dyskinesia, postural hypotension and hallucinations or other psychological manifestations.

Most recent developments in pharmacotherapy have therefore consisted of different formulations of or delivery systems for dopamine agonists or levodopa, as well as agents that promote the effects of dopamine.  It is rare that a new class of agent arrives on the scene for treatment of Parkinson’s disease and such an agent is therefore worthy of close attention.

Safinamide is one such agent, an alpha-aminoamide that, as well as having monoamine oxidase B (MAO-B) inhibitory action, also has a non-dopaminergic action in the form of glutamate modulation. This modulation is probably achieved by blocking N-type calcium channel mobilisation and thereby reducing presynaptic glutamate vesicle release. An action stabilising sodium channels by promoting their inactive state may also be relevant.

The MAO-B action is therefore akin to that of selegiline and rasagiline, though safinamide is reversible and more selective for MAO-B, perhaps reducing the tendency to side effects such as tyramine reactions and obviating the need for dietary restriction of cheese and other tyramine-rich foods. The action on glutamate is more akin to that of amantadine, a drug with useful antidyskinetic properties in PD.

A number of studies of safinamide were conducted prior to its recent licensing. First, as drug companies tend to do, the focus was on initiation therapy in early disease: presumably a greater market share would be gained, with many patients starting the drug early and remaining on it throughout the long course of their disease.

There does not appear to be a major effect of safinamide when used de novo in early disease. When used in early disease as an adjunct to dopamine agonists, one trial (Stocchi et al., 2012) of 270 patients found over 6 months that 100 mg gave a significant UPDRS benefit versus placebo (-6 vs. -3.6). The dose of agonist was supposed to remain stable, yet an increase was allowed if symptoms worsened! On blood analysis, drug was found in 26% of the placebo group!! Someone had mixed up the bottles… Despite this, the study was published, presumably because these flaws would have been more likely to mask than to enhance any perceived benefit. An extension study in some patients failed to reach the primary end point of delay in requiring additional treatment. Another trial (Barone et al., 2013) in 679 patients failed its primary end point of change in UPDRS, though the 100 mg (rather than 50 mg) dose subgroup may have improved.

In more advanced disease, a study (Schapira et al., 2013) of 549 patients, on any medications except MAO-B inhibitors and with at least 1½ hours of “off” time a day, showed improved “on” time without dyskinesia when safinamide was added to their regime in comparison with the addition of placebo.

The study discussed in this journal club, “Randomized trial of safinamide add-on to levodopa in Parkinson’s disease with motor fluctuations” by Borgohain et al. (2014), similarly looks at 699 patients with more advanced disease. Please refer to the Parkinson’s Disease primer for more general background information.

 

Study Design

This multicentre study first stabilised patients on their levodopa dose and then continued for 6 months, with an 18-month placebo-controlled extension study in those who had neither experienced side effects nor worsened over the initial 6 months.

Enrolled patients had to have had PD for at least 3 years, be on levodopa with or without other therapies, and have at least 1½ hours of “off” time a day. Patients with severe dyskinesia or severe dose fluctuations were excluded!

Two doses of safinamide were chosen because of previous evidence that 50 mg may be sufficient for the MAO-B action but 100 mg is necessary for glutamate inhibition.

Assessments included diary scores of “on” versus “off” status and dyskinesia at 30-minute intervals, the UPDRS, the clinical global impression of change, a dyskinesia rating scale when “on”, percentage change in levodopa dose (the intention was to keep levodopa unchanged, but it could be increased if patients deteriorated) and the PDQ-39 questionnaire. If PD therapy had to be increased by 20%, the patient’s evaluation was performed at that point rather than at 6 months.
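To illustrate how such a diary yields the trial’s primary end point, here is a minimal sketch; the slot coding and state names are assumptions for illustration, not the trial’s actual case report form:

```python
# Each day is 48 half-hour diary slots; illustrative state coding
OFF, ON_NO_DYSK, ON_NONTROUBLESOME_DYSK, ON_TROUBLESOME_DYSK, ASLEEP = range(5)

def good_on_hours(diary):
    """Hours of 'on' time without troublesome dyskinesia in one day."""
    good = {ON_NO_DYSK, ON_NONTROUBLESOME_DYSK}
    return 0.5 * sum(slot in good for slot in diary)

# e.g. 8 h asleep, 3 h 'off', 1 h troublesome dyskinesia, the rest good 'on' time
day = [ASLEEP] * 16 + [OFF] * 6 + [ON_TROUBLESOME_DYSK] * 2 + [ON_NO_DYSK] * 24
print(good_on_hours(day))  # -> 12.0
```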

A mixed-model covariate statistical analysis was used, comparing change from baseline. The 100 mg dose was analysed first, and only if that comparison was significant was the 50 mg dose compared against placebo.
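This is a fixed-sequence (“gatekeeping”) procedure: testing the doses in a prespecified order controls the overall type I error without splitting the significance level between them. A minimal sketch, with plain two-sample t-tests standing in for the trial’s mixed model (the data arrays are placeholders):

```python
from scipy import stats

def fixed_sequence_test(change_100mg, change_50mg, change_placebo, alpha=0.05):
    """Test 100 mg versus placebo first; test 50 mg versus placebo
    only if the first comparison is significant."""
    results = {}
    p100 = stats.ttest_ind(change_100mg, change_placebo).pvalue
    results["100 mg vs placebo"] = p100
    if p100 < alpha:
        results["50 mg vs placebo"] = stats.ttest_ind(
            change_50mg, change_placebo).pvalue
    return results
```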

The primary end point, total “on” time without troublesome dyskinesia, improved by 1.36 hours after 6 months on 100 mg safinamide, by 1.37 hours on 50 mg, and by 0.97 hours on placebo; both doses were significant versus placebo. There was likewise an improvement in “off” time. The disability measures, PDQ-39 and UPDRS II, showed significant improvement only for the 100 mg dose. In the extension study the benefits were broadly maintained and there was a non-significant reduction in dyskinesia. There was no significant increase in side effects.

Authors’ Conclusions

The authors concluded that the drug was successful as add-on therapy in improving “on” and “off” time without increasing troublesome dyskinesia, which would be a risk when increasing other types of anti-PD medication. This correlates with studies in MPTP-treated monkeys, which showed an improvement in dyskinesia as well as in “off” symptoms. However, dyskinesia reported as a side effect by patients was more common than with placebo in this study, though no more likely with 100 mg than with 50 mg doses.

Journal Club Comments

The study was sufficiently powered to produce a meaningful result and the statistical analysis was good, making the main conclusion convincing. The issues of being allowed to change the levodopa dose during the study, and then to escape the study but still have the outcome recorded if the change reached 20%, were discussed. While one suspects that levodopa has a stronger anti-PD action and may mask the effect of safinamide, in effect “rescuing” both placebo and safinamide groups, this would tend to decrease the observed benefit of safinamide versus placebo. One can understand the inclusion of this design element on ethical grounds, and it also provides a more real-world setting.

What was strange was the exclusion of more severely dyskinetic patients from the study, given that the main novel pharmacological benefit may be in helping dyskinesia, and given the paper’s emphasis on measuring dyskinesia, though it was not the primary end point. It is not as if no other studies of the drug’s basic action had already been conducted. Perhaps it was felt that another study could be got out of addressing this variable, but from the prescribing clinician’s point of view, current evidence would not support its use as an antidyskinetic drug (dyskinesia was in fact a side effect of the drug as reported directly by patients, rather than as recorded on the diary). Instead it may be a modestly beneficial drug for PD with relatively little tendency to provoke dyskinesia.

As is always the case, we are hampered by the lack of direct comparison with a real-life alternative. We would never consider offering a placebo to patients in real life. What we would like to know is whether the drug works better in direct comparison with the addition of rasagiline, a dopamine agonist or entacapone. We have a clue that it may be better than simply increasing the levodopa dose, but this was not really the primary comparison in the study, merely a possibility allowed by the study design.

The Queens Hospital Journal Club meeting upon which this article is based was prepared and presented by Dr Stevan Wing, SpR in Neurology.


Anti K+ Channel Antibodies in Neuromyotonia

Background

At this Journal Club it was decided to review a historical paper on the pathophysiology underlying autoimmune neuromyotonia. The paper, “Autoantibodies Detected to Expressed K+ Channels Are Implicated in Neuromyotonia”, from Annals of Neurology (1997, 41:238-246), used a novel technique that depended on knowing the gene for the suspected antibody target protein, in this case a potassium channel. The purpose of choosing this paper was partly to highlight how the known range of antibody mediated neurological disease has grown hugely over the subsequent twenty years, and partly to illustrate how positive findings can sometimes be seen in retrospect to have arisen through a degree of serendipity.

Acquired neuromyotonia is now known to be one of a number of neurological conditions that arise through auto-antibodies interfering with voltage-gated potassium (Kv) channel function. Interference with resting potentials and with membrane recovery after action potentials in peripheral nerve results in continual high-frequency discharges and continuous muscle activation, manifesting as cramp, fasciculations and neuromyotonia. Sometimes this can be precipitated by cold, exercise or voluntary muscle activation. Other features included in the spectrum of Kv channel auto-immunity are autonomic dysfunction, seizures, psychiatric disturbance and limbic encephalitis. When there is a neuropathy with neuromyotonia, the term Isaacs’ syndrome is often used, while a presentation of neuromyotonia with autonomic or CNS involvement is described as Morvan’s syndrome.

Study Design

The techniques used by Hart et al relied on the inference that patients’ antibodies would have affinity for the Kv channel, since it was already known that acquired neuromyotonia results from disturbances of Kv channel function. If there were a known toxin for this channel, as with bungarotoxin for nicotinic acetylcholine receptors, it could be used as a labelled high-affinity ligand and form the basis of a radioimmunoassay for detecting circulating antibodies against the channel. Dendrotoxin is a highly specific, high-affinity toxin, but it binds only a subset of potassium channel subunits (Kv1.1, 1.2 and 1.6).

The first type of assay used in this paper relied upon dendrotoxin: brains containing solubilised Kv channels were treated with radiolabelled dendrotoxin, and then with serum from neuromyotonia patients. An anti-human IgG was used to immunoprecipitate all human antibodies from this solution, which would include any antibodies bound to dendrotoxin-labelled Kv complexes. When neuromyotonia patient serum was used, the resulting precipitant (which contains any antibody that has bound to its antigen) contained the radiolabel, indicating that the patient antibodies had coupled to material carrying the dendrotoxin and, by inference, had bound to the Kv channel. This result was found in some, but not all, patients (6/12) and, reassuringly, in none of the control samples (myasthenia gravis, Lambert-Eaton and healthy controls).

Verification that the target was the Kv channel, rather than other dendrotoxin-bound material from solubilised brain, was provided by demonstrating binding of neuromyotonia patient antibodies to dendrotoxin-bound Kv1 subunits expressed in Xenopus oocytes after injection of complementary RNA (cRNA). Knowing the gene for Kv1 enabled production of the cRNA, and its expression in the oocyte meant that the Kv1 protein would be present in pure form. In this experiment, 4/12 of the neuromyotonia cohort were positive (and again, 0/18 controls). This positivity rate was felt to be consistent with the possibility that antibodies in human disease might target subunits other than Kv1. The authors offered no information on the correlation between titres on the human brain assay and the Xenopus expression system.

The authors then turned to immunohistochemical staining instead of immunoprecipitation. In this assay, horseradish peroxidase-labelled antibodies are used that bind to immune complexes: any patient serum anti-Kv antibody that has bound to Kv1 channels expressed by the oocytes will in turn be bound by the labelled antibody, and the oocytes are then examined under a microscope. They found positive staining with serum applied to different Kv subtypes but not to a number of controls. However, since the oocytes had been fixed, permeabilised and sectioned prior to incubation with patient antibody, one could not confirm that the Kv1 channel had actually been expressed on the membrane surface as it would be naturally in human neurones. They suggested the technique could be applied to many other putative antigens of pathogenic circulating antibodies, provided the genes for the antigens were known, which is now the case for most proteins.

In another experiment, checking for antibody binding to potassium channel subtypes for which dendrotoxin is not a ligand, the Xenopus oocytes were incubated with sulphur-labelled methionine at the time they were injected with one of three different Kv cRNAs, so that the expressed channel protein would be radiolabelled by the incorporated methionine and detectable by autoradiography. The serum of neuromyotonia patients and of controls was applied to these preparations, and anti-human IgG was used to precipitate patient antibody-bound Kv material. However, the precipitant did not reveal any labelling with either patient or control serum. The authors suggested that this may be because the antibody binding is conformation-dependent, a constraint that somehow did not apply when dendrotoxin had already bound in the other assay. Alternatively, it could reflect that Kv channels are not really the antigenic target in neuromyotonia, an explanation which more recent data have subsequently confirmed.

Historical Context and Journal Club Discussion

Since this paper was published, as mentioned in the background, a more extensive spectrum of disorders associated with potassium channel antibodies has been described, but unfortunately there appears to be no specificity linking disease phenotype to antibodies to particular Kv subunit combinations.

More recently still, the antigenic targets of these antibodies have been clarified to be proteins associated with the potassium channel rather than the channel itself. So the antibodies were not what the paper purported them to be after all! It is not surprising, therefore, that the experiment with directly methionine-labelled subunits yielded negative results. It is not clear why the authors thought the naturally occurring pathological antibodies would bind a channel better when it had toxin attached to it. But it is also now not clear why the immunohistochemical labelling of Kv1-expressing oocytes was positive in some cases, as the actual antigen was in most cases absent. Only in a small minority of neuromyotonia cases have the newer assays demonstrated that the Kv channel proper (and not an associated protein) is the true antigen.

In fact, since 1997, the radioimmunoprecipitation assay by which these antibodies are detected has remained largely unchanged from that used before the advance the paper was supposed to introduce: rodent brain is used as the source of Kv channels and is still labelled with dendrotoxin. Since the toxin binds only a small proportion of all Kv channels, there are likely to be many cases of antibodies against other Kv or Kv-associated antigens that go undetected by current methods. There is significant scope for improvement in these assays, in terms of the range of antigens tested, cross-assay standardisation and, importantly, the timescale from test to result. It was discussed in the journal club how this currently limits appreciation of the potential scope of antibody-mediated neurological disease.

This paper was presented and summarised by Dr Sian Alexander, Specialist Registrar in Neurology at Queens Hospital, Romford.


Comparison of New Oral Anticoagulants (NOACs) with Warfarin

Background

Ischaemic stroke is typically either thrombotic (clotting within a cerebral vessel) or embolic (passage of clot material from a more proximal vessel to lodge in a cerebral vessel). A proportion of embolic strokes arise from arterial vessels such as the carotid in the neck, while others arise from the heart. In atrial fibrillation the atrial wall does not contract properly, and the relatively stagnant blood is more inclined to develop thrombus (clot), which may embolise up the arterial tree to the brain (or indeed anywhere else in the body).

While embolisation from high flow vessels may be reduced by antiplatelet agents, such as aspirin, that from low flow vessels (veins and the atria) may be reduced by anticoagulant agents, such as warfarin.

It can be seen, therefore, that some causes of stroke may be reduced by anticoagulant therapy. It will also be readily seen that such therapy is not going to make any difference to the majority of strokes, and that by its very nature it may increase the likelihood of non-ischaemic stroke, namely brain haemorrhage. Nevertheless, there is a long-held view that on balance anticoagulation statistically reduces the risk of stroke in many patients with atrial fibrillation (AF). In fact, this applies not only to patients who have already had a stroke or transient ischaemic attack, or who have structural heart disease making them even more susceptible to thrombus formation, but also to patients in whom AF is an isolated finding, so-called lone atrial fibrillation.

Two factors have made the issue of anticoagulation for AF topical:

  • Recent evidence has emphasised the view that lone AF is worth treating, as reflected in the UK by a recent National Institute for Health and Care Excellence (NICE) guidance document (CG180).
  • As well as warfarin, there are four (at the time of writing) new anticoagulant agents to choose from and these do not require tedious weekly to monthly blood monitoring.

However, two factors have made the issue of anticoagulation for AF controversial:

  • The new drugs are much more expensive than warfarin
  • Anticoagulation will kill some people and harm others. The very nature of anticoagulants means that they will increase haemorrhage from the bowel and at times of trauma or emergency surgery and, since some strokes are in fact the result of brain haemorrhage rather than ischaemia, they will even increase the risk of this type of stroke!

It is not surprising, therefore, that a meta-analysis of the major studies comparing the risks and benefits of warfarin versus novel oral anticoagulants (NOACs) was published recently and has become the subject of much debate. This study, “Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta-analysis of randomised trials” (Lancet, 2014) by Ruff et al., looks at the four major trials on this subject, all run by the drug companies manufacturing the NOACs.

Before describing the paper, it is worth mentioning wider controversies surrounding these studies.

First, as reported by the British Medical Journal, one of the studies (ROCKET-AF) used a defective device to measure the clotting effectiveness (INR) of the patients in the warfarin arm. The patients on warfarin may therefore have had an artificially bad outcome, not only potentially harming them but also compromising the study’s findings. There is debate over when the drug company first knew about this.

Second, it subsequently emerged that blood monitoring of NOACs improves outcomes in terms of efficacy and reduction in bleeding complications. Obviously there is not the same level of risk in leaving NOACs unmonitored as in leaving warfarin unmonitored, but there is an argument that some patients might statistically come to avoidable harm through lack of monitoring. Nevertheless, because the US Food and Drug Administration (FDA) approved the drugs before this was known, the prescribing recommendation, and with it the major selling point of the new drugs, need not be changed.

 

Study Design and Findings

The meta-analysis looked at four studies:

  • RE-LY, comparing two doses of dabigatran versus warfarin
  • ROCKET-AF, comparing rivaroxaban and warfarin
  • ARISTOTLE, comparing apixaban and warfarin
  • ENGAGE AF-TIMI 48, comparing edoxaban and warfarin

A meta-analysis was felt to be justified on the basis of a similar class effect of the NOACs: they all have a direct antithrombotic action, while warfarin acts by antagonising vitamin K, a cofactor in the synthesis of several components of the clotting cascade. This shared mechanism gives the NOACs similar benefits: faster onset and offset of action, more predictable effect and fewer drug interactions. Any intra-class differences would therefore be outweighed by differences between the trial populations. Pooling the data would increase the chance of finding subgroup differences in the balance between efficacy and safety, the main stated purpose of the meta-analysis. In all there were around 72,000 participants with non-valvular AF!
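For readers unfamiliar with how such pooling works, a minimal sketch of inverse-variance fixed-effect pooling of risk ratios is given below; this is illustrative only (the published meta-analysis used its own prespecified methods) and the per-trial figures are placeholders:

```python
import math

def pool_risk_ratios(trials):
    """Inverse-variance fixed-effect pooling of risk ratios.
    Each entry is (RR, (lower 95% CI, upper 95% CI))."""
    weights, log_rrs = [], []
    for rr, (lo, hi) in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        weights.append(1 / se ** 2)
        log_rrs.append(math.log(rr))
    pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_pooled),
                              math.exp(pooled + 1.96 * se_pooled))

# Placeholder per-trial risk ratios for a single outcome
print(pool_risk_ratios([(0.90, (0.74, 1.10)), (0.88, (0.75, 1.03)),
                        (0.79, (0.66, 0.95)), (0.87, (0.73, 1.04))]))
```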

Median follow-up ranged from 1.8 to 2.8 years, and the outcome measures were occurrences of ischaemic stroke, haemorrhagic stroke, myocardial infarction, all-cause mortality, intracranial haemorrhage, gastrointestinal bleeding and other major bleeding events.

Some studies had higher baseline risks than others, as expected given their different CHADS2 score distributions (a scale measuring the risk of ischaemic stroke in AF).
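The CHADS2 score itself is straightforward to compute; a minimal sketch (standard scoring; the example patient is invented):

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    """CHADS2: one point each for Congestive heart failure, Hypertension,
    Age >= 75 and Diabetes; two points for prior Stroke or TIA."""
    return (int(chf) + int(hypertension) + int(age >= 75)
            + int(diabetes) + 2 * int(prior_stroke_or_tia))

# e.g. a hypertensive 78-year-old with a previous TIA
print(chads2(False, True, 78, False, True))  # -> 4
```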

The headline finding of the meta-analysis was that NOACs had a significantly reduced stroke risk (by 19%, i.e. better efficacy) and a significantly reduced intracranial haemorrhage risk (relative risk 0.48, i.e. better safety). There was reduced all-cause mortality (by 10%) but increased gastrointestinal bleeding (by 25%). The relative efficacy and safety were consistent across a wide range of patients.

 

Opinion

Some of the cautionary notes on this study have already been publicised.

First, the study uses one parameter as a measure of both efficacy and safety: intracranial haemorrhage is counted twice! The study does say that the stroke reduction was largely attributable to reduced haemorrhage. If we are looking at the effects of anticoagulation on stroke, we should be looking at its specific biological action, namely reduction of embolic stroke from the heart in comparison with warfarin, not at reduction in all-cause stroke.

Second, the way data are publicised can shift emphasis, especially for patients. The relative risk of 0.48 means a clinician could say that there is less than half the chance of a brain haemorrhage on NOACs compared with warfarin. But this actually equates to a 0.58% risk versus a 1.24% risk; the absolute risk reduction is therefore 0.66%. (The absolute rise in gastrointestinal bleeding was almost as large, at 0.5%, though a gastrointestinal bleed would still on balance be preferable to an intracranial bleed.)
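A minimal sketch of the arithmetic behind these ways of expressing the same result (figures as quoted above; the crude ratio of the two percentages comes out at 0.47 rather than the pooled 0.48):

```python
def risk_summary(risk_treated, risk_control):
    """Relative risk, absolute risk reduction and number needed to treat,
    from two event proportions over the trial duration."""
    arr = risk_control - risk_treated
    return {"relative risk": round(risk_treated / risk_control, 2),
            "absolute risk reduction": f"{arr:.2%}",
            "number needed to treat": round(1 / arr)}

# Intracranial haemorrhage: 0.58% on NOACs versus 1.24% on warfarin
print(risk_summary(0.0058, 0.0124))
# -> relative risk 0.47, absolute risk reduction 0.66%, NNT 152
```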

The absolute risks above are not annualised, so they illustrate a point rather than being directly quotable to patients. In its guidance to UK clinicians, NICE recommends that patients have a choice whether or not to receive anticoagulation, and provides a chart so that clinicians can explain the particular annual risk of a stroke, based on the CHA2DS2-VASc score, versus that of a bleeding complication, based on the HAS-BLED score. These risks are described in terms of “out of 1000 patients, x would be saved from having a stroke each year by taking anticoagulation”. Converting the figures of this meta-analysis into the same annualised language, about 5 patients per 1000 per year would have a brain haemorrhage on warfarin, and about 3 on NOACs.
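A sketch of that annualisation, assuming a representative median follow-up of about 2.2 years (the mid-range of the 1.8 to 2.8 years reported above):

```python
# Convert trial-duration intracranial haemorrhage risks into
# annualised events per 1000 patients
follow_up_years = 2.2  # assumed; individual trials ranged from 1.8 to 2.8
for drug, risk in [("warfarin", 0.0124), ("NOACs", 0.0058)]:
    print(drug, round(1000 * risk / follow_up_years, 1), "per 1000 per year")
# -> warfarin 5.6, NOACs 2.6: roughly the 5 versus 3 quoted above
```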

In a rationed health economy we also have to look at costs. In the UK, the annual cost per patient of rivaroxaban or apixaban is around £700 to £800, while that of warfarin, including the clinics for regular monitoring, is £283 (NICE CG180).

NICE performed a costing exercise (June 2014) based on the CG180 guidance and on the expected increased uptake of anticoagulation for AF in general, but assumed that warfarin and NOAC use would increase in parallel (warfarin from 34% to 47% of use in AF, and NOACs each from 4.7% to 11.7%). NICE stipulates that patient and clinician choice should determine whether patients go on warfarin or a NOAC. But if the figure of a 50% reduction in brain haemorrhage is quoted, and it is explained that patients do not have to attend a clinic for a blood test every fortnight, we can all guess what patients who are not paying for their medication will choose!

However, health providers might choose differently. In the UK, some have taken the view that, since the excess risk of warfarin was concentrated in a subgroup with brittle INR control, warfarin should be given first line in patients for whom vitamin K antagonists are not contraindicated, with a switch to a NOAC only if control proves brittle.

Some may find all these economic arguments laboured and somewhat distasteful: can one put a price on human life? But, again, in a rationed economy some person (or unwieldy committee) has to decide whether a limited amount of money goes on NOACs, on life-prolonging cancer treatment, or on running blades for childhood amputees. The way these decisions are (supposed to be) made is on the basis of quality-adjusted life years (QALYs). The UK Department of Health might have a guide of, for the sake of argument, a maximum £50,000 cost per QALY gained. The QALY loss of a brain haemorrhage might be 0.5 per year for 5 years, i.e. 2.5 (some haemorrhages will be trivial, some will be fatal after factoring in life expectancy, some will recover over time, some will die after a finite time). The prevalence of AF is 1.6%, so in a population of 100,000 around 1000 people should be on anticoagulation, and NOACs would save 2 brain haemorrhages per year in that population. The excess annual cost per 1000 patients is about £500,000, so a rough estimate of the cost per QALY is £100,000. One could argue for additional credit from the increased efficacy and reduced mortality of NOACs, but perhaps that is subsumed within the haemorrhage reduction anyway, and there would be nearly as many extra GI bleeds as intracranial bleeds avoided.
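A minimal sketch reproducing that back-of-envelope arithmetic (every input is an assumption stated above, not the output of a formal health-economic model):

```python
# Rough cost per QALY of NOACs versus warfarin
patients = 1000                       # anticoagulated patients per 100,000 population
ich_avoided_per_year = 5 - 3          # brain haemorrhages avoided annually
qaly_loss_per_ich = 0.5 * 5           # 0.5 QALY/year for 5 years = 2.5
excess_cost = (750 - 283) * patients  # ~GBP 750 per NOAC patient vs GBP 283 warfarin

qalys_gained_per_year = ich_avoided_per_year * qaly_loss_per_ich  # 5.0
print(f"Cost per QALY: GBP {excess_cost / qalys_gained_per_year:,.0f}")
# -> GBP 93,400, i.e. of the order of GBP 100,000
```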

The other reason why this topic is an emotive issue, and why it has received considerable attention with NICE invoking “patient choice”, is a controversy not related to NOACs versus warfarin as such, but to the knowledge that in a few patients we will be doing potentially terminal harm by starting either treatment. This goes against the physician’s mantra, “First do no harm” and follows the alternative mantra, “The needs of the many outweigh those of the few”. Statistically we know anticoagulation will help more AF patients than it harms, but no-one wants to be one of the few who suffer the brain haemorrhage, or to be the physician who gave them the drug that caused it.
