Detection of Brain Activation in Vegetative State by Standard Electroencephalography

This paper by Claassen et al. (2019) looks at EEG pattern changes in response to verbally delivered movement commands, to see whether there is a subset of vegetative state patients who are cognitively responsive yet have no motor response. The hope is that this might predict eventual outcome.

The study took 104 patients who had had an acute brain injury. Most (85%) had non-traumatic brain injury, which in general carries a more predictably bad prognosis. These patients were either in a vegetative state or in a somewhat better, minimally responsive state, e.g. localising to pain but not obeying commands.

The EEG testing was performed within a few days of initial ITU referral.

In each trial, a patient was asked eight times to open and close their hand repeatedly for 10 seconds and then to relax their hand for 10 seconds, while ongoing EEG activity was recorded. Two-second time blocks were analysed in the frequency domain by calculating the power spectral density (PSD), looking at the relative strength of signal in each EEG lead in four frequency ranges (delta, theta, alpha and beta).
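As a rough illustration of this kind of analysis (this is not the authors' code), the relative power in each classical EEG band can be extracted from a 2-second epoch via the Welch power spectral density. The sampling rate and the signal itself are invented for the sketch:

```python
import numpy as np
from scipy.signal import welch

# Illustrative sketch only (not the paper's pipeline): relative band power
# from one 2-second, single-channel EEG epoch. Sampling rate is an assumption.
FS = 250  # Hz (assumed for the sketch)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_powers(epoch, fs=FS):
    """Fraction of total spectral power falling in each classical EEG band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=len(epoch))
    total = psd.sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(0)
epoch = rng.standard_normal(2 * FS)  # stand-in for a real 2-s recording
powers = relative_band_powers(epoch)
```

In the study, such band-power values from every lead form the feature vector for each 2-second block, which the classifier then tries to label as "move" or "stop moving".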

A “machine learning algorithm” was used to distinguish the “move” PSDs from the “stop moving” PSDs.

Patients were considered to show EEG activation if the algorithm consistently distinguished the "move" command from the "stop moving" command at a level significantly greater than chance (an AUC of 0.5).

Outcome was determined by the standard Glasgow Outcome Scale at 12 months, with values >=4 (able to be left alone for up to 8 hours) defined as a good outcome.

Ultimately, patients with at least one recording showing EEG activation had a 44% chance of a good outcome as defined above, while only 14% of patients without EEG activation had a good outcome (with 5% missing data).


Some of the patients were under sedation for safety reasons, which could influence their responsiveness in a reversible manner unrelated to their brain injury and also affect their EEG, although this would be unlikely to affect the change in EEG pattern over several seconds, other than through the patient's genuine level of response.

It might have been worthwhile to record surface EMG of the forearm flexors, just to confirm there was no difference in EMG activity between “EEG activation” patients and those with no EEG change. In a patient with critical illness neuromyopathy, a little movement or muscle activation might not easily be seen.

Because patients were taken consecutively, rather than being matched according to their coma severity, poor matching was possible, and was indeed present: the patients subsequently found to be "EEG responsive", and eventually to have a better outcome, were less likely to be in the worst comatose category at initial enrolment (50% vs 55%) and more likely to be in the best category (31% versus 23%). Although the odds ratios were not statistically significant, that is not the same as positive evidence, with any degree of confidence, that there was no difference in initial severity between the groups.

In fact, if one stratified patients according to the initial three clinical severity categories, would that have more powerfully predicted better outcome than “EEG responsive” or not, making the test redundant?

On technical appraisal of the methodology, it seems that the power spectral densities were individual 2-second blocks, with all the comparisons and averaging being done subsequently by the machine learning pattern recognition algorithm.

Statistically, the paper used the single value of the area under the curve (AUC) of the receiver operating characteristic (see below). This means that across a range of sensitivities (or true positives (TP), where the algorithm correctly decides that there is enough of a difference between the "move" and "stop moving" patterns), there is an opposing range of false positives (FP). How convex the curve describing this trade-off is relates to how good the test is. A value of 1 means perfect classification, 0.5 is just random (the straight diagonal in the figure below), and 0 means the pattern change reliably identifies the "stop" pattern when it was supposed to identify the "move" pattern.
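The AUC has a handy probabilistic reading: it equals the chance that a randomly chosen "move" epoch receives a higher classifier score than a randomly chosen "stop" epoch. A minimal sketch, with made-up scores purely for illustration:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Probability that a random 'move' score exceeds a random 'stop' score
    (the rank-sum definition of the area under the ROC curve)."""
    scores_pos = np.asarray(scores_pos, float)
    scores_neg = np.asarray(scores_neg, float)
    # all pairwise comparisons; ties count half
    gt = (scores_pos[:, None] > scores_neg[None, :]).mean()
    eq = (scores_pos[:, None] == scores_neg[None, :]).mean()
    return gt + 0.5 * eq

print(auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4]))  # perfect separation -> 1.0
print(auc([0.6, 0.5, 0.4], [0.9, 0.8, 0.7]))  # labels reversed -> 0.0
```

A value near 0.5 means the scores for the two conditions are interleaved at random, which is why the paper's threshold for "EEG activation" is an AUC significantly above 0.5.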

[Figure: ROC curves, from Wikipedia's "Receiver operating characteristic" article]

This is shown in their fig. 3 (below), which seems to show the AUC values for each of the 5 "move" 2-second samples (hence the varying level across each peak and trough) followed by each of the 5 "stop moving" samples, with the whole thing repeated 8 times. However, they say that the graph is shown "for descriptive purposes only", so we do not know how it relates to the real data! We do not know whether these are actual averages for all the controls, all the EEG responsive patients (which they call cognitive motor dissociation (CMD)) and all the non EEG responsive patients. If they are averages, they would have to be across all the first 2-second epochs, then all the second 2-second epochs, and so on.

[Figure: fig. 3 from the paper, showing AUC values across the command cycle]

This matters because, although the algorithm provides a discrete yes-no answer, the confidence of that answer is a continuous variable, and there is a suspicion that this confidence may fall on a continuum with healthy volunteers at one end and the least responsive patients at the other, rather than forming three discrete modal peaks of normal, EEG responsive and EEG unresponsive. If the former, the inevitable variability about a single mode makes the test far less useful as a predictor of outcome in individual patients. At best, it could be an independent predictor that, combined with other predictors, could build up a reasonably confident prognosis.

A major issue with patients in a vegetative state is when to withdraw support. In the UK, in patients with non-traumatic acute brain injury, a persistent vegetative state is defined as such around 3 months after injury, and this is when conversations along these lines may be had, on the basis that if the patient has not "woken" by this time, the chance that they eventually will, with a reasonable quality of life, becomes vanishingly slim. No-one is ever going to think about withdrawing support at 6 days post-injury on the basis of an "EEG unresponsive" result.

This Journal Club post was presented by Dr Rubika Balendra, Specialist Registrar in Neurology at Barking Havering and Redbridge University Hospitals NHS Trust.


Posted in Intensive Care Neurology

Double-Blind Double-Dummy Randomised Study of Continuous Intrajejunal Infusion of Levodopa-Carbidopa Intestinal Gel in Advanced Parkinson’s Disease

Background

Levodopa, a pro-drug of dopamine, has been used successfully to treat symptoms of Parkinson’s disease for fifty years and remains the mainstay of medical management. However, after years of treatment, with increasing loss of dopaminergic presynaptic terminals, symptomatic control may become more brittle, with sudden and unpredictable “on” and “off” treatment periods during the day, or with involuntary movements called dyskinesias. There are theoretical reasons, and some animal model and clinical evidence, why intermittent oral delivery of levodopa may increase susceptibility to these problems through unphysiologically wide fluctuations in synaptic dopamine; unfortunately, the plasma half-life of levodopa after an oral dose is as little as an hour. As a result, other longer-acting medicines have been introduced, but they may come with other side effects and are simply not as powerful as levodopa.

Relatively steady-state levels of levodopa can be achieved by direct intrajejunal delivery. Unfortunately, levodopa is not stable in solution, and the gel used to keep it in suspension in a deliverable form is very expensive to produce. A year’s treatment in the UK was estimated by NHS England in 2015 to cost around £28,000. As a result, despite now substantial evidence of the treatment’s effectiveness, there has been debate about its cost-effectiveness. Calculations of cost-effectiveness in terms of cost per quality-adjusted life year (QALY) gained vary considerably. The calculations depend not only on the cost of treatment versus standard treatment and the difference in quality of life, but also on carer costs and other costs; if a treatment is less effective, the patient may be more disabled and cost more. It is unclear, however, how figures on the cost of disability can be applied to an estimate of how much less effective a treatment is at all points of the severity scale. As far as I am aware, there is no actual study showing how much is saved in non-medication costs in patients on levodopa-carbidopa intestinal gel (LCIG); the information is instead extrapolated.

In one sense, the QALY gain might be counted twice: once for the intrinsic value of the gain in quality of life, and again for the reduction in disability that produced that improved quality of life. In another sense, this might be a fair way to handle such an analysis compared with a treatment that improved quality of life without reducing disability costs.

It is important in such calculations to use reliable data on the magnitude of the benefit gained, rather than just to show that there is a gain. This is best achieved by a randomised study with a control arm, as exemplified by the study of Olanow et al., the subject of this journal club.

Study Design

Sixty-six of sixty-eight candidate patients underwent the trial. Patients were selected on the basis of having idiopathic Parkinson’s disease for five or more years, having optimised therapy (meaning a trial of levodopa, a dopamine agonist and one other type of anti-parkinsonian therapy), at least three hours of “off” time daily, and no clinically significant psychiatric abnormalities.

At first, I assumed that the trial was a cross-over design; in fact it was not. Patients all had jejunostomy procedures but were randomised to LCIG plus placebo oral levodopa, or placebo LCIG plus oral levodopa. They were assessed after a four-week stabilisation period before intervention, and then 12 weeks afterwards, when the two groups were compared.

Patients who were on controlled-release (CR) preparations or COMT inhibitors were switched to equivalent immediate-release preparations. The LCIG dose was the same as the total daily levodopa dose, delivered over 16 hours of the waking day in the normal fashion for jejunal delivery.

Study Findings

On looking at the graph, labelled figure 2B in the manuscript, it is immediately obvious that both LCIG and oral patients improved very dramatically and then levelled off, despite previously being “optimised” on oral therapy. Our possible suspicions about what “optimised” means are confirmed. As the authors explain, the doctors had the opportunity to increase the LCIG or oral levodopa during the study, and this was done in a number of cases after the 4-week stabilisation period. In fact, the oral medication patients had their dose increased more (a mean of 250 mg daily versus 100 mg daily). Despite this, neither group had increased “on” time with troublesome dyskinesia.

[Figure: figure 2B from Olanow et al.]

The main message of the study is that after 12 weeks the improvement was greater with LCIG, with a mean of around 1.9 hours less “off” time and 1.8 hours more “on” time without troublesome dyskinesia. I suppose that if there is no change in “on” time with dyskinesia, it is obvious that the two values will be similar, as one state is simply replaced by the other.

Regarding quality of life, there was an 11-point versus 4-point improvement in the PDQ-39 (a PD quality of life measure). This seems quite important.

Strangely, on the UPDRS there was an improvement in part II (activities of daily living) on LCIG and a worsening on oral therapy, but actually twice as much improvement in part III (motor examination measured in the “on” state) on oral therapy. Possibly this means that there is a subtle side effect of oral therapy, increased during the trial, that adversely affects wellbeing, but that the increased “hit” of levodopa made the patients’ best “on” state better than with LCIG.


It is not clear how far the withdrawal of COMT inhibitors left patients in either treatment arm suboptimally treated and therefore needing increased treatment during the study. It would be important to ascertain whether, by chance, the oral arm had had more COMT inhibitors withdrawn.

The main advantage of this study is that having the control arm at least allows us to appreciate that “optimised” does not really mean optimised. The patients were clearly underdosed; one has to wonder how much better the oral patients could have been had there been the opportunity to optimise them properly by adjusting top-up dopamine agonists, adjusting dose frequency rather than just dose quantities, and by introducing, reintroducing or optimising COMT inhibition. After all, studies on COMT inhibitors show a reduction in “off” time of about an hour compared to baseline “optimised” therapy.

A parsimonious interpretation of the data is that LCIG simply has better bioavailability than oral therapy: the patients were underdosed, switching to LCIG is simply a stronger treatment, and the effect could be replicated by giving more oral treatment. In fact this may well have been the case, explaining the 150 mg per day greater levodopa increase given to oral patients, but the facility for changing doses meant this effect would be minimised in this study.

While the power of the study was easily enough to demonstrate a clinically meaningful difference, I wonder if a cross-over design might have allowed intra-patient comparisons and a clearer effect, and eliminated or elucidated the improvement effect from oral therapy. In this design, each patient would have placebo LCIG for half the time, and placebo oral therapy for the other half. The direction of change at the cross-over point would be the key parameter. The patients’ doses would be matched at this cross-over point, and then not changed over the second half. This design would be confounded by a bioavailability effect, but that could at least be measured by the increase in oral dosing during the first half, and there might be an overdose effect on switching from oral to LCIG during the second half of the trial.

Studies looking at the cost effectiveness of LCIG should primarily take data from those like this one, rather than those that use an open label design showing an improvement compared to baseline “optimised” therapy of four hours “off” time reduction. The increased benefit in PDQ shown in this study is nevertheless quite persuasive that there is some real helpful feature of continuous intrajejunal delivery, at least in the short term.

There are other studies that show long term benefits of LCIG but they have not had the same design. Obviously, this design conducted over too long a period would not be ethical; presumably the principle is that all patients after 12 weeks would be offered LCIG, having already had their PEJ tubes inserted. On the other hand, in a longer term study, one would hope that every ongoing effort would be made to optimise therapy in the oral therapy group.

In practice, one must balance benefit against side effects. Not all patients will want a PEJ tube, or to carry a large cartridge and pump. Virtually all patients had side effects, with more serious ones in 13-20%. In 3% the treatment was discontinued as a result of surgical complications; 24% had tube dislocations, 21% insertion complications, 10% stoma complications, 8% pump malfunctions and 7% peritoneal problems. There are reports of neuropathy from LCIG, but in this study there were three possible cases in the placebo group and only one in the treatment group.

Finally, LCIG is not the only advanced therapy available. There are no direct comparisons between LCIG and deep brain stimulation or apomorphine pump therapy to guide which treatment to select in individual patients, although the different inclusion and exclusion criteria do provide some help in choosing which therapy is appropriate for which patient. For example, age over 70 and a history of depression exclude deep brain stimulation but not LCIG.

Posted in Parkinson's Disease

Mechanical Thrombectomy for Ischaemic Stroke



Stroke is the most common cause of disability in Western countries, and its lifetime risk is 1 in 6 for men and 1 in 5 for women. While managing acute stroke patients in hyperacute stroke units has overall modest benefits for short- and long-term outcome (e.g. 51% versus 47% independence and 29% versus 33% mortality), specific therapeutic options are limited. The first major option for treatment of ischaemic stroke was intravenous thrombolysis, paralleling its earlier development in acute myocardial infarction.

However, while its use in myocardial infarction was widespread in the 1990s, it has only been widely used to treat acute stroke in the last ten years. This is probably because of the narrower therapeutic window and the more severe consequences of haemorrhagic complications in the brain. In addition, its benefits are actually relatively modest. In the first main randomised clinical trial of its use within three hours (NINDS), a good outcome (grades 0 to 1 on the modified Rankin scale) was achieved in 39% of patients versus 26% of those receiving placebo (bearing in mind that in the first hour a stroke often spontaneously recovers, and is then termed a TIA), but with a symptomatic brain haemorrhage risk 6% greater than in the placebo group.

When delivered between 3 and 4.5 hours after stroke onset (ECASS III), the benefits on the same scale were 52% vs 45%, giving a relative risk confidence interval of 1.01 to 1.34 (p=0.04). In other words, this was only just statistically significant in a study of 821 patients. The risk of causing intracranial haemorrhage was 27% versus 17.6% (p=0.001), and thrombolysis caused major symptomatic brain haemorrhage in 2.4% versus 0.3% of placebo patients (p=0.008).

So it is not surprising that there has been a move, just like in cardiology a decade or two earlier, away from relying solely on intravenous thrombolysis and towards direct intra-arterial catheter treatment. The paper, Revolution in acute ischaemic stroke care: a practical guide to mechanical thrombectomy, summarises recent evidence in favour of this treatment and the infrastructure required to manage patients in this way. This Journal Club review discusses issues around acute stroke treatment and the ramifications for delivery of such a service.


The Published Review

The first mechanical thrombectomy devices were approved for use in 2004, but it was only technical developments, and probably the improved expertise that comes with experience, that led to positive results as shown by a spate of studies published after 2010 employing a new generation of devices.

The HERMES collaboration meta-analysis revealed that 46% of patients had a good outcome with functional independence (grades 0-2 on the modified Rankin scale) compared with 26.5% on best medical treatment. Most patients in both groups received intravenous (iv) thrombolysis, since in most study protocols patients had iv thrombolysis before going on to have thrombectomy an hour or so later. Mortality and the risk of brain haemorrhage did not differ between the two groups. The benefit seemed still to be present in patients over 80, and in patients who did not receive iv thrombolysis, though the numbers to test the latter were small. While the window for thrombectomy was within 6 hours, there may still be improved outcomes up to 7.3 hours after symptom onset, but in general faster intervention leads to greater benefit. At a cost of £2,500 per Quality-Adjusted Life Year (QALY), the procedure would be considered cost-effective by any political criteria.

The thrombectomy technique has a number of variations depending on the neuroradiologist and on the particular nature and location of the thrombus. It may be done under general anaesthesia, or local anaesthesia with sedation and anaesthetic support. A large-gauge catheter is directed to the internal carotid artery via a femoral puncture, and an intermediate catheter inside it is directed to the circle of Willis. A microcatheter inside the intermediate one then serves as a guide to the actual clot. The microcatheter is removed and a stent retriever is placed within the clot and pulled back to draw the clot to the intermediate catheter. Suction is applied to this catheter to remove the clot entirely. Some techniques involve directly removing the clot by suction on the intermediate catheter. A balloon may be placed at the distal end of the clot to prevent forward movement of clot fragments (which would constitute an embolus, an undesirable occurrence). When removing the clot reveals a tight lumen, there is the further option to perform angioplasty or stenting to open the vessel. The same can apply to a carotid stenosis occurring in tandem with a more distal thrombus.

The main complications are technical, including vessel perforation (1.6%), other symptomatic intracranial haemorrhage (3-9%), subarachnoid haemorrhage (0.6-5%), arterial dissection (0.6-3.9%), and distal emboli (1-9%). In addition, there can be vasospasm or issues related to the puncture site. While the total incidence is 15%, there is not always any actual adverse clinical consequence.

While the 6 hour time window for thrombectomy is wider than for intravenous treatment, there are other selection criteria that are more strict:

  • There should be a documented anterior circulation large vessel occlusion of the middle cerebral or carotid artery. (There is only limited evidence for efficacy in basilar occlusion.)
  • There should be good collateral cerebral circulation.
  • There should be relatively normal extracranial arterial anatomy from the technical viewpoint regarding passing the catheter.
  • There should be significant clinical deficit at the time of treatment (but this parallels the criteria that should be applied also to intravenous thrombolysis), while acknowledging that a large vessel occlusion with minimal clinical deficit nevertheless incurs a significant risk of clinical deterioration.
  • There should be a lack of extensive early ischaemic change on CT (a threshold of 5 on the ASPECTS score). The role of more advanced imaging, e.g. CT perfusion, to establish salvageable brain is yet to be clarified.
  • Consideration should be given to pre-stroke functional status and the potential of benefit.
  • Patients should have had iv thrombolysis within 4.5 hours of symptom onset.

The authors report that there is little evidence on managing blood pressure around the time of the procedure. It is probably best to avoid lowering blood pressure unless it is greater than 220 mmHg systolic, or 200 mmHg systolic if there is evidence of clinical complications of hypertension.

Usually no specific anticoagulation is given around the procedure. Some interventionalists use a peri-procedure dose of heparin. Aspirin is avoided beforehand but patients can have their usual 300 mg aspirin dose starting 24 hours after their stroke. If a stent has been implanted, aspirin and clopidogrel are given together for the first 3-6 months.

Authors’ Conclusions

The authors emphasise the great benefits to be had in selected patients, and comment that the selection criteria may be broadened with future experience. In particular, cases of milder stroke with large vessel occlusions may prove to be good candidates, and the time window may broaden or perhaps be ignored altogether if advanced imaging reveals a reversible penumbra.

They highlight that the significant technical complication rate means that the procedure should be concentrated in centres that deal with a large number of cases to gain and maintain expertise. They describe two models: “drip and ship” where the patient is thrombolysed at a local HASU (or A&E resuscitation unit?) and ambulanced to the thrombectomy centre, versus “mothership”, where the patient is transferred straight to the thrombectomy unit.

Journal Club Comments

The 20% increased good outcome arising from mechanical thrombectomy on top of that from iv thrombolysis is impressive compared to the 13% reported for thrombolysis versus placebo.

While the selection criteria are more stringent, they are not very much more stringent than for thrombolysis alone; a middle cerebral artery occlusion is a common presentation of acute stroke, especially among more severe strokes. The review estimated that 10% of acute stroke patients would be candidates. We suspect at most half that, given that in practice thrombolysis rates are 10%, and 5% in some centres.

The most striking issues for us were the very high degree of technical expertise required acutely for decision-making and performing the procedure, and the high technical complication rates that parallel the high levels of benefit. The Neuroradiologist appears to decide both before and during the procedure between a number of different technical options and items of equipment. The suspicion is that the complications, unlike the haemorrhage rates for iv thrombolysis, depend much less on blind luck than on user expertise.

We wondered about circumstances where there might be a contraindication to intravenous thrombolysis and yet not to thrombectomy; it does not appear that thrombolysis, or even anticoagulation or antiplatelet therapy, is actually required for the procedure, and intravenous thrombolysis is so short-acting that it would not protect against new emboli resulting from the procedure. The trials were conducted according to a protocol of having received thrombolysis mainly for ethical reasons, to avoid denying patients a proven beneficial treatment.

However, for practical purposes, a poor candidate for thrombolysis is probably in general going to be a poor candidate for thrombectomy. It would nevertheless be interesting to see if the 20% benefit from thrombectomy overlaps with that from thrombolysis, or adds to it. In other words, could patients get a 20% benefit from thrombectomy alone, and not face the 6% risk of thrombolysis-induced brain haemorrhage?

As an aside to the discussion on the benefits of stroke treatment, we noted the different slants that can be put on data. This has great practical consequences for the patient. Returning for a moment to intravenous thrombolysis at 3 to 4.5 hours after stroke, a clinician may explain to a patient (if they are not too dysphasic at the time) that they can deliver a treatment with an odds ratio of good outcome of 1.34. Or the clinician might, more likely, say there would be a 34% better chance of recovery, or a third as much again better chance of recovery. Right?

Wrong! The odds ratio is the ratio of good to bad outcome in the treated group divided by the ratio of good to bad outcome in the untreated group. What layperson would describe things in those terms, terms that incidentally magnify the apparent benefit? The relative risk, i.e. the probability of a good outcome in the treated group divided by that in the untreated group, is what most laypeople would understand, and that figure is 1.16. Even then, this does not mean that 16% more patients have a good outcome. From the actual figures, 52% versus 45%, 7% more patients get a good outcome, which is considerably different from 34%, and not so favourable when at the same time 10% more patients get brain haemorrhages (or should we say 53.4% more likely?!), though only in 2.4% (700% more likely!!) do these haemorrhages give the patient a much bigger stroke than they would otherwise have had.
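To make the distinction concrete, here is the arithmetic on the ECASS III percentages quoted above. (The odds ratio is recomputed from the rounded 52% and 45%, so it comes out slightly below the paper's 1.34.)

```python
# Worked example using the ECASS III good-outcome proportions quoted above.
good_treated, good_placebo = 0.52, 0.45

# Odds ratio: (good/bad odds in treated) / (good/bad odds in untreated)
odds_ratio = (good_treated / (1 - good_treated)) / (good_placebo / (1 - good_placebo))
# Relative risk: probability of good outcome, treated vs untreated
relative_risk = good_treated / good_placebo
# Absolute gain: extra good outcomes per patient treated
absolute_gain = good_treated - good_placebo

print(round(odds_ratio, 2))     # 1.32: the headline-friendly figure
print(round(relative_risk, 2))  # 1.16: what most laypeople would assume was meant
print(round(absolute_gain, 2))  # 0.07: 7 extra good outcomes per 100 treated
```

The same data thus yield "34% better odds", "16% relative improvement" or "7 in 100 helped", depending on which statistic the clinician chooses to quote.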

What I would say at 3 to 4.5 hours after stroke onset is:

“We have a treatment available to dissolve clots in the brain that when given at this time after a stroke probably overall improves the chances of a good recovery, but which has risks of causing bleeding, including a brain haemorrhage that may make your stroke worse not better. Overall out of 100 people, on average 7 extra patients will get a good recovery from their stroke when they have the treatment, about 90 will be no different and 3 will be significantly worsened.”

And if the stroke is relatively mild, or one of those where one suspects the patient might be significantly better come the following morning regardless, one really wonders how much the patient stands to gain and whether to take that 2.5% risk of a much worse stroke instead.

The point about dysphasia is a serious one; can one ethically obtain proper consent to deliver a treatment that is definitely going to result in some people suffering additional permanent disability if not death? Even without dysphasia, lying semi-paralysed under a ticking clock is probably a situation, both for the patient and relatives, where choice, let alone informed consent, is an illusion. When consenting for emergency surgery, one generally has at least the impression that the benefits are an order of magnitude greater than the risks, or that a poor outcome without intervention is inevitable.

Another example of statistics and the all-important magical 0.05 p-value relates to the original comment about acute stroke units. The differences from general ward care are surprisingly modest, but it is always quoted from the Stroke Unit Trialists’ Collaboration Cochrane review in 2009 that stroke unit care significantly reduces mortality. A group, Sun et al. (2013), did their own analysis and actually looked at the data. There was a discrepancy in the number of deaths in the control group of the largest study, the Athens trial: 121 deaths versus 127. On contacting the Cochrane review author, they were told that there was an “error which will be corrected in the next update”; on doing the sums to correct the “error”, Sun found that the p-value for the reduction in mortality shifted across the magical 0.05 threshold from 0.03 to 0.06. So there is no clear evidence that stroke units reduce mortality…
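The fragility of a result sitting near the threshold is easy to demonstrate. The sketch below uses entirely hypothetical counts (we do not have the trial's denominators; the 600-per-arm figure and the stroke unit death count are invented), chosen only to show how correcting a handful of deaths in one arm can move a chi-squared p-value from one side of 0.05 to the other:

```python
from scipy.stats import chi2_contingency

# Hypothetical numbers for illustration only; NOT the Athens trial data.
def mortality_p(control_deaths, n_per_arm=600, unit_deaths=95):
    """Chi-squared p-value (with Yates correction) for a 2x2 deaths table."""
    table = [[unit_deaths, n_per_arm - unit_deaths],        # stroke unit arm
             [control_deaths, n_per_arm - control_deaths]]  # general ward arm
    _, p, _, _ = chi2_contingency(table)
    return p

p_original = mortality_p(control_deaths=127)   # larger mortality gap
p_corrected = mortality_p(control_deaths=121)  # six fewer control deaths
# With these invented denominators, the correction alone drags p across 0.05.
```

The point is not the particular numbers but the behaviour: a "significant" headline can rest entirely on a data-entry discrepancy of half a dozen patients.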

If one looks objectively at the data:

  • Thrombectomy leads to 20% more good outcomes, which may replace rather than add to those from intravenous thrombolysis, and with no higher risk of brain haemorrhage.
  • Thrombolysis alone leads to 13% more good outcomes, if given within a very restricted window of 3 hours after stroke onset, but with a significant risk of brain haemorrhage and other complications.
  • Stroke units, which also treat the other 90% of strokes, lead to 4% better outcomes, a figure of uncertain clinical significance.

Regarding stroke units, it is possible that it is the 10% who are candidates for intervention who contribute largely to that 4% improvement, along with those with haemorrhagic stroke getting surgical input, or neurological stroke mimics being fast-tracked to more appropriate acute care. And if general wards treating the other 90% had more focus on early swallow assessments and on actually feeding nil-by-mouth patients nasogastrically within 48 hours, would that single measure not improve outcome?

The initial decision to perform thrombectomy is highly technical and requires a neurointerventional radiologist; the procedure obviously requires a neuroradiologist, and therefore the consent should probably be taken by the neuroradiologist, as should the post-procedure ward round and early outpatient follow-up. The neuroradiologist requires the support of an anaesthetist during the procedure, and perhaps of an intensivist around it. The technical skill required to write a thrombolysis prescription is negligible; that required to perform a highly challenging emergency procedure, to minimise technical complications arising from mistakes and to deal with those complications when they do arise, will make or break the success of thrombectomy and of the stroke service. Does it not seem that acute stroke care has shifted from a medical to a “surgical” speciality? Instead of a “mothership”, could we have a Neuroemergency Unit, a neuro-ITU next to a catheter lab, centred around the neuroradiologist managing the acute stroke patients who are going to benefit from intervention, as well as patients with subarachnoid haemorrhage, with support from anaesthetists, stroke physicians/neurologists and neurosurgeons, and with stroke physicians and allied health professionals taking on the subsequent rehabilitation role?

Posted in Stroke | Tagged , , , , | Leave a comment

Thymectomy for Myasthenia Gravis



While thymectomy has long been considered an option to treat myasthenia even if there is simply thymic tissue present rather than thymoma or thymic carcinoma, it has been uncertain how much benefit is achieved by undergoing this major surgical procedure. While there have been a number of retrospective reports of benefit, observational studies where the patients were also on modern immunosuppression did not show benefit, and some studies have indicated that any benefit that does exist is only present in the first 5 years after surgery. There has therefore been a call for a randomised study comparing thymectomy in non-thymoma patients (in thymoma there is an indication to operate anyway) combined with standard immunosuppressive treatment versus standard immunosuppressive treatment alone. Just recently the results of a long-awaited trial on this topic were published in the New England Journal of Medicine.


Study Design

From 2006 to 2012, a total of 126 patients were randomised to the two arms as above. Eligible patients were adults under 65 with positive anti-acetylcholine receptor antibodies and non-ocular (i.e. at least mild generalised) myasthenia. Assessors of myasthenic severity were blinded (patients wore high-necked clothing during assessment!). Patients did not have to have visible thymic tissue on CT or MRI imaging; in fact visible thymoma was an exclusion criterion. The surgery removed any mediastinal tissue that could contain macroscopic or microscopic thymic tissue.

The primary measures of severity were the time-weighted average quantitative myasthenic score measuring fatigability in key muscle groups, and the steroid dose requirement to maintain minimally symptomatic disease. Assessment was over a three year period.



The study found a 2.8 point lower average quantitative myasthenia score (i.e. better) in the thymectomy group, and also a lower requirement for steroids (44 mg versus 60 mg). Fewer patients required azathioprine or hospitalisation for exacerbations (9% versus 37%). There was no difference in treatment-associated complications, but there were fewer treatment-associated symptoms, presumably reflecting the lower average doses of immunosuppression. The study performed subgroup analysis by sex and found no difference in myasthenia score for men, but still a reduction in steroid requirement. There was no stratification by age.


Authors’ conclusions

Thymectomy improves outcome in the first three years after surgery, even compared with modern immunosuppressive therapy regimes. The lower score was probably clinically significant, given that physicians judged changes as small as 2.3 points to represent clinical improvement. The study falls short of making any clear recommendation to offer thymectomy to everyone for whom surgery is not otherwise excluded.


Journal Club’s Conclusions and Comments

We wondered if there might be variability in how hard one looks for thymic tissue on imaging which would in this study trigger exclusion from the trial on the basis of thymic hyperplasia. The less sensitive the investigation, the greater the chance of entering into the trial patients operated upon with hyperplasia and therefore the greater the expectation of benefit from surgery.

One of the key questions was the duration of benefit of surgery, but with a three year trial this obviously cannot be answered. Will patients want surgery if the benefit is only three years of a 15 mg lower prednisolone dose (where the error bars are missing from the figure) and fewer hospitalisations (the latter was not a primary outcome measure, and obviously the patients who decided to attend hospital were not themselves blinded)? We will probably see further updates of the same study cropping up in the NEJM at intervals.

The lack of clear guidance on management as a result of this trial, probably the only such ever likely to be performed, is a little frustrating. Perhaps they are waiting for a longer term follow up. Our group discussed that current practice is to be selective in offering thymectomy. A young woman who wants to have children and who has already proven resistant to or dependent on high dose steroids, is clearly going to be a better candidate for thymectomy than a man aged 65 with mild easily controlled disease. What we need more guidance on is the tipping point in the balance between those two extremes. Nevertheless the study confirms that at least some patients without thymic tissue on imaging do have benefit over the first three years when compared with modern immunosuppressive regimes.

The journal club meeting upon which this report is based was presented by Dr Peter Arthur-Ferraj, Specialist Registrar in Neurology.



Posted in Myasthenia | Tagged , , , | Leave a comment

Safinamide in Parkinson’s Disease



The rather specific dopaminergic deficit in Parkinson’s disease (PD) has meant that dopaminergic replacement medications have proven to be an effective mainstay of treatment of the condition. However, later on in the course of the disease, such treatment may have increasing limitations resulting from decreasing efficacy and increasing complications such as dyskinesia, postural hypotension and hallucinations or other psychological manifestations.

Most recent developments in pharmacotherapy have therefore consisted of different formulations of or delivery systems for dopamine agonists or levodopa, as well as agents that promote the effects of dopamine.  It is rare that a new class of agent arrives on the scene for treatment of Parkinson’s disease and such an agent is therefore worthy of close attention.

Safinamide is one such agent, an alpha-aminoamide that, as well as having monoamine oxidase B (MAO-B) inhibitory action, also has a non-dopaminergic action in the form of glutamate modulation. This modulation is probably achieved by blocking N-type Ca2+ channel mobilisation and thereby reducing presynaptic glutamate vesicle release. An action to stabilise Na+ channels by promoting the inactive state may also be relevant.

The MAO-B action is therefore akin to that of selegiline and rasagiline, though safinamide is reversible and more selective for MAO-B, perhaps reducing the tendency to side effects such as tyramine reactions and obviating the need for dietary restriction of cheese, etc. The action on glutamate is more akin to that of amantadine, a drug that has useful antidyskinetic properties in PD.

A number of studies on safinamide were conducted prior to its recent licence acquisition. First, as drug companies tend to do, the focus was on initiation therapy in early disease. Presumably a greater market share would be gained, and many patients would start on the drug early and remain on it longer during the long course of their disease.

There does not appear to be a major effect of safinamide when used de novo in early disease. When used in early disease as an adjunct to dopamine agonists, one trial (Stocchi et al., 2012) in 270 patients found over 6 months that 100 mg had a significant UPDRS benefit versus placebo (-6 vs. -3.6). The dose of agonist was supposed to remain stable, yet an increase was allowed if symptoms worsened! On blood analysis, drug was found in 26% of patients in the placebo group!! Someone had mixed up the bottles… Despite this, the study was published, presumably because these flaws might have negated rather than enhanced perceived benefit. An extension study in some patients failed to reach the primary end point of delay in requiring additional treatment. Another trial (Barone et al., 2013) in 679 patients failed its primary end point of change in UPDRS, but the 100 mg (rather than 50 mg) dose subgroup may have improved.

In more advanced disease, a study (Schapira et al., 2013) on 549 patients on any medications except MAO B inhibitors and with at least 1 ½ hours “off” time in a day showed improved “on” time without dyskinesia when safinamide was added to their regime in comparison with the addition of placebo.

The study discussed in this journal club, “Randomized trial of safinamide add-on to levodopa in Parkinson’s disease with motor fluctuations” by Borgohain et al., (2014) similarly looks at 699 patients with more advanced disease. Please refer to the Parkinson’s Disease primer for more general background information.


Study Design

This multicentre study first stabilised patients on their levodopa and then continued for 6 months, with an 18 month placebo-controlled extension study in those who had not experienced side effects and whose disease had not worsened over the initial 6 months.

Enrolled patients had to have had PD for at least 3 years, be on levodopa with or without other therapies, and have at least 1½ hours of “off” time a day. Patients with severe dyskinesia or severe dose fluctuations were excluded!

Two doses of safinamide were chosen because of previous evidence that 50 mg may be sufficient for the MAO-B action but 100 mg is necessary for glutamate inhibition.

Assessments included 30 minute interval diary scores of “on” versus “off” and dyskinesia, UPDRS, clinical global impression of change, dyskinesia rating scale when “on”, % change in levodopa (the intention was to keep levodopa unchanged but it could be increased if patients deteriorated) and the PDQ-39 questionnaire. If PD therapy had to increase by 20%, their evaluation was done at this point rather than at 6 months.

A mixed model co-variate statistical analysis was used, comparing versus baseline. The 100 mg dose was analysed first and only if significant was the 50 mg dose versus placebo analysed.

The primary end point, total “on” time without troublesome dyskinesia, showed a 1.36 hour improvement after 6 months on 100 mg safinamide, 1.37 hours on 50 mg and 0.97 hours on placebo; both doses were significant versus placebo. There was likewise an improvement in “off” time. The disability measures, PDQ-39 and UPDRS II, showed significant improvement only for the 100 mg dose. In the extension study there was overall maintenance of benefits and a non-significant reduction in dyskinesia. There was no significant increase in side effects.
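Since the quoted improvements are versus baseline rather than versus placebo, the placebo-adjusted effect is worth working out; a quick back-of-envelope calculation in Python, using only the figures above:

```python
# "On" time gains versus baseline, in hours, as quoted in the trial report
on_100mg, on_50mg, on_placebo = 1.36, 1.37, 0.97

# Placebo-adjusted treatment effect, converted to minutes per day
extra_100mg = (on_100mg - on_placebo) * 60
extra_50mg = (on_50mg - on_placebo) * 60
print(round(extra_100mg), round(extra_50mg))  # roughly 23 and 24 minutes
```

So the net gain attributable to the drug is of the order of 20 to 25 minutes of good “on” time per day, which puts the "1.36 hour improvement" headline into perspective.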

Authors’ Conclusions

The authors concluded that the drug was successful when used as add-on therapy in improving “on” and “off” time without increasing troublesome dyskinesia, which would be a risk when increasing other types of anti-PD medication. This correlates with MPTP-treated monkey studies, which showed an improvement in dyskinesia as well as in “off” symptoms. However, dyskinesia reported as a side effect by patients was more common than with placebo in this study, though no more likely with 100 mg than with 50 mg doses.

Journal Club Comments

The study was sufficiently powered to produce a meaningful result and the statistical analysis was good, making the main conclusion convincing. The issues of being allowed to change levodopa dose during the study, and then of escaping the study but still having the outcome recorded if the change exceeded 20%, were discussed. While one suspects that levodopa has stronger anti-PD action and may mask the effect of safinamide, in effect “rescuing” both placebo and safinamide groups, this would tend to decrease the observed benefit of safinamide versus placebo. One can understand inclusion of this design element on the grounds of ethics, and it also provides a more real-world setting.

What was strange was the exclusion of more severely dyskinetic patients from the study, given that the main novel pharmacological benefit may be in helping dyskinesia and the paper’s emphasis on measuring dyskinesia, though it was not the primary end point. It is not as if there have not been any other studies already conducted on the drug’s basic action. Perhaps it was felt another study could be got out of addressing this variable, but from the point of view of clinician prescribing, current evidence would not support its use as an antidyskinetic drug (since it was actually a side effect of the drug as reported by patients directly rather than recorded on the diary). Instead it may be a modestly beneficial drug for PD with relatively little action in provoking dyskinesia.

As is always the case, we are hampered by lack of direct comparison with a real-life alternative. We would never consider offering a placebo to patients in real life. What we would like to know is whether the drug works better in direct comparison with the addition of rasagiline, a dopamine agonist or entacapone. We have a clue that it may be better than simply increasing the levodopa dose, but this was not really the primary comparison in the study, merely a possibility allowed by the study design.

The Queens Hospital Journal Club meeting upon which this article is based was prepared and presented by Dr Stevan Wing, SpR in Neurology.

Posted in Parkinson's Disease | Tagged , , , | Leave a comment

Anti K+ Channel Antibodies in Neuromyotonia


At this Journal Club it was decided to review a historical paper on the pathophysiology underlying autoimmune neuromyotonia. The paper, “Autoantibodies Detected to Expressed K+ Channels Are Implicated in Neuromyotonia”, from Annals of Neurology (1997, 41:238-246), used a novel technique that depended on knowing the gene for the suspected antibody target protein, in this case a potassium channel. The purpose of choosing this paper was partly to highlight how the known range of antibody mediated neurological disease has grown hugely over the subsequent twenty years, and partly to illustrate how positive findings can sometimes be seen in retrospect to have arisen through a degree of serendipity.

Acquired neuromyotonia is now known to be one of a number of neurological conditions that arise through auto-antibodies interfering with voltage gated potassium (KV) channel function. Interference with resting potentials and membrane recovery after action potentials in peripheral nerve results in continual high frequency discharges and continuous muscle activation as cramp, fasciculations and neuromyotonia. Sometimes this can be precipitated by cold, exercise or voluntary muscle activation. Other features included in the spectrum of KV channel auto-immunity include autonomic dysfunction, seizures, psychiatric disturbance and limbic encephalitis. When resulting in a neuropathy  and neuromyotonia, the term Isaac’s syndrome is often used, while a presentation of neuromyotonia with autonomic or CNS involvement is described as Morvan’s syndrome.

Study Design

The techniques used by Hart et al were predicated on an affinity of patients’ antibodies for the Kv channel, as it was already known that acquired neuromyotonia results from disturbances of Kv channel function. If there were a known toxin for this channel, as with bungarotoxin for nicotinic acetylcholine receptors, it could be used as a labelled high-affinity ligand and form the basis of a radioimmunoassay for detection of circulating antibodies against the channel. Dendrotoxin is such a highly specific, high-affinity toxin, but binds only a subset of potassium channel subunits (Kv1.1, 1.2 and 1.6).

The first type of assay used in this paper relied upon dendrotoxin: brains containing solubilised Kv channels were treated with radiolabelled dendrotoxin, and then with serum from neuromyotonia patients. An anti-human IgG was used to immunoprecipitate all human antibodies from this solution, which would include any antibodies bound to the Kv channel-dendrotoxin complex. When neuromyotonia patient serum was used, the resulting precipitant (which contains any antibody that has bound to its antigen) contained the radiolabel, indicating that the patient antibodies had become coupled to material containing the dendrotoxin and, by inference, bound to the Kv channel. This result was found in some – but not all – patients (6/12) and reassuringly in none of the control samples (myasthenia gravis, Lambert-Eaton and healthy controls).

Verification that it was the Kv channel, rather than other dendrotoxin-bound material from solubilised brain, that the antibodies recognised was provided by demonstrating binding of neuromyotonia patient antibodies to dendrotoxin-bound Kv1 subunits expressed in Xenopus toad oocytes. Knowing the gene for Kv1 enabled production of complementary RNA, and expression of this in the toad oocyte meant that the Kv1 protein would now be present in pure form. In this experiment, 4/12 samples from the neuromyotonia cohort were positive (and again, 0/18 controls). This positivity rate was felt to be consistent with the possibility that some human disease antibodies are directed against subunits other than Kv1. The authors offered no information on the correlation between titres on the human brain assay and the Xenopus expression system.

The authors then turned to immunohistochemical staining instead of immunoprecipitation. In this assay, horseradish peroxidase-labelled secondary antibodies are used that bind to immune complexes: any patient serum anti-Kv channel antibody that has bound to Kv1 channels expressed in the toad oocytes will in turn be bound by the labelled antibody, and the oocytes are then examined under a microscope. They found positive staining with patient serum against different Kv subtypes but not with a number of controls. However, since the oocytes had been fixed, permeabilised and sectioned prior to incubation with patient antibody, one could not confirm that the Kv1 channel had actually been expressed on the membrane surface as it would be naturally in human neurones. They suggested the technique could be applied to many other putative antigens for pathogenic circulating antibodies, provided the genes for the antigens were known, which is now the case for most proteins.

In another experiment to check for antibody binding to potassium channel subtypes for which dendrotoxin is not a ligand, these Xenopus cells were incubated with sulphur labelled methionine at the time that they were injected with one of three different Kv complementary RNAs, so that when they expressed the channel protein, it would be radiolabelled by the incorporated methionine amino acid, and detectable with autoradiography. The serum of neuromyotonia patients and controls was applied to these preparations and anti-human IgG was used to determine patient antibody-bound Kv material. However, this precipitant did not reveal any labelling with either patient or control serum. The authors suggested that this may be because the antibody binding is conformationally dependent, a feature that somehow did not apply when dendrotoxin had already bound in the other assay. Alternatively, it could also reflect that Kv channels are not really the antigenic target in neuromyotonia – an explanation which has subsequently been confirmed in more recent data.

Historical Context and Journal Club Discussion

Since this paper was published, as mentioned in the background, a more extensive spectrum of disorders associated with potassium channel antibodies has been described, but unfortunately there appears to be no specificity linking disease phenotype to antibodies to particular Kv subunit combinations.

More recently still, the antigenic targets of these antibodies have been clarified to be proteins associated with the potassium channel rather than the channel itself. So the antibodies were not what they were purported by the paper to be after all! It is not surprising therefore that the experiment with directly methionine-labelled subunits yielded negative results. It is not clear why the authors thought the naturally occurring pathological antibodies would bind to a channel better when it had toxin attached to it. But it is also now not clear why the immuno-histochemistry labelling of Kv1 expressing oocytes was positive in some cases, as the actual antigen in most cases was absent. Only in a small minority of neuromyotonia cases have the newer assays demonstrated that the Kv channel proper (and not an associated protein) is the true antigen.

In fact, since 1997, the radioimmunoprecipitation assay by which these antibodies are detected remains largely unchanged from that used before the advance that the paper was supposed to introduce: rodent brain is used as the substrate of Kv channels and still labelled with dendrotoxin. So, since the toxin binds only a small proportion of all Kv channels, there are likely to be many cases of antibodies against other Kv or Kv-associated antigens that are currently undetected by current methods. There is significant scope for improvement in these assays, in terms of range of antigens tested, cross-assay standardisation and, importantly, timescales of test to result.  It was discussed in the journal club how this currently significantly limits appreciation of the potential scope of antibody mediated neurological diseases.

This case was presented and summarised by Dr Sian Alexander, Specialist Registrar in Neurology at Queens Hospital, Romford.

Posted in Inflammatory/ Auto-Immune Diseases | Tagged , , , | Leave a comment

Comparison of New Oral Anticoagulants (NOACs) with Warfarin

comparison of NOACSBackground

Ischaemic stroke is typically either thrombotic (clotting within a cerebral vessel) or embolic (passage of clot material from a more proximal vessel to become lodged in a cerebral vessel). A proportion of embolic stroke events will arise from arterial vessels such as the carotid in the neck, while others will arise from the heart. In atrial fibrillation, the chamber wall does not contract properly and this “stagnant” blood is more inclined to development of thrombus (clot), which may embolise up the arterial tree to the brain (or indeed anywhere else in the body).

While embolisation from high flow vessels may be reduced by antiplatelet agents, such as aspirin, that from low flow vessels (veins and the atria) may be reduced by anticoagulant agents, such as warfarin.

It can be seen, therefore, that some causes of stroke may be reduced by anticoagulant therapy. It will also be readily seen that such therapy is not going to make any difference at all to the statistical majority of causes of stroke, and in fact by its very nature may increase the likelihood of non-ischaemic stroke, namely brain haemorrhage. Nevertheless, there is a rather long-held view that on balance anticoagulation statistically reduces the risk of stroke in many patients who have atrial fibrillation (AF). In fact, this applies not only to patients who have already had a stroke or transient ischaemic attack, or who have structural heart disease making them even more susceptible to thrombus formation, but to patients who have AF as an isolated finding – so-called lone atrial fibrillation.

Two factors have made the issue of anticoagulation for AF topical:

  • Recent evidence has emphasised the view that lone AF is worth treating, as demonstrated in the UK by a recent National Institute for Health and Care Excellence (NICE) guidance document (CG180).
  • As well as warfarin, there are four (at the time of writing) new anticoagulant agents to choose from and these do not require tedious weekly to monthly blood monitoring.

However, two factors have made the issue of anticoagulation for AF controversial:

  • The new drugs are much more expensive than warfarin
  • Anticoagulation will kill some people and harm others. The very nature of anticoagulants means that they will increase haemorrhage from the bowel or at the time of trauma or emergency surgery and, since some strokes are in fact the result of brain haemorrhage rather than ischaemia, they will even increase the risk of  this type of stroke!

It is not surprising therefore that a meta-analysis of the major studies comparing risks and benefits of warfarin versus novel oral anticoagulants (NOACs) was published recently and has become the subject of much debate. This study, “Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta-analysis of randomised trials (Lancet 2014)” by Ruff et al., looks at the four major trials on this subject, all run by the drug companies manufacturing the NOACs.

Before describing the paper, it is worth mentioning wider controversies surrounding these studies.

First, as reported by the British Medical Journal, one of the studies (Rocket-AF) used a defective device to record the clotting effectiveness (INR) of the patients in the warfarin arm. Therefore, the patients using warfarin may have had an artificially bad outcome, not only potentially harming them but compromising the study’s findings. There is debate over when the drug company first knew about this fact.

Second, there was an issue where it was found that blood monitoring of NOACs improves outcome in terms of efficacy and reduction in bleeding complications. Obviously, there is not the same level of risk associated with not monitoring NOACs versus not monitoring warfarin, but there is an argument that some patients statistically might come to avoidable harm through not monitoring. Nevertheless, because the US Food and Drug Administration (FDA) approved the drugs before this was known about, the prescribing recommendation, and the major selling point of the new drugs, need not be changed.


Study Design and Findings

The meta-analysis looked at four studies

RE-LY, comparing dabigatran in two doses versus warfarin

ROCKET-AF, comparing rivaroxaban and warfarin

ARISTOTLE, comparing apixaban and warfarin

ENGAGE AF-TIMI 48, comparing edoxaban and warfarin.

A meta-analysis was felt to be justified on the basis that the class effect of the NOACs is similar; they all have a direct antithrombotic effect, while warfarin acts via antagonism of vitamin K, a cofactor required for the synthesis of several components of the clotting cascade. This results in similar shared benefits of faster onset and offset of action, more predictable action and fewer drug interactions. Any intra-class differences would therefore be outweighed by differences between subgroup populations. Pooling the data would increase the chance of finding subgroup differences in the balance between efficacy and safety, the main stated purpose of the meta-analysis. In all there were around 72,000 non-valvular AF participants!

Median follow up ranged from 1.8 to 2.8 years and the outcome measures were occurrences of ischaemic stroke, haemorrhagic stroke, myocardial infarction, all cause mortality, intracranial haemorrhage, gastrointestinal bleeding and other major bleeding events.

Some studies had higher baseline risks than others, as expected because they had different CHADS2 scores (a scale measuring the risk of ischaemic stroke in AF).
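For readers unfamiliar with it, the CHADS2 score is a simple additive scale; a minimal sketch in Python (the weights are the standard published ones; the function name and example patient are ours):

```python
def chads2(chf: bool, hypertension: bool, age_75_plus: bool,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2 stroke-risk score in AF: one point each for congestive
    heart failure, hypertension, age >= 75 and diabetes; two points for
    a prior stroke or TIA. Range 0-6; higher means higher annual risk."""
    return (int(chf) + int(hypertension) + int(age_75_plus)
            + int(diabetes) + 2 * int(prior_stroke_or_tia))

# e.g. a 78-year-old hypertensive patient with a previous TIA:
print(chads2(False, True, True, False, True))  # 4
```

Trials recruiting patients with higher minimum CHADS2 scores will therefore show higher baseline event rates, which is the point being made above.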

The headline finding of the meta-analysis was that NOACs had a significantly (19%) reduced stroke risk (i.e. better efficacy) and a significantly (48%) reduced intracranial haemorrhage risk (i.e. better safety). There was reduced all-cause mortality (10%) but increased gastrointestinal bleeding (25%). The relative efficacy and safety were consistent across a wide range of patients.



Some of the cautionary notes on this study have already been publicised.

First, the study uses one parameter both as a measure of efficacy and of safety – intracranial haemorrhage is counted twice! The study does say that the effects of stroke reduction were largely because of reduced haemorrhage. If we are looking at the effects of anticoagulation on stroke, we should be looking at its specific biological action, namely the reduction of embolic stroke from the heart in comparison with warfarin, not at reduction in all-cause stroke.

Second, the way data are publicised can shift emphasis, especially for patients. The relative risk of 0.48 means a clinician could say that there is less than half the chance of a brain haemorrhage on NOACs compared to warfarin. But this actually equates to a 0.58% risk versus a 1.24% risk, so the absolute risk reduction is only 0.66%. (The absolute rise in gastrointestinal bleeding was almost as great, at 0.5%, though the latter would still be preferable on balance to an intracranial bleed.)

The absolute risks are not annualised risks so they illustrate a point rather than being something useful to quote to patients. In its guidance to UK clinicians, NICE recommends that patients have a choice whether or not to receive anticoagulation. They provide a chart for clinicians so that they can explain the particular annual risk of a stroke based on the CHADS2 score versus a bleeding complication based on the VASC score. These risks are described in terms of “out of 1000 patients, x would be saved from having a stroke each year by taking anticoagulation”.  Converting the figures of this meta-analysis into an annualised form using the same language, about 5 patients per year would have a brain haemorrhage on warfarin, and 3 on NOACs.
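The arithmetic behind these relative, absolute and annualised figures can be laid out explicitly; a Python sketch, where the 2.2-year follow-up is our illustrative mid-range pick from the 1.8 to 2.8 years of median follow-up reported, not a figure from the paper:

```python
# Intracranial haemorrhage (ICH) rates over the trial period, from the text
ich_warfarin = 1.24 / 100   # risk on warfarin
ich_noac = 0.58 / 100       # risk on NOACs

relative_risk = ich_noac / ich_warfarin        # ~0.47, "less than half"
absolute_reduction = ich_warfarin - ich_noac   # 0.0066, i.e. 0.66%

# Annualise per 1000 patients, assuming ~2.2 years of median follow-up
follow_up_years = 2.2
per_1000_yr_warfarin = 1000 * ich_warfarin / follow_up_years  # ~5.6
per_1000_yr_noac = 1000 * ich_noac / follow_up_years          # ~2.6

print(round(relative_risk, 2), round(100 * absolute_reduction, 2))
print(round(per_1000_yr_warfarin, 1), round(per_1000_yr_noac, 1))
```

Depending on where in the 1.8 to 2.8 year range one sets the follow-up, this lands broadly on the "about 5 versus 3 per 1000 patients per year" figures quoted above.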

In a rationed health economy, we also have to look at costs. In the UK, annual costs per patient for rivaroxaban and apixaban are around £700 to £800, while that for warfarin, including the clinics for regular monitoring, is £283 (NICE CG180).

NICE did a costing exercise (June 2014) based on the CG180 guidance and based on the increased uptake of anticoagulation for AF in general but they assumed that warfarin and NOAC use would increase in parallel (warfarin from 34% to 47% use in AF, and NOACs each from 4.7% to 11.7%). NICE stipulates that patient and clinician choice should determine whether patients go on warfarin or NOAC. But if the figure of 50% reduction in brain haemorrhage is quoted, and it is explained that patients do not have to attend a clinic for a blood test every fortnight, we can all guess what patients who are not paying for their medication will choose!

However, health providers might choose differently. In the UK, some have taken the view that since the excess risk of warfarin was more in a subgroup who had brittle control, warfarin should be given first line in patients for whom vitamin K antagonists are not contraindicated, and switched to a NOAC only if they prove to have brittle control.

Some may find all these economic arguments laboured and somewhat distasteful – can one put a price on human life? But, again in a rationed economy, some person (or unwieldy committee) has to decide whether a limited amount of money goes on NOACs, or on life-prolonging cancer treatment, or on running blades for childhood amputees. The way these decisions are (supposed to be) made is on the basis of quality adjusted life years (QALYs). The UK Department of Health might have a guide of, for the sake of argument, maximum £50,000 cost per QALY gained. The QALY loss of a brain haemorrhage might be 0.5 per year for 5 years, i.e. 2.5. (Some will be trivial, some will be fatal while factoring in a certain life expectancy, some will recover over time, some will die after a finite time.) The prevalence of AF is 1.6% and for 100,000 population, around 1000 should be on anticoagulation, so NOACs would save 2 brain haemorrhages per year in that population. The excess annual cost per 1000 patients is about £500,000. So a rough estimate of cost per QALY is £100,000. One could argue about additional increased efficacy and reduced mortality of NOACs, but perhaps that is subsumed by the haemorrhage reduction anyway, and there would be nearly as many increased GI bleeds as reduced intracranial bleeds.
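The back-of-envelope costing above can be made explicit; a Python sketch in which every input is the article's illustrative assumption, not an official NICE costing:

```python
# All inputs are the rough assumptions used in the text above.
excess_cost_per_patient = 500.0  # ~£783 NOAC vs ~£283 warfarin, per year
patients_per_100k = 1000         # AF prevalence 1.6%; ~1000 anticoagulated
ich_averted_per_year = 2         # brain haemorrhages avoided per 1000 patients
qaly_loss_per_ich = 2.5          # 0.5 QALY/year over 5 years, on average

annual_excess_cost = excess_cost_per_patient * patients_per_100k  # £500,000
annual_qalys_gained = ich_averted_per_year * qaly_loss_per_ich    # 5 QALYs
cost_per_qaly = annual_excess_cost / annual_qalys_gained
print(f"£{cost_per_qaly:,.0f} per QALY gained")  # £100,000 per QALY gained
```

On these assumptions the figure is double the notional £50,000-per-QALY threshold, which is the crux of the economic argument.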

The other reason why this topic is an emotive issue, and why it has received considerable attention with NICE invoking “patient choice”, is a controversy not related to NOACs versus warfarin as such, but to the knowledge that in a few patients we will be doing potentially terminal harm by starting either treatment. This goes against the physician’s mantra, “First do no harm” and follows the alternative mantra, “The needs of the many outweigh those of the few”. Statistically we know anticoagulation will help more AF patients than it harms, but no-one wants to be one of the few who suffer the brain haemorrhage, or to be the physician who gave them the drug that caused it.

Posted in Stroke

Varicella Zoster is the Cause of Giant Cell Arteritis

Background

Giant cell arteritis is an inflammatory condition of certain blood vessels that presents in the elderly. Those vessels affected include the:

  • temporal artery resulting in headache and scalp tenderness over the inflamed artery,
  • ciliary artery resulting in ischaemic optic neuropathy,
  • retinal artery resulting in central retinal artery occlusion,
  • arteries to various muscles resulting in jaw claudication and polymyalgia rheumatica.

There may be systemic symptoms of weight loss and fever, and a raised erythrocyte sedimentation rate is typical. The main clinical urgency relates to the visual loss, as this may be permanent without prompt treatment with high-dose steroids.

Varicella zoster reactivation later in life following initial infection with chicken pox typically results in shingles, but a recognised complication of shingles is an arteritis of large vessels such as the carotid artery. This may result in emboli and consequently present as stroke.

Since both arteritides occur in similar populations, it was natural to speculate that there might be some pathogenic link. However, different pathological studies on temporal arteries have yielded different results. Some revealed no VZV on polymerase chain reaction (PCR) or immunohistochemistry (IHC), while other series have revealed around 25% occurrence in GCA-positive biopsy samples and 0% in GCA-negative samples.

This paper, Prevalence and distribution of VZV in temporal arteries of patients with giant cell arteritis, in Neurology (2015) by Gilden and a large number of co-authors from all the different centres that contributed temporal arteries, addresses this question with a rigorous examination of the correlation of GCA findings and VZV presence in alternating contiguous temporal artery slices, using temporal arteries of cadavers as controls.

Study Design and Findings

All investigators took formalin fixed temporal arteries and made 100 5-micron sections. Every other section was examined by IHC for VZV antigen and, in one case, electron microscopy (EM) for virions. If the antigen test was positive, PCR was performed for viral DNA. The remaining sections were examined for GCA inflammation changes. To validate the sensitivity, positive controls were obtained by infecting cadaveric temporal arteries and testing after 14 days in vitro.

A total of 86 histologically confirmed GCA-positive temporal arteries were examined, along with 16 negative post-mortem controls (i.e. randomly selected, not the ones that were deliberately infected). All subjects had to be over 50 years of age. There were no other clinical inclusion or exclusion criteria, but if they were having biopsies, presumably the patients had clinical features leading to suspicion of GCA. There was disagreement over the blinded light microscopy findings for GCA in 4 of the presumed positive cases and 3 of the negative controls, so these were removed from the analysis. (The findings in these should really have been included in the results; if three additional negative controls out of sixteen all had GCA changes and were then analysed and found to have VZV, this would have changed the results in a major way! Presumably at least one pathologist thought they had GCA changes; is this a common coincidental finding?)

VZV was present in 74% of the GCA cases and in one of the cadaveric negative controls. The VZV was present usually in multiple slices with intervening negative areas (skip lesions). Across all 61 VZV positive samples, there were 347 VZV positive skip lesions. The virus tended to be present more in the adventitia than in the intima, and was sometimes present in adjoining skeletal muscle.

In one positive case, VZV virions were found on EM in the artery adventitia.

Only in some cases was PCR positive, perhaps due to the formalin fixation technique.

In 89% of cases where there was VZV antigen, there were light microscopy changes of GCA in adjacent sections. (Presumably, in the remaining cases the light microscopy changes were only in distant sections; if they were not positive somewhere, they would not have been included in the study.) In the only post-mortem sample out of the remaining 13 that had positive VZV, there were no GCA changes.

The authors conclude a causative relationship on the basis of:

  • The pattern of VZV presence is patchy (i.e. skip lesions), in the same way that the typical GCA pattern is patchy.
  • The pattern of VZV was such that there were often GCA changes in nearby sections.
  • VZV presence in adjoining skeletal muscle, its increased incidence in the adventitia, and the presence of virions in one case indicate a spread starting outside the artery wall, from the cranial nerve ganglia (presumably via the nerve supply to the vessels).
  • In other diseases, e.g. meningitis, the authors have never found inflammation per se to be responsible for VZV reactivation.
  • The pathology and distribution of VZV in these samples is similar to that in other cases of VZV vasculopathy not considered to be GCA.

The authors consider that the variable findings in the literature reflect the fact that the other studies did not take enough sections – the skip lesion problem. Relying on PCR might also result in negative findings. They predict that if they took hundreds of sections per biopsy they would have found VZV in all GCA positive cases, in other words VZV reactivation is the sole cause of GCA.

The obvious implication is that while steroids may suppress the inflammatory response of GCA, such treatment often needs to be maintained for a long time and some cases are refractory. Possibly the steroids are also prolonging the viral reactivation. Hence aciclovir should be added to the treatment regimen.


The journal club were persuaded by the main finding and will be prescribing IV aciclovir in future. Consideration might be given to ganciclovir, since aciclovir is less effective against VZV than against HSV. For the same reason, we would not rely on oral aciclovir.

Points that were noted included the fact that the authors could have made more of the presence or absence of section-by-section concordance vs discordance regarding VZV and GCA pathology. It is one thing to say that the GCA changes were more likely in sections near VZV presence, but perhaps as important to find a concordant absence of both changes between the skip lesions. This could be tested statistically.
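The concordance suggestion above could be tested with an exact test on a 2×2 table of sections cross-classified by VZV presence and adjacent GCA pathology. The sketch below implements a one-sided Fisher exact test from first principles; the counts are entirely hypothetical, since the paper reports no such table:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability under the hypergeometric null of a top-left cell >= a."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(n - row1, col1 - x) / denom
    return p

# Hypothetical section counts (NOT from the paper), for illustration:
#                 GCA+   GCA-
# VZV+ sections    40      5
# VZV- sections    10     45
p = fisher_exact_p(40, 5, 10, 45)
print(f"one-sided p = {p:.2e}")
```

A strongly concordant table like this one would give a vanishingly small p-value; the interesting question is what the real section-level table looks like between the skip lesions.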

Comment was made about the post-mortem specimens. Might the VZV antigen degrade after death? What was the time between death and fixing the temporal artery?

The reason for choosing post-mortem controls is obvious. If there were a negative living control, it would be wondered why they were having the biopsy in the first place. Did they really have mild GCA? However, in our experience there are plenty of negative temporal artery biopsies, because the disease is over-diagnosed clinically. If the same rigour of analysing multiple sections had been employed, the absence of VZV in living tissue in a GCA-susceptible population would have been persuasive.

Clearly a study comparing these pathological findings with clinical features and outcome is the next step. However, after reading this study, we would be a little worried ethically about not adding antiviral treatment in a placebo arm of a future controlled trial. The level of evidence ends up being inversely correlated with the likely effectiveness of an intervention!

This paper was presented to our Journal Club by Dr Tim Ham, Specialist Registrar in Neurology, Queens Hospital, Romford, UK.

Posted in Infectious Diseases

Myasthenia Gravis: Subgroup Classification and Therapeutic Strategies


Myasthenia gravis (MG) is a neurological condition characterised by a fatiguing weakness of certain muscle groups, particularly those that control eye opening, eye movements, speech and swallowing. When severe, the proximal muscles of the limbs and the respiratory muscles may be involved. Acquired myasthenia is an autoimmune disease, in which antibodies are directed against the post-synaptic nicotinic acetylcholine receptors (AChR) of neuromuscular junctions or against other proteins that affect AChR function.

Diagnosis and appropriate management of MG is particularly important because at any time it can transform suddenly from a relatively benign condition of ptosis and diplopia to a crisis with potentially fatal bulbar dysfunction and respiratory failure. These latter symptoms may in turn reverse with prompt emergency supportive care and immunomodulatory treatment.

The discussed review in Lancet Neurology by Gilhus & Verschuuren seeks to provide insight into the usefulness of the latest antibody assays in predicting, in individual patients, the clinical course of MG and the response to therapy. Evidence was gathered from the literature on the basis of appropriate searches of Medline and the Cochrane Library for English language publications from 1995 to 2015.


The review first describes the pathophysiology of the different associated antibodies. AChR antibodies cross-link receptors, accelerating their breakdown. Muscle specific kinase (MUSK) and low-density lipoprotein receptor-related protein 4 (LRP4) exist as a complex on the post-synaptic membrane. When activated by the agrin protein, this complex affects the aggregation of AChR and the morphology of the terminal. Antibodies to MUSK, LRP4 and agrin interfere with this process and are therefore likely to be directly pathogenic. Titin and ryanodine receptor antibodies occur in some patients with thymoma-related MG, but may be markers of severe disease rather than directly pathogenic.

Comorbidities may be present in MG, and awareness of these is important. Younger-onset patients may have other organ-specific autoimmune diseases, including polymyositis. Thymoma-associated MG is associated with an increased risk of haematological malignancies and with a severe autoimmune cardiomyopathy.

Classical subtypes include:

  • Early onset MG with AChR antibodies. This often has ocular involvement and a female preponderance. Thymic hyperplasia may be present, and in these cases the condition responds to thymectomy.
  • Late onset MG with AChR antibodies.  This is also often ocular, but there is only rarely thymic hyperplasia.
  • Thymoma-associated MG. These patients usually have generalised disease and AChR antibodies. There are also other paraneoplastic associations, such as pure red cell aplasia and neuromyotonia.
  • MUSK associated MG. These antibodies are present in 1-4% of MG cases. The condition is usually bulbar or generalised rather than ocular and there is no thymic involvement.
  • LRP4 associated MG. This can be ocular or generalised in presentation.
  • Antibody-negative MG occurs in 5% and is heterogeneous, probably reflecting different undiscovered causative factors.
  • Ocular MG is defined as being restricted to the ocular muscles; if this remains the case for 2 years, 90% of the time it will remain so. Half of such cases have AChR antibodies, but only very rarely do they have MUSK antibodies.

When symptoms are typical, the review considers neurophysiological testing unnecessary in all cases bar those that are seronegative.

Finally, the review discusses treatment options. Immunosuppressive treatment is recommended when symptomatic treatments (anticholinesterases such as pyridostigmine) alone fail to control symptoms. (MUSK antibody-associated disease often has a poor response to such treatment.) An extensive review of clinical trials reveals disappointing results in many cases when compared with placebo. Nevertheless, a clear treatment plan of steroids combined with immunosuppressive drugs is recommended. Other treatment plans may vary from this. The only information regarding treatment in relation to antibody serology is that rituximab, in uncontrolled studies, may be particularly effective in MUSK-associated MG.

The review concludes with a discussion of new treatments, such as other monoclonal antibody therapies targeting autoantibodies, or antigen specific treatments that encourage the development of immune tolerance.



The review provides a welcome revision of management in an important therapeutic area. However, it was felt that there was little specific information on serological-clinical correlations that practically affect management, which was presumably the main purpose of the review. The lack of ocular and thymic involvement in MUSK-associated disease, and its poor symptomatic response to anticholinesterases, were interesting points.

Other points that arose out of the discussion were:

  • The lack of an evidence base for treatment, compared with the clear benefits observed in practice, does point to the limitations of relying solely on evidence-based medicine. It was conjectured that in some cases this may reflect patient selection. If, for example, all ocular myasthenic patients are started on immunosuppression, in many cases it may be unnecessary, and so demonstrating an improved response compared with placebo may prove difficult. Perhaps clinical focus is understandably upon patients with myasthenic crises, or who have recently had myasthenic crises, where the response to treatment is more dramatic and clearly in some cases life-saving.
  • The indication in the review that neurophysiology is only necessary in seronegative patients was surprising. In our practice, we often have neurophysiology results before serology becomes available. In patients with ocular symptoms only, the differential includes cranial nerve palsy, sympathetic lesions, myopathic processes and even muscle tension related symptoms. Identification by neurophysiology alerts clinicians to the fact that the patient is at risk of life-threatening myasthenic crisis. Patients with bulbar involvement may have motor neurone disease or myopathy. Finally, there is a significant false-positive AChR occurrence; in patients with a low positive AChR titre in whom we feel that myasthenia is actually unlikely, normal neurophysiology on a single-fibre EMG jitter study helps to confirm this. While not 100% sensitive and specific, neurophysiology does lend valuable diagnostic support.

This paper was presented to our Journal Club by Dr Salman Haider, Specialist Registrar in Neurology, Queens Hospital, Romford, UK.

Posted in Myasthenia

Clinical Features and Pathology of “Parkinson’s Plus” Syndromes

Background

It has long been known that there exist variants of Parkinson’s disease (PD), loosely and perhaps inaccurately described as PD plus syndromes, that may carry features of Parkinsonism but which also have other clinical features. Such conditions have distinct pathology at autopsy.

However, it has also long been known that clinicopathological correlations of these conditions are not perfect; in other words, a patient in life may have clinical features indicating one PD plus syndrome but may be found subsequently to bear the pathology of another.

The subject of this journal club, When Dementia with Lewy Bodies, Parkinson’s Disease and Progressive Supranuclear Palsy Masquerade as Multiple System Atrophy by Koga et al. (2015) in Neurology, is a retrospective review of Mayo Clinic brain bank cases labelled as having Multiple System Atrophy (MSA) in life.

MSA is a neurodegenerative condition that may feature parkinsonism, ataxia or both, and may also have autonomic features, pyramidal features and even features of anterior horn cell disease. According to the Second Consensus Statement (2008), the criteria for probable MSA are:

A sporadic neurodegenerative condition of onset >30 with:

  • Urinary incontinence (plus erectile dysfunction in males), or a measured orthostatic fall in blood pressure within 3 min of standing of 30 mmHg systolic or 15 mmHg diastolic, and at least one of:
    • Poorly levodopa responsive Parkinsonism (bradykinesia with rigidity, tremor or postural instability)
    • Cerebellar syndrome

However some patients will have pathologically proven MSA without satisfying these criteria, while in others the clinical picture will be confused by coexisting conditions in this age group, such as Alzheimer’s disease (AD) or cerebrovascular disease.


Study Design

The study reviewed the autopsy results of 134 cases that had consecutively been submitted to the brain bank with a clinical label of MSA. Patients came from 37 US states. The pathological assessments were done using a standard protocol. In 125 patients there were useful clinical records, and in some cases further information was gained by questionnaires sent to living relatives.


Study Findings

A pathological diagnosis of MSA was confirmed in 62% of cases. Of the remaining 38% of cases, 37% had Dementia with Lewy bodies (DLB) pathology, 29% had PSP, and 15% had PD. Two of the 134 total had Corticobasal Degeneration (CBD), two had cerebrovascular disease and five were “miscellaneous”.
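Note that the 37%, 29% and 15% figures are fractions of the non-MSA subset, not of all 134 cases. A quick sanity check of the arithmetic (approximate, because of rounding in the reported percentages):

```python
# Reconstruct approximate case counts from the reported percentages.
total = 134
non_msa = round(total * 0.38)            # ~51 cases without MSA pathology

breakdown = {
    "DLB": round(non_msa * 0.37),        # ~19
    "PSP": round(non_msa * 0.29),        # ~15
    "PD":  round(non_msa * 0.15),        # ~8
    "CBD": 2,
    "cerebrovascular": 2,
    "miscellaneous": 5,
}
print(breakdown, "sum =", sum(breakdown.values()))  # sums back to ~51
```

The reconstructed counts sum back to the non-MSA total, so the nested percentages are internally consistent.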

On retrospective assessment of clinical features according to the above criteria, only 49 patients had probable MSA and 35 possible MSA. (Incomplete records, though, do not mean that patients lacked particular clinical features.) Once this had been done, 71% of probable MSA patients had pathological MSA, and 60% of possible MSA patients had MSA pathology.

The paper describes pathological changes in some detail. In the same way that there are, according to Braak, “stages” or at least grades of neurofibrillary tangle involvement in Lewy body disease, there have been described five phases of A beta amyloid deposition in Alzheimer-type disease. These range from phase 1 where deposition is exclusively in neocortex, to phase 5 where there is widespread involvement even in the cerebellum.

In pathological MSA, 8% also had Lewy body pathology. Overall, the median Braak stage was I (not 0). A quarter of the MSA brains had Alzheimer-type A beta deposition of phase 1 or worse.

With pathological diagnosis as the reference point, the features that were more common in MSA than in DLB were urinary incontinence, ataxia, nystagmus and pyramidal signs. Cognitive impairment and visual hallucinations were more common in DLB.

Comparing MSA with PD, incontinence was more frequent and visual hallucinations less frequent in MSA.

Comparing MSA with PSP, urinary incontinence, constipation, orthostatic hypotension and REM sleep behaviour disorder were more frequent in MSA, and vertical gaze palsy less frequent.

Levodopa responsiveness and mini-mental state score actually did not distinguish these diagnoses.

The main errors related to assuming that orthostatic hypotension automatically indicated a diagnosis of MSA rather than DLB or PD, and that ataxia indicated MSA rather than PSP. Severe dysautonomia early in the course of PD should not be considered an exclusion criterion for that diagnosis.

Imaging had poor sensitivity: only 38% of pathologically confirmed MSA cases had imaging changes, and the hot-cross bun sign was rare. There were similar rates of abnormality in PSP.



As suggested by the authors, a limitation of the study is that retrospective post mortem analysis suffers from clinical signs being recorded at different stages of disease advancement and there is a selection bias in those that come to autopsy (such as atypical cases).

Our feeling was that, for the above reasons, the study cannot be used to determine real diagnostic accuracy. The “improvement” in diagnosis from 62% to 71% when a movement disorders specialist applies probable diagnostic criteria carries little meaning, given the limited data available from those who examined the patient in life. A “brain bank” is only as good as the accuracy and detail of the clinical label attached to its specimens.

It was pointed out, though, that the very wide geographical distribution of specimens, which included those not from academic centres, does represent a cross-section of patients in the US labelled as having MSA.

We wondered if the difference between PD and DLB is essentially quantitative. DLB is rather arbitrarily defined according to dementia changes manifesting before extrapyramidal changes, otherwise it is considered PD dementia. Perhaps “diffuse” Lewy body disease is a better clinical label. Pathologically there is likely to be a borderline state between the localised involvement of PD and the diffuse involvement of DLB, and indeed if the Braak hypothesis is correct, this overlap may apply to all patients at certain stages of disease progression.

Our final point was a philosophical one about what constitutes the gold standard of diagnosis. Is it necessarily always pathology, which presumably accurately reflects the underlying pathophysiological process? What if there is dual pathology, as reflected in a number of specimens in this study? Which supersedes the other? Is it simply relative severity? If one set of clinical features can reflect either one or both of two different pathological appearances, what is actually more important for the patient and clinician? Would we deny a patient a trial of cholinesterase inhibitor for their dementia and hallucinations if we somehow knew that their pathology was MSA, or if their Lewy bodies were localised to the brainstem? Would we not treat their autonomic symptoms if their pathology was PD? Would we fail to check a clinical MSA patient for sleep apnoea if their pathology instead revealed Lewy bodies?

While pathology might be the gold standard when conducting clinical trials, in normal clinical practice it is the clinical features guiding practical management and prognostication that are of primary importance. The broad clinical labels of system involvement still help to classify patients according to their present and future clinical needs.

This paper was presented to our Journal Club by Dr Gemma Cummins, Specialist Registrar in Neurology, Queens Hospital, Romford, UK.

Posted in Parkinson's Disease