Journal Club Review: “Certainty of Stroke Diagnosis: Incremental Benefit with CT Perfusion over Non-Contrast CT and CT Angiography”

Background

The accompanying primer, Thrombolysis for Stroke and Role of CT Perfusion Imaging, describes the difficulties and potential shortcomings of thrombolysis for acute stroke and the way that CT perfusion may improve patient selection for thrombolysis. This paper, by Hopyan et al. (Radiology, 2010), addresses a related problem: the risks of thrombolysis, chiefly secondary haemorrhage, are greater when reperfusing a large area of infarcted brain. In the second European Cooperative Acute Stroke Study (ECASS II), failure to recognise involvement of more than one-third of the middle cerebral artery territory resulted in a high risk of haemorrhage when such patients received thrombolysis. CT perfusion may allow better identification of this situation and avoidance of thrombolysis. In addition, CT perfusion may aid in identifying baseline stroke size for prognostication and research purposes, in positive confirmation of ischaemia during a TIA and, as discussed in the primer, in identification of stroke mimics. The study uses an incremental protocol with up-to-date CT perfusion technology to assess its use in positive identification of stroke.

Study Design

The study took 191 consecutive patients with presumed stroke or unresolved TIA who were admitted within 3 hours of symptom onset. Unenhanced CT, CT angiography and CT perfusion were assessed in that order by non-expert reviewers. A final diagnosis of stroke was established about a month later by an experienced clinician with the aid of a subsequently performed MRI with diffusion-weighted imaging (DWI).


According to the final diagnosis made retrospectively, 64% of the patients had stroke, 18% had TIA and 17% were stroke mimics.

The sensitivity for correct identification of stroke, averaged within and across image reviewers, was 52.5% for unenhanced CT alone, 58.3% for unenhanced CT plus CT angiography, and 70.7% for all three modalities together; using all three was significantly better than using one or two modalities (p = 0.0003 and p = 0.013 respectively).

This was not at the cost of reduced specificity (i.e. more false positive errors), which remained around 85% in all three conditions. Rather than give an all-or-none answer, the reviewers scored their confidence level for a diagnosis of stroke, and this allowed calculation of receiver operating characteristic (ROC) curves for unenhanced CT alone, unenhanced CT plus CT angiography, and all three modalities together.

Receiver operating characteristic curves plotting sensitivity against false positive rate (i.e. 100% minus specificity), determined from reviewers scoring their confidence level in diagnosing stroke. Unenhanced CT alone is the blue trace, unenhanced CT plus CT angiography is in brown, and unenhanced CT plus CT angiography plus CT perfusion is in orange.

For example, a reviewer operating at a very conservative level, requiring high confidence before making a positive diagnosis, may rarely identify stroke; specificity will be high when a case is labelled as stroke, but sensitivity will be very low. A good diagnostic tool is one with a large area under the curve of sensitivity versus 100% minus specificity; in other words, accepting just slightly less than 100% specificity makes the sensitivity rise dramatically. Using all three modalities together improved sensitivity over unenhanced CT alone at all levels of specificity, but at very high levels of specificity CT perfusion did not improve performance over CT angiography.
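To make the construction concrete, here is a minimal sketch of how an ROC curve and its area can be computed from graded confidence scores. The ratings and reference diagnoses below are invented for illustration; they are not the study's data.

```python
import numpy as np

def roc_points(scores, truth):
    """Sweep the confidence threshold from strict to lenient, recording
    (false positive rate, sensitivity) as each case is admitted."""
    order = np.argsort(-np.asarray(scores))   # most confident calls first
    truth = np.asarray(truth)[order]
    tps = np.cumsum(truth)                    # true positives admitted so far
    fps = np.cumsum(1 - truth)                # false positives admitted so far
    sens = tps / truth.sum()
    fpr = fps / (len(truth) - truth.sum())
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], sens))

# Invented 1-5 confidence ratings and final diagnoses (1 = stroke on DWI)
scores = [5, 4, 4, 3, 5, 2, 1, 3, 2, 4, 1, 5]
truth  = [1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1]

fpr, sens = roc_points(scores, truth)
print("AUC =", round(np.trapz(sens, fpr), 3))  # area under the ROC curve
```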

Inter-observer agreement (Cohen's kappa) was only 0.28 to 0.44 for unenhanced CT alone, but 0.68 to 0.78 for all three modalities together. Intra-observer agreement was similarly better using all three modalities together.
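Cohen's kappa corrects raw agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch, with invented paired ratings rather than the study's:

```python
def cohen_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # Chance agreement: product of the raters' marginal frequencies per category
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Invented paired calls for six cases
a = ["stroke", "stroke", "mimic", "stroke", "mimic", "stroke"]
b = ["stroke", "mimic",  "mimic", "stroke", "stroke", "stroke"]
print(round(cohen_kappa(a, b), 2))  # 0.25: modest agreement despite 4/6 raw matches
```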

Strengths and Weaknesses of Study

The authors attempted a “real-life” situation analysis using an incremental protocol, a realistically early time of imaging, inexperienced reviewers and a range of stroke severities that included mild stroke and TIAs. They demonstrate clear superiority in these circumstances. The circumstances also explain why absolute performance may have been lower than in other studies.

The authors cite the advantages of CT perfusion: it is performed immediately after standard CT, takes only 1-2 minutes extra to acquire and about 5 minutes extra to process. In their hands the radiation dose was only that of another unenhanced CT head. The disadvantages they cite are the confounding effects of chronic internal carotid artery occlusion and of chronic ischaemic changes, which make it hard to determine what is new and what is old.

They consider the addition of CT perfusion well suited to triage of stroke patients but are cautious about the benefits of identifying penumbra, given the absence of direct evidence that reperfusing penumbra improves outcome.

However, the “real-life” analysis is not without shortcomings in interpretation, because real life differs between units. Certainly in many stroke units there will be individuals on hand to assess imaging in real time who have several years’ experience rather than the one year’s experience of the study’s reviewers. The reviewers’ relative inexperience may have inflated the apparent incremental sensitivity of CT perfusion.

While it is also laudable that they did not select only patients whose stroke was clinically obvious on admission, it does seem strange that there were so many TIAs when most TIAs last less than an hour. The mean delay to imaging was 117 ± 59 minutes, so generally there was a 1 to 3 hour window. Some clinicians might delay imaging a little if the patient attended within an hour with symptoms that had not yet begun to improve. It would not be a fault of the imaging, as such, if a TIA were identified as stroke simply because the scan was performed early enough to detect the ischaemia. A more experienced radiologist might better distinguish ischaemia from established infarction on CT perfusion by the lack of reduction in cerebral blood volume, but this was not specifically examined in this study.

As a tool for ruling out stroke mimics, CT perfusion is clearly and unsurprisingly better than unenhanced CT (which was never intended for this purpose). But with sensitivities around 75% and specificities around 85%, it can hardly be considered a gold standard. Should we thrombolyse on that basis, given the 6% risk of causing harm from intracranial haemorrhage (though the harm from thrombolysing the normal brain of a stroke mimic is likely to be low compared with thrombolysing an extensive established infarct)?

The performance of CT perfusion in positive diagnosis might in fact fall short of that of an experienced clinician examining the patient shortly after initial triage, in which case one wonders whether that clinician ought to rely on clinical judgement alone or on an early MRI with DWI, accepting that the latter might not be universally feasible.

The introduction to the paper starts by describing early major middle cerebral artery infarction as a relative contraindication to thrombolysis and how CT perfusion may help identify it. I cannot help but wish this were what the study had actually investigated rather than detection of stroke mimics, but it does at least provide a good guide to how such an investigation might be conducted rigorously.


Primer on Thrombolysis for Stroke and Role of CT Perfusion Imaging

Stroke, defined as a sudden vascular event resulting in localised brain damage (World Health Organisation, 1978), is without doubt a major challenge in health care, being the third most common cause of mortality in developed countries and the single greatest cause of lasting disability (Mant et al., 2004). In the UK, stroke patients occupy 2.6 million hospital bed-days a year, equivalent to one in five acute hospital beds and one in four long-term beds (National Audit Office, 2005). Over the last decade, there have been increasing efforts to organise acute stroke care into dedicated stroke units and to raise public awareness that stroke is a medical emergency to be managed in a timely fashion (e.g. the FAST campaign).

The development of thrombolysis has been one of the drivers for managing stroke as an emergency. This “clot buster” treatment may be given intravenously to dissolve the thrombus or embolus in a cerebral artery and allow reperfusion of the territory supplied by that artery before it becomes irreversibly infarcted. Timing is critical: given too soon after symptom onset, the thrombolytic agent (tissue plasminogen activator, tPA) may be unnecessary because the patient may in fact be suffering a transient event that would resolve spontaneously; given too late, the brain tissue will already be dead.

The standard European criteria for thrombolysis, developed from the major multicentre study that validated its use (Safe Implementation of Thrombolysis in Stroke Monitoring Study, SITS-MOST; Lancet, 2007), originally stipulated a time window of 3 hours after symptom onset and excluded patients whose symptoms were rapidly resolving. In practice, giving the treatment too early is not a major concern: most self-resolving events, called transient ischaemic attacks (TIAs), last less than an hour, and it is very rare to be logistically ready to thrombolyse within an hour of symptom onset.

Outcome following Thrombolysis

Unfortunately, thrombolysis is not a panacea even within this narrow time window. A fair comparison requires a randomised double-blind study against placebo, but because of widespread adoption such studies have not recently been performed. The original positive trial (National Institute of Neurological Disorders and Stroke (NINDS) Stroke Study Group, 1995) showed no clear clinical differences after 24 hours but what was described as “at least 30% better outcome” at 3 months (global odds ratio 1.7, 95% confidence interval 1.2 to 2.6). By way of example, the percentage of patients achieving 0-1 on the modified Rankin scale (no or minimal disability) was 39% versus 26%; in other words, 13% more patients had an excellent outcome after thrombolysis than after placebo. There was no improvement in mortality.

However, there have been concerns that in this study the placebo patients had more severe strokes at onset, that other studies have shown unclear benefit, and that some studies relied upon open-label self-reporting by patients to measure outcome.

Underpinning these concerns is the risk of haemorrhage associated with intravenous thrombolysis. Thrombolysis was originally developed for coronary thrombosis in myocardial infarction; the brain is far more sensitive to insult, and reperfusing infarcted brain may make it particularly susceptible to haemorrhage, with far worse consequences than haemorrhage into myocardium. In the NINDS study, 7% of patients had a symptomatic intracerebral haemorrhage (neurological deterioration or other clinical suspicion, with haemorrhage on CT not seen pre-treatment) within 36 hours of thrombolysis, versus 1% of patients given placebo. In 3% of thrombolysed patients the haemorrhage was fatal.

The SITS-MOST study was designed to assess the safety of thrombolysis given according to the same protocol; it collected data on 6483 patients and found a similar figure of 7.3% of patients significantly worsened (≥4 points higher on the NIHSS score) by intracranial haemorrhage within the first 7 days.

So when counselling patients on giving thrombolysis, we should say that within the 3 hour window, out of 100 treated patients, around 12 will have a better outcome (more likely to be disability free or minimal disability), 4 will be made worse because of brain haemorrhage, 2 will die from brain haemorrhage and 82 will be unchanged. It does not sound as good as quoting 30% better outcome (taken as the increased proportional percentage gain rather than absolute percentage gain over placebo).
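The gap between the two framings is plain arithmetic. A quick sketch using the Rankin 0-1 percentages quoted above (the trial's "30%" headline was a global statistic across several outcome scales, so the single-endpoint figures differ):

```python
# Relative versus absolute framing of the NINDS Rankin 0-1 outcome (39% vs 26%)
treated, placebo = 0.39, 0.26

absolute_gain = treated - placebo                  # 13 more good outcomes per 100
relative_gain = absolute_gain / placebo            # 50% more likely than placebo
odds_ratio = (treated / (1 - treated)) / (placebo / (1 - placebo))
nnt = 1 / absolute_gain                            # number needed to treat

print(f"absolute gain {absolute_gain:.0%}, relative gain {relative_gain:.0%}")
print(f"odds ratio {odds_ratio:.2f}, NNT {nnt:.1f}")
```

The same trial result thus reads as "13 per 100" in absolute terms, "50% better" in relative terms, or an odds ratio of 1.8, which is why the framing matters when counselling.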

Recent Changes to Prescribing Guidelines

More recent studies have explored widening the window for thrombolysis to 4.5 hours, or even longer in certain circumstances. The third European Cooperative Acute Stroke Study (ECASS III, 2008) randomised patients at 3 to 4.5 hours after stroke onset to thrombolysis or placebo, and found good outcome in 52.4% versus 45.2%; the significance level for this 7% improvement was only 0.04, and the lower 95% confidence limit for the odds ratio of better outcome (on their chosen Rankin criterion) was 1.02! Had other Rankin dichotomies been chosen (e.g. 0-2 versus 3-6 instead of 0-1 versus 2-6), no significant improvement would have been demonstrated. In fact the chance of being dead or severely disabled at 3 months (modified Rankin scores 5-6) was non-significantly higher if thrombolysed (14.8% versus 13.4%). Concern has also been voiced that, despite randomisation, the placebo group had on average more severe strokes before treatment (one point worse on the NIHSS) and were more likely to have had a previous stroke. The risks of intracerebral haemorrhage were comparable to data from patients thrombolysed within 3 hours.

In counselling a patient within this time window, we would therefore have to add that because more than 3 hours have passed since symptom onset, the chance of improvement to no or minimal disability increases by around 7% rather than 12%.

Cost-effectiveness analyses of patients thrombolysed in this time window show limited favourability but are based on limited evidence; of course, fatalities reduce cost compared with disability, so I find such analyses morally inappropriate.

In the UK, the National Institute for Health and Care Excellence (NICE) guidelines for stroke were updated in 2013 to extend the thrombolysis window from 3 hours to 4.5 hours. There is also now no exclusion of posterior territory infarction, and there is debate over excluding patients over 80 years. I personally have reservations about this, and consider it a situation where we have permission under licence to give the treatment if, in our judgement, it is clinically appropriate. Despite the trumpeting of trial data, there are ethical reservations about giving a treatment that will help a modest proportion of patients but harm a significant proportion too. The haemorrhage risk is higher than the risks surgeons and anaesthetists typically quote for surgery.

Unsurprisingly, this situation polarises medical opinion. Outcome data on thrombolysis have come under intense scrutiny and been subjected to endless meta-analysis and debate. What is really needed is less spin on statistics and more information on predicting a good outcome of thrombolysis in an individual patient who has just had a stroke.

Current guidelines for selecting patients for thrombolysis depend on a clinical diagnosis of ischaemic stroke, a clinical scale of stroke severity, various exclusion criteria and a CT scan of the head. This CT scan will not demonstrate the stroke: within 3 hours of onset the changes are too early to be detected on CT, which simply shows the reduced density of brain that is already infarcted. Instead, the CT excludes a haemorrhagic stroke, where thrombolysis would be pointless and dangerous, or an established large stroke, where thrombolysis may be too late and also associated with increased risk.

An alternative investigation that positively diagnosed stroke within the thrombolysis time window would be very useful to exclude “stroke mimics”, such as patients with acute unilateral muscular weakness from spondylosis or patients who are imagining that they are having a stroke and reproducing its clinical features (functional stroke). Even more useful would be an investigation that could positively distinguish ischaemic but potentially retrievable brain from brain that is already infarcted and might only haemorrhage if suddenly reperfused by thrombolysis. This retrievable brain is known as the “penumbra”, alluding to the region of partial rather than complete shadow cast by an object in front of a non-point light source.

CT Perfusion Imaging

Of such investigations, the most promising may be CT perfusion. This requires only the hardware for standard CT, with an intravenous iodinated contrast injection, and may be performed more rapidly and be more easily tolerated than MRI. The limiting factor is likely to be user dependence and the quality of the analysis software.

Technique of CT Perfusion Imaging

After a bolus injection of contrast, a sequence of images is acquired to measure the rise and subsequent fall in contrast density as the bolus travels through the cerebral vasculature. Two reference time-density plots are normally taken: the arterial input is measured in the A2 segment of the anterior cerebral artery as it passes perpendicular to the axial imaging plane, and the venous output in the superior sagittal sinus. For each voxel or region of interest, four parameters are then calculated:

  • cerebral blood flow (CBF)
  • cerebral blood volume (CBV)
  • mean transit time (MTT)
  • time to peak contrast enhancement (TTP).

These are then mapped onto an axial slice of the brain to convey visually how the different parameters vary across brain regions, with high values represented as red and low values represented as blue.

Flow dynamics (the central volume principle) tells us that three of these parameters are interdependent:

  • MTT = CBV / CBF

So if there is a thrombus reducing flow to a region of brain, as the CBF is lowered, the MTT increases in parallel if the CBV stays the same.
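A minimal sketch of how these quantities fall out of a single voxel's time-density curve. Real perfusion software deconvolves the tissue curve against the arterial input function measured at the A2 reference; this toy version skips deconvolution and works directly on an idealised bolus curve, so the units are arbitrary and the numbers purely illustrative.

```python
import numpy as np

t = np.arange(0.0, 40.0, 0.5)        # seconds after bolus arrival
curve = t * np.exp(-t / 6.0)         # idealised gamma-variate-like contrast curve

cbv = np.trapz(curve, t)             # area under the curve ~ blood volume
mtt = np.trapz(t * curve, t) / cbv   # first moment ~ mean transit time
cbf = cbv / mtt                      # central volume principle: CBF = CBV / MTT

print(f"CBV ~ {cbv:.1f} a.u., MTT ~ {mtt:.1f} s, CBF ~ {cbf:.2f} a.u./s")
```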

Normal grey matter has a higher CBF and CBV, so cortical areas of gyri tend to look more red on both CBF and CBV. Because the increases are similar, the changes cancel out on MTT so the MTT tends to look more blended between white and grey matter. The venous sinuses also look very red on CBF and CBV and similar to background on MTT, presumably because these voxels are purely blood so have relatively high blood volume as well as flow. (One might expect arteries to have higher flow, and therefore blue on MTT, but the resolution of CT perfusion may not be great enough to identify arteries in cross-section.)

In the early hours after an acute stroke, it is considered that an “umbra” of infarcted brain may be surrounded by a “penumbra” of ischaemic brain that will shortly become infarcted but is potentially salvageable on reperfusion. CT perfusion may allow differentiation of the two because the CBV reduces more in infarcted brain. So:

Infarcted brain: ↓↓↓ CBF, ↓↓ CBV, ↑↑ MTT
Ischaemic brain (penumbra): ↓↓ CBF, slight ↓ CBV, ↑↑ MTT
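The logic of the table above can be expressed as a simple voxel classifier. The thresholds below, expressed as ratios to the mirror-image normal hemisphere, are hypothetical illustrations rather than validated clinical cut-offs; note that CBF is reduced in both tissue types, so it is CBV against MTT that carries the distinction.

```python
def classify_voxel(cbf_ratio, cbv_ratio, mtt_ratio):
    """Each argument is the voxel value divided by the contralateral normal value."""
    if cbf_ratio > 0.8 and mtt_ratio < 1.5:
        return "normal"            # no significant hypoperfusion or transit delay
    if cbv_ratio < 0.6:
        return "infarct core"      # delayed transit AND collapsed blood volume
    return "penumbra"              # delayed transit but relatively preserved volume

print(classify_voxel(cbf_ratio=0.2, cbv_ratio=0.4, mtt_ratio=2.5))  # infarct core
print(classify_voxel(cbf_ratio=0.4, cbv_ratio=0.9, mtt_ratio=2.0))  # penumbra
```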

In a patient 70 minutes after stroke onset (NIHSS score 10), the unenhanced CT (not shown) is normal. The cerebral blood flow (top left) and cerebral blood volume (top right) maps show reductions in the arrowed area. There is a corresponding increase in mean transit time (bottom left) and an area of high signal on DWI MRI several days later (bottom right). (Figures taken from Hopyan et al., 2010.)


Cerebral blood flow is obviously reduced if there is a proximal thrombus, and in infarction there is also a reduction in cerebral blood volume. This could be because tissue swelling raises local intracranial pressure and constricts capacitance vessels, because vessels have a certain elasticity so that constriction follows from reduced flow, or because of reflex vasoconstriction of capacitance vessels in damaged brain. The reduction in CBV is still proportionally smaller than the dramatic reduction in flow, so MTT is significantly prolonged. The infarcted area appears more blue on CBF and CBV and more red on MTT.

In ischaemic brain, the CBV is relatively preserved, perhaps because the affected brain area is not as swollen or perhaps because of preserved reflex capacitance vessel dilatation in an attempt to improve perfusion of these areas. Cerebral blood flow is reduced (blue), but there is now a mismatch between CBV (relatively normal) and MTT (clearly red).

Problems in interpreting CT perfusion

  • Image processing is complex and user dependent; there may be poor selection of the anterior cerebral artery and superior sagittal sinus reference points.
  • If the protocol is poorly designed, the radiation dose may be unacceptably high.
  • Many protocols do not analyse the whole brain, so clinical knowledge is required to determine if the area of interest is middle cerebral artery territory. Brainstem areas cannot easily be assessed.
  • The resolution of CT perfusion is such that small strokes may not be visualised
  • If there is extracranial vessel occlusion, e.g. carotid artery, the hypoperfused area may give a false impression of acute infarction. The same applies to areas of leukoaraiosis. Thus CT perfusion must be interpreted in the context of unenhanced CT appearances and preferably with CT angiography.

Practical Uses of CT Perfusion

  • Identification of penumbra. If a patient was outside the 3-hour time window for thrombolysis, or the time of onset was unknown, but was otherwise a good candidate, a CT perfusion scan revealing a relatively large penumbra with normal CBV and prolonged MTT would indicate salvageable brain that might benefit from thrombolysis.
  • Positive identification of stroke. CT perfusion reveals changes very early after stroke onset, but there may be poor sensitivity because of lack of clarity over the territory of interest, the possibility of posterior circulation stroke and poor resolution of a small stroke, e.g. lacunar infarction.
  • Measuring cerebrovascular reserve. In the non-acute setting, CT perfusion before and after administration of intravenous acetazolamide can help to identify brain areas that are chronically ischaemic. Acetazolamide is a vasodilator, but has less effect on ischaemic areas because such areas already have ongoing maximal compensatory vasodilation. Thus, after acetazolamide, there will be less increase in CBF in ischaemic areas compared with normal neighbouring areas, less increase in CBV (though this is generally increased throughout the brain), and most clearly an extra prolongation of MTT (more red) in areas that may already have somewhat prolonged transit times compared with normal areas (a numerical sketch follows this list).
  • Identifying vasospasm. After subarachnoid haemorrhage, areas of brain suffering reactive vasospasm appear like the penumbra of a stroke, indicating where measures to reduce vasospasm may reduce the risk of lasting focal neurological deficit.
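As a rough sketch of the cerebrovascular reserve idea from the list above, with hypothetical numbers (real protocols compare co-registered perfusion maps voxel by voxel):

```python
# Cerebrovascular reserve as the fractional CBF response to acetazolamide.
# Values are hypothetical and units arbitrary; illustration only.
def reserve(cbf_baseline, cbf_after_acetazolamide):
    """Fractional CBF increase after vasodilator challenge."""
    return (cbf_after_acetazolamide - cbf_baseline) / cbf_baseline

print(f"normal region:    {reserve(50, 75):.0%}")  # brisk vasodilatory response
print(f"ischaemic region: {reserve(40, 42):.0%}")  # already maximally dilated
```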

An accompanying Journal Club Review looks at a study that investigates the use of CT perfusion in acute stroke primarily in terms of stroke diagnosis.

Journal Club Review: Cervical Vertigo

Background

Cervical pain from spondylosis or muscular problems is a very common symptom in the general adult population, estimated in a recent study to have a point prevalence of 4.9% and a global burden of 33.6 million disability-adjusted life years (Hoy et al., 2014). Symptoms are commonly recurrent within individuals, returning in 50-85% of cases within 5 years of initial presentation (Haldeman et al., 2008).

The most common aetiologies of cervical pain are joint disease resulting in spondylosis and acute or chronic muscular injury. The muscles of the mobile cervical and lumbar spine tend to develop spasm as a consequence of joint or muscle inflammation and this further exacerbates injury, resulting in a vicious cycle of pain.

Vicious cycle of neck pain

Diagnosis and management of cervical pain is often complicated by a number of associated symptoms, including headache, dizziness, tinnitus and ear discomfort (Baron et al., 2011). While headache of tension-type character, often occipital and radiating anteriorly to the frontalis or temporalis areas, is well established in relation to neck pain, with around 20% of cases of chronic headache having a cervical basis, the other associated symptoms are more controversial.

This article focuses on one associated symptom in particular, namely cervical vertigo. The review article by Brandt and Bronstein, published in JNNP in 2001, presents a comprehensive account of the scientific basis underlying the condition; here I will go over this review and more recent studies on the subject.

Cervical Vertigo Definition and Terminology

Cervical vertigo may be defined as:

A perception of vertigo or imbalance resulting from cervical spondylosis and muscle spasm, normally with a chronic tendency to brief attacks, especially brought on by head movement.

Other terms that describe the same thing are cervicogenic vertigo, cervical imbalance and cervical dizziness.

Vertigo arising from the neck presents a particular diagnostic challenge as other potential causes may require alternative management and in some cases will require urgent attention (table 1); the common occurrence of cervical pain means its association with imbalance may be coincidental.

Table 1. Differential diagnosis of vertigo

Diagnostic Confusion

The two conditions most commonly confused with cervical vertigo are benign paroxysmal positional vertigo (BPPV) and vertebrobasilar insufficiency. All typically present with vertigo whenever the head moves in a certain way suddenly, rather than as a discrete episode as in Meniere’s syndrome, and there are no other cranial nerve features as may occur with a vestibular schwannoma. Nowadays the availability of MR imaging means that the latter may already have been excluded before specialist referral. The distribution of cases of such vertigo between these three apparently nosologically and pathophysiologically distinct entities remains controversial, and indeed some argue that cervical vertigo does not exist at all.

In truth, the conditions themselves may overlap. Patients with cervicogenic vertigo are likely to have cervical spondylosis, making them susceptible also to vertebrobasilar insufficiency, and if the latter is demonstrated clinicians may be tempted to consider it the hierarchically dominant or sole diagnosis even though many of a patient’s attacks may be cervicogenic. Conversely, patients with BPPV are likely to stiffen the neck as a protective mechanism and may consequently develop cervicogenic vertigo even after their BPPV has resolved. Finally, in post-traumatic cases vertigo may result from a combination of dislodgement of otoconia into the lumen of the semicircular canals producing BPPV, damage to the otolith organs, which are vulnerable to mechanical acceleration, and whiplash injury producing cervicogenic vertigo.

Overlapping nosological entities of vertigo

Clinical Presentation of Cervical Vertigo

When considering the complaint of “dizziness”, it is important to define more closely what the patient actually experiences. Many patients describing dizziness are actually referring to presyncopal symptoms. True vertigo is a perception that the environment is moving in a rotatory direction or swaying to and fro. Finally, dizziness is sometimes used to describe spatial disorientation or a perception of imbalance. Sometimes it is the perception itself that appears pathological – the patient feels unsteady more than actually being unsteady; this must be distinguished from patients whose perception of imbalance is a relatively accurate and objective appraisal of actual unsteadiness resulting from ataxia or loss of postural reflexes.

The typical patient with cervical vertigo falls into the category of those who describe a perception of spatial disorientation or imbalance. Rather than true vertigo, they report positional unsteadiness, imbalance, giddiness, or a feeling that the ground is sliding underneath them. As is typical for peripheral vertigo, head movements precipitate the symptoms, often neck extension or rising from supine. They may have an excessively cautious gait for their apparent objective level of balance impairment, grabbing hold of walls or “furniture walking”, and this may lead to a mistaken diagnosis of psychogenic vertigo. For such symptoms to be considered cervicogenic, there should obviously be a history of neck pain. However, many specialists recommend caution in making a diagnosis of cervicogenic vertigo whenever there is neck pain, especially where the description is of true vertigo. The musculature is not the only structure in the neck that can give both neck pain and vertigo, and in post-traumatic cases vertebral artery dissection must be excluded. Similarly, vertebrobasilar insufficiency (see below) may result from pinching of the arteries in their course through the cervical vertebrae and, while this would typically also produce other transient brainstem symptoms, presentation with vertigo alone has been reported.

On examination, patients with cervical vertigo sometimes have their symptoms set off by testing of eye movements, and they may show reluctance or restriction on testing the range of head movement, tending instead to turn the trunk together with the head. Vestibulo-ocular reflex testing or the Hallpike’s test may reproduce symptoms but without nystagmus or with only very slight nystagmus. To isolate the influence of cervical afferents, the patient should be placed on a rotating stool with the head gently fixed by the examiner’s hands while the trunk is rotated back and forth. Cervical nystagmus of immediate onset may result, changing direction with the direction of rotation. However, this sign is unreliable, as discussed below.

Important Alternative Diagnosis: Benign Paroxysmal Positional Vertigo

Benign paroxysmal positional vertigo (BPPV) is thought to relate to otoconia floating freely in the semicircular canals, usually the posterior semicircular canal on one side; head movement in the plane of this canal results in ongoing stimulation and generates vertigo and nystagmus.

Onset of BPPV is usually subacute or chronic, and characterised by brief episodes on making certain head movements. A more severe acute onset of continuous positional vertigo usually points instead to vestibular neuritis, also called labyrinthitis.

The Hallpike’s test is positive in BPPV, with rotatory nystagmus in an extorting direction in the lower eye (top of the eyes jerking towards the floor) and usually adapting after several seconds.

Hallpike’s Test and Epley Manoeuvre (Fife et al., 2008). Otoconia are loose in the right posterior semicircular canal (arrowed, fig. 1). The patient’s head is turned 45 degrees to the right so that the posterior canal is in the plane of motion (and the effect of the posterior canal on the other side is negated) when the patient is laid flat (fig. 2). The consequent nystagmus is extorting in the right eye.


Vertebrobasilar Insufficiency

This is an oft-cited but rarely demonstrated syndrome thought to relate to pinching of the ipsilateral vertebral artery when a patient with cervical spondylosis turns the neck. The tortuous course of the artery through the transverse foramina of the cervical vertebrae from C6 to C1 and then across the posterior arch of C1 makes it particularly susceptible to such compression.

Course of the vertebral artery

In the related Barré-Liéou syndrome (1926), the artery is not directly pinched; instead, irritation of the sympathetic plexus around the vertebral arteries is said to cause reflex vasoconstriction. However, the existence of this sympathetically mediated phenomenon remains doubtful.

On examination, it is suggested that if the head is moved to one extreme, the nystagmus of vertebrobasilar insufficiency, in contrast to that of cervical vertigo, would start only after a delay of several seconds to minutes. However, given that a perhaps already atherosclerotic artery is assumed to be badly pinched, I cannot help but think that such a test should only be performed in a catheter lab! Indeed, only during formal arterial angiography, demonstrating occlusion with the head turned to one side but not the other, could one really make a confident diagnosis of this condition.

It is considered by some that disruption of cervical afferents mediating the cervico-ocular reflex does not occur, and that “cervicogenic vertigo” always results from vertebrobasilar insufficiency. However, a review of the anatomy of the blood supply to the brain via the circle of Willis reveals many collaterals, meaning that ischaemia will result only when there is already occlusion of the contralateral artery, and probably additional significant atheromatous narrowing of the anterior circulation.

Circle of Willis

Such a situation must be rare and is different of course from the aetiology of a vertebrobasilar territory transient ischaemic attack, where an embolus from these arteries passes up and lodges into a smaller artery without collaterals. This is why vertebral or carotid artery occlusion in the neck carries a much lower risk of stroke than does a stenosis where emboli may still pass up through the narrowed lumen.

In addition, if there were transient ischaemia from hypoperfusion, why would it selectively result in vertigo and no other brainstem features such as ataxia, dysarthria, collapse, hemianopia or loss of consciousness? Nevertheless, some cases of likely vertebrobasilar insufficiency have been reported to present with vertigo without other brainstem features (Dvorak & Dvorak, 1990). Conceivably, if the blood supply to an already stenosed anterior inferior cerebellar artery, which comes off the basilar artery rather than the vertebral artery, were critically dependent on one remaining patent vertebral artery, pinching of the latter could result in transient ischaemia of only the inner ear structures, producing a peripheral nystagmus with or without deafness and tinnitus.

The main point is that while vertebrobasilar insufficiency may indeed exist, the circumstances required for it to occur seem too rare to account for all cases of presumed cervical vertigo.

Scientific Basis of Cervicogenic Vertigo

Signals important in balance control, including vision, eye position, vestibular signals and processed postural information and perceptual information, are integrated in the vestibular nuclei located in the pons. These nuclei in turn output to postural control centres, to the eye movement apparatus to control compensatory eye movements and to perceptual processes.

Inputs to the Vestibular Nuclei

Vertigo is a false perception of movement, and typically results not from a deficit but a mismatch of balance signals. This mismatch may be between defective and normal vestibular canal signals on either side of the head, or between vestibular and visual signals. For cervical vertigo to exist, there would therefore have to be a physiological basis not only in functionally important cervical signals inputting head position on the trunk to the vestibular nuclei but also in a process that compared these signals with vestibular or visual signals at a perceptual level so that a mismatch could lead to vertigo.

Why have cervical signalling for balance?

Visual inputs signal movement and position relative to the retina, while vestibular inputs signal movement and position relative to the head; however, the balance system needs information primarily on the centre of mass which mainly reflects the trunk. In essence, afferents from muscle spindles and joint receptors in the neck would allow determination of centre of mass by correction of the vestibular signal for head position with respect to the trunk.


Balance Responses.
1) Whole trunk movement to the left may leave the head behind from inertia. Stretch of the left neck muscles signals head-on-trunk movement to the right (red). The lateral semicircular canal signals partial head movement to the left (blue), as does the retinal slip signal (green). The cervico-ocular reflex (COR) will result in slow-phase eye movement to the left. This acts to compensate for relative head movement to fix gaze, or to shift gaze to the overall body facing. Any actual leftward head movement will result in a vestibulo-ocular reflex (VOR) slow phase to the right and an optokinetic reflex (OKR) slow phase to the right. Overall eye movement will be integrated in the vestibular nuclei as the difference between the COR and an amalgam of VOR and OKR. The separate head-on-trunk, head-in-space and retina-in-space movements may reach the level of perception. For an overall perception of trunk motion, leftward head perception must be added to rightward head-on-neck perception (which reflects a leftward trunk-under-head movement). Other reflexes in action include direct cervico-collic stretch reflexes that will turn the head left in response to head-on-trunk movement, and, from the vestibular nuclei, an integrated vestibulo-collic reflex that will stabilise the head on the trunk and integrated postural reflexes that will stabilise trunk positioning.
2) Experimental blocking of afferents of right neck will lead to unopposed stretch signalling on left, simulating right head on trunk motion. This will generate an unopposed COR signal slow phase eye movement to left, so fast phase of spontaneous nystagmus is on the same side as the block.
3) Vibration applied to the neck muscle stimulates stretch reflexes without any vestibular or ocular involvement (unless the stretch actually secondarily moves the head).

Cervico-Ocular Reflex Nystagmus

In the same way as vestibular signals are responsible for vestibulo-ocular reflexes and vestibular vertigo, a functionally important cervical balance pathway that could result in cervicogenic vertigo might be expected to be associated with a demonstrable cervico-ocular reflex, in which stimulation of spindle afferents results in reflexive compensatory eye movements. In other words, neck proprioception, if input to the vestibular nuclei, may result not only in a perception of motion but also in a compensatory eye movement that may be recorded by electronystagmography, infrared or video systems.

Trunk rotation, e.g. to left, under a fixed head in the dark would be interpreted by neck proprioceptors as head movement to right and would generate a compensatory slow phase to left. Fast phase of nystagmus would therefore be to the right, the opposite side to trunk rotation.

More physiologically, if the trunk turned and the head was not fixed but lagged behind by inertia, the reflex would make the direction of gaze follow the direction of trunk movement even if the head did not.

Evidence for Cervical Balance Signals: Anatomical Connections

The deep short intervertebral neck muscles are rich in muscle spindle afferents that are able to provide a signal of head on trunk position or head on trunk movement (Cooper & Daniel, 1963) and there is anatomical demonstration of connectivity to the vestibular nuclei and neighbouring brainstem reticular formation areas (Ciriani et al., 1992).

Evidence for Cervical Balance Signals: Vibration Induced Responses

Selective stimulation of cervical afferents by vibration over the neck muscles simulates a stretch reflex; unilateral stimulation is indeed found to result in postural responses, apparent movement of a visual target and a weak deviation of perception of subjective vertical to give the illusion of ipsilateral head tilt.

There is also an associated cervico-ocular reflex of low amplitude. Furthermore it is found that this response is increased after a unilateral vestibular lesion, building up over several weeks as a presumed compensatory enhancement. The automatic postural responses are greater than the perception of motion, unlike the major perception of motion that results from caloric testing of vestibular function, and thus fits with cervicogenic vertigo constituting more a sense of imbalance than actual vertigo.

Evidence for Cervical Balance Signals: Disruption of Cervico-Ocular Reflexes

Interference with cervical afferents in an attempt to mimic the situation in cervicogenic vertigo also yields unclear results. Local anaesthesia of the deep neck muscles in humans results in gait deviation, a tendency to fall with a positive Romberg test to the injected side, a perception of altered position and an unsteadiness on sudden head movement that lasts for several hours after the injection. These findings are confirmed on therapeutic C2 level anaesthetic block to treat patients with cervicogenic headache.

However, there is no associated nystagmus (i.e. no cervico-ocular reflex), nor any actual vertigo. Some of the effects could reflect an imbalance in muscle tone resulting from cervico-collic reflexes rather than the cervico-ocular reflex. Nevertheless, the pattern fits with the perception of imbalance or “quasi-vertigo” on head movement rather than the true vertigo of vestibular dysfunction.

Evidence for Cervical Balance Signals: Physiological Responses

As described clinically above, on testing trunk rotation under a fixed head in the dark, there is sometimes a weak cervico-ocular reflex. However, if the head is not fixed, there may be head movement, limited by inertia but brought on by tissue elasticity and by cervico-collic reflexes. Any actual movement will result in vestibulo-ocular and vestibulo-collic reflexes that will secondarily stabilise the head. If the head is strongly fixed, perceptual processes or pressure detection on the side of the head may also suppress any illusion of head rotation.

It is therefore not surprising that physiological stimulation of putative cervico-ocular reflexes in normal human adults, using trunk movements with a stabilised head, produces a less clear perception of motion than does muscle vibration, and only unreliable cervico-ocular reflexes. Nevertheless, under carefully controlled conditions, such as sinusoidal trunk movements with the head fixed by a bite bar, a reliable cervico-ocular response can be recorded and compared with analogous vestibular and optokinetic responses.

Infra-red recordings of cervico-ocular reflex, vestibulo-ocular reflex and optokinetic reflex resulting from sinusoidal movements (0.04 Hz, ± 5° amplitude). This isolates the slow-phase component, as there is no need for resetting saccades of nystagmus when tracking a back-and-forth sinusoid. (Kelders et al., 2003)


Mean amplitudes of reflex responses at different sinusoid stimulus frequencies. Gain of COR is lowest (VOR low at slow frequencies but increases with higher frequencies).
Phase of VOR and COR are more variable and COR lags behind trunk rotation at higher frequencies. With old age, VOR and OKN gain decrease; there is a compensatory increase in COR gain, as there is after vestibular dysfunction.

Sinusoidal movements of slow frequency and small amplitude generate a cervico-ocular reflex (COR) of lower gain than the VOR and OKN, with a tendency to lag behind the movement at higher frequencies. With old age, VOR and OKN gain decrease but there is a compensatory increase in COR gain, as there is after vestibular dysfunction. It is tempting to speculate that the same might apply to patients with clinical cervical vertigo.
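Gain and phase of this kind are typically extracted by projecting the recorded trace onto the stimulus frequency. A minimal sketch on a synthetic recording, in which a low-gain, phase-lagged copy of the stimulus plus noise stands in for a real eye-position trace:

```python
import numpy as np

f = 0.04                                       # stimulus frequency, Hz (as in the figure)
t = np.arange(0.0, 100.0, 0.02)                # 100 s (four full cycles) at 50 Hz
stimulus = 5.0 * np.sin(2 * np.pi * f * t)     # trunk rotation, +/- 5 degrees
eye = 1.5 * np.sin(2 * np.pi * f * t - 0.4) \
      + np.random.normal(0.0, 0.2, t.size)     # synthetic low-gain, lagged response

def phasor(x):
    """Amplitude and phase at the stimulus frequency as a complex number."""
    s = 2 * np.mean(x * np.sin(2 * np.pi * f * t))
    c = 2 * np.mean(x * np.cos(2 * np.pi * f * t))
    return complex(s, c)

ratio = phasor(eye) / phasor(stimulus)
print(f"gain {abs(ratio):.2f}, phase lag {np.degrees(-np.angle(ratio)):.0f} deg")
```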

Studies on patients with cervical vertigo

Having experimentally demonstrated the functioning of cervical balance signals in normal subjects, the next step is to demonstrate disordered signalling in patients with presumed cervical vertigo.

Such patients do have myofascial trigger points for their pain that exhibit spontaneous EMG activity compatible with hyperactive muscle spindles (Hubbard & Berkoff, 1993). However, no correlation is found between the magnitude of physiological cervico-ocular reflexes and the severity of clinical cervicogenic vertigo. Perhaps, if there is an abnormality of cervico-ocular reflexes associated with cervical vertigo, it does not simply relate to the gain (amplitude) of what is after all a physiological rather than pathological phenomenon, but to a mismatching of signalling from either side or to a failure to calibrate such signals with vestibular and visual balance information.

What has been reported in patients with cervical pain (with or without vertigo) is that they tend to have poorer postural control, based on vibration- or galvanically-induced body sway, and when such patients are treated with physiotherapy there is improvement in their dizziness and imbalance as well as in their cervical pain (Karlberg et al., 1996).

Does Cervical Vertigo Exist?

There appears to be a scientific basis for the notion that stretch of neck muscles influences balance mechanisms, and a physiological cervico-ocular reflex, especially under controlled conditions, has been demonstrated. However, there has as yet been no demonstrated abnormality of this reflex in patients with cervical vertigo. This lack of a reliable diagnostic test, in contrast to the clear abnormality of vestibulo-ocular reflexes in vestibular vertigo, hampers study of the condition because of the consequent problem of defining a patient population.

Given the lack of any diagnostic abnormality in such patients other than their associated neck pain, the question may then be asked why all patients with neck pain do not get vertigo. This has been taken to signify that the condition does not actually exist. However, there are innumerable examples in medicine where patients do not need the “full house” of clinical features to have a syndrome. There may be additional factors that trigger vertigo, such as the nature and asymmetry of muscle spasm, previous vestibular problems or a constitutional tendency to heightened vertiginous perceptions, as is found in visual vertigo. Rather than proof that the condition does not exist, it might be more constructive to consider patients with neck pain without vertigo a good control population. Using healthy subjects as controls runs the risk of identifying abnormalities that are more the direct result of pain than a manifestation of cervical vertigo.

Since physiotherapy appears to help cervical vertigo as well as pain, the diagnosis might be regarded as somewhat pointless, since management is the same as if the patient presented with pain alone. However, there is an important differential diagnosis of vertigo occurring coincidentally with cervical spondylosis. And management may be the same precisely because the condition is poorly defined; there could be future refinements of physiotherapy techniques if the subset of patients with neck pain who also have vertigo were better understood.

As mentioned above, the contrast with vestibular vertigo, where there is an obvious abnormality of vestibulo-ocular reflexes, is one factor that has thrown doubt upon the entity of cervical vertigo. However, this contrast should in fact be expected, given that patients with cervical vertigo actually complain of imbalance and vague giddiness more than true vertigo. Part of the problem with recognising cervical vertigo as a nosological entity may be that the term itself is a misnomer: cervical imbalance may be a more accurate name, serving to remind clinicians that spasm of neck muscles may result in imbalance through cervico-collic as well as cervico-vestibular pathways, and that true vertigo in the context of neck pain warrants further investigation for other causes of vertigo where the spondylosis is coincidental.

Other Symptoms Associated with Cervical Spondylosis

I have commonly found in clinical practice that certain symptoms cluster together in the same individuals. In the presence of cervical spondylosis, there seems to be a more than chance occurrence not only of vertigo, but of headache and tinnitus. While it could be argued that the vertigo is always benign paroxysmal positional vertigo, the headache migraine, and the tinnitus cochlear degeneration or “functional”, Occam’s razor and common sense encourage us to look for a single unifying cause.

I tend to call these associations with cervical spondylosis CHIT syndrome (Cervical Headache, Imbalance and Tinnitus).

Headache is well-described in association with cervical spondylosis, and this review has discussed the likely association with vertigo. Tinnitus seems a very unlikely association, but has in fact previously been cited as being linked with cervical vertigo (Brown, 1992) or abnormalities in the neck muscles (Reisshauer et al., 2006).

It initially seems bizarre that an auditory symptom could be associated with spondylosis. However, there is an interesting physiological phenomenon of muscle contraction called the Piper rhythm. When muscles contract steadily and strongly, their individual motor units tend to fire synchronously in a tuned rhythm at around 40 Hz, so that the whole muscle vibrates at this frequency. This frequency, and most likely harmonics thereof, can actually be heard by placing a stethoscope over the belly of the contracting muscle. It can be demonstrated using frequency analysis of electromyogram, tremor and electronic stethoscope signals that what is heard does indeed relate to this motor unit activity.
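The coherence analysis shown in the figures below can be sketched in a few lines. The traces here are synthetic stand-ins, a shared 40 Hz component buried in independent noise, rather than real EMG or microphone recordings:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                         # sampling rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)                # 30 s of "recording"
piper = np.sin(2 * np.pi * 40 * t)                # common 40 Hz drive

emg = piper + np.random.normal(0.0, 1.0, t.size)          # surface EMG stand-in
mic = 0.5 * piper + np.random.normal(0.0, 1.0, t.size)    # muscle-sound stand-in

freqs, coh = coherence(emg, mic, fs=fs, nperseg=2048)
band = (freqs > 5) & (freqs < 100)
peak = freqs[band][np.argmax(coh[band])]
print(f"strongest coherence below 100 Hz at ~{peak:.0f} Hz")   # ~40 Hz
```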

Power spectral estimates and coherence analysis of 50% maximum voluntary contraction of first dorsal interosseous muscle against an elastic resistance. There are peaks at 10, 22 and 41 Hz in the accelerometer tremor record and rectified surface EMG power spectra. Coherence analysis reveals strong coherence, especially at these peaks. The upper horizontal line is the 95% confidence interval for significantly greater coherence compared with the whole spectrum (only the lower 100 Hz of the spectrum is shown); the lower horizontal line is the 95% confidence interval for non-zero coherence. There is a constant linear phase lag of tremor behind EMG at all frequencies, indicating a lag of 6.5 ms.


Same subject as in the above figure under identical conditions. The acoustomyogram (AMG) is recorded by an electronic heart sounds monitor placed over the belly of first dorsal interosseous during contraction against elastic resistance. The “sounds” are generated directly by muscle activity, as seen on the EMG power spectrum in the previous figure. The microphone is not as sensitive at 10 Hz as at 40 Hz, hence the larger 40 Hz peak compared with tremor and EMG.

It is tempting therefore to speculate that the tinnitus of cervical spondylosis relates to overactivity of the sternomastoid muscles, which attach just behind the external ear. Rather than the tinnitus sounds being imaginary or related to cochlear damage, patients may actually be hearing their own muscles contracting. Certainly this interesting notion bears further investigation.


Journal Club Review: Driving after a Single Seizure

Background

One of the main issues facing a patient diagnosed as having had a first epileptic seizure without any sinister underlying lesion – often a young adult and otherwise well – is the driving ban. One can only be sympathetic to the impact that it may have for some on travelling to work or actually performing their job. Some react with understanding, while others have the attitude that they will never expose themselves or others to harm even if the risk is tiny and they later become legally entitled to drive. A few react with incredulity: “I totally lost consciousness without warning, may do so again at any time, and you are ruining my career or social life by preventing me from driving for several months?!”

This can be a difficult conversation for clinicians, but at least one can remind oneself that the conversation might have been more difficult if the cause of their seizure was a brain tumour rather than cryptogenic, in which case they might only be alive for several months.

Two other points can help. First, in the European Union and in most other countries the rules are standardised and set by government authorities; the physician is only explaining the law of the land. In the US, some states have similar standard rules while others, perhaps unfortunately, leave it to the doctor or to a medical review panel. Second, these rules were developed and modified after extensive review and consultation. Briefly communicating this process may help the patient to appreciate that the rules are designed to protect, not to punish. The paper reviewed here describes statistical data on the risk of seizure recurrence that were used to help develop a consistent European Union guideline, which informs the UK’s Driver and Vehicle Licensing Agency (DVLA) guideline (2013) and could be used to help doctors who must form their own guidelines.

The paper was published in the good old British Medical Journal (2010) and reanalyses data from the MESS (Multicentre Early Epilepsy and Single Seizure) study (2005), specifically on patients over 16 years of age who had a single unprovoked seizure and looks at the 12-month risk of recurrence at certain time points after the index seizure. In other words, if a patient has already gone some months following an initial seizure without a subsequent seizure, how likely are they to remain seizure-free for another 12 months?

This website has an accompanying commentary that discusses the original MESS study in more detail, along with the wider issues around prognosis and management after a single seizure. Clearly, the data in this paper are helpful for prognosis, but only in patients who have already gone a certain period seizure-free after their initial event.

Study design

The original MESS study’s inclusion criterion was that both patient and physician were uncertain about whether or not to start antiepileptic medication. Exclusion criteria included previous treatment with antiepileptic drugs or the presence of a progressive neurological disease. Of around 1800 patients meeting the criteria, 1400 were enrolled; the others refused on the basis that they did not want to be randomised. Demographics showed no particular bias between those enrolled and those who refused.

Patients were randomised to immediate treatment – the drug of the physician’s choice, started as early as possible after the seizure (usually carbamazepine or sodium valproate) – or to deferred treatment, generally started only if the patient had a second seizure.

Whereas there were around 720 patients with single seizures in each arm of MESS, the BMJ reanalysis included around 320 per arm: those who were over 16, had had only one seizure at the time of randomisation, and whose date of seizure (as opposed to the date of randomisation used in MESS) was known.


The main finding of the BMJ reanalysis was that in the immediate treatment group the risk of recurrence over the next 12 months, having already gone 6 months without a seizure after the first seizure, was 14% (95% confidence interval (CI) 10-18%). In the deferred treatment group the corresponding risk was 18% (95% CI 13-23%). In the deferred treatment group, if the patient had already gone 12 months without a second seizure, the chance of recurrence dropped to 10% (95% CI 6-15%).
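The arithmetic underlying such figures is simply conditional probability applied to the seizure-free survival curve. A minimal sketch, with the survival values invented purely for illustration rather than taken from the paper:

    # P(recurrence within the next 12 months | seizure-free at time t)
    #   = 1 - S(t + 12) / S(t), where S is the seizure-free survival function.

    def conditional_recurrence(s_t: float, s_t_plus_12: float) -> float:
        """Risk of recurrence over the next 12 months, given survival to t."""
        return 1 - s_t_plus_12 / s_t

    # Invented example: 80% seizure-free at 6 months, 66% at 18 months,
    # gives ~17.5%, of the same order as the deferred-treatment figure above.
    print(f"{conditional_recurrence(0.80, 0.66):.1%}")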

The general principle regarding driving has been set, somewhat arbitrarily, such that if the risk of a seizure over the next year is less than 20%, the patient is permitted to drive a private vehicle, and if the risk is less than 2%, a public or heavy goods vehicle. This is a policy decision rather than a medical one, presumably taking into account the proportion of time the average person spends driving and the likelihood of harm to self and others should an accident occur as a result. The role of clinicians is simply to provide guidance on which patients have a 20% or greater risk.

It can be clearly seen from these data that if a patient starts treatment, their 12-month risk, assessed 6 months after the first seizure, is below 20%; they may therefore be allowed to drive at 6 months. The same applies to patients not on treatment if one takes the mean risk estimate of 18%. However, if a clinician were asked, “At what time would you be confident that the risk of recurrence in the next 12 months is less than 20%?”, he or she should use the upper confidence limit of the risk, and on that basis the 23% upper limit for patients not on treatment is too high. Only once patients not on medication have already gone a year without seizures does the upper confidence limit, 15%, fall acceptably below the threshold.
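In code, this conservative reading amounts to comparing the upper confidence limit, rather than the point estimate, with the 20% threshold. A sketch using a normal-approximation binomial interval; the paper’s intervals come from survival modelling, so this is only indicative:

    import math

    def normal_approx_ci(p: float, n: int, z: float = 1.96):
        """Approximate 95% CI for a proportion (illustrative only)."""
        half = z * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - half), min(1.0, p + half)

    def may_drive(point_risk: float, n: int, threshold: float = 0.20) -> bool:
        """Conservative rule: the *upper* limit must clear the threshold."""
        return normal_approx_ci(point_risk, n)[1] < threshold

    # Roughly 320 untreated patients with an 18% point estimate:
    print(may_drive(0.18, 320))   # False: upper limit ~22%, above 20%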

Strengths and weaknesses of the study

As mentioned in the paper, a potential weakness is that the data were taken from a randomised controlled trial (MESS) of patients having immediate vs. delayed treatment. From looking at the inclusion and exclusion criteria, one might suspect a selection bias that clustered patients of intermediate severity – those who definitely wanted medication or definitely didn’t want medication were excluded. So the risks in the low-risk subcategories might be overestimated and those in the high risk subcategories underestimated.

A potential problem was the delay of around 3 months between seizure and randomisation in MESS, which excluded patients who had a second seizure in that time – and 3 months is half of the six-month seizure-free period in which we are interested! Fortunately, in this paper the investigators back-tracked to the actual seizure date rather than the randomisation date, so the six-month seizure-free period is measured accurately from the seizure itself.

But suppose one wanted to generalise the findings to prognostication of seizure risk – surely something patients will want to know about – at the time they are typically seen, namely just after the seizure (as all patients having a seizure should be promptly reviewed by a specialist). Then we cannot use the figures from MESS (which included children) or those reviewed here. All we can do is wait three months, say until a subsequent clinic, and if the patient has not had a seizure in that time, the figures reviewed here can be used. A fuller discussion on prediction of risk and decisions on treatment is in the accompanying commentary on management after a first seizure.

Finally, there is the issue of validating seizures in the outpatient department, as was done in the study. Clinicians less experienced than those in MESS might make more mistakes in correctly identifying seizures, or patients might deny or forget seizure occurrences. This is likely to be more of a problem in real life than in a trial. So we cannot say that MESS overestimates risk, but we can say that MESS does not simulate the underestimation of risk that may occur in daily practice.

Different risks in different patients

If the policymakers wanted to finesse the guidelines to take other factors into account, there are adjustments that could be made. In a univariate factor model, it was found that remote symptomatic seizures (seizures occurring as a result of a brain insult – e.g. head injury, encephalitis, neurosurgery – that occurred some time before the index seizure) were associated with significantly higher risk, as were the presence of neurological deficit, seizures while asleep, an abnormal electroencephalogram (EEG), and lack of brain imaging information.

Calculating the risks for these subcategories reveals that, taking the upper confidence limits, remote symptomatic seizures, neurological deficit, sleep seizures and abnormal EEG all shift the risk above the 20% threshold after 6 months of seizure freedom, and the first two remain above the threshold even after 12 months of seizure freedom. However, the numbers in these subcategories are small and the estimates correspondingly imprecise.

A multivariate analysis of various combinations of factors, much as risk of osteoporosis can be calculated, is a better way of addressing this issue. This is shown in table 5 of the paper (below), noting that the authors excluded patients with a first-degree relative with epilepsy, and those with sleep seizures. The latter are a special case: while recurrent seizures are more likely (because they may reflect particular epileptic syndromes), they are also more likely to recur in sleep and so are less relevant to driving risk. The UK DVLA rules now in fact permit driving with continuing sleep seizures provided a pattern of seizures only while asleep has been established for at least 1 year.

Multivariate seizure risk factors (table 5 of the paper).

One can see, for example, that a non-remote symptomatic seizure with an abnormal EEG has an upper confidence interval of risk of 23% at 6 months even if imaging is normal. One might argue that the current blanket rule of 6 months is rather lenient for patients with an abnormal EEG or with a remote symptomatic seizure, especially if the patient is not on antiepileptic medication.

A careful reading of the wording of the current UK Driver and Vehicle Licensing Agency guidelines reveals a clause: “provided no risk factors indicate a more than 20% risk of a recurrence over the next 12 months”. If this is interpreted as requiring confidence that the risk is not more than 20% (i.e. using the upper confidence limit), then all the above-mentioned categories would entail a 12-month rather than a 6-month ban, and we would need EEGs on everyone to inform the decision. If it is interpreted as the most likely risk level (the point estimate), then an abnormal EEG still entails too high a risk if the patient is not on medication (23%), as does abnormal imaging if the seizure was remote symptomatic and the patient is not on medication (22%). Only if it is interpreted as requiring merely that the risk could be as low as 20% (i.e. using the lower confidence limit), and the patient was started on medication, and the seizure was not remote symptomatic, is an EEG unnecessary – because only in this circumstance is the lower confidence limit of risk not above 20% whether or not the EEG is abnormal.

Data from other studies

A population-based rather than outpatient-based study of 252 patients whose index seizure was a single seizure (the National General Practice Study of Epilepsy, 1990) found a 37% risk of a second seizure within 12 months, falling to 18% if the patient had already been seizure-free for 6 months. This shows just how much the risk reduces once the patient has already undergone a modest seizure-free period. Factors increasing the risk of recurrence were symptomatic seizures, neurological deficit, and no antiepileptic drug treatment. The findings are therefore comparable to the reviewed data.


This paper clearly does what it intends: to ascertain whether, after 6 or 12 months seizure-free following a first seizure, the risk of a seizure over the ensuing 12 months is greater or less than the 20% policy threshold for safe private vehicle driving.

The analysis provides a rationale for the duration of the driving ban that might help some patients better come to terms with what may seem a punitive measure.

Partly as a result of this study, a number of changes have been made to the UK’s DVLA regulations (2013) regarding epilepsy:

  • The ban following a single seizure is reduced from 12 months to 6 months.
  • If a pattern of sleeping-only seizures is established for 1 year (formerly 3 years) the individual is allowed to drive.
  • If a patient was seizure free on medication, and then a seizure occurred as a result of a medication change, the patient can return to driving after only 6 months if they go back on the original medication.
  • If a patient has only ever had seizures that do not affect conscious level or ability to drive, they can drive a year after this diagnosis is established even if they continue to have these seizures.

However, the multivariate analysis of risk factors does raise some issues about higher risk categories, and draws attention to the clause in the DVLA guidelines “provided no risk factors indicate a more than 20% risk of a recurrence over the next 12 months”. I am not sure how many clinicians actually apply this rule.

Could a clinician be sued if a patient had a single remote symptomatic seizure, was started on medication, and then had a second seizure 11 months later resulting in a fatal road accident – when the clinician had not performed an EEG, or when an EEG had been performed and found to be abnormal?

Could the clinician be similarly sued if the patient had had any kind of seizure but had not started medication, and the EEG was not performed or was abnormal, or if both EEG and MRI were abnormal?

Or is there a “get-out argument” that one would have to use the lower confidence estimate of risk to prove that the risk was greater than 20%? In some categories even the lower confidence estimates are above 20%. Happy days for lawyers, if not for everyone else…


Journal Club Commentary: Management of Single Seizures

Introduction

For this edition of the Neurology Online Journal Club I wanted to review not one but a series of papers addressing a specific issue, namely predicting the risk of seizure recurrence after a single seizure and how much that risk is reduced by starting antiepileptic medication. I started with the Multicentre study of early Epilepsy and Single Seizures (MESS), but there is more than one report on the same data set, and its main points prompted a more detailed look at other literature on the subject along with my personal views. Hence I have described this as a commentary.

There is an accompanying Journal Club review that deals specifically with risk of seizure recurrence in relation to driving.


Epilepsy is certainly one of the more common conditions managed in neurology and indeed in general medical practice. The lifetime prevalence of seizures (% of people who will have a non-febrile seizure at some point in their lives) is 2-5%, and the prevalence of active epilepsy is around 0.5%. A first seizure often presents as a sudden, shocking event in a previously well person, and often leaves the patient in a similarly well state with the expectation of returning to a reasonably normal life – and yet bewildered and worried. As a result, it is a condition where in my view counselling of the patient regarding management options and involving the patient in decision-making is particularly important.

A specific issue with epilepsy management is that typically there are no ongoing symptoms or abnormal clinical signs. The patient may be starting treatment, exposing them to potential side effects, without making them feel better at all. We may have no idea whether or not the drug is working until it manifestly fails much later in the form of a recurrent seizure, and even then we are not sure what would have happened if we had not started treatment, or had started a different treatment. In this respect epilepsy management is more akin to management of episodic headache or TIA than of Parkinson’s disease or chronic pain.

When management revolves around predicting and minimising risk, statistics inevitably play a part. Clinicians need to have the communication skills to explain clearly to patients in broad terms the likely risks of seizure recurrence in different circumstances, and of course that means knowing those risks and understanding basic statistics themselves. Knowledge of risks is covered in this review, but communicating them remains a challenge. (For example, in the UK a survey revealed that the majority of adults did not appreciate that it was equally likely for one to roll a 6 on a die as any other number, or that a previous coin toss does not affect the result of a subsequent one.)

The key questions to which patients and clinicians need answers are:

  1. What is a specific patient’s risk of a further seizure over a certain time period? This estimate should factor in whether or not this was their first seizure, the seizure type and aetiology, the time they have already gone without a seizure and other factors that determine risk such as EEG, imaging abnormalities and family history of epilepsy.
  2. How much is this risk reduced if the patient goes on antiepileptic medication?
  3. If starting medication, and there are no further seizures, when should this medication be stopped again?

Risk after a first seizure

The FIRST study (First Seizure Trial Group study) in 1993 reported recurrence risks of 18%, 28%, 41%, and 51% at 3, 6, 12, and 24 months without medication, and 7%, 8%, 17%, and 25% with medication. Randomisation on or off medication was done within 7 days of the seizure, so this is nicely applicable to an “early clinic” or inpatient decision. The odds ratio for reduction of risk by medication was 0.4 (i.e. the odds of recurrence on medication were 40% of those off medication).
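An odds ratio is not quite a ratio of risks. As a rough check of my own (not the trial’s arithmetic), applying an odds ratio of 0.4 to the untreated 12-month risk of 41% gives a treated risk in the low twenties, broadly consistent with the observed 17%:

    def apply_odds_ratio(baseline_risk: float, odds_ratio: float) -> float:
        """Risk implied by applying an odds ratio to a baseline risk."""
        odds = baseline_risk / (1 - baseline_risk)    # risk -> odds
        new_odds = odds * odds_ratio
        return new_odds / (1 + new_odds)              # odds -> risk

    print(f"{apply_odds_ratio(0.41, 0.4):.0%}")       # ~22%

The gap between ~22% and the observed 17% is a reminder that an odds ratio is not a risk ratio, and that the quoted 0.4 presumably comes from a model rather than from these crude proportions.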

The largest single study on risk of seizure recurrence with randomisation of initial treatment was that conducted by the Multicentre study of early Epilepsy and Single Seizures (MESS) study group; here the risk of recurrence in those randomised to immediate treatment was somewhat lower, at 18%, 32%, 42%, and 46% at 6 months, 2, 5, and 8 years after randomisation, versus 26%, 39%, 51%, and 52% in the deferred treatment group.


Cumulative risk of recurrence years after a seizure. Note that it is the top figure that specifically refers to a first seizure.

A key difference between the studies is that in the MESS study patients were randomised generally 3 months after their initial seizure. The six month figure is therefore the risk from 3 to 9 months after a seizure, having already gone about 3 months without a seizure.

Further analysis of a subgroup of MESS patients, published in the BMJ (2010), looked specifically at implications for driving and is the subject of a complementary journal club review. This subgroup naturally consisted of those over 16 years of age whose seizure-free period could be dated back to the first seizure rather than to the time of randomisation; it was found that the 12-month risk of a seizure, having already gone 6 months without a second seizure, was 18% off medication and 14% on medication, a difference that did not reach statistical significance.

The lower risk found in the MESS study than in the FIRST study is supported by a prospective study without treatment randomisation (Hauser et al., 1998) and largely on adults; the risk of a first recurrence was 21%, 27%, and 33% at 1, 2, and 5 years after the initial seizure. In those who recurred, the risk of a second recurrence was 57%, 61%, and 73% at 1, 2, and 5 years after the first recurrence. The risk of a second recurrence approached 90% after remote symptomatic seizures (those that are secondary to a brain insult at a previous time and therefore indicating an ongoing risk) and was 60% following cryptogenic/idiopathic seizures.

A problem with comparison and interpretation of study data lies in patient selection. While 1443 patients were randomised in the MESS study, another 404 did not consent to randomisation. Those whose risk might be considered lowest might not want to consider taking medication, while those at high risk might not want to chance going without it. Furthermore, an actual selection criterion, for ethical reasons, was that both patient and clinician had to be unsure about whether or not to start medication for the patient to be invited to participate.

It is likely that low risk groups in such a study will have overestimated risk, while high risk groups might have underestimated risk and underestimated treatment effect. This possible shortcoming is important in guiding actual practice. If there is a policy from opinion leaders that treatment is not warranted for first seizures, this might get interpreted rigidly by others as a blanket rule and those patients at high risk after a first seizure – the very patients who might not have enrolled on the study – might not even get counselling about the possibility of taking medication.

Finally, different studies may have differing proportions of seizure types. The MESS study took anyone over the age of 1 year, and there may have been a relatively high proportion presenting with a single minor complex partial seizure.

Decision to treat

Most epileptologists do not treat a single seizure. In fact they define epilepsy as two or more seizures, to try to exclude the significant proportion of individuals who have a single seizure and no further attacks.

Perhaps this conservative strategy is because of the side effects of antiepileptic drugs. These include potential teratogenicity if falling pregnant while on the drug, long-term effects contributing to osteoporosis, possible long-term effects on fertility and possible long-term effects on cognition (mainly mooted in children).

However, there are now many antiepileptic drugs from which to choose, increasing the chance of finding one to suit, and modern drugs may minimise many of these risks. If one looks at the side effects of most drugs taken for any length of time, the list looks at least as scary as that for modern antiepileptics. For example, most anti-migraine drugs also have potential teratogenicity.

If a cardiologist said to a patient who had just had a heart attack, “Well, you could have secondary prevention to reduce your risk of a subsequent myocardial infarction (MI) over the next year from 41% to 17% (using the FIRST trial data), but we won’t bother because we don’t really say you have heart disease until you get your second MI”, the patient would be dialling up for a second opinion before the cardiologist had finished the sentence! And secondary preventatives such as beta-blockers, antiplatelets and statins, and certainly coronary stenting procedures and coronary artery bypass grafts, are not without their own risks.

While the mortality associated with a generalised tonic clonic seizure is lower than that for an MI, it is not insignificant. Quite apart from the circumstances of the attack potentially posing a risk, there is a small but well-documented risk of sudden unexpected death in epilepsy, thought to relate to a number of factors including the extreme autonomic disturbance that occurs during the attack. The event may occur in a young completely healthy person out of the blue, reflects a total loss of self-control, may be potentially embarrassing and stigmatising, and may leave the patient exhausted or potentially even in a psychotic state for days afterwards. I think any trivialisation of a seizure in comparison with an MI can only reflect an age-old prejudice against neurological disease that it is “difficult”, “untreatable” and not suffered by “normal” people.

But other data presented here show that if for some reason an adult patient only saw someone in a position to advise on antiepileptic treatment about six months after their first seizure (the BMJ reanalysis dated risk from the seizure, not from recruitment), and they had not had a second seizure in that time, the 12-month seizure risk figures are only 18% vs 14%. This presents a completely different balance between the risk of treatment side effects and the reduction in seizure risk.

Stratification of Risk

Another follow-up to the MESS study (2006) stratified risk of seizure recurrence according to a scoring system (below).


Scoring system for stratification of risk of recurrence after a single seizure according to the MESS study data.

Half of the patients in the MESS study were used to investigate these risk factors and develop the scoring system, and the other half were used to see whether subgroups divided post hoc according to this risk stratification would derive differing benefits from medication. It was found that all but the lowest risk subgroup would benefit from medication (see below); in fact, bizarrely, in the lowest risk category avoiding treatment appears non-significantly protective (p=0.2).


Kaplan-Meier derived estimates of probabilities of seizure recurrence divided according to different risk groups. Start and delayed treatment refers to treatment started at randomisation or delayed until subsequent seizures.
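For readers unfamiliar with how Kaplan-Meier curves such as these are produced, here is a minimal sketch on synthetic data, assuming the Python lifelines library is available; the numbers are invented and bear no relation to the MESS dataset.

    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(1)

    # Invented follow-up data: months to seizure recurrence, and whether the
    # recurrence was actually observed (1) or the patient was censored (0).
    months = rng.exponential(scale=36, size=200)
    observed = rng.integers(0, 2, size=200)

    kmf = KaplanMeierFitter()
    kmf.fit(months, event_observed=observed, label="synthetic cohort")
    print(kmf.predict(12))   # estimated probability of seizure freedom at 12 months
    # kmf.plot_survival_function() would draw a curve like those above;
    # fitting each risk group separately reproduces the stratified plot.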

This information could therefore provide a basis for individualising risk assessment and individualising decisions to treat on that basis, or at least providing a default strategy. However, it would be applicable only to patients seen in a clinic fully three months after their seizure who had not already started medication or had another seizure in the meantime.

When to stop treatment

If one is to embark on treatment, perhaps controversially so after a first seizure, when does one stop?

Antiepileptic drugs are probably only protective while being taken. This is indirectly illustrated by the long-term remission figures in the MESS trial. Initial treatment decisions did not affect the overall figure of 92% of patients being at least 2 years seizure-free 5 years after enrolment. In other words, if treatment was deferred until a second seizure, patients were just as likely eventually to go into remission, but had obviously had more than one seizure along the way and might still be on medication at that point.

One rationale would be to treat for as long as the drug appears from population studies to be significantly reducing the risk of a subsequent seizure.

The longer the patient is seizure-free, the more closely data from patients with single seizures recruited around 3 months after the event will correspond to data taken immediately after it, so the more applicable the original MESS data become. From this study’s long-term follow-up, the 2-year risks were 32% vs 39%, the 5-year risks 42% vs 51%, and the 8-year risks 46% vs 52%. There is probably a diminishing return over time, but it is difficult to draw a firm conclusion as to the significance of this reduced risk at different times.

Most studies specifically looking at timing of antiepileptic withdrawal are on patients who had had more than one seizure, precisely because most clinicians do not start treatment for a single seizure in the first place! Obviously the findings cannot be applied to those who had a single unprovoked seizure, because the overall risk is lower in this group.

One study (JNNP 2002) on patients who mainly had had multiple seizures, but which at least selected patients on monotherapy and so tended to reflect more easily controlled patients, found that after 2 years seizure-free the 12-month recurrence risk was 9% continuing medication vs 26% stopping medication; on multivariate analysis the hazard ratio was 2.6 (CI 1.5-4.8), dropping to 1.6 (1.0-2.6) if 3-5 years seizure-free and to 1.0 if more than 5 years seizure-free. So, after multiple seizures, it is only beyond 5 years of seizure freedom that stopping medication clearly carries no excess risk.
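Hazard ratios such as these typically come from a Cox proportional hazards model. A minimal sketch on invented data (again assuming lifelines is available), constructed so that stopping medication doubles the hazard:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 400
    stopped = rng.integers(0, 2, size=n)
    # Invented cohort: stopping medication doubles the recurrence hazard.
    months = rng.exponential(scale=np.where(stopped == 1, 24.0, 48.0))
    recurred = (months < 60).astype(int)     # administrative censoring at 5 years
    months = np.minimum(months, 60.0)

    df = pd.DataFrame({"months": months, "recurred": recurred, "stopped": stopped})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="recurred")
    print(cph.hazard_ratios_["stopped"])     # ~2 by construction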



We have conflicting risks, conflicting risk reductions from medication and data that apply only in specific circumstances.

What we need is a large multi-centre study that:

  • Randomises patients immediately, so we can make an informed treatment decision at an appropriate time when the recurrence risk is highest
  • Subdivides into age groups, as the paediatric population and geriatric population may have different seizure aetiologies from young adults, and even different clinicians.
  • Subdivides according to generalised tonic clonic versus complex partial seizures. The latter are by no means as severe and dangerous, and one might imagine that if the first seizure is complex partial, there may be a higher chance of a subsequent one being of the same type.
  • Stratifies risk as in the MESS study, taking account of EEG, MRI, neurological deficit and cognitive impairment.
  • Uses more modern drugs – nowadays lamotrigine and levetiracetam are common first-line agents, as opposed to carbamazepine and valproate which were the drugs that mainly featured in the MESS study. While these are admittedly not clearly more efficacious, they are better tolerated.
  • Includes an analysis of the side effects of drugs in those randomised to treatment, and the quality of life impact of these side effects and of the “inconvenience” factor of taking regular medication.

Given the current lack of clear data, we are left with clinical judgement and patient preference.

My practice with regard to a patient who has just had a generalised tonic clonic seizure is largely to ignore the data from MESS indicating that treating a first seizure non-significantly increases risk when the EEG and neurological examination are normal. How much were the data distorted by being randomised 3 months after the seizure? How many in this category had a complex partial seizure? A particular problem is that often I am not going to get an EEG within a week of the seizure, so a major risk stratification factor is unknown at the most important time to start treatment. I quote the FIRST trial as a “worst case scenario”, something like:

The risks of recurrence could be as high as 41% over the next year and medication could reduce this to 17%. However, given your neurological examination and imaging (and possibly EEG) are normal, and there is no particular evidence of a recurrent epileptic syndrome (e.g. clear family history, developmental delay, juvenile myoclonic epilepsy), the risk may be appreciably lower and the benefit of medication therefore appreciably less. The risk, which includes a slight risk of sudden death as a result of a second seizure, must be balanced against the risk of side effects of taking medication.

Particular factors relevant for you might be the further 12-month driving ban after a subsequent seizure, and teratogenic risk of drugs if you fall pregnant while taking them. (Though lamotrigine and levetiracetam have rather favourable teratogenic risk profiles.)

Then, when it comes to stopping medication, as this should really be addressed before starting:

Since you have only had one seizure, we would empirically consider you in the generally accepted “best category in whom one would initially treat” and advise at least 2 years treatment assuming no further seizures. This 2 year figure is somewhat arbitrary, reflecting that FIRST demonstrated continued risk reduction two years after starting medication but did not investigate a longer period.

If the patient has had a single complex partial seizure and no risk factors, I would explain:

For this relatively minor seizure type there is a lack of evidence for treatment and most patients are not treated. Only if you are very keen on treatment, e.g. regarding driving, would I offer it to you after counselling on potential drug side effects.

If the patient is in the medium or high risk category according to the stratification of the MESS data – in other words neurological deficit, developmental delay, cognitive impairment or features of an epileptic syndrome – or if I already have an EEG and it is abnormal, or perhaps an epileptogenic lesion on an MRI scan to boot, I will tend to use the MESS data:

A potentially risky time for seizure recurrence is in the next 3 months. Even if you do get to three months without a seizure a major study has shown that the risk of a second seizure by one year is 35% and medication may reduce this to 24% (or for the high risk category 59% to 36%). Given these risks, and the slight possibility of death from a seizure, I would advise treatment despite the potential risks of drug side effects unless you had any particular issues.

And for stopping medication again:

The long-term 5+ year follow-up in the MESS study indicated that many patients go into seizure remission at this time after their first seizure, whether or not they started on medication initially or had seizures during this time, but those who were initially treated were less likely to have seizures in getting to that 5-year milestone. Furthermore, another study (though on patients who had had more than one seizure) showed that antiepileptic drugs may still reduce the risk of a recurrence over the subsequent 12 months if you have gone up to, but not beyond, 5 years without a seizure. Even if you remain seizure-free, I therefore generally recommend 5 years of treatment before slow medication withdrawal.

If the first presentation was with status epilepticus, the risk of recurrence is not much greater, but the risk of recurrent status is, and so I would advise at least a 5-year seizure-free period before withdrawal even if there are no risk factors. And, moving away from the single seizure scenario, if the patient has had many seizures before the seizure-free interval, or there is evidence of an ongoing epileptic syndrome, then even beyond 5 years seizure-free I counsel that there is always a risk of recurrence and that staying on antiepileptics may reduce this risk, though this has not been proven.

If one happens to see the patient for the first time at around 3 months after the event, and one has an EEG, then I think one might directly apply the MESS reanalysis of stratification of risk, namely to recommend treatment only if the patient is not in the lowest risk category. However, if the seizure was generalised tonic clonic, I am still uncertain about the applicability of that study, and I counsel the patient that while there is no clear evidence for treatment from clinical trials there are still arguments for as well as against treatment.

Of course, all these recommendations would only be a basis for discussion. Some patients may be focussed on taking a medication for any possible benefit, to minimise the risk of extended driving bans or of sudden unexpected death in epilepsy. Others may not want to risk drug side effects unless the benefit is proven, or may decline treatment outright if there is any possible risk of teratogenicity (despite the risk that a generalised tonic-clonic seizure in a mother poses to her unborn baby). I do counsel strongly that if one does embark on medication for an unprovoked seizure there is little point in taking it for less than 2 years. I also counsel patients up front about the UK’s recommended 3-month period off driving at any future withdrawal of medication. Even this relatively short time off driving, during the potentially risky period immediately after drug withdrawal, could have important connotations for patients who have by then been back driving for 18 months.


Primer on Statistics for Non-Statisticians

Many of the journal articles discussed here assume a knowledge of statistics. In fact, it is often the statistics that are the crucial issue in a critical review of a research study. And, paradoxically, it seems that the further we move from the more scientific field of basic science towards the more “accessible” field of clinical medicine, the more, not less, complicated the statistics become.

“Hard” science might involve testing a complex hypothesis with a single complex experiment in a controlled, perhaps in vitro, environment. The experiment might have a few runs, or a few test subjects or perhaps only one. Statistics are all about estimation and sampling, so little if any statistics may be involved after the result is obtained – especially if there is only one result!

On the other hand, a clinical medicine study might involve a relatively easy-to-conceptualise hypothesis and easy measurements, but tested on real-life subjects where there are myriad other variables over which the investigator has no control. As a consequence, the test may have to be repeated in many different subjects in order to minimise the “noise” of random variability and maximise the “signal” of the variable under investigation. With repetition, the “signal” is amplified in an additive fashion, while the “noise” tends to cancel out. Furthermore, in clinical medicine the hypothesis may be more vague; the investigation might involve an empirical study of a number of different factors which might interact with one another. Often the more vague the hypothesis, the more advanced the statistics required to make any sense of the data.
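The square-root law behind this signal-to-noise argument is easy to demonstrate. A minimal Python sketch, purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    signal = np.sin(2 * np.pi * 5 * t)          # the repeatable "signal"

    for n_trials in (1, 4, 16, 64):
        # Each trial adds independent noise (sd = 2); averaging n trials
        # shrinks the residual noise sd by a factor of sqrt(n).
        trials = signal + rng.normal(scale=2.0, size=(n_trials, t.size))
        residual_sd = (trials.mean(axis=0) - signal).std()
        print(n_trials, round(residual_sd, 2))  # roughly 2.0, 1.0, 0.5, 0.25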

So I am really simply warning the reader, in a rather long-winded manner, that one may find the most advanced statistics lurking behind the abstracts of the most seemingly accessible research, and that probing the authors’ statistical interpretation of their data is sometimes the key to deciding how seriously to take their findings.

With this in mind I have attempted a statistical primer for the non-statistician, perhaps to dip into as a statistical topic comes up in a journal review, or perhaps to peruse in a more thorough manner. The contents link is below:

Primer on Statistics for Non-Statisticians: Introduction and Contents


Journal Club Scientific Review: Structural Brain Changes in Migraine (the CAMERA-2 Study)

Scientific Review

For this paper, I decided to complete two complementary reviews. The Journal Club General Reader Review can be considered a background and a summary for this scientific review.


It has been suggested for some time that, for a given age, migraine is associated statistically with an excess of white matter lesions as seen on MRI. Possible explanations lie in a pro-coagulant or pro-inflammatory state of cerebral blood vessels during a migraine attack, or recurrent paradoxical emboli. Of course, complicated migraine results in transient neurological symptoms that could have a vascular basis and migraine is associated with clinical stroke, albeit rarely, so there is a potential clinical correlate of such changes at least in some patients with migraine.

The MRI lesion association was corroborated by, among others, the CAMERA study, which compared the presence of MRI lesions in 295 patients with migraine against 140 age-, sex-, diabetes- and hypertension-matched controls. There was a higher prevalence (and, when present, a greater total volume) of deep white matter T2-weighted hyperintensities. There were also more lesions if the migraineur’s attacks were more frequent.

The CAMERA 2 study, the topic for this journal club, follows up these subjects nine years later, looking at progression of their MRI abnormalities and their scores on a battery of cognitive tests performed at the end of the study period.

Journal Review

There were 203 of the 295 original migraineurs and 83 of the original 140 controls available for this second study. Non-participation was equally likely in both groups, and the most commonly cited reasons were lack of interest and difficulty travelling. (The study analyses the non-responders in appropriate detail.)

Migraine was diagnosed by standardised International Headache Society criteria. The use of preventatives (that could be protective against vascular changes) or triptans (that theoretically could provoke vascular changes) was probably not prevalent enough to affect the results.

The same imaging protocols were used for the repeat MRI scans, so that there would be a fair comparison over the nine-year period. Analysis of the number and total volume of lesions was done largely by automated software, checked manually by a blinded rater. Abnormalities were grouped into hemispheric T2-weighted deep white matter hyperintensities, infratentorial T2-weighted hyperintensities excluding those hypointense on FLAIR (i.e. not simply CSF spaces), and other infarct-like lesions in the posterior circulation territory.

Cognitive scores were measured on a number of tests and then converted to Z-scores so that they could be normalised to give an aggregate score for each patient. The association between deep white matter hyperintensity load and follow-up cognitive scores, or between hyperintensity load and change in cognition, was assessed by linear regression, adjusting for age, sex and educational level. A second linear regression model also adjusted for the presence or absence of migraine, to assess the influence of migraine on the lesion load-cognition relationship.
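The adjustment described corresponds to an ordinary least squares regression with covariates. A minimal sketch of the same idea using statsmodels on an invented data frame; the column names and distributions are my assumptions, not the study’s:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 300
    df = pd.DataFrame({
        "cognition_z": rng.normal(size=n),
        "lesion_ml": rng.exponential(scale=0.05, size=n),
        "age": rng.integers(40, 70, size=n),
        "sex": rng.integers(0, 2, size=n),
        "education_years": rng.integers(8, 20, size=n),
        "migraine": rng.integers(0, 2, size=n),
    })

    # Model 1: lesion load vs cognition, adjusted for age, sex and education.
    m1 = smf.ols("cognition_z ~ lesion_ml + age + sex + education_years",
                 data=df).fit()
    # Model 2: additionally adjusting for migraine status.
    m2 = smf.ols("cognition_z ~ lesion_ml + age + sex + education_years + migraine",
                 data=df).fit()
    print(m1.params["lesion_ml"], m2.params["lesion_ml"])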

In women only, there was an increased deep white matter hyperintensity load in migraineurs vs controls – 0.02 ml vs 0.00 ml at baseline and 0.09 ml vs 0.04 ml at follow-up. There was also a higher incidence of progression, defined as a volume increase of more than 0.01 ml (77% vs 60%; p=0.02). These were new lesions rather than enlarging pre-existing lesions. Finally, there was an increased incidence of “high” progression (23% vs 9%; p=0.03). There was no association with measures of migraine severity or its treatment.

There was no effect of presence of migraine on progression of periventricular white matter hyperintensities, or infratentorial hyperintensities or posterior territory infarcts.

The cognitive performances across a number of tests were normalised by calculating Z-scores. For the non-statisticians among us, these work like IQ scores, i.e. scores that can be directly compared even if the tests are different. A Z-score of 1.0 means that the patient’s score is one standard deviation better than the average score of the population – equivalent to an IQ of about 115 on most tests. (For a normal distribution, around 68% of scores fall between Z-scores of -1.0 and 1.0.)

Taking all the high-lesion load subjects (defined simply as the worst quintile) vs all the low-lesion load subjects (the remainder), the high lesion load ones had an overall mean Z-score of -3.7 and the low lesion load ones had a mean of 1.4. This was said to be not statistically significant (p=0.07).

(This fooled me at first, possibly not a hard thing to do. A Z-score of -3.7 would mean the high lesion load patients were on average well inside the bottom 1%! But what the authors did was simply add the individual Z-scores across 13 individual tests – they did not take the overall mean across the 13 tests. So the mean for the high lesion load group is actually only 0.28 standard deviations below the population mean. In fact their “population” is simply all their patients, so if the high lesion load patients were below average, the low lesion load patients would have to be above average, though by less in magnitude, since there were four times as many of them. Statistically, simple addition is fine, as taking the mean would just divide everything by 13 and would not change the test results.)
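To make that arithmetic explicit (my own back-of-envelope check using the figures quoted above):

    # The group "Z-scores" quoted are sums over 13 separate tests, not means.
    summed_z_high, summed_z_low, n_tests = -3.7, 1.4, 13

    print(round(summed_z_high / n_tests, 2))  # -0.28 SD per test: modestly below average
    print(round(summed_z_low / n_tests, 2))   #  0.11 SD per test: modestly above average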

As I have commented in the review for general readers, though, a p-value of 0.07 is nevertheless suggestive that there might be an effect, just that it did not reach significance in this study. The presence or absence of migraine did not influence the lesion-load effect, which had a slightly more reassuring p-value of 0.3, but again if the first effect is really borderline, I am not sure how the linear regression model they use would be expected to behave when adding in the migraine factor.

A general limitation of the study, upon which the authors comment, is that the recording of lesions is rather semi-quantitative and the confidence intervals for odds ratios are wide, suggesting wide inter-subject variability. For example, infratentorial progression was considered non-significantly associated with migraine because the p-value was 0.05. The odds ratio was 7.7 (nearly 8 times the odds of progressive lesions with migraine), but the confidence interval was 1.0 to 59.9, meaning that the lower limit sat exactly at the level of no excess risk.

Other studies have in fact shown a significant association between lesion load and general cognitive function in apparently healthy elderly subjects (van der Flier et al., 2005). Most previous studies, however, do not show a significant association between migraine itself and declining cognitive function.

The suggestive lesion load effect was not present for lesion load at baseline, only for lesion load at the 9 year follow-up; subjects with a high lesion load 9 years ago did not have a greater change in cognitive function (-0.5 for high load, 0.2 for low lesion load; p=0.4).

In summary, it seems that at a mean age of 57, despite the fact that female migraineurs have scans whose lesions progress more, lesion load per se is associated with (almost significantly) lower cognition, and the presence of migraine does not seem to tighten this possible association. The lesion load 9 years earlier in those of mean age 47 does not predict worse cognition.

Probably this is all indicating that migraine is one of many factors that can result in white matter lesions, and some but not all of these factors are associated in turn with cognitive impairment. One factor that is likely to be associated is age: lesions present when subjects were 9 years younger do not predict future impairment, but there is a suggestion that the lesions accumulated by a mean age of 57 might be associated with impairment.

In other words, while in general white matter lesions might be associated with impaired cognition, there is no evidence that the white matter lesions seen in younger patients with migraine will be associated with impaired cognition around 10 years later. This perhaps reflects the fact that in migraineurs the white matter lesions tend to be small, and to remain small though more numerous over time – perhaps a different natural history from ischaemic lesions, which become larger and more confluent as the volume load increases over time.


There is no clear association in this study between migraine and the development of cognitive deficits. There was a significant, though possibly modest, progression in lesion load on MRI beyond that of normal ageing. While there was no clear association between lesion load and cognitive deficits, the wide variability in lesion load and the detailed statistical findings indicate that the study is insufficiently powered to conclude that such an association is unlikely. It therefore remains unclear what it is about migraine that results in this excess lesion load without cognitive decline, and we cannot be completely confident that there is no age range or other subgroup of patients with migraine in whom such lesions have clinical significance. As a result, it remains unclear how we should advise patients with migraine and MRI lesions regarding cerebrovascular preventive measures.


Journal Club General Reader Review: Structural Brain Changes in Migraine (the CAMERA-2 Study)

Review for General Readers

For this paper, I decided to complete two complementary reviews. This one for general readers can be considered a background and a summary for the Journal Club Scientific Review.


For some time there has been a nagging concern among clinicians that migraine is associated with premature vascular changes in the brain. Given how common migraine is, and how commonly imaging is performed as a screening investigation for headache, there arises all too commonly an awkward situation where imaging is performed in a patient with migraine to rule out sinister pathology, and then the imaging is “not quite normal”. In fact, the imaging indicates the presence of vascular changes that are typically seen mainly in older people. Hardly reassuring.

Does this mean that every migraine attack is causing a mini-stroke, or that migraineurs, when they grow older, are more susceptible to stroke or to vascular dementia or to pre-frontal gait and balance problems? How aggressively should we address vascular risk factors in all migraine patients, about 12% of the adult population? Should we perform MRI scans on all 12%, and address risk factors in the sizeable proportion with the excess lesions, or address risk factors in all, or in none? Should we be thinking in terms of secondary prevention measures, rather than primary prevention? (Secondary prevention means preventing stroke or heart disease when such events have already occurred. The balance of risks is consequently shifted in favour of intervention despite potential side effects or risks.) What about echocardiography to screen all migraineurs for cardiac sources of emboli and for mitral valve prolapse? What about a bubble study to investigate patent foramen ovale? The questions multiply and the answers are frustratingly lacking.

These concerns over MRI appearances were confirmed by epidemiological findings, including the CAMERA study (Cerebral Abnormalities in Migraine – an Epidemiological Risk Analysis). In nearly 300 subjects with migraine, the female subgroup was indeed found to have an excess of small scattered white matter changes on MR imaging compared with 140 age-, sex- and other risk factor-matched controls. Furthermore, the more frequent the migraines, the greater the number of lesions, indicating that there could be some cumulative lesioning effect of migraine attacks.

However, this study merely corroborated the imaging findings; it did not indicate whether or not they actually mean anything for patients. Therefore the CAMERA investigators followed up their subjects, measuring changes in lesion load and recording cognitive ability on a battery of tests; the findings after 9 years are presented in CAMERA-2, the subject of this review.

Journal Review

Around two-thirds of the original CAMERA 1 study subjects with and without migraine were followed up. In females, it was found that 77% of patients with migraine had worsening of a certain pattern of imaging abnormalities called deep hemispheric white matter lesions, compared with 60% of female controls. One expects some progression simply due to age, the mean age by this second study being 57 years. The more prevalent progression in the migraine patients was nevertheless statistically significant (p=0.04). Progression of other types of brain lesions was not significantly different between female migraineurs and controls, nor was there a migrainous association in men with any kind of MRI lesion or progression thereof. Unlike the baseline findings from CAMERA 1, further progression in the number of white matter lesions was not associated with a higher frequency of migraine attacks.

Most importantly, the study failed to find any relationship between the presence or absence of MRI lesions and cognition. However, overall I would personally take these findings as leaving me “a little less worried than I was before” rather than “reassured”. This is because of the statistical detail.

The authors chose to analyse the cognitive (and fine movement task) data by lumping all the migraine and non-migraine patients together and then dividing them into the worst fifth regarding lesion load and the best four fifths. Using a statistical model involving linear regression, they found that, after correcting for prior educational level, age and sex, there was a trend for worse cognition in the smaller high lesion load group compared to the larger low lesion load group but this did not reach significance (p=0.07).

However, there is a difference between saying “the lack of statistical significance means there is no evidence for an effect of lesion load on cognition” and “the lack of statistical significance means there is positive evidence for no effect of lesion load on cognition”. This difference is often lost on journalists and publicists. Although the statistics cannot prove that cognition is worse with higher lesion load, with that p-value I for one would rather be in the low lesion load group!

They then analysed the all-important migraine issue by bringing whether or not the subjects had migraine into the high vs low lesion load cognition statistical model and they found that having migraine did not influence this (said to be lack of) effect (p=0.3). But if the effect is really borderline rather than absent, might the migraine influence be too?

There was also the very clearly non-significant finding (p=0.9) that the migraine patients overall did not have worse cognitive scores than non-migrainous controls, which is reassuring, though I think this was a straight comparison rather than one correcting for possibly higher baseline cognition or educational level.

Finally, high lesion load in the CAMERA 1 study 9 years earlier did not predict worsened cognition at the time of CAMERA 2. In other words, it seems to be more the age-related subsequent accumulation of lesions that possibly matches with poor cognition rather than the original migraine associated lesions. (Remember, nearly as many non-migrainous patients had progression in white matter changes over the nine years as migraine patients.)

While these two latter points are somewhat reassuring, we still do not get a clear answer to the question, “In the subset of female migraine patients with high lesion load, did their cognition deteriorate more from nine years earlier than that of the migraine patients with low lesion load or that of the controls with low lesion load?”


Returning to the original clinical scenario, given all the whys and wherefores I don’t think we can draw any firm conclusions from this study to provide reassurance to patients with migraine. Yes, migraineurs show more progression of lesions than expected for age. No, these lesions are not associated with the ongoing frequency of migraine attacks, and no, they are not found to be associated with impaired cognition nine years later.

One must also place studies in their context. Reviewing this paper prompted me to look further into the literature. In fact there is a reasonable body of recent evidence from long-term follow-up of migraine patients in general that there is no progressive cognitive impairment. This therefore provides further support for the argument that the MRI lesions seen in migraine do not have this clinical significance.

Nevertheless, I still cannot be confident that in no migraine patient is there any significance to their lesion load, beyond that associated with other coincidental risk factors such as diabetes. I think further follow-up of this study cohort would be helpful. For example, another ten years later when the subjects will on average be in their sixties, will there be any greater deterioration in the already-measured cognitive scores in the subset of migraine patients with more highly progressive lesions than in non-migraine patients with more highly progressive lesions? More importantly, are the high lesion load patients with migraine becoming clinically demented, or suffering increased strokes or progressive gait impairment?

I can only say that, working retrospectively from my own clinical experience, an excess risk of stroke and other vascular diseases is not something I have particularly observed in patients who had migraine when they were younger, unlike the situation in cigarette smokers and diabetics. On the other hand, in the elderly population, the occurrence of migraine attacks does seem to be a marker of vascular disease. Perhaps it is the age of the patient with migraine that is the key, and the slightly mixed findings of the study reflect that they have selected a rather mixed-aged cohort.

Link to Scientific Review of this topic.


Journal Club review: Risk Factors in Critical Illness Myopathy during the early course of Critical Illness – a Prospective Observational Study

Summary for General Readers

As discussed in the accompanying primer, I chose to review a research article (Weber-Carstens et al., 2010) I found that looked both at risk factors for development of critical illness myopathy and a new diagnostic test for it.

The premise of the test is this: traditionally, both nerve and muscle diseases are investigated electrophysiologically by inserting a tiny needle into a muscle and recording the electrical potential that occurs across the muscle when the nerve to the muscle is stimulated by a small electrical current applied through the skin over the nerve (this is only a little uncomfortable, even for a wide-awake patient). If there is shrinkage of the recorded potential due to damage, there are other clues that indicate whether it is likely to be the nerve or the muscle that is the problem. But in an unconscious patient who may have two overlapping pathologies, as described above, we need any extra information we can get. The new test stimulates the muscle directly, not the nerve, without needing voluntary co-operation on the part of the patient, and records the muscle membrane excitability. This will be abnormal in a myopathy (e.g. critical illness myopathy) but normal in a neuropathy (e.g. if the patient was in an intensive treatment unit (ITU) for Guillain-Barre syndrome or coincidentally had diabetic neuropathy).

The study followed 40 patients who had been admitted to ITU and who had been broadly selected as being at high risk because they had persistently poor scores on basic life-functions (e.g. conscious level, blood pressure, blood oxygenation levels, fever, urine output). They looked at all the parameters that could put patients at risk of developing critical illness myopathy and then analysed these against the muscle membrane excitability test measurements. It was found that 22 of the patients showed abnormalities on this test, and these patients did indeed have more weakness and require a longer ITU stay, suggesting they had critical illness myopathy. In terms of factors that would predict development of myopathy, there was an important correlation between abnormal muscle membrane test findings and a certain blood test (raised interleukin 6 level) that indicates systemic inflammation or infection. Other (possibly overlapping) correlations included the overall disease severity, the overt presence of infection, a marker indicating resistance to the hormone insulin (IGFBP1), the requirement for adrenaline (called epinephrine in the US) type stimulants and the requirement for heavier sedation.

The study’s strengths are that it highlights an important and perhaps neglected area of patient management, that it seems thoroughly conducted with a convincing result, and that it not only describes a new test but shows how it may be clinically useful, validating it against the patients’ actual clinical outcomes. I felt a possible missed opportunity was relying solely upon the notoriously insensitive Medical Research Council (MRC) strength assessment system; at the levels being recorded (from around 2 to 4) the scale is admittedly a little more discriminating, and it at least reflects something clinically relevant. Figures for the actual numbers of patients whose weakness was severe enough to delay recovery, in test-positive vs test-negative patients, would have been helpful. A quantitative limb strength measure (once the patient later wakes more fully) or a measure of respiratory effort might also have been useful. Finally, one cannot take the proportion of patients with critical illness myopathy on this test as a prevalence figure (though the authors do not purport to do this). This is because a positive test result does not necessarily indicate a clinically significant myopathy, as mentioned above, and because the patients were already selected as severe cases. A study looking at unselected ITU patients would be interesting; for example, would certain risk factors predict myopathy even in patients who were otherwise less critically ill?

This question brings me to another point that I think may be important. After reviewing the article, further reading of the wider literature on critical illness myopathy led me to understand that there are three distinct pathological types (meaning appearances under microscopy and staining), but that, to a variable extent, all three may be caused by the catabolic state of the ill patient. A catabolic state means a condition in which body tissues are broken down for their constituent parts, to supply glucose for energy or amino acids to make new protein. In a critically ill patient, the physiological response is to go “all out” to preserve nutrition for vital organs, such as the brain, the heart and the other internal organs, in the expectation that there will be little or no food intake. Especially if the patient has fever or is under physiological stress, there is also an increased demand for nutrition. So the body breaks down the protein of its own tissues for its energy supply, and the most plentiful source of this “meat”, as with any meat we might eat, is… muscle. My accompanying journal club review goes beyond the research article to look at measures to limit or correct this “self-cannibalistic” tendency in ITU patients.

Related to the issue of patient selection described above are some intriguing questions. What if the same phenomenon occurred, to a lesser extent, in other patients who were sick but not severely enough to need transfer to ITU? What would be the effect if a patient were already in a chronic catabolic state because they were half-starved as the consequence of a neurological problem affecting the ability to swallow, or if they already had a muscle-wasting neurological condition?

It is possible, for example, that this could have a major impact on the care of patients suffering from acute and not-so-acute stroke. Identifying, and specifically treating, those whose weakness is due not only to their stroke but to a superadded critical illness myopathy, induced by being generally very unwell, susceptible to infection and poorly nourished due to swallowing problems, could have a significant positive influence on rate of recovery and final outcome.

Scientific Background


Critical illness myopathy is a relatively common complication experienced by patients managed in intensive care, occurring in 25-50% of cases where there is sepsis, multi-organ failure or a stay longer than seven days. I chose a research article on this condition for online journal club review because I had previously assumed the condition was rare and knew little about it, until one of my patients was identified as having it and this prompted me to do some background reading. The study I have reviewed focuses on diagnosis and on prediction of risk factors. As a Neurologist, I was particularly concerned with the difficulty of diagnosis when the reason the patient requires ITU management in the first place is a primary neuromuscular disorder; in other words, when the critical illness myopathy is a superadded cause for their weakness. First, I describe some general background on this seldom-reviewed (by me, at any rate!) condition.


The exact incidence of critical illness myopathy, even in the well-defined situation of the ITU, is unclear and varies between studies, perhaps reflecting different case mixes and the difficulty of distinguishing it from critical illness polyneuropathy. Indeed, in some cases myopathy and neuropathy may coexist. An early prospective study by Lacomis et al. (1998) found electromyographic (EMG) evidence of myopathic changes in 46% of prolonged-stay ICU patients. Looking at clinically apparent neuromyopathy, De Jonghe et al. (2002) found an incidence of 25%, with 10% of the total having EMG and muscle biopsy evidence of myopathic or neurogenic changes. In a review by Stevens et al. (2007), the overall incidence of critical illness myopathy or neuropathy was 46% in patients with a prolonged stay, multi-organ failure or sepsis. A multi-centre study of 92 unselected patients found that 30% had electrophysiological evidence of neuromyopathy (Guarneri et al., 2008). Pure myopathy was more common than neuropathic or mixed types and carried a better prognosis, with three of six recovering fairly acutely and a further two within six months.


In a patient with limb weakness in an intensive care setting, there should be a high level of suspicion for critical illness neuromyopathy. Nerve conduction studies (NCS) and EMG may help to distinguish critical illness polyneuropathy, with more distal involvement and large polyphasic motor units on EMG, from critical illness myopathy, with more global involvement, normal sensory nerve conduction and small polyphasic units.

However, there remain potential difficulties. First, EMG is easier to interpret when an interference pattern from voluntary contraction can be obtained, but this might prove impossible with a heavily sedated or comatose patient. Second, when the patient’s primary condition is neurological, such as in Guillain Barre syndrome, myasthenia, myopathy or motor neurone disease, it may be difficult to distinguish NCS and EMG abnormalities of these conditions from those of superadded critical illness.

In cases of suspected critical illness myopathy, the most definitive investigation is muscle biopsy. Histologically, the condition manifests in one of three ways, and these may be distinguished from neurogenic changes or other myopathic disease.

Subtypes of Critical Illness Myopathy: Minimal Change Myopathy

The first subtype is minimal change myopathy. There is increased fibre size variation, some fibres appearing atrophic and angulated as they become distorted by their normal neighbours. Type II fibre involvement may predominate, perhaps because fast-twitch fibres are more susceptible to fatigue and disuse atrophy. There is no inflammatory response, and thus serum creatine kinase (CK) is normal.

Clinically, it may be apparent only as an unexpected difficulty weaning from ventilation, and the EMG changes may be mild, making muscle biopsy more critical.

The condition may lie on a continuum with disuse atrophy, but is made more extreme by a severe catabolic reaction induced by sepsis and the systemic inflammatory responses that trigger multi-organ failure (Schweickert & Hall, 2007). Muscle is one such target organ; ischaemia and electrolyte and osmotic disturbance in the critically ill patient trigger catabolism by releasing glucocorticoids and cytokines such as interleukins and tumour necrosis factor. For example, interleukin 6 promotes a high-affinity binding protein for insulin-like growth factor (IGF), down-regulating the latter and thereby blocking its role in glucose uptake and protein synthesis. This is paralleled by a state of insulin resistance. Muscle may be particularly susceptible to catabolic breakdown, being a ready “reserve” of amino acids, released by proteolysis to maintain gluconeogenesis for other vital tissues in the body’s stressed state (Van den Berghe, 2000). A starved patient may lose around 75 g/day of protein, while a critically ill patient may lose up to 250 g/day, equivalent to nearly 1 kg of muscle mass (Burnham et al., 2003). Disuse, exacerbated iatrogenically by sedatives, membrane stabilisers and neuromuscular blocking drugs, may impair the transmission of myotrophic factors and further potentiate the tendency to muscle atrophy (Ferrando, 2000).
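As a rough arithmetic check on those figures (assuming skeletal muscle is roughly 20-25% protein by wet weight, an approximation of my own rather than a figure from the paper): 250 g of protein per day ÷ 0.25 ≈ 1 kg of muscle tissue per day, which tallies with the “nearly 1 kg” quoted above.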

Subtypes of Critical Illness Myopathy: Thick Filament Myopathy

[Image: patchy myosin filament loss in thick filament myopathy]

The second histological subtype is thick filament myopathy. There is selective proteolysis of myosin filaments, seen as smudging of fibres on Gomori trichrome light microscopy and directly on electron microscopy. Since myosin carries the ATPase moiety, the loss is also apparent on light microscopy as a specific lack of ATPase staining of both type I and type II fibres. Clinically, patients may have global flaccid paralysis, sometimes including ophthalmoplegia, and difficulty weaning from the ventilator. The CK may be normal or raised. Thick filament myopathy appears to have a similar pathophysiology to minimal change myopathy, but may be especially associated with high-dose steroid administration and neuromuscular blocking agents, particularly vecuronium.

Subtypes of Critical Illness Myopathy: Acute Necrotising Myopathy

This is a more aggressive myopathy, with prominent myonecrosis, vacuolation and phagocytosis. Weakness is widespread and the CK is generally raised. Its aetiology may relate to the catabolic state rendering the muscle susceptible to a variety of additional, possibly iatrogenic, toxic factors. It may lie on a continuum with, and progress to, frank rhabdomyolysis.


There are a number of steps in managing critical illness myopathy.

  • First, iatrogenic risk factors should be identified and avoided where possible (such as the sedatives, neuromuscular blocking agents and high-dose steroids discussed above).
  • Second, appropriate nutritional supplementation may be helpful, but objective evidence for this is sparse. Parenteral high-dose glutamine supplementation may improve overall outcome and length of hospital stay (Novak et al., 2002), and, since critical illness myopathy is so common, at least some of this benefit may come from partly reversing the catabolic tendency in muscle. Other amino acid supplements and antioxidant supplements (e.g. glutathione) could have similar effects but have not been adequately trialled. There is likewise no conclusive proof in favour of androgen or growth hormone supplements, and in the latter case there may be adverse effects (Takala et al., 1999). Tight glucose control with intensive insulin therapy reduces time on ventilatory support and may protect against critical illness neuropathy, but the effect on myopathy is not clear (van den Berghe et al., 2001).
  • Finally, early physiotherapy encouraging activity may be helpful, as shown in a randomised controlled trial (Schweickert et al., 2009), perhaps preventing the amplification of catabolic effects by lack of activity.

Journal Review

The research article reviewed here (Weber-Carstens et al., 2010) describes a study of a relatively new electrophysiological test for myopathy, namely measurement of the muscle membrane’s electrical excitability in response to direct muscular stimulation. An attenuated response on this test indicates a myopathic process, unlike a reduced traditional compound muscle action potential, which could reflect either neural or muscular pathology. Furthermore, while an EMG interference pattern depends on some ongoing voluntary muscle activity, this test can be performed on a fully unconscious patient. The study uses the test to explore the value of various putative clinical or biochemical markers, recorded early in the patient’s time on ITU, that might subsequently predict the development of critical illness myopathy.

Forty patients were selected for study on the basis that they had high (poor) Simplified Acute Physiology Score (SAPS-II) values for at least three days in their first week on ITU. It was found that 22 of these subsequently had abnormally reduced muscle membrane excitability. As was also shown in a previous study, the abnormal test values corresponded to a clinical critical illness myopathy state: these patients were weaker than the others on clinical MRC strength testing and took significantly longer to recover, as measured by length of ITU stay.

The main finding was that multivariate Cox regression analysis identified blood interleukin 6 level as an independent predictor of the development of critical illness myopathy, as was the total dose of sedative received. However, the predictive value of this correlation on its own was modest. For an overall predictive test combining a cut-off IL-6 level of 230 pg/ml or more with a Sequential Organ Failure Assessment (SOFA) score of 10 or more at day 4 on ITU, the observed sensitivity was 85.7% and the specificity 86.7%. There were also other, potentially co-dependent, predictive risk factors, including markers of inflammation, disease severity, catecholamine use and IGF binding protein level. Interestingly, higher-dose steroids, aminoglycosides and neuromuscular blocking agents were not associated with critical illness myopathy in this sample.
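To make the combined cut-off rule concrete, here is a minimal sketch in Python of how such a rule, and its sensitivity and specificity, might be computed. The thresholds are those reported in the paper, but the patient records are invented purely for illustration; this is not the study’s dataset or code.

```python
# Sketch of the combined predictor reported in the paper:
# positive if IL-6 >= 230 pg/ml AND SOFA score >= 10 at day 4 on ITU.
# The patient records below are invented for illustration only.

patients = [
    # (il6_pg_ml, sofa_day4, developed_cim)
    (310.0, 12, True),   # flagged and developed CIM   -> true positive
    (250.0, 11, False),  # flagged but no CIM          -> false positive
    (450.0, 9,  False),  # IL-6 high but SOFA below 10 -> true negative
    (260.0, 13, True),   # flagged and developed CIM   -> true positive
    (120.0, 8,  True),   # missed despite CIM          -> false negative
    (90.0,  6,  False),  # not flagged, no CIM         -> true negative
    (240.0, 10, True),   # flagged and developed CIM   -> true positive
]

def predicts_cim(il6, sofa, il6_cutoff=230.0, sofa_cutoff=10):
    """Combined rule: positive only if both thresholds are met."""
    return il6 >= il6_cutoff and sofa >= sofa_cutoff

tp = fp = tn = fn = 0
for il6, sofa, cim in patients:
    positive = predicts_cim(il6, sofa)
    if positive and cim:
        tp += 1
    elif positive and not cim:
        fp += 1
    elif cim:
        fn += 1
    else:
        tn += 1

sensitivity = tp / (tp + fn)  # proportion of true CIM cases flagged
specificity = tn / (tn + fp)  # proportion of non-CIM cases correctly cleared
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

The point of requiring both thresholds is that each marker alone was only modestly predictive; demanding agreement between an inflammatory marker and an organ-failure score trades a little sensitivity for considerably better specificity.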


The study is clearly described and carefully conducted. The electrophysiological test appears to have real value, and is perhaps something that should be more widely introduced as a screening test before muscle biopsy, given the latter’s potential complications. The test can also be performed at a relatively early stage on a completely unconscious patient, so interventions to address the problem can be made in a more timely manner. Certainly I am going to discuss the feasibility of this test with my neurophysiology colleagues.

As the authors point out, the fact that they recorded blood tests such as interleukin 6 levels on only two occasions per patient may mean that they missed the true peak level in some patients; the predictive value might otherwise have been stronger. I would also have liked to see a more explicit link between their muscle membrane excitability measure and clinically relevant weakness. They show a reduction in mean MRC strength grade from around 4 to 2, which is clinically meaningful at these strength levels, but objective strength testing or respiratory effort measurements would have been advantageous, as would the actual numbers of patients who were clinically severely weakened, rather than just those with abnormal electrophysiology.

I think further study of unselected patients is important, even if it means that perhaps only 22 out of 100 rather than 22 out of 40 will have abnormal electrophysiology. This is because it might not only be patients selected on the basis of persistently poor physiology scores who develop critical illness myopathy, and a predictive marker in otherwise low-risk patients might prove even more useful.
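One caveat worth making explicit: at a lower prevalence in an unselected cohort, the predictive value of a positive test would also fall, even if sensitivity and specificity held up. A small illustrative calculation of my own (not an analysis from the paper), holding the reported 85.7% sensitivity and 86.7% specificity fixed:

```python
# Illustration (my own, not the paper's analysis): how the positive predictive
# value (PPV) of the combined IL-6/SOFA rule changes with prevalence, holding
# the reported sensitivity (85.7%) and specificity (86.7%) fixed.

def ppv(sens, spec, prev):
    """PPV via Bayes' rule: P(myopathy | positive test)."""
    true_pos = sens * prev               # true positives per patient screened
    false_pos = (1 - spec) * (1 - prev)  # false positives per patient screened
    return true_pos / (true_pos + false_pos)

SENS, SPEC = 0.857, 0.867
for label, prev in [("selected high-risk cohort", 22 / 40),
                    ("hypothetical unselected cohort", 22 / 100)]:
    print(f"{label}: prevalence {prev:.0%}, PPV {ppv(SENS, SPEC, prev):.0%}")
# Roughly: PPV ~89% at 55% prevalence, falling to ~65% at 22% prevalence.
```

In other words, in a less selected population a positive result would more often be a false alarm, which strengthens the case for validating the marker directly in that setting.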

By way of general observation rather than opinion on this research, and extending the argument about investigating less critically ill patients, I have wondered whether critical illness myopathy might in fact occur in acutely unwell patients who do not reach ITU at all. Many neurological and other conditions predispose to catabolic states: chronic infection or inflammation, pre-existing disuse atrophy, steroid treatment, or chronic malnourishment due to poor care or poor or unsafe swallowing before the deterioration that required acute hospital care. Even patients without pre-existing disease, such as those who have suffered acute stroke, may subsequently be susceptible to a catabolic state due to aspiration, other infection, immobility or suboptimal nutrition. One can speculate that large numbers of patients with stroke, multiple sclerosis relapse or other acute deteriorations requiring neurorehabilitation may have significantly impaired or delayed recovery due to unrecognised superadded critical illness neuromyopathy. Certainly in stroke, important measures found to improve outcome, such as early physiotherapy and mobilisation, early attention to nutrition, treatment of infection and good glycaemic control, happen to be among the key elements in treating critical illness myopathy. More directed and aggressive management along these lines, in the subgroup of these patients who have markers for critical illness myopathy, might further accelerate improvement and achieve a better final outcome.


Primer on Critical Illness Myopathy for General Readers

Neurology in Critical Care

Despite the fact that Critical Care and Neurology are each, in their own right, relatively “glamorous” medical disciplines, neurological disease in the critical illness setting receives relatively little attention. However, if one is in the business of intervening to make major improvements to patients’ outcomes (which we should be), then perhaps Neurologists as a group should focus a little more on this clinical setting.

There are two ways in which neurological diseases impact on critical care, typified by a patient management setting such as an intensive treatment unit (ITU) or high dependency unit.

  • First, a number of neurological diseases constitute the primary reason why patients need critical care. Examples vary from stroke, the most common cause of disability in developed countries, to Guillain Barre syndrome, myasthenia gravis, inflammatory encephalopathies and rare metabolic diseases. Some of these conditions have the potential to remit spontaneously or with treatment, and so if the patient can be tided over a critically ill period successfully, the eventual prognosis may be excellent. Optimal management of such patients may therefore make a huge difference to patient outcome.
  • Second, even when the primary condition is not neurological, the critically ill patient may suffer a number of secondary neurological complications which may then become a major factor limiting outcome. These include delirium and hallucinations, nerve pressure palsies, critical illness neuropathy and critical illness myopathy; the last of these is the focus of this post.

Critical Illness Myopathy

A myopathy simply refers to any disease of the muscles, while a neuropathy refers to disease of the nerves, whose function is to transmit movement signals to the muscles or sensory signals back to the brain. For reasons that are not entirely clear, but which we will speculate upon, the muscles (more commonly) and the nerves are susceptible to damage in any patient undergoing intensive care; a myopathy occurs in 25-50% of cases where there is sepsis, multi-organ failure or a stay longer than seven days. At worst this may result in lasting disability; at best it may still significantly delay weaning off the ventilator and return to mobility. This has implications for cost, as well as for the extra suffering experienced by such patients.

The reasons why I wanted to conduct a journal review on this topic, for which this is the accompanying primer, are:

  • I had incorrectly assumed that critical illness myopathy was very rare until I had cause to research it in relation to one of my patients, and I wonder if some colleagues might be under a similar misapprehension.
  • I wanted to explore any treatment options for this common and important condition.
  • I wanted to see if there were risk factors that would predict the likely development of critical illness myopathy before patients develop it, and ways to diagnose it accurately when they do.
  • In reference to the latter, I was particularly concerned with the difficulty of diagnosis when, as is hardly unexpected if one is a Neurologist, the primary condition requiring the patient to have intensive care is itself neurological. How may we determine, for example, whether a patient’s failure to wean from ventilation or to regain muscle strength is due to their Guillain Barre syndrome or to a secondary critical illness neuromyopathy?

More Background Information

There is a website providing information and support for patients and relatives with problems related to critical care called ICU Steps.
