Journal Club Review: Driving after a Single Seizure

Background

One of the main issues facing a patient diagnosed as having had a first epileptic seizure without any sinister underlying lesion – often a young adult who is otherwise well – is the driving ban. One can only be sympathetic to the impact it may have on travelling to work or, for some, on performing their job at all. Some react with understanding, while others take the attitude that they will never expose themselves or others to harm even if the risk is tiny and they later become legally entitled to drive. A few react with incredulity: “I totally lost consciousness without warning, may do so again at any time, and you are ruining my career or social life by preventing me from driving for several months?!”

This can be a difficult conversation for clinicians, but at least one can remind oneself that the conversation might have been more difficult had the cause of the seizure been a brain tumour rather than cryptogenic, in which case the patient might have only months to live.

Two other points can help. First, in the European Union and in most other countries the rules are standardised and set by government authorities; the physician is only explaining the law of the land. In the US, some states have similar standard rules while others, perhaps unfortunately, leave it to the doctor or to a medical review panel. Second, these rules were developed and modified after extensive review and consultation. Briefly communicating this process may help the patient to appreciate that the rules are designed to protect, not to punish. The paper reviewed here describes statistical data on the risk of seizure recurrence that were used to help develop a consistent European Union guideline, which informs the UK’s Driver and Vehicle Licensing Agency (DVLA) guideline (2013) and could be used to help doctors who must form their own guidelines.

The paper was published in the good old British Medical Journal (BMJ, 2010) and reanalyses data from the Multicentre study of early Epilepsy and Single Seizures (MESS, 2005), specifically for patients over 16 years of age who had a single unprovoked seizure, looking at the 12-month risk of recurrence at certain time points after the index seizure. In other words, if a patient has already gone some months following an initial seizure without a subsequent seizure, how likely are they to remain seizure-free for another 12 months?

There is an accompanying commentary on this site that discusses the original MESS study in more detail, along with the wider issues around prognosis and management after a single seizure. Clearly, the data in this paper are helpful for prognosis, but only in patients who have already gone a certain period seizure-free after their initial event.

Study design

The original MESS study’s inclusion criterion was that both patient and physician were uncertain about whether or not to start antiepileptic medication. Exclusion criteria included previous treatment with antiepileptic drugs or the presence of a progressive neurological disease. Of around 1800 patients meeting the criteria, about 1400 were enrolled; the remainder refused on the basis that they did not want to be randomised. The demographics of those enrolled showed no particular bias.

Patients were randomised to immediate treatment – the drug of the physician’s choice as early as possible after seizure (usually carbamazepine or sodium valproate) – or to deferred treatment, generally if the patient had a second seizure.

While there were around 720 patients with single seizures in each arm of MESS, the BMJ reanalysis included around 320 in each arm: those aged 16 or over who had had only one seizure at the time of randomisation and whose date of seizure (as opposed to the date of randomisation used in the MESS analysis) was known.

Findings

The main finding of the BMJ reanalysis was that in the immediate treatment group the risk of recurrence over the next 12 months, having already gone 6 months without a seizure after the first seizure, was 14% (95% confidence interval (CI) 10-18%). In the deferred treatment group the corresponding risk was 18% (95% CI 13-23%). If a patient in the deferred treatment group had already gone 12 months without a second seizure, their chance of recurrence dropped to 10% (95% CI 6-15%).

The general principle regarding driving has been set, somewhat arbitrarily, that if the risk of a seizure over the next year is below 20%, the patient may drive a private vehicle; if it is below 2%, they may drive a public or heavy goods vehicle. This is not a medical but a policy decision, presumably taking into account the proportion of time the average person spends driving and the likelihood of harm to self and others should an accident occur as a result. The role of clinicians is simply to provide guidance on which patients have a 20% or greater risk.

It can be clearly seen from these data that if a patient starts treatment, their 12-month risk at 6 months after a seizure is lower than 20%; they may therefore be allowed to drive at 6 months. The same applies to patients not on treatment – if one takes the point estimate of risk of 18%. However, if a clinician were asked, “At what time would you be confident that the risk of recurrence in the next 12 months is less than 20%?”, he or she should use the upper confidence limit for the risk, and by that standard the 23% figure for patients not on treatment is too high. Only if patients not on medication have already gone a year without seizures is the upper confidence limit, at 15%, acceptable.
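
To make this reasoning concrete, here is a minimal sketch in Python of the decision rule. The counts are illustrative values of my own, chosen to reproduce roughly the quoted 18% (13-23%) figure; the paper itself used survival-analysis methods, so a simple binomial interval is only an approximation of the logic:

```python
import math

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion events/n."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def may_drive(events: int, n: int, threshold: float = 0.20) -> bool:
    """Cautious reading of the rule: permit driving only if the UPPER
    95% confidence limit of the 12-month recurrence risk is below 20%."""
    _, upper = wilson_ci(events, n)
    return upper < threshold

# Illustrative counts only: ~18% observed recurrence in ~230 untreated
# patients gives an interval of roughly 13-23%, as quoted in the paper.
lo, hi = wilson_ci(41, 230)
print(f"risk CI: {lo:.0%}-{hi:.0%}, may drive: {may_drive(41, 230)}")
```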

Strengths and weaknesses of the study

As the paper itself acknowledges, a potential weakness is that the data were taken from a randomised controlled trial (MESS) of immediate vs. deferred treatment. From the inclusion and exclusion criteria, one might suspect a selection bias clustering patients of intermediate severity – those who definitely wanted medication or definitely did not were excluded. So the risks in the low-risk subcategories might be overestimated and those in the high-risk subcategories underestimated.

A potential problem was the delay of around 3 months between seizure and randomisation in MESS, which would exclude patients who had a second seizure in that time – and 3 months is half of the six-month seizure-free period in which we are interested! Fortunately, in this paper the investigators back-tracked to obtain the actual seizure date rather than the randomisation date, so the six-month seizure-free period is measured accurately from the seizure itself.

But suppose one wanted to generalise the findings to prognostication of seizure risk – surely something patients will want to know about – immediately after the seizure, which should typically be the situation, since all patients having a seizure should be promptly reviewed by a specialist. In that case we cannot use the figures from MESS (which included children) or those reviewed here. All we can do is wait three months, say until a subsequent clinic, and if the patient has not had a seizure in that time, the figures reviewed here can be used. A fuller discussion of risk prediction and treatment decisions is in the accompanying commentary on management after a first seizure.

Finally, there is the issue of validating seizures in the outpatient department, as was done in the study. Clinicians less experienced than those involved in MESS might make more mistakes in correctly identifying seizures, and patients might deny or forget seizure occurrences. This is likely to be more of a problem in real life than in a trial. So we cannot say that MESS overestimates risk, but we can say that MESS does not simulate the real-life underestimation of risk that may occur in daily practice.

Different risks in different patients

If policymakers wanted to refine the guidelines to take other factors into account, there are adjustments that could be made. In a univariate analysis, remote symptomatic seizures (seizures resulting from a brain insult – e.g. head injury, encephalitis, neurosurgery – sustained some time before the index seizure) were associated with significantly higher risk, as were the presence of a neurological deficit, seizures while asleep, an abnormal electroencephalogram (EEG), and lack of brain imaging information.

Calculating the risks for these subcategories reveals that, taking the upper confidence limits, remote symptomatic seizures, neurological deficit, sleep seizures and abnormal EEG all shift the risk above the 20% threshold after 6 months of seizure freedom, and the first two remain above the threshold even after 12 months of seizure freedom. However, the numbers in these subgroups are small and the estimates correspondingly imprecise.

A multivariate analysis of combinations of factors – much as risk of osteoporosis can be calculated – is a better way of addressing this issue. This is shown in table 5 of the paper (below), noting that patients with a first-degree relative with epilepsy, and those with sleep seizures, were excluded. The latter are a special case: while recurrent seizures are more likely (because they may reflect particular epileptic syndromes), they are also more likely to recur in sleep and so are less relevant for driving risk. The UK DVLA rules now in fact permit driving with continuing sleep seizures provided a pattern of seizures only while asleep has been established for at least 1 year.

[Table 5 of the paper: multivariate risk of seizure recurrence by combinations of risk factors]

One can see, for example, that a non-remote symptomatic seizure with an abnormal EEG has an upper confidence limit of risk of 23% at 6 months even if imaging is normal. One might argue that the current blanket rule of 6 months is rather lenient for patients with an abnormal EEG or a remote symptomatic seizure, especially if the patient is not on antiepileptic medication.

A careful reading of the wording of the current UK Driver and Vehicle Licensing Agency guidelines in fact reveals a clause: “provided no risk factors indicate a more than 20% risk of a recurrence over the next 12 months”. If this is interpreted as requiring confidence that the risk is not more than 20%, then all the above-mentioned categories would entail a 12-month rather than 6-month ban, and we would need EEGs on everyone to inform the decision. If it is interpreted as the most likely risk level (the point estimate), then an abnormal EEG still entails too high a risk if not on medication (23%), as does abnormal imaging if remote symptomatic and not on medication (22%). Only under the laxest interpretation – that it is merely possible the risk is as low as 20% – and only for a patient started on medication after a non-remote symptomatic seizure, is an EEG unnecessary, because this is the only circumstance in which the lower confidence limit of risk does not exceed 20% whether or not the EEG is abnormal.

Data from other studies

A population-based rather than outpatient-based study of 252 patients whose index seizure was a single seizure (National General Practice Study of Epilepsy, 1990) found a 37% risk of a second seizure within 12 months, and an 18% risk if the patient had already been seizure-free for 6 months. This shows just how much the risk falls once the patient has already had a modest seizure-free period. Factors increasing the risk of recurrence were symptomatic seizures, neurological deficit, and no antiepileptic drug treatment. The findings are therefore comparable to the reviewed data.

Conclusions

This paper clearly does what it intends: to ascertain whether, after 6 or 12 months seizure-free following a first seizure, the risk of a seizure over the ensuing 12 months is above or below the 20% policy threshold for safe private vehicle driving.

The analysis provides a rationale for the duration of the driving ban that might help some patients better come to terms with what may seem a punitive measure.

Partly as a result of this study, a number of changes have been made to the UK’s DVLA regulations (2013) regarding epilepsy:

  • The ban following a single seizure is reduced to 6 months from 12 months.
  • If a pattern of sleeping-only seizures is established for 1 year (formerly 3 years) the individual is allowed to drive.
  • If a patient was seizure free on medication, and then a seizure occurred as a result of a medication change, the patient can return to driving after only 6 months if they go back on the original medication.
  • If a patient has only ever had seizures that do not affect conscious level or ability to drive, they can drive a year after this diagnosis is established even if they continue to have these seizures.

However, the multivariate analysis of risk factors does raise some issues about higher risk categories, and draws attention to the clause in the DVLA guidelines “provided no risk factors indicate a more than 20% risk of a recurrence over the next 12 months”. I am not sure how many clinicians actually apply this rule.

Could a clinician be sued if a patient who had a single remote symptomatic seizure and was started on medication had a second seizure 11 months later, resulting in a fatal road accident, when the clinician had not performed an EEG – or when the EEG had been performed and found to be abnormal?

Could they be similarly sued if the patient had had any kind of first seizure but had not started medication, and the EEG was not performed or was abnormal, or if both EEG and MRI were abnormal?

Or is there a “get-out argument” that one would have to use the lower confidence limit of risk to prove that the risk was greater than 20%? In some categories even the lower confidence limits are above 20%. Happy days for lawyers, if not for everyone else…


Journal Club Commentary: Management of Single Seizures

Introduction

For this edition of the Neurology Online Journal Club I wanted to review not one but a series of papers addressing a specific issue: predicting the risk of seizure recurrence after a single seizure, and predicting how much that risk is reduced by starting antiepileptic medication. I started with the Multicentre study of early Epilepsy and Single Seizures (MESS), but there is more than one report on the same data set, and its main points prompted a more detailed look at the wider literature on the subject and some personal views. Hence I have described this as a commentary.

There is an accompanying Journal Club review that deals specifically with risk of seizure recurrence in relation to driving.

Background

Epilepsy is certainly one of the more common conditions managed in neurology and indeed in general medical practice. The lifetime prevalence of seizures (% of people who will have a non-febrile seizure at some point in their lives) is 2-5%, and the prevalence of active epilepsy is around 0.5%. A first seizure often presents as a sudden, shocking event in a previously well person, and often leaves the patient in a similarly well state with the expectation of returning to a reasonably normal life – and yet bewildered and worried. As a result, it is a condition where in my view counselling of the patient regarding management options and involving the patient in decision-making is particularly important.

A specific issue with epilepsy management is that typically there are no ongoing symptoms or abnormal clinical signs. Starting treatment may expose the patient to potential side effects without making them feel any better. We may have no idea whether or not the drug is working until it manifestly fails much later, in the form of a recurrent seizure, and even then we cannot be sure what would have happened had we not started treatment, or had we started a different treatment. In this respect epilepsy management is more akin to the management of episodic headache or TIA than of Parkinson’s disease or chronic pain.

When management revolves around predicting and minimising risk, statistics inevitably play a part. Clinicians need to have the communication skills to explain clearly to patients in broad terms the likely risks of seizure recurrence in different circumstances, and of course that means knowing those risks and understanding basic statistics themselves. Knowledge of risks is covered in this review, but communicating them remains a challenge. (For example, in the UK a survey revealed that the majority of adults did not appreciate that it was equally likely for one to roll a 6 on a die as any other number, or that a previous coin toss does not affect the result of a subsequent one.)

The key questions to which patients and clinicians need answers are:

  1. What is a specific patient’s risk of a further seizure over a certain time period? This estimate should factor in whether or not this was their first seizure, the seizure type and aetiology, the time they have already gone without a seizure and other factors that determine risk such as EEG, imaging abnormalities and family history of epilepsy.
  2. How much is this risk reduced if the patient goes on antiepileptic medication?
  3. If starting medication, and there are no further seizures, when should this medication be stopped again?

Risk after a first seizure

The FIRST study (First Seizure Trial Group Study) in 1993 reported recurrence risks of 18%, 28%, 41%, and 51% at 3, 6, 12, and 24 months if not given medication, and 7%, 8%, 17%, and 25% if given medication. Randomisation on or off medication was done within 7 days of the seizure, so this is nicely applicable to an “early clinic” or inpatient decision. The odds ratio for recurrence on medication was 0.4 – that is, the odds of a seizure on medication were 0.4 times those off medication (close to, but not quite the same thing as, seizures being 40% as likely).
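
Because odds ratios are so often misread as relative risks, a small worked example may help; the 2x2 counts below are hypothetical, chosen merely to echo the FIRST figures:

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR for a 2x2 table: a/b = events/non-events on treatment,
    c/d = events/non-events off treatment."""
    return (a / b) / (c / d)

def relative_risk(a: int, b: int, c: int, d: int) -> float:
    """RR: risk of an event on treatment divided by risk off treatment."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 17% recurrence on medication vs 41% off, 100 per arm.
a, b, c, d = 17, 83, 41, 59
print(f"OR = {odds_ratio(a, b, c, d):.2f}")     # ~0.29
print(f"RR = {relative_risk(a, b, c, d):.2f}")  # ~0.41
# When events are common, the OR is noticeably smaller than the RR.
```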

The largest single study of seizure recurrence risk with randomisation for initial treatment was that conducted by the MESS study group; here the risk of recurrence in the 404 patients randomised to immediate treatment was somewhat lower, at 18%, 32%, 42%, and 46% at 6 months, 2, 5, and 8 years after randomisation, versus 26%, 39%, 51%, and 52% in the deferred treatment group.

[Figure: Cumulative risk of recurrence in the years after a seizure. Note that it is the top figure that specifically refers to a first seizure.]

A key difference between the studies is that in the MESS study patients were generally randomised around 3 months after their initial seizure. The six-month figure is therefore the risk from 3 to 9 months after a seizure, having already gone about 3 months without one.

A further analysis of a subgroup of MESS patients, published in the BMJ (2010), looked specifically at implications for driving and is the subject of a complementary journal club review. This subgroup naturally consisted of those over 16 years of age whose seizure-free period could be dated back to the first seizure rather than to the time of randomisation; it found that the 12-month risk of a seizure, having already gone 6 months without a second seizure, was 18% off medication and 14% on medication, a difference that did not reach statistical significance.

The lower risk found in the MESS study than in the FIRST study is supported by a prospective study without treatment randomisation (Hauser et al., 1998), conducted largely in adults; the risk of a first recurrence was 21%, 27%, and 33% at 1, 2, and 5 years after the initial seizure. In those who recurred, the risk of a second recurrence was 57%, 61%, and 73% at 1, 2, and 5 years after the first recurrence. The risk of a second recurrence approached 90% after remote symptomatic seizures (those secondary to a brain insult at a previous time, and therefore indicating an ongoing risk) and was 60% following cryptogenic/idiopathic seizures.

A problem with comparing and interpreting study data lies in patient selection. While 1443 patients were randomised in the MESS study, another 404 did not consent to randomisation. Those at lowest risk might not want to consider taking medication at all, while those at highest risk might not want to chance going without it. Furthermore, an actual selection criterion was that, for ethical reasons, both patient and clinician had to be unsure about whether or not to start medication for the patient to be invited to participate.

It is likely that low-risk groups in such a study will have overestimated risk, while high-risk groups might have underestimated risk and underestimated treatment effect. This possible shortcoming is important in guiding actual practice. If there is a policy from opinion leaders that treatment is not warranted for first seizures, this might be interpreted rigidly by others as a blanket rule, and those patients at high risk after a first seizure – the very patients who might not have enrolled in the study – might not even get counselling about the possibility of taking medication.

Finally, different studies may have differing proportions of seizure types. The MESS study took anyone over the age of 1 year, and there may have been a relatively high proportion presenting with a single minor complex partial seizure.

Decision to treat

Most epileptologists do not treat a single seizure. In fact they define epilepsy as two or more seizures, to try to exclude the significant proportion of individuals who have a single seizure and no further attacks.

Perhaps this conservative strategy is because of the side effects of antiepileptic drugs. These include potential teratogenicity if falling pregnant while on the drug, long-term effects contributing to osteoporosis, possible long-term effects on fertility and possible long-term effects on cognition (mainly mooted in children).

However, there are now many antiepileptic drugs from which to choose, increasing the chance of finding one to suit, and modern drugs may minimise many of these risks. If one looks at the side effects of most drugs taken for any length of time, the list looks at least as scary as that for modern antiepileptics. For example, most anti-migraine drugs also have potential teratogenicity.

If a cardiologist said to a patient who had just had a heart attack, “Well, you could have secondary prevention to reduce your risk of a subsequent myocardial infarction (MI) over the next year from 41% to 17% (using the FIRST trial data), but we won’t bother because we don’t really say you have heart disease until you get your second MI”, the patient would be dialling up for a second opinion before the cardiologist had finished the sentence! And secondary preventatives such as beta-blockers, antiplatelets and statins, and certainly coronary stenting procedures and coronary artery bypass grafts, are not without their own risks.

While the mortality associated with a generalised tonic clonic seizure is lower than that for an MI, it is not insignificant. Quite apart from the circumstances of the attack potentially posing a risk, there is a small but well-documented risk of sudden unexpected death in epilepsy, thought to relate to a number of factors including the extreme autonomic disturbance that occurs during the attack. The event may occur in a young completely healthy person out of the blue, reflects a total loss of self-control, may be potentially embarrassing and stigmatising, and may leave the patient exhausted or potentially even in a psychotic state for days afterwards. I think any trivialisation of a seizure in comparison with an MI can only reflect an age-old prejudice against neurological disease that it is “difficult”, “untreatable” and not suffered by “normal” people.

But other data presented here show that if for some reason an adult patient only saw someone in a position to advise on antiepileptic treatment about six months after their first seizure (the BMJ reanalysis dated follow-up from the seizure, not from recruitment), and they had not had a second seizure in that time, the 12-month seizure risk figures are only 18% vs 14%. This presents a completely different picture of the risk of treatment side effects versus the reduction in risk of seizures.

Stratification of Risk

Another follow-up to the MESS study (2006) stratified risk of seizure recurrence according to a scoring system (below).

[Figure: Scoring system for stratification of risk of recurrence after a single seizure according to the MESS study data.]

Half of the patients in the MESS study were used to investigate these risk factors and develop the scoring system; the other half were used to see whether subgroups divided post hoc according to this risk stratification would derive differing benefits from medication. It was found that all but the lowest-risk subgroup would benefit from medication (see below); in the lowest-risk category it bizarrely appears that avoiding treatment is non-significantly protective (p=0.2).

[Figure: Kaplan-Meier estimates of the probability of seizure recurrence in the different risk groups. “Start” and “delayed” treatment refer to treatment started at randomisation or deferred until subsequent seizures.]
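
For readers curious how such curves are produced, here is a minimal sketch using the lifelines library on synthetic data of my own invention (not the MESS dataset):

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
kmf = KaplanMeierFitter()

# Synthetic follow-up: months to recurrence (exponential, with the stated
# median) and whether recurrence was observed before censoring at 60 months.
for label, median_months in [("low risk", 120.0), ("high risk", 18.0)]:
    times = rng.exponential(scale=median_months / np.log(2), size=200)
    observed = times <= 60
    times = np.minimum(times, 60)
    kmf.fit(times, event_observed=observed, label=label)
    ax = kmf.plot_survival_function()  # proportion still recurrence-free

ax.set_xlabel("months since randomisation")
ax.set_ylabel("proportion seizure-free")
```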

This information could therefore provide a basis for individualising risk assessment and individualising decisions to treat on that basis, or at least providing a default strategy. However, it would be applicable only to patients seen in a clinic fully three months after their seizure who had not already started medication or had another seizure in the meantime.

When to stop treatment

If one is to embark on treatment, perhaps controversially so after a first seizure, when does one stop?

Antiepileptic drugs are probably only protective while being taken. This is indirectly illustrated by the long-term remission figures in the MESS trial: initial treatment decisions did not affect the overall figure of 92% of patients being at least 2 years seizure-free 5 years after enrolment. In other words, if treatment was deferred until a second seizure, patients were just as likely eventually to go into remission, but had obviously had more than one seizure along the way and might still be on medication at that point.

One rationale would be to treat for as long as the drug appears from population studies to be significantly reducing the risk of a subsequent seizure.

The longer the patient remains seizure-free, the less the 3-month recruitment delay matters, and the more closely the original MESS data correspond to what immediate recruitment would have shown. From this study’s long-term follow-up, the 2-year risks were 32% vs 39%, the 5-year risks 42% vs 51%, and the 8-year risks 46% vs 52%. There is probably a diminishing return over time, but it is difficult to draw a firm conclusion as to the significance of this reduced risk at different times.

Most studies specifically looking at timing of antiepileptic withdrawal are on patients who had had more than one seizure, precisely because most clinicians do not start treatment for a single seizure in the first place! Obviously the findings cannot be applied to those who had a single unprovoked seizure, because the overall risk is lower in this group.

One study (JNNP 2002) of patients who had mainly had multiple seizures, but which at least selected patients on monotherapy and so tended to reflect more easily controlled cases, found that after 2 years seizure-free the 12-month recurrence risk was 9% continuing medication vs 26% stopping; on multivariate analysis the hazard ratio was 2.6 (95% CI 1.5-4.8), dropping to 1.6 (1.0-2.6) if 3 to 5 years seizure-free, and to 1.0 if more than 5 years seizure-free. So, after multiple seizures, only once a patient has been seizure-free for more than 5 years is there clearly no excess risk from stopping medication.
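
Hazard ratios of this kind come from a Cox proportional hazards model. As a hedged illustration of the shape of such an analysis (synthetic data and toy effect sizes of my own, not the JNNP dataset), using the lifelines library:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 200
stopped = rng.integers(0, 2, n)       # 1 = medication withdrawn
years_free = rng.integers(2, 8, n)    # seizure-free years at decision time

# Toy times-to-recurrence: hazard roughly doubles if medication is stopped
# and falls with a longer seizure-free period (illustrative numbers only).
scale = 60.0 / (2.0 ** stopped) * (years_free / 3.0)
times = rng.exponential(scale)
observed = times <= 36                # censor follow-up at 36 months
times = np.minimum(times, 36)

df = pd.DataFrame({
    "months": times,
    "recurred": observed.astype(int),
    "stopped_medication": stopped,
    "years_seizure_free": years_free,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs
```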

Conclusions

Umm…

We have conflicting risks, conflicting risk reductions from medication and data that apply only in specific circumstances.

What we need is a large multi-centre study that:

  • Randomises patients immediately, so we can make an informed treatment decision at an appropriate time when the recurrence risk is highest
  • Subdivides into age groups, as the paediatric population and geriatric population may have different seizure aetiologies from young adults, and even different clinicians.
  • Subdivides according to generalised tonic clonic versus complex partial seizures. The latter are by no means as severe and dangerous, and one might imagine that if the first seizure is complex partial, there may be a higher chance of a subsequent one being of the same type.
  • Stratifies risk as in the MESS study, taking account of EEG, MRI, neurological deficit and cognitive impairment.
  • Uses more modern drugs – nowadays lamotrigine and levetiracetam are common first-line agents, as opposed to carbamazepine and valproate which were the drugs that mainly featured in the MESS study. While these are admittedly not clearly more efficacious, they are better tolerated.
  • Includes an analysis of the side effects of drugs in those randomised to treatment, and the quality of life impact of these side effects and of the “inconvenience” factor of taking regular medication.

Given the current lack of clear data, we are left with clinical judgement and patient preference.

My practice with a patient who has just had a generalised tonic-clonic seizure is largely to ignore the data from MESS indicating that, when the EEG and neurological examination are normal, treating a first seizure non-significantly increases risk. How much were the data distorted by randomisation occurring 3 months after the seizure? How many in this category had had a complex partial seizure? A particular problem is that often I am not going to get an EEG within a week of the seizure, so a major risk stratification factor is unknown at the most important time to start treatment. I quote the FIRST trial as a “worst case scenario”, something like:

The risks of recurrence could be as high as 41% over the next year and medication could reduce this to 17%. However, given your neurological examination and imaging (and possibly EEG) are normal, and there is no particular evidence of a recurrent epileptic syndrome (e.g. clear family history, developmental delay, juvenile myoclonic epilepsy), the risk may be appreciably lower and the benefit of medication therefore appreciably less. The risk, which includes a slight risk of sudden death as a result of a second seizure, must be balanced against the risk of side effects of taking medication.

Particular factors relevant for you might be the further 12-month driving ban after a subsequent seizure, and teratogenic risk of drugs if you fall pregnant while taking them. (Though lamotrigine and levetiracetam have rather favourable teratogenic risk profiles.)

Then, when it comes to stopping medication, as this should really be addressed before starting:

Since you have only had one seizure, we would empirically consider you in the generally accepted “best category in whom one would initially treat” and advise at least 2 years treatment assuming no further seizures. This 2 year figure is somewhat arbitrary, reflecting that FIRST demonstrated continued risk reduction two years after starting medication but did not investigate a longer period.

If the patient has had a single complex partial seizure and no risk factors, I would explain:

For this relatively minor seizure type there is a lack of evidence for treatment and most patients are not treated. Only if you are very keen on treatment, e.g. regarding driving, would I offer it to you after counselling on potential drug side effects.

If the patient is in the medium or high risk category according to the stratification of the MESS data, in other words neurological deficit, developmental delay, cognitive impairment, features of an epileptic syndrome, or if I have an EEG already and it is abnormal, or perhaps an epileptogenic lesion on an MRI scan to boot, I will tend to use the MESS data:

A potentially risky time for seizure recurrence is in the next 3 months. Even if you do get to three months without a seizure a major study has shown that the risk of a second seizure by one year is 35% and medication may reduce this to 24% (or for the high risk category 59% to 36%). Given these risks, and the slight possibility of death from a seizure, I would advise treatment despite the potential risks of drug side effects unless you had any particular issues.

And for stopping medication again:

The long-term 5+ year follow-up in the MESS study indicated that many patients go into seizure remission at this time after their first seizure, whether or not they started on medication initially or had seizures during this time, but those who were initially treated were less likely to have seizures in getting to that 5-year milestone. Furthermore, another study (though on patients who had had more than one seizure) showed that antiepileptic drugs may still reduce the risk of a recurrence over the subsequent 12 months if you have gone up to, but not beyond, 5 years without a seizure. Even if you remain seizure-free, I therefore generally recommend 5 years of treatment before slow medication withdrawal.

If the first presentation was with status, the risk of recurrence is not much greater, but the risk of recurrent status is, and so I would advise at least a 5-year seizure-free period before withdrawal even if there are no risk factors. And, moving away from the single-seizure scenario, if the patient has had many seizures before the seizure-free interval, or there is evidence of an ongoing epileptic syndrome, then even beyond 5 years seizure-free I counsel that there is always a risk of recurrence and that staying on antiepileptics may reduce this risk, though they have not been proven to do so.

If one happens to see the patient for the first time at around 3 months after the event, and one has an EEG, then I think one might directly apply the MESS reanalysis of stratification of risk, namely to recommend treatment only if the patient is not in the lowest risk category. However, if the seizure was generalised tonic clonic, I am still uncertain about the applicability of that study, and I counsel the patient that while there is no clear evidence for treatment from clinical trials there are still arguments for as well as against treatment.

Of course, all these recommendations would only be a basis for discussion. Some patients may be focussed on taking a medication for any possible benefit, to minimise the risk of extended driving bans or of sudden unexpected death in epilepsy. Others may not want to risk drug side effects unless the drugs are of proven benefit, or may refuse any possible teratogenic risk (despite the risk that a generalised tonic-clonic seizure in a mother poses to her unborn baby). I do counsel strongly that if one does embark on medication for an unprovoked seizure there is little point in taking it for less than 2 years. I also counsel up front about the UK’s 3-month recommended period off driving at a future time of medication withdrawal; even this relatively short time off driving, during the potentially risky period immediately after drug withdrawal, could have important implications for a patient who has by then been back driving for 18 months.


Primer on Statistics for Non-Statisticians

Many of the journal articles discussed here assume a knowledge of statistics. In fact, it is often the statistics that are the crucial issue in a critical review of a research study. And, paradoxically, it seems that the further we move from the more rigorous field of basic science towards the more “accessible” field of clinical medicine, the more, not less, complicated the statistics become.

“Hard” science might involve testing a complex hypothesis with a single complex experiment in a controlled, perhaps in vitro, environment. The experiment might have a few runs, or a few test subjects or perhaps only one. Statistics are all about estimation and sampling, so little if any statistics may be involved after the result is obtained – especially if there is only one result!

On the other hand, a clinical medicine study might involve a relatively easy-to-conceptualise hypothesis and easy measurements, but tested on real-life subjects where there are myriad other variables over which the investigator has no control. As a consequence, the test may have to be repeated in many different subjects in order to minimise the “noise” of random variability and maximise the “signal” of the variable under investigation. With repetition, the “signal” is amplified in an additive fashion, while the “noise” tends to cancel out. Furthermore, in clinical medicine the hypothesis may be vaguer; the investigation might be an empirical study of a number of different factors which might interact with one another. Often, the vaguer the hypothesis, the more advanced the statistics required to make any sense of the data.
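
A quick simulation illustrates the point: the “signal” (the true mean effect) stays put, while the “noise” on the estimated mean shrinks with the square root of the number of subjects. This is a sketch of the principle only, not of any particular study:

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect, noise_sd = 1.0, 5.0  # small signal buried in large variability

for n in (10, 100, 1000, 10000):
    # Each subject's measurement = true effect + that subject's "noise".
    sample = true_effect + rng.normal(0.0, noise_sd, size=n)
    sem = noise_sd / np.sqrt(n)  # standard error of the mean
    print(f"n={n:>5}: observed mean = {sample.mean():+.2f}, "
          f"expected noise on the mean = ±{sem:.2f}")
```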

So I am really simply warning the reader, in a rather long-winded manner, that one may find the most advanced statistics lurking behind the abstracts of the most seemingly accessible research, and that probing the authors’ statistical interpretation of their data is sometimes the key to deciding how seriously to take their findings.

With this in mind I have attempted a statistical primer for the non-statistician, perhaps to dip into as a statistical topic comes up in a journal review, or perhaps to peruse in a more thorough manner. The contents link is below:

Primer on Statistics for Non-Statisticians: Introduction and Contents


Journal Club Scientific Review: Structural Brain Changes in Migraine (the CAMERA-2 Study)

Scientific Review

For this paper, I decided to complete two complementary reviews. The Journal Club General Reader Review can be considered a background and a summary for this scientific review.

Background

It has been suggested for some time that, for a given age, migraine is associated statistically with an excess of white matter lesions as seen on MRI. Possible explanations lie in a pro-coagulant or pro-inflammatory state of cerebral blood vessels during a migraine attack, or recurrent paradoxical emboli. Of course, complicated migraine results in transient neurological symptoms that could have a vascular basis and migraine is associated with clinical stroke, albeit rarely, so there is a potential clinical correlate of such changes at least in some patients with migraine.

The MRI lesion association was corroborated by, among others, the CAMERA study, which compared the presence of MRI lesions in 295 patients with migraine against 140 controls matched for age, sex, diabetes and hypertension. There was a higher prevalence (and, where present, a greater total volume) of deep white matter T2-weighted hyperintensities. There were also more lesions if the migraineur’s attacks were more frequent.

The CAMERA 2 study, the topic for this journal club, follows up these subjects nine years later, looking at progression of their MRI abnormalities and their scores on a battery of cognitive tests performed at the end of the study period.

Journal Review

There were 203 of the 295 original migraineurs and 83 of the original 140 controls available for this second study. Non-participation was equally likely in both groups, and the most commonly cited reasons were lack of interest and difficulty travelling. (The study analyses the non-responders in appropriate detail.)

Migraine was diagnosed by standardised International Headache Society criteria. The use of preventatives (that could be protective against vascular changes) or triptans (that theoretically could provoke vascular changes) was probably not prevalent enough to affect the results.

The same imaging protocols were used for the repeat MRI scans, so that there would be a fair comparison over the nine-year period. Analysis of the number and total volume of lesions was done largely by automated software, checked manually by a blinded rater. Abnormalities were grouped into hemispheric T2-weighted deep white matter hyperintensities, infratentorial T2-weighted hyperintensities excluding those hypointense on FLAIR (i.e. not simply CSF spaces), and other infarct-like lesions in the posterior circulation territory.

Cognitive scores were measured on a number of tests and then converted to Z-scores so that they could be normalised to give an aggregate score for a patient. Association between deep white matter hyperintensity load and follow-up cognitive tests, or between deep white matter hyperintensity load and change in cognition was assessed by linear regression, adjusting for age, sex and educational level. A second linear regression model also adjusted for presence or absence of migraine to see the influence of migraine on the lesion load cognition relationship.
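
In outline, the two regression models correspond to something like the following sketch (statsmodels formula syntax on synthetic data; the variable names and effect size are my own assumptions, not the authors’ code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 286  # roughly the size of the CAMERA-2 follow-up sample
df = pd.DataFrame({
    "age": rng.normal(57, 8, n),
    "sex": rng.integers(0, 2, n),
    "education": rng.integers(1, 6, n),
    "migraine": rng.integers(0, 2, n),
    "high_lesion_load": rng.integers(0, 2, n),
})
# Synthetic outcome with a weak lesion-load effect, for illustration only.
df["cognition_z"] = -0.3 * df["high_lesion_load"] + rng.normal(0, 1, n)

# Model 1: lesion load vs cognition, adjusted for age, sex and education.
base = smf.ols("cognition_z ~ high_lesion_load + age + sex + education",
               data=df).fit()
# Model 2: additionally adjusts for migraine status, to test whether
# migraine itself modifies the lesion-load/cognition relationship.
adjusted = smf.ols("cognition_z ~ high_lesion_load + migraine + age + sex"
                   " + education", data=df).fit()

print(base.params["high_lesion_load"], base.pvalues["high_lesion_load"])
print(adjusted.params["migraine"], adjusted.pvalues["migraine"])
```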

In women only, there was an increased deep white matter hyperintensity load in migraineurs vs controls – 0.02 ml vs 0.00 ml at baseline and 0.09 ml vs 0.04 ml at follow-up. There was also a higher incidence of progression, defined as a >0.01 ml increase in volume (77% vs 60%; p=0.02). These were new lesions rather than enlargement of pre-existing lesions. Finally, there was an increased incidence of “high” progression (23% vs 9%; p=0.03). There was no association with measures of migraine severity or its treatment.

There was no effect of presence of migraine on progression of periventricular white matter hyperintensities, or infratentorial hyperintensities or posterior territory infarcts.

The cognitive performances across a number of tests were normalised by calculating Z-scores. For the non-statisticians among us, these work like IQ scores, i.e. scores that can be directly compared even if the tests are different. A Z-score of 1.0 means that the patient’s score is one standard deviation better than the average score of the population – equivalent to an IQ of about 115 on most tests. (One standard deviation is defined such that around 68% of scores fall between Z-scores of -1.0 and 1.0.)
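
Concretely, a Z-score is just a raw score re-expressed in standard deviations from the population mean, so the conversion to an IQ-style scale is a one-liner (the test numbers below are illustrative only):

```python
def z_score(raw: float, pop_mean: float, pop_sd: float) -> float:
    """Standard deviations above (+) or below (-) the population mean."""
    return (raw - pop_mean) / pop_sd

def iq_equivalent(z: float) -> float:
    """IQ-style scale: mean 100, standard deviation 15."""
    return 100 + 15 * z

z = z_score(raw=31, pop_mean=25, pop_sd=6)  # e.g. a memory test scored /40
print(z, iq_equivalent(z))                  # 1.0 -> 115.0
```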

Taking all the high-lesion-load subjects (defined simply as the worst quintile) vs all the low-lesion-load subjects (the remainder), the high-lesion-load group had an overall mean composite Z-score of -3.7 and the low-lesion-load group a mean of 1.4. This was said to be not statistically significant (p=0.07).

(This fooled me at first, possibly not a hard thing to do. A Z-score of -3.7 would mean the high-lesion-load patients were on average in the bottom 1%! But what the authors did was simply add the individual Z-scores across 13 individual tests – they did not take the overall mean across the 13 tests. So the mean for the high-lesion-load group is actually only about 0.28 standard deviations below the population mean. In fact their “population” is simply all their subjects, so if the high-lesion-load patients scored below average, the low-lesion-load patients would have to score above average, though by less in magnitude, since there were four times as many of them. Statistically, simple addition is fine: taking the mean instead would just divide everything by 13, which would not change the test results.)
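
The scale-invariance point at the end is easy to verify numerically. The sketch below uses a two-sample t-test on invented data (the paper used linear regression, for which the same invariance holds):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Summed Z-scores over 13 tests: group means as reported, spread invented.
high_load = rng.normal(-3.7, 13.0, size=57)
low_load = rng.normal(1.4, 13.0, size=229)

t_sum = stats.ttest_ind(high_load, low_load)
t_mean = stats.ttest_ind(high_load / 13, low_load / 13)
# Dividing both groups by 13 rescales the means but not the t statistic,
# so the p-values agree (up to floating-point rounding).
print(np.isclose(t_sum.pvalue, t_mean.pvalue))  # True
```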

As I have commented in the review for general readers, though, a p-value of 0.07 is nevertheless suggestive that there might be an effect – it simply did not reach significance in this study. The presence or absence of migraine did not influence the lesion-load effect, which had a slightly more reassuring p-value of 0.3; but again, if the first effect is really borderline, I am not sure how the linear regression model they use would be expected to behave when the migraine factor is added.
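
One way to make the “absence of evidence” point quantitative is a power calculation. Here is a hedged sketch using statsmodels; the effect size and group sizes are my own assumptions for illustration, not figures from the paper:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Power to detect a "small" standardised effect (Cohen's d = 0.3) with
# ~57 high-load vs ~229 low-load subjects (ratio = nobs2/nobs1 ≈ 4).
power = analysis.power(effect_size=0.3, nobs1=57, ratio=4.0, alpha=0.05)
print(f"power ≈ {power:.2f}")  # ≈ 0.5: a real effect of this size would
                               # be missed about half the time
```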

A general limitation of the study, on which the authors comment, is that the recording of lesions is rather semi-quantitative, and the confidence intervals for the odds ratios are wide, suggesting wide inter-subject variability. For example, infratentorial progression was considered non-significantly associated with migraine because the p-value was 0.05. The odds ratio was 7.7 (nearly eight times the odds of progressive lesions with migraine), but the confidence interval was 1.0 to 59.9, meaning that the lower limit sat just at the level of no excess risk at all.
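
The width of such an interval follows directly from small cell counts. Below is a minimal sketch of the standard Woolf (log) confidence interval for an odds ratio, with hypothetical counts of my own chosen so the point estimate lands near the quoted 7.7:

```python
import math

def or_woolf_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with 95% CI from a 2x2 table (Woolf / log method).
    a,b: progression yes/no with migraine; c,d: yes/no without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical small counts: a single event in one cell blows the
# interval wide open, much as in the paper's 1.0-59.9.
or_, lo, hi = or_woolf_ci(8, 25, 1, 24)
print(f"OR = {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # ~7.7 (0.9 to 66)
```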

Other studies have in fact shown a significant association between lesion load and general cognitive function in apparently healthy elderly subjects (van der Flier et al., 2005). Most previous studies, however, do not show a significant association between migraine itself and declining cognitive function.

The suggestive lesion-load effect was present only for lesion load at the 9-year follow-up, not for lesion load at baseline; subjects with a high lesion load 9 years earlier did not have a greater change in cognitive function (-0.5 for high load, 0.2 for low load; p=0.4).

In summary, it seems that at a mean age of 57, although female migraineurs have scans whose lesions progress more, it is lesion load per se that is (almost significantly) associated with lower cognition, and the presence of migraine does not appear to strengthen this possible association. Lesion load 9 years earlier, at a mean age of 47, does not predict worse cognition.

Probably this all indicates that migraine is one of many factors that can result in white matter lesions, and some but not all of these factors are in turn associated with cognitive impairment. One factor likely to be associated is age: lesions present when subjects were 9 years younger do not predict future impairment, but there is a suggestion that the lesions accumulated by a mean age of 57 might be associated with impairment.

In other words, while in general white matter lesions might be associated with impaired cognition, there is no evidence that the white matter lesions seen in younger patients with migraine will be associated with impaired cognition around 10 years later. This perhaps reflects the fact that in migraineurs the white matter lesions tend to be small, and to remain small though more numerous over time – perhaps a different natural history from ischaemic lesions, which become larger and more confluent as the volume load increases.

Conclusions

There is no clear association in this study between migraine and the development of cognitive deficits. There was a significant, if possibly modest, progression in lesion load on MRI compared with the normal ageing process. While there was no clear association between lesion load and cognitive deficits, the wide variability in lesion load and the detailed statistical findings indicate that the study is not sufficiently powered to conclude that lesion load is unlikely to be associated with cognitive deficits. It therefore remains unclear what it is about migraine that results in this excess lesion load but not in cognitive decline, and we cannot be completely confident that there is no age range or other subgroup of patients with migraine in whom such lesions have clinical significance. As a result, it also remains unclear how we should advise patients with migraine and MRI lesions regarding cerebrovascular preventive measures.


Journal Club General Reader Review: Structural Brain Changes in Migraine (the CAMERA-2 Study)

Review for General Readers

For this paper, I decided to complete two complementary reviews. This one for general readers can be considered a background and a summary for the Journal Club Scientific Review.

Background

For some time there has been a nagging concern among clinicians that migraine is associated with premature vascular changes in the brain. Given how common migraine is, and how commonly imaging is performed as a screening investigation for headache, an awkward situation arises all too often: imaging is performed in a patient with migraine to rule out sinister pathology, and the imaging comes back “not quite normal”. In fact the imaging indicates the presence of vascular changes typically seen mainly in older people. Hardly reassuring.

Does this mean that every migraine attack is causing a mini-stroke, or that migraineurs, when they grow older, are more susceptible to stroke or to vascular dementia or to pre-frontal gait and balance problems? How aggressively should we address vascular risk factors in all migraine patients, about 12% of the adult population? Should we perform MRI scans on all 12%, and address risk factors in the sizeable proportion with the excess lesions, or address risk factors in all, or in none? Should we be thinking in terms of secondary prevention measures, rather than primary prevention? (Secondary prevention means preventing stroke or heart disease when such events have already occurred. The balance of risks is consequently shifted in favour of intervention despite potential side effects or risks.) What about echocardiography to screen all migraineurs for cardiac sources of emboli and for mitral valve prolapse? What about a bubble study to investigate patent foramen ovale? The questions multiply and the answers are frustratingly lacking.

These concerns over MRI appearances were confirmed by epidemiological findings, including the CAMERA study (Cerebral Abnormalities in Migraine – an Epidemiological Risk Analysis). In nearly 300 subjects with migraine, the female subgroup was indeed found to have an excess of small scattered white matter changes on MR imaging compared with 140 controls matched for age, sex and other risk factors. Furthermore, the more frequent the migraines, the greater the number of lesions, suggesting some cumulative lesioning effect of migraine attacks.

However, this study merely corroborated the imaging findings; it did not indicate whether or not they actually mean anything for patients. The CAMERA study therefore followed up its patients, measuring changes in lesion load and recording cognitive ability on a battery of cognitive tests; the findings after 9 years are presented in CAMERA-2, the subject of this review.

Journal Review

Around two-thirds of the original CAMERA 1 study subjects with and without migraine were followed up. In females, it was found that 77% of patients with migraine had worsening of a certain pattern of imaging abnormalities called deep hemispheric white matter lesions, compared with 60% of female controls. One expects some progression simply due to age, the mean age by this second study being 57 years. The more prevalent progression in the migraine patients was nevertheless statistically significant (p=0.04). Progression of other types of brain lesions was not significantly different between female migraineurs and controls, nor was there a migrainous association in men with any kind of MRI lesion or progression thereof. Unlike the baseline findings from CAMERA 1, further progression in the number of white matter lesions was not associated with a higher frequency of migraine attacks.

Most importantly, the study failed to find any relationship between the presence or absence of MRI lesions and cognition. However, overall I would personally take these findings as leaving me “a little less worried than I was before” rather than “reassured”. This is because of the statistical detail.

The authors chose to analyse the cognitive (and fine movement task) data by lumping all the migraine and non-migraine patients together and then dividing them into the worst fifth regarding lesion load and the best four fifths. Using a statistical model involving linear regression, they found that, after correcting for prior educational level, age and sex, there was a trend for worse cognition in the smaller high lesion load group compared to the larger low lesion load group but this did not reach significance (p=0.07).

However, there is a difference between saying “the lack of statistical significance means there is no evidence for an effect of lesion load on cognition” and “the lack of statistical significance means there is positive evidence for no effect of lesion load on cognition”. This difference is often lost on journalists and publicists. Although the statistics cannot prove that cognition is worse with higher lesion load, with that p-value I for one would like to be in the low-lesion-load group!

They then analysed the all-important migraine issue by bringing whether or not the subjects had migraine into the high-vs-low lesion load cognition model, and found that having migraine did not influence this (claimed lack of) effect (p=0.3). But if the effect is really borderline rather than absent, might the migraine influence be too?

There was also the very clearly non-significant finding (p=0.9) that the migraine patients overall had cognitive scores no worse than the non-migrainous controls. This is reassuring, though I think it was a straight comparison rather than one correcting for possibly higher original cognition or educational level.

Finally, high lesion load in the CAMERA 1 study 9 years earlier did not predict worsened cognition at the time of CAMERA 2. In other words, it seems to be more the age-related subsequent accumulation of lesions that possibly matches with poor cognition rather than the original migraine associated lesions. (Remember, nearly as many non-migrainous patients had progression in white matter changes over the nine years as migraine patients.)

While these two latter points are somewhat reassuring, we still do not get a clear answer to the question, “In the subset of female migraine patients with high lesion load, did their cognition deteriorate more from nine years earlier than that of the migraine patients with low lesion load or that of the controls with low lesion load?”

Conclusions

Returning to the original clinical scenario, given all the whys and wherefores I don’t think we can draw any firm conclusions from this study to provide reassurance to patients with migraine. Yes, migraineurs have more progression of lesions than expected for age. No, these lesions are not associated with the ongoing frequency of migraine attacks, and no, they were not found to be associated with impaired cognition nine years later.

One must also place studies in their context. Reviewing this paper prompted me to look further into the literature. In fact there is a reasonable body of recent evidence from long-term follow-up of migraine patients in general that there is no progressive cognitive impairment. This therefore provides further support for the argument that the MRI lesions seen in migraine do not have this clinical significance.

Nevertheless, I still cannot be confident that in no migraine patient is there any significance to their lesion load, beyond that associated with other coincidental risk factors such as diabetes. I think further follow-up of this study cohort would be helpful. For example, another ten years later when the subjects will on average be in their sixties, will there be any greater deterioration in the already-measured cognitive scores in the subset of migraine patients with more highly progressive lesions than in non-migraine patients with more highly progressive lesions? More importantly, are the high lesion load patients with migraine becoming clinically demented, or suffering increased strokes or progressive gait impairment?

I can only say that, working retrospectively from my own clinical experience, an excess risk of stroke and other vascular diseases is not something I have particularly observed in patients who had migraine when they were younger, unlike the situation in cigarette smokers and diabetics. On the other hand, in the elderly population, the occurrence of migraine attacks does seem to be a marker of vascular disease. Perhaps it is the age of the patient with migraine that is the key, and the slightly mixed findings of the study reflect that they have selected a rather mixed-aged cohort.

Link to Scientific Review of this topic.


Journal Club Review: Risk Factors in Critical Illness Myopathy during the Early Course of Critical Illness – a Prospective Observational Study

Summary for General Readers

As discussed in the accompanying primer, I chose to review a research article (Weber-Carstens et al., 2010) that looked at both the risk factors for the development of critical illness myopathy and a new diagnostic test for it.

The premise of the test is this: traditionally, both nerve and muscle diseases are investigated electrophysiologically by inserting a tiny needle into a muscle and recording the electrical potential that occurs across the muscle when the nerve to the muscle is stimulated by a small electrical current applied through the skin over the nerve (this is only a little uncomfortable, even for a wide-awake patient). If there is shrinkage of the recorded potential due to damage, there are other clues that indicate whether it is likely to be the nerve or the muscle that is the problem. But in an unconscious patient, who may have two overlapping pathologies as described above, we need any extra information we can get. The new test stimulates the muscle directly rather than the nerve, without needing voluntary co-operation on the part of the patient, and records the muscle membrane excitability. This will be abnormal in a myopathy (e.g. a critical illness myopathy) but normal in a neuropathy (e.g. if the patient was in an intensive treatment unit (ITU) for Guillain Barre syndrome or coincidentally had diabetic neuropathy).
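The logic of combining the two stimulation routes can be captured in a few lines. What follows is a minimal illustrative sketch only; the decision rule is mine, a simplification, and not thresholds or code from the paper:

```python
# Illustrative sketch (not the study's actual criteria): how direct muscle
# stimulation helps separate myopathy from neuropathy when the standard
# nerve-evoked response is reduced.

def classify(nerve_evoked_ok: bool, muscle_evoked_ok: bool) -> str:
    """Suggest the likely site of pathology from two stimulation tests.

    nerve_evoked_ok:  normal compound muscle action potential when the
                      nerve is stimulated through the skin
    muscle_evoked_ok: normal response when the muscle is stimulated
                      directly (muscle membrane excitability intact)
    """
    if nerve_evoked_ok and muscle_evoked_ok:
        return "no electrophysiological abnormality detected"
    if not nerve_evoked_ok and muscle_evoked_ok:
        # The muscle membrane is still excitable, so the lesion is upstream.
        return "suggests neuropathy (e.g. Guillain Barre, diabetic neuropathy)"
    # The muscle itself is inexcitable: abnormal in myopathy, whether or
    # not the nerve-evoked response is also reduced.
    return "suggests myopathy (e.g. critical illness myopathy)"

print(classify(nerve_evoked_ok=False, muscle_evoked_ok=True))   # neuropathy pattern
print(classify(nerve_evoked_ok=False, muscle_evoked_ok=False))  # myopathy pattern
```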

The study followed 40 patients who had been admitted to ITU and who had been broadly selected as being at high risk because they had persistently poor scores on basic life functions (e.g. conscious level, blood pressure, blood oxygenation, fever, urine output). The investigators looked at all the parameters that could put patients at risk of developing critical illness myopathy and then analysed these against the muscle membrane excitability measurements. It was found that 22 of the patients showed abnormalities on this test, and these patients did indeed have more weakness and require a longer ITU stay, suggesting they had critical illness myopathy. In terms of factors that would predict development of myopathy, there was an important correlation between abnormal muscle membrane test findings and a raised interleukin 6 level, a blood marker of systemic inflammation or infection. Other (possibly overlapping) correlations included overall disease severity, the overt presence of infection, a marker indicating resistance to the hormone insulin (IGFBP1), the requirement for adrenaline-type stimulants (adrenaline is called epinephrine in the US) and the requirement for heavier sedation.

The study's strengths are that it highlights an important and perhaps somewhat neglected area of patient management, it seems thoroughly conducted with a convincing result, and it not only describes a new test but shows how it may be clinically useful, validating it against the patients' actual clinical outcome. I felt that a possible missed opportunity was the reliance solely upon the notoriously insensitive Medical Research Council (MRC) strength assessment system. At the grades they were recording (around 2 to 4) the scale performs rather better, and it at least reflects something clinically relevant. Even so, the actual numbers of patients whose weakness was severe enough to delay recovery, in test-positive versus test-negative groups, would have been helpful. A quantitative limb strength measure (when the patient later wakes more fully) or a measure of respiratory effort might also have been useful. Finally, one cannot take the proportion of patients with critical illness myopathy on this test as a prevalence figure (though the authors do not purport to do this). This is because a positive test result does not necessarily indicate a clinically significant myopathy, as mentioned above, and because the patients were already selected as severe cases. A study looking at any ITU patients would be interesting; for example, would there be certain risk factors for myopathy even in patients who were otherwise generally less critically ill?

This question brings me to another point that I think may be important. Reviewing this article led me to the wider literature on critical illness myopathy, from which I understand that there are three distinct pathological types (meaning appearances under microscopy and staining), but that to a variable extent they may all be caused by the catabolic state of the ill patient. A catabolic state is a condition in which body tissues are broken down into their constituent parts to supply glucose for energy or amino acids to make new protein. In a critically ill patient, the physiological response is to go "all out" to preserve nutrition for vital organs, such as the brain, the heart and the internal organs, in the expectation that there will be little or no food intake. Especially if the patient has fever or is under physiological stress, there is also an increased demand for nutrition. So the body breaks down the protein of its own tissues for its energy supply, and the most plentiful source of this "meat", as with any meat we might eat, is… muscle. My accompanying journal club review goes beyond the research article to look at measures to limit or correct this "self-cannibalistic" tendency in ITU patients.

But related to the issue described above regarding selection of patients are some intriguing questions. What if the same phenomenon occurred to a lesser extent in other patients who were sick but not severely enough to need transfer to ITU? What would be the effect if a patient were in a chronic catabolic state already because they were half-starved as the consequence of a neurological problem that affected the ability to swallow, or if they already had a muscle-wasting neurological condition?

It is possible, for example, that this could have a major impact on care of patients suffering from acute and not so acute stroke. Identifying and specifically treating those whose weakness is not only due to their stroke but to a superadded critical illness myopathy induced by the fact that they are generally very unwell, susceptible to infection and poorly nourished due to swallowing problems could have a significant positive influence on rate of recovery and final outcome.

Scientific Background

Introduction

Critical illness myopathy is a relatively common complication experienced by patients managed in intensive care, occurring in 25-50% of cases where there is sepsis, multi-organ failure or a stay longer than seven days. I chose a research article on this condition for online journal club review because I had previously assumed the condition was rare, and knew little about it until a patient of mine was identified as having it, which prompted some background reading. The study I have reviewed focuses on diagnosis and on risk factors predicting the condition's development. As a Neurologist I was particularly concerned with the difficulty of diagnosis when the reason for the patient requiring ITU management in the first place is a primary neuromuscular disorder – in other words, when the critical illness myopathy is a superadded cause of their weakness. First, I describe some general background on this seldom-reviewed (by me, at any rate!) condition.

Epidemiology

The exact incidence of critical illness myopathy, even in the well-defined setting of ITU, is unclear and varies between studies, perhaps reflecting different case mixes and difficulty in distinguishing it from critical illness polyneuropathy. Indeed, in some cases myopathy and neuropathy may coexist. An early prospective study by Lacomis et al. (1998) found electromyographic (EMG) evidence of myopathic changes in 46% of prolonged-stay ICU patients. Looking at clinically apparent neuromyopathy, De Jonghe et al. (2002) found an incidence of 25%, with 10% of the total having EMG and muscle biopsy evidence of myopathic or neurogenic changes. In a review by Stevens et al. (2007), the overall incidence of critical illness myopathy or neuropathy was 46% in patients with a prolonged stay, multi-organ failure or sepsis. A multi-centre study of 92 unselected patients found that 30% had electrophysiological evidence of neuromyopathy (Guarneri et al., 2008). Pure myopathy was more common than neuropathic or mixed types and carried a better prognosis, with three of six recovering fairly acutely and a further two within six months.

Investigation

In a patient with limb weakness in an intensive care setting there should be a high level of suspicion for critical illness neuromyopathy. Nerve conduction studies (NCS) and EMG may help to distinguish critical illness polyneuropathy, with more distal involvement and large polyphasic motor units on EMG, from critical illness myopathy, with more global involvement, normal sensory nerve conduction and small polyphasic units.

However, there remain potential difficulties. First, EMG is easier to interpret when an interference pattern from voluntary contraction can be obtained, but this might prove impossible with a heavily sedated or comatose patient. Second, when the patient’s primary condition is neurological, such as in Guillain Barre syndrome, myasthenia, myopathy or motor neurone disease, it may be difficult to distinguish NCS and EMG abnormalities of these conditions from those of superadded critical illness.

In cases of suspected critical illness myopathy, the most definitive investigation is muscle biopsy. Histologically, it manifests in one of three ways, and these may be distinguished from neurogenic changes or other myopathic disease.

Subtypes of Critical Illness Myopathy: Minimal Change Myopathy

The first subtype is minimal change myopathy. There is increased fibre size variation, some fibres appearing atrophic and angulated as they become distorted by their normal neighbours. Type II fibre involvement may predominate, perhaps because fast twitch fibres are more susceptible to fatigue and disuse atrophy. There is no inflammatory response, and serum creatine kinase is accordingly normal.

Clinically, it may be apparent only as an unexpected difficulty weaning from ventilation, and the EMG changes may be mild, making muscle biopsy more critical.

The condition may lie on a continuum with disuse atrophy, but made more extreme by a severe catabolic reaction induced by sepsis and the systemic inflammatory responses that trigger multi-organ failure (Schweickert & Hall, 2007). Muscle is one such target organ; ischaemia and electrolyte and osmotic disturbance in the critically ill patient trigger catabolism by releasing glucocorticoids and cytokines such as interleukins and tumour necrosis factor. For example, interleukin 6 promotes a high-affinity binding protein for insulin-like growth factor (IGF), down-regulating the latter and thereby blocking its role in glucose uptake and protein synthesis. This is paralleled by a state of insulin resistance. Muscle may be particularly susceptible to catabolic breakdown, being a ready "reserve" of amino acids for proteolysis to maintain gluconeogenesis for other vital tissues in the body's stressed state (Van den Berghe, 2000). A starved patient may lose around 75 g/day of protein, while a critically ill patient may lose up to 250 g/day, equivalent to nearly 1 kg of muscle mass (Burnham et al., 2003). Disuse, exacerbated iatrogenically by sedatives, membrane stabilisers and neuromuscular blocking drugs, may impair the transmission of myotrophic factors and further potentiate the tendency to muscle atrophy (Ferrando, 2000).
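As a back-of-envelope check on that last figure (my own arithmetic, assuming skeletal muscle is roughly 25% protein by wet weight, the remainder being mostly water):

\[
\frac{250\ \text{g protein/day}}{0.25\ \text{g protein per g wet muscle}} = 1000\ \text{g wet muscle/day} \approx 1\ \text{kg/day}
\]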

Subtypes of Critical Illness Myopathy: Thick Filament Myopathy

Patchy myosin filament loss in thick filament myopathy

The second histological subtype is thick filament myopathy. There is selective proteolysis of myosin filaments, seen as smudging of fibres on Gomori trichrome light microscopy and directly on electron microscopy. Since myosin carries the ATPase moiety, the loss is also apparent on light microscopy as a specific lack of ATPase staining of both type I and type II fibres. Clinically, patients may have global flaccid paralysis, sometimes including ophthalmoplegia, and difficulty weaning from the ventilator. The CK may be normal or raised. Thick filament myopathy appears to have a similar pathophysiology to minimal change myopathy, but may be especially associated with high-dose steroid administration and neuromuscular blocking agents, particularly vecuronium.

Subtypes of Critical Illness Myopathy: Acute Necrotising Myopathy

This is a more aggressive myopathy, with prominent myonecrosis, vacuolisation and phagocytosis. Weakness is widespread and the CK is generally raised. Its aetiology may relate to the catabolic state rendering the muscle susceptible to a variety of additional, possibly iatrogenic, toxic factors. It may lie on a continuum with, and progress to, frank rhabdomyolysis.

Management

There are a number of steps in managing critical illness myopathy.

  • First, iatrogenic risk factors should be identified and avoided where possible (see list above).
  • Second, appropriate nutritional supplementation may be helpful, but objective evidence for this is sparse. Parenteral high-dose glutamine supplementation may improve overall outcome and length of hospital stay (Novak et al., 2002), and since critical illness myopathy is so common, at least some of this benefit may come from partly reversing the catabolic tendency in muscle. Other amino acid supplements and antioxidant supplements (e.g. glutathione) could have similar effects but have not been adequately trialled. There is likewise no conclusive proof in favour of androgen or growth hormone supplements, and in the latter case there may be adverse effects (Takala et al., 1999). Tight glucose control with intensive insulin therapy reduces time on ventilatory support and may protect against critical illness neuropathy, but the effect on myopathy is not clear (van den Berghe et al., 2001).
  • Finally, early physiotherapy encouraging activity may be helpful, as shown in a randomised controlled trial (Schweickert et al., 2009), perhaps preventing the amplification of catabolic effects by lack of activity.

Journal Review

The research article reviewed here (Weber-Carstens et al., 2010) describes a study of a relatively new electrophysiological test for myopathy, namely measurement of muscle membrane electrical excitability in response to direct muscular stimulation. An attenuated response on this test indicates a myopathic process, unlike a reduced conventional compound muscle action potential, which could reflect either neural or muscular pathology. Furthermore, while an EMG interference pattern depends on some ongoing background voluntary muscle activity, this test can be performed on a fully unconscious patient. The study uses the test to explore the value of various putative clinical or biochemical markers, recorded early in the patient's time on ITU, that might subsequently predict the development of critical illness myopathy.

There were 40 patients selected for study on the basis that they had high (poor) Simplified Acute Physiology Score (SAPS-II) values for at least three days in their first week on ITU. It was found that 22 of these subsequently had abnormally reduced muscle membrane excitability. As was also shown in a previous study, the abnormal test values in these patients corresponded to a clinical critical illness myopathy state: they were weaker than the others on clinical MRC strength testing, and they took significantly longer to recover as measured by ITU length of stay.

The main finding was that multivariate Cox regression analysis pointed to blood interleukin 6 level as an independent predictor of development of critical illness myopathy, as was the total dose of sedative received. However, the predictive value of this correlation on its own was modest. In an overall predictive test combining a cut-off IL-6 level of 230 pg/ml or more with a Sequential Organ Failure Assessment (SOFA) score of 10 or more at day 4 on ITU, the observed sensitivity was 85.7% and specificity 86.7%. There were also other potentially co-dependent predictive risk factors, including markers of inflammation, disease severity, catecholamine use and IGF binding protein level. Interestingly, higher-dose steroids, aminoglycosides and neuromuscular blocking agents were not associated with critical illness myopathy in this sample.
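It is worth remembering that sensitivity and specificity carry over to new settings more readily than predictive values, which depend on prevalence. A small sketch using the paper's reported operating characteristics, with the selected cohort's prevalence (22/40) and, purely hypothetically, the lower figure mooted later in this review for an unselected cohort (22/100):

```python
# Hedged illustration using the reported operating characteristics
# (sensitivity 85.7%, specificity 86.7% for IL-6 >= 230 pg/ml plus
# SOFA >= 10 at day 4). The prevalence values are taken from this
# review's discussion, the second one being hypothetical.

def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive value via Bayes' theorem."""
    tp = sens * prevalence              # true positives per patient screened
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (22 / 40, 22 / 100):
    ppv, npv = predictive_values(0.857, 0.867, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")

# prevalence 55%: PPV ~89%, NPV ~83%
# prevalence 22%: PPV ~65%, NPV ~96%
```

In other words, in a less selected population the same test would flag proportionally more false positives, which is relevant to the suggestion below of studying unselected patients.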

Opinion

The study is clearly described and carefully conducted. The electrophysiological test appears to have real value and is perhaps something that should be more widely introduced as a screening test before muscle biopsy, given the latter's potential complications. The test can also be performed at a relatively early stage on a completely unconscious patient, so that interventions to address the problem can be made in a more timely manner. Certainly I am going to discuss the feasibility of this test with my neurophysiological colleagues.

As the authors point out, the fact that they recorded blood tests such as interleukin levels on only two occasions per patient may have meant missing the true peak level in some patients – the predictive value might otherwise have been stronger. I would have liked to see a more explicit link between their muscle membrane excitability and clinically relevant weakness. They show a reduction in mean MRC strength grade from around 4 to 2, which is clinically meaningful at these strength levels, but objective strength testing or respiratory effort measurements would have been advantageous, as would the actual numbers of patients who were clinically severely weakened, rather than just those with abnormal electrophysiology.

I think further study on unselected patients is important, even if it means that perhaps only 22 out of 100 rather than 22 out of 40 will have abnormal electrophysiology. This is because it might not only be those patients selected for the study on the basis of persistently poor physiology scores who could develop critical illness myopathy. A predictive marker in otherwise low risk patients might prove even more useful.

By way of general observation rather than opinion on this research, and extending the argument about investigating less critically ill patients, I have wondered whether critical illness myopathy might in fact occur in acutely unwell patients who never reach ITU at all. There are many neurological and other conditions that predispose to catabolic states: chronic infection or inflammation, pre-existing disuse atrophy, steroid therapy, or chronic malnourishment due to poor care or poor or unsafe swallowing before the deterioration that required acute hospital care. Even patients without pre-existing disease, such as those who have suffered acute stroke, may subsequently be susceptible to a catabolic state through aspiration, other infection, immobility or suboptimal nutrition. One can speculate that large numbers of patients with stroke, multiple sclerosis relapse or other acute deteriorations requiring neurorehabilitation may have significantly impaired or delayed recovery due to unrecognised superadded critical illness neuromyopathy. Certainly in stroke, important measures found to improve outcome, such as early physiotherapy and mobilisation, early attention to nutrition, treating infection and good glycaemic control, happen to be among the key elements in treating critical illness myopathy. More directed and aggressive management along these lines in the subgroup of such patients who have markers for critical illness myopathy might further accelerate improvement and achieve a better final outcome.

Posted in Intensive Care Neurology, Myopathy

Primer on Critical Illness Myopathy for General Readers

Neurology in Critical Care

Although Critical Care and Neurology are each relatively "glamorous" medical disciplines, neurological diseases in the critical illness setting receive relatively little attention. However, if one is in the business of intervening to make major improvements to patients' outcomes (which we should be), then perhaps Neurologists as a group should focus a little more on this clinical setting.

There are two ways in which neurological diseases impact on critical care, typified by a patient management setting such as an intensive treatment unit (ITU) or high dependency unit.

  • First, a number of neurological diseases constitute the primary reason why patients need critical care. Examples vary from stroke, the most common cause of disability in developed countries, to Guillain Barre syndrome, myasthenia gravis, inflammatory encephalopathies and rare metabolic diseases. Some of these conditions have the potential to remit spontaneously or with treatment and so if the patient can be “tided” over a critically ill period successfully, the eventual prognosis may be excellent. Optimal management of such patients may therefore make a huge difference to patient outcome.
  • Second, even when the primary condition is not neurological, the critically ill patient may suffer a number of secondary neurological complications which may then become a major factor limiting outcome. These include delirium and hallucinations, nerve pressure palsies, critical illness neuropathy and critical illness myopathy; the last of these is the focus of this post.

Critical Illness Myopathy

A myopathy simply means any disease of the muscles, while a neuropathy means disease of the nerves, whose function is to transmit movement signals to the muscles or sensory signals back to the brain. For reasons that are not entirely clear, but which we will speculate upon, the muscles (more commonly) and the nerves are susceptible to damage in any patient undergoing intensive care; a myopathy occurs in 25-50% of cases where there is sepsis, multi-organ failure or a stay longer than seven days. At worst this may result in lasting disability; even at best it may still significantly delay weaning off the ventilator and the return to mobility. This has cost implications, as well as implications regarding the extra suffering experienced by such patients.

The reasons why I wanted to conduct a journal review on this topic, for which this is the accompanying primer, are:

  • I had incorrectly assumed that critical illness myopathy was very rare until I had cause to research it in relation to one of my patients and I wonder if some colleagues might be under a similar misapprehension.
  • I wanted to explore any treatment options for this common and important condition.
  • I wanted to see if there were risk factors that would predict the likely development of critical illness myopathy before patients get it, and whether patients could be diagnosed accurately when they do get it.
  • Regarding the latter, I was particularly concerned with the difficulty of diagnosis when, as is hardly unexpected if one is a Neurologist, the primary condition requiring the patient to have intensive care is itself neurological. How may we determine, for example, whether a patient's failure to wean from ventilation or to recover muscle strength is due to their Guillain Barre syndrome or to a secondary critical illness neuromyopathy?

More Background Information

There is a website providing information and support for patients and relatives with problems related to critical care called ICU Steps.

Posted in Myopathy, Primer Posts for General Readers

Journal Club Review: A Double-Blind, Delayed-Start Trial of Rasagiline in Parkinson’s Disease

Summary for General Readers

Given that levodopa was first introduced to treat Parkinson's disease in the 1960s (see the accompanying background information on Parkinson's disease for general readers), it is surprising that it was not until four decades later that a major placebo-controlled study examined levodopa therapy from the point of view of its long-term neurotoxic or neuroprotective effects. At the turn of the 21st century, the fashionable view was that levodopa therapy primed the development of dyskinesia and on-off fluctuations; it was regarded almost as a necessary evil in treating Parkinson's disease, to be delayed as far as possible into the illness.

Then came the ELLDOPA (Earlier vs Later LevoDOPA) study, which confirmed the accepted view that levodopa eventually leads to dyskinesia, but more importantly showed that patients treated adequately over nine months with efficacious medication were in a better clinical state than those starved of medication, even after the treatment was stopped for two weeks. In other words, treated patients were better even when the drug had temporarily "washed out" of their systems. Did this mean that the treatment was somehow slowing the deterioration of the disease? Not according to a parallel brain-scan study: radioactive labelling of the surviving nerve endings of the degenerating dopaminergic cells revealed that patients who had received levodopa had worse scans than those who had received nothing, despite being clinically better off.

To many "jobbing" neurologists (as some call those who spend their time managing patients on a practical basis rather than leading opinion), this simply suggested that such imaging is perhaps not so reliable a marker of disease progression, and confirmed their suspicions that fears over the dangers of levodopa therapy had been overplayed. They would have seen many of their patients do really quite well on levodopa therapy, improving significantly over their prior untreated state and remaining better than that level for a long time without complications, especially if they had been dosed cautiously. Keeping a patient under good control, thus maintaining their activities of daily living as well as possible, might easily leave them in a better state even after a temporary withdrawal than a patient left untreated to become chronically disabled. This did not necessarily imply neuroprotection.

However, debate remained intense over the possible neuroprotective effect and over the study's methodology. A series of other neuroprotection studies were carried out. The one reviewed here, called ADAGIO – an acronym which, if you can believe it, comes from "Attenuation of Disease progression with Azilect GIven Once daily" – is an example of one employing an elegant design called "delayed start". (Azilect is the trade name of rasagiline.)

The problem in studying a neuroprotective effect of a drug that also helps symptoms is that, when the only way you can measure the disease is by symptom severity, you don't know whether the patients are better because their disease course has improved or simply because they feel better from symptom control. The drug "masks" the state of the disease. The obvious solution, employed by the ELLDOPA study, is to stop the drug temporarily so that the treated and untreated groups are back on a level playing field. But another solution is to delay the start of treatment in one group compared with another. At the end of the study both groups are on the same treatment, but one group has enjoyed the treatment for longer, and has therefore had more time over which to accumulate any neuroprotective effect. (One assumes symptomatic benefits are relatively short-lasting.)
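A toy simulation makes the logic concrete. All parameter values here are invented for illustration (a symptomatic benefit of 2 UPDRS-like points, progression of 0.10 points/week untreated versus 0.06 on a hypothetical disease-modifying drug), with the 36- and 72-week time points chosen to mirror the phases described below:

```python
# Toy model of a delayed-start design (all numbers invented). Severity
# rises linearly with time; the drug gives an immediate symptomatic
# benefit plus, optionally, a slower progression rate while taken. If
# the benefit were purely symptomatic, both groups would converge once
# both are on drug; a persistent end-of-study gap points to disease
# modification.

def severity(weeks, start_week, symptomatic=2.0,
             untreated_rate=0.10, treated_rate=0.10):
    """UPDRS-like score after `weeks`, with drug started at `start_week`."""
    on_drug = max(0, weeks - start_week)
    off_drug = weeks - on_drug
    score = off_drug * untreated_rate + on_drug * treated_rate
    return score - (symptomatic if weeks >= start_week else 0.0)

for label, rate in [("symptomatic only", 0.10), ("disease-modifying", 0.06)]:
    early = severity(72, start_week=0, treated_rate=rate)
    delayed = severity(72, start_week=36, treated_rate=rate)
    print(f"{label}: early-start {early:.1f}, delayed-start {delayed:.1f}, "
          f"gap {delayed - early:.1f}")
```

Run it and the "symptomatic only" scenario ends with a gap of zero, while the "disease-modifying" scenario leaves the early-start group permanently ahead.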

The ADAGIO study investigated the drug rasagiline, a monoamine oxidase inhibitor which works by preserving more dopamine within the synapse, so making surviving dopamine nerve cells work harder (see the Primer in Parkinson’s Disease for details). It was found that those patients given the drug earlier were indeed better off by the end of the study, presumably because they had had a longer time receiving neuroprotection.

But doubling the normal dose of the drug left the patients treated earlier not better off, but worse off, and this was not simply because of symptomatic side effects of the higher dose. It is therefore not surprising that the conclusion over neuroprotection was muted, and that the debate still continues.

It may be that there are no “short-cuts” to studying neuroprotection. Fortunately, in most patients PD progresses slowly over many years. A neuroprotective agent should therefore be given the chance to work over 10+ years to measure its benefit, and patient groups on or not on the agent should be on similar best symptomatic therapy throughout, just as you would do if you were using the neuroprotective agent in real life.

In the meantime, what do we do? There is merit in the argument that we should give every PD patient the “good” dose of rasagiline, because the study suggested neuroprotection. When the results come out from a 10 year study, it will be too late for today’s patients. But many neurologists do not do this.

The first reason is economic. In health care economies that are free at the point of use, like that in the UK, costs are limited by a model that requires proof and quantification of efficacy (though there are always "political" exceptions). Clearly, there is some scientific evidence for neuroprotection from rasagiline, but it is a judgement call whether this is enough to extrapolate that patients will be better off after 10 years on the drug, because of its neuroprotection, than those on other treatment regimes. In these grey areas the drug is essentially competing with a number of other agents of uncertain cost:benefit ratio and with varying strengths of claim. Even in other health economies, and indeed in advertising in general, there are strict rules about what claims may be made about a product.

The second reason for caution over wholesale use of an agent for neuroprotection is historical. In some quarters, until around twenty years ago, there was wholesale use of selegiline, a drug similar to rasagiline, as a neuroprotective agent. This was reversed by a study (the UK Parkinson's Disease Research Group trial) that suggested increased mortality from the drug. Many patients were dismayed at this news, and when they were taken off the drug they were even more dismayed, because it is actually quite good as a symptomatic agent. Those mortality findings have since been refuted in turn, and selegiline is back to being one of a number of reasonable choices for symptom control. Of course this history cannot be directly extrapolated to rasagiline, but there is natural concern over losing the trust of another generation of neurologists and patients.

That is why it may be prudent to steer a middle course. Levodopa is not desperately neurotoxic, and it is good for controlling symptoms. On the other hand, it does cause dyskinesia, and the doubts over neuroprotection are such that most would not use it until symptoms warranted it. Similarly, rasagiline has a good role in symptom control, and is officially recommended for such use (even in "rationed" health economies), but many neurologists are cautious about using it specifically for a special long-term neuroprotective benefit.

Scientific Background

After the uncertainty surrounding the disease-modifying effects of selegiline, where the final conclusion is that it is probably neither neuroprotective nor a cause of increased mortality, a number of recent studies have re-explored the neuroprotective or neurotoxic properties of symptomatic therapies for Parkinson's disease. The ELLDOPA (Earlier vs Later LevoDOPA) study (Fahn et al., 2005) took treatment-naive patients who had had symptoms for less than two years and measured Unified Parkinson's Disease Rating Scale (UPDRS) scores during 40 weeks of treatment with either levodopa or placebo. Measurements were then taken after a two-week washout period off medication. Patients had the expected dose-dependent improvement on treatment, and the expected deterioration off treatment, but they remained significantly better than those who had been on placebo throughout. Did this mean that levodopa had been partially neuroprotective over those 40 weeks? Functional imaging performed on a subgroup in the same study gave the opposite picture: there was worse deterioration in the treated group as measured by beta-CIT SPECT dopamine transporter levels. The study therefore cast doubt on the notion that such functional imaging is a reliable biomarker of disease progression. However, a number of uncertainties remain concerning interpretation of the study's findings: i) there could be compensatory transporter up-regulation in untreated patients; ii) transporter levels might otherwise not be an accurate marker of neurodegeneration; iii) the washout period might not have been long enough to remove residual symptomatic benefit; iv) some patients in the study, having no functional imaging abnormalities, might not in fact have had Parkinson's disease.

The TEMPO study (Parkinson Study Group, 2002) conducted at around the same time employed a different design to look at the possible neuroprotective effects of the Monoamine Oxidase (MAO-B) inhibitor rasagiline. Instead of a washout at the end, there was a delayed start in one group at the beginning. Thus in the first phase, one group was given placebo and the other rasagiline. In the second phase the placebo patients were given rasagiline, and the treated patients carried on with their existing rasagiline.

Journal Review

The study reviewed here, the ADAGIO study (Olanow et al., 2009), employed the same design for the same drug and used three hierarchical statistical tests comparing delayed versus immediate start. The measure of disease severity was total UPDRS (i.e. motor aspects, non-motor aspects and disability combined). First, after the initial improvement over the 12-week wash-in period in both treated and placebo groups following commencement of therapy, the slope of subsequent deterioration in the rasagiline group from week 12 to the delayed start point at week 36 had to be less steep than in the placebo group. Second, the final UPDRS scores at the end of the delayed start period at 72 weeks had to be better in the initial start group than in the delayed start group. Finally, the slope of deterioration in the initial treatment group during the delayed start period, from the end of the second wash-in at 48 weeks to the 72-week end point, had to be no worse than in the delayed start group. In other words, initial deterioration had to be slowed in earlier treated patients; they needed to remain better off than delayed treated patients even after the latter group had started treatment; and there needed to be no suggestion that the initial treatment patients were catching up in disease progression with the delayed treated patients, such that they would eventually have become as badly symptomatic had the trial gone on longer.
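Restating those three hierarchical criteria as explicit comparisons may help keep them straight. The values in the usage example below are placeholders of my own, not the trial's fitted estimates, and the non-inferiority margin is likewise illustrative:

```python
# The three ADAGIO criteria, restated as comparisons. Slopes are in
# UPDRS points/week and endpoints in UPDRS points; all inputs here are
# hypothetical.

def adagio_criteria(slope_early_wk12_36, slope_placebo_wk12_36,
                    updrs_early_wk72, updrs_delayed_wk72,
                    slope_early_wk48_72, slope_delayed_wk48_72,
                    margin=0.15):
    c1 = slope_early_wk12_36 < slope_placebo_wk12_36       # slower early decline
    c2 = updrs_early_wk72 < updrs_delayed_wk72             # still better at end
    c3 = slope_early_wk48_72 <= slope_delayed_wk48_72 + margin  # no catch-up
    return c1 and c2 and c3

# Hypothetical fitted values, purely for illustration:
print(adagio_criteria(
    slope_early_wk12_36=0.07, slope_placebo_wk12_36=0.14,
    updrs_early_wk72=2.8, updrs_delayed_wk72=4.5,
    slope_early_wk48_72=0.09, slope_delayed_wk48_72=0.08))  # True
```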

All three of these statistical criteria were met – but only for 1 mg rasagiline (the current standard dose), not 2 mg. The authors commented that there appeared to be no difference in side effects or in drop-out rates. They suggested that a greater symptomatic benefit might mask the neuroprotective effect in mildly affected patients, and found that the criteria were still met in a post-hoc analysis of the worst affected patients. In other words, among the more mildly affected, the delayed start patients improved so well initially that, by comparison, there was little neurodegeneration for a protective effect to act upon.

But I note that the slopes of deterioration were nevertheless significant in this study in both early and delayed phases. The problem was that at the end of the 72 weeks the higher strength initial start patients had deteriorated more than the lower strength initial start patients (3.5 UPDRS points vs 2.8 points), and yet the higher strength delayed start patients had deteriorated less than the lower strength delayed start patients. The 2 mg initial start end result was, in essence, aberrantly poor.

Given this caveat, the authors concluded that there is a possible neuroprotective effect of rasagiline at the 1 mg dose, but described concerns with the study design. They established that drop-out of placebo patients suffering too badly on no treatment was not a factor, and acknowledged a trade-off: a longer initial phase would yield more potential for neuroprotection but also more placebo drop-outs due to uncontrolled symptoms.

Opinion

Rasagiline has undoubted clinical efficacy (though more modest than levodopa), a good side effect profile and an advantage over the similar agent selegiline in possibly being safer to use concomitantly with certain antidepressants, though it is still considered "prudent" not to prescribe them together. Its long duration of action makes it a good choice for reducing nocturnal and early morning symptoms.

Regarding the neuroprotective effect, in my opinion the 2 mg dosing issue remains a major problem of interpretation. A neurotoxic effect of higher doses seems unlikely. Given the differing metabolism and body habitus of individual subjects, and the way drugs generally behave, it is very unlikely that there should be so narrow a range of "special" dose.

I am not convinced that the UPDRS is a true interval scale – in other words, one in which a given change in score at a mild level of disease is equivalent to the same change at a more severe level. This is important because the analyses in this study have to assume this linearity. Yes, there are studies that indicate UPDRS linearity over time. For example, one study showed a linear 3-point annual decline on treatment, but only after treatment had "bedded in" for six months (Guimaraes et al., 2005). But is disease progression itself always linear over time? Clinical experience sometimes suggests otherwise.

So it might just have happened that more patients on 2 mg initial start, while adequately matched in terms of initial clinical severity, were at a stage of disease where they were teetering on the edge of a steeper slope of clinical deterioration. And by the same token, how can we be sure that the delayed start 1 mg patients were not similarly “unlucky”? After all, statistics is only probability and if there are a lot of unknown variables in a complex study there is a fair chance that one of them may throw up an aberration.
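To make this concrete, here is a toy calculation with entirely hypothetical numbers: suppose underlying disability progresses at exactly the same rate in two groups, but the measured score is a mildly convex transform of underlying disability. The group that happens to start slightly further along then shows a steeper measured slope despite identical true progression:

```python
# Toy illustration that a non-interval scale can mimic faster progression.
# Underlying disability d progresses linearly at the same rate in both
# groups; the measured score is a convex transform of d (the exponent
# 1.3 is arbitrary, chosen only to make the point).

def measured(d, exponent=1.3):
    return d ** exponent

def measured_slope(d_start, rate=1.0, weeks=24):
    """Change in measured score per week over `weeks` of linear true progression."""
    return (measured(d_start + rate * weeks) - measured(d_start)) / weeks

print(f"group starting at d=20: {measured_slope(20):.2f} points/week")
print(f"group starting at d=30: {measured_slope(30):.2f} points/week")
# Same true rate, yet the group further along appears to deteriorate faster.
```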

Rehabilitationists are very aware that it is much harder to regain lost function than to maintain function. Therefore the likely slope of disease severity versus function when deteriorating is not the same as that when improving; this might be called hysteresis. Could the same apply, to a lesser extent, to different rates of deterioration, e.g. allowing deterioration to occur unchecked and then trying to slow it at a later, more resistant, stage?

My arguments are just conjecture, and perhaps I am wrong about the possible non-interval scale behaviour of the UPDRS scale. But I have yet to hear an adequate explanation of why doubling the rasagiline dose statistically significantly removes neuroprotective benefit by the end of the study.

A difficulty with this and other study designs, as mentioned above, is that there is a trade-off between the neuroprotection period and the drop-out rate; a long neuroprotection period may show a bigger effect, but too many placebo patients might drop out of the trial because they were going too long without symptomatic treatment. In addition, there is a trade-off regarding initial disease severity. The ADAGIO patients had to have been diagnosed already, with two of tremor, rigidity and bradykinesia, and could then have remained untreated for up to 18 months before baseline. Many of my patients would be desperate for treatment by then! But take patients who are too mild, and there is a risk that many will not have true Parkinson's disease; that may be what happened with the normal functional imaging in some of the ELLDOPA study patients.

One sympathises with the difficulties in designing a trial to test the neuroprotective properties of a symptomatic agent, and the ADAGIO study was very carefully designed to address these concerns. But perhaps there is no short cut to a study design where a putative agent is added at an early stage to ongoing best symptomatic treatment (at least six months of such treatment, as indicated by the study on predictability of UPDRS behaviour), and continued in parallel with best symptomatic treatment for 10-20 years. Studies have looked at how much UPDRS change constitutes a clinically important difference (Shulman et al., 2010). The minimal clinically important difference, according to external criteria of disease severity, is a change of at least 4.1 points; the putative neuroprotective effects in ADAGIO did not approach this level. Given that Parkinson's disease (fortunately) has such a long time course, we would hardly expect a few months of neuroprotection to yield anything more. Unless we do a trial of this 10-20 year duration, we remain wholly reliant upon extrapolation rather than demonstration.

Posted in Parkinson's Disease

Primer on Parkinson’s Disease for General Readers

Pathology

Immunohistochemistry for alpha-synuclein showing positive staining (brown) of an intraneural Lewy body in the substantia nigra in Parkinson's disease. (Photo credit: Wikipedia)

Parkinson’s Disease (PD) is a relatively common condition affecting around 1% of all individuals aged over 60, and increasing towards 5% of those over 80. It is characterised by neurodegeneration, a “wearing out” of certain groups of nerve cells in the brain, in this case the dopamine secreting cells of a small area deep within the brain called the substantia nigra. To the naked eye, this degeneration is apparent as a visible pallor of this normally darkly coloured area, and under a microscope characteristic proteinaceous collections called Lewy bodies are seen within the nerve cells.

Main Symptoms of Parkinson’s Disease: The Triad of Bradykinesia, Rigidity and Tremor

Circuits within the basal ganglia involved in Parkinson's disease (Photo credit: Wikipedia)

Dopamine is an example of a neurotransmitter, a chemical “messenger” released from one nerve ending to the adjoining ending of another nerve to allow the transmission of a signal from one to the other. The lack of dopamine-driven connections from the substantia nigra results in failure of signalling downstream through a network of nerves in functionally linked areas collectively called the basal ganglia. The particular function of these signals may be to “turn up the volume” on various aspects of brain function, especially those controlling movement. Thus PD is characterised by a general slowness and paucity of movement called bradykinesia. There is a parallel failure to “turn down the volume” on other brain functions, namely those that increase muscle activity in the resting state, and this results in rigidity. The dopaminergic loss in general also upsets the fine balance of interconnected brain signalling and this may allow the undesirable spread of “background noise” synchronised rhythmic nerve firing – rather like removing the dampers that prevent a mechanical structure from vibrating uncontrollably at its resonant frequency. It is this rhythmic activity spreading down through nerve pathways to the muscles that results in Parkinsonian tremor.

Other Common Symptoms of Parkinson’s Disease

There are other abnormalities of function in PD that are traditionally regarded as secondary, but which in some patients are the dominant problem. The same failure that results in bradykinesia may result in subtle cognitive deficits such as "slowness" of thought and a reduced ability to focus on the task in hand, especially if multiple tasks have to be performed simultaneously. Internally initiated tasks become relatively more difficult, so patients are more reliant on external triggers or instructions. This is illustrated by the difficulty in initiating a step while walking, which can be partially remedied by a visual target to step towards or a sound to cue the timing of the step.

Finally, to a variable extent, the degeneration of PD may spread beyond the substantia nigra. This sometimes results in cognitive deficits that are unfortunately not so subtle as those described above, but instead constitute a frank dementia that can be associated with hallucinations. There may also be a failure of autonomic functions, namely the parts of the nervous system that control automatic activities like blood pressure maintenance, bladder and bowel function. As a result, patients may have a tendency to faint or suffer constipation or urinary difficulties.

Drug Treatment of Parkinson’s Disease

Excellent treatment is available for the symptoms of PD, partly because the abnormalities of function so specifically reflect a dopaminergic deficit. These treatments by and large are aimed at correcting this deficit by:

  • Increasing production from the remaining dopaminergic nerves by “flooding” them with the dopaminergic substrate levodopa. This is combined with various additional agents to stop it being broken down in the body before it gets to the brain. Examples are the branded products Madopar, Sinemet and Stalevo.
  • Making the same amount of dopamine go further by inhibiting its breakdown in the synapse (the connection between nerve cells). Examples are selegiline and rasagiline.
  • Mimicking the action of dopamine by a drug that acts directly like a dopaminergic neurotransmitter. These are called dopamine agonists and examples are apomorphine, ropinirole, pramipexole and rotigotine.

In fact, these three actions on the dopaminergic system fall into categories that encompass most pharmacological agents acting on any neurotransmitter system. PD therapeutics is thus a classic model for understanding of neurological therapeutics in general.

All drugs have their side effects, but there are particular side effects unique to those used to treat PD. In a way, the drugs are victims of their own success. Since the deficit is so specifically dopaminergic, and the spread of dopaminergic signalling normally rather generalised, dopaminergic drugs flooding into the brain from the bloodstream do remarkably well to control symptoms. However, when the underlying degeneration of the condition has progressed such that there are hardly any normal dopaminergic neurones remaining, it is not surprising that symptom control becomes very brittle – a drug “bathing” the whole substantia nigra can hardly achieve the same level of control as a precise measure of dopamine specifically triggered from one individual nerve cell to another. Brittle control means that the drugs do not last very long (“wearing off”) or sometimes not at all (“dose failure”). The basal ganglia may be overstimulated following dosing, leading to an excess of movement over which the patient loses control (“dyskinesia”). Transitions from the untreated “off” state to the treated “on” state or to a dyskinetic state and back again may be very sudden and unpredictable (“on”-“off” fluctuations).

In addition, the dopaminergic system is not really entirely specific to the basal ganglia. For example, the limbic system, which controls mood and complex behavioural functions, also uses dopamine as a neurotransmitter and this is the reason why anti-dopaminergic drugs are used successfully to treat psychosis and schizophrenia. The corollary of this is that the dopamine-promoting drugs used to treat PD may make a susceptible individual more likely to suffer symptoms of psychosis. Unfortunately, due to the degeneration of PD sometimes involving areas of the brain other than the substantia nigra, certain patients with PD have this particular susceptibility!

The consequence of this is a therapeutic dilemma; in these susceptible individuals the dopamine-stimulating drugs taken to treat their physical symptoms can bring on hallucinations, psychosis and other behavioural problems called impulse control disorders (e.g. gambling, hypersexuality). One would normally treat such symptoms with an antipsychotic drug that blocks dopamine, but this would make the physical Parkinsonian symptoms worse! In recent years, to the great relief of patients, carers and physicians alike, there have been advances in atypical antipsychotics that can treat such symptoms with relatively little dopamine-blocking effect on the movement pathways (e.g. quetiapine and clozapine, the latter requiring frequent blood monitoring). In addition, greater understanding of these problems by physicians has led to better recognition, more balanced dopaminergic drug regimes and better avoidance of other contributory drugs. Nevertheless, dopaminergic psychosis remains one of the most difficult aspects of PD to manage.

Surgery for Parkinson’s Disease

Before levodopa and other modern drug therapies were developed, the main treatment of PD was surgical. A lesion (basically a hole) would be created deliberately in a certain part of the brain to counterbalance the existing Parkinsonian lesion resulting from the dopaminergic deficit. Unfortunately, complication rates were high and the procedures were literally “hit and miss” with respect to targeting an effective area to make the lesion.

But with increasing recognition of the limitations of drug treatments, and with enormous advances in brain imaging and surgical targeting, there has been a revival of PD surgery since the 1990s. The most commonly performed surgical treatment now employed does not involve actual lesioning but ongoing electrical stimulation through electrodes surgically implanted deep in the brain and connected to a controller and battery sited under the skin, like a heart pacemaker. This stimulation actually functions in the same way as a lesion – it blocks signals passing through the particular brain area – but the key difference is that it is reversible and can be adjusted to suit the patient and minimise side effects.

The key point about these surgical treatments is that they are not a cure and that they are not innately “better” than medications. Clinically, as well as physiologically, “pain” is not proportional to “gain” – going through a major surgical procedure will not get you permanent symptom relief and freedom from drugs. In fact the main procedure, subthalamic nucleus stimulation, only works in a patient if levodopa also works in that patient. Its role is in providing additive treatment without additive side effects, and a treatment relatively free of dose fluctuations. Thus it is (or should be) mainly used in patients who respond to levodopa but who suffer brittle control and certain side effects.

Neuroprotection in Parkinson’s Disease

No matter how good dopaminergic drugs might be, they are directed at symptom control, not at the underlying condition. Since the 1980s there has been much research on the possibility that existing or new anti-Parkinsonian drugs may in addition have a neuroprotective action – in other words, that they actually protect the nerve cells from the disease process that otherwise results in relentless ongoing degeneration.

A journal discussion on neuroprotection is the subject of a related blog post.

Experimental Treatments

I will not discuss these in detail at the time of writing (January 2013), as by definition they are not the mainstay of management. They include various stem cell lines and stem cell delivery strategies, new dopaminergic and non-dopaminergic drugs, and new delivery systems for existing drugs.

Patients and their relatives often worry that they might somehow be missing out by not having these experimental treatments. Rest assured that if there was a new treatment that was already shown to be fantastic and far better than levodopa, I would be shouting about it as loudly as would any tabloid newspaper!

More Background Information

Other information on PD can be obtained from charitable organisations such as Parkinson’s UK.

Posted in Parkinson's Disease, Primer Posts for General Readers