Journal Club General Reader Review: Structural Brain Changes in Migraine (the CAMERA-2 Study)

Review for General Readers

For this paper, I decided to complete two complementary reviews. This one, written for general readers, can be considered background to, and a summary of, the Journal Club Scientific Review.

Background

For some time there has been a nagging concern among clinicians that migraine is associated with premature vascular changes in the brain. Given how common migraine is, and how commonly imaging is performed as a screening investigation for headache, an awkward situation arises all too often: imaging is performed in a patient with migraine to rule out sinister pathology, and the result comes back “not quite normal”. In fact the imaging shows vascular changes of a kind typically seen mainly in older people. Hardly reassuring.

Does this mean that every migraine attack is causing a mini-stroke, or that migraineurs, when they grow older, are more susceptible to stroke or to vascular dementia or to pre-frontal gait and balance problems? How aggressively should we address vascular risk factors in all migraine patients, about 12% of the adult population? Should we perform MRI scans on all 12%, and address risk factors in the sizeable proportion with the excess lesions, or address risk factors in all, or in none? Should we be thinking in terms of secondary prevention measures, rather than primary prevention? (Secondary prevention means preventing stroke or heart disease when such events have already occurred. The balance of risks is consequently shifted in favour of intervention despite potential side effects or risks.) What about echocardiography to screen all migraineurs for cardiac sources of emboli and for mitral valve prolapse? What about a bubble study to investigate patent foramen ovale? The questions multiply and the answers are frustratingly lacking.

These concerns over MRI appearances were confirmed by epidemiological findings, including the CAMERA study (Cerebral Abnormalities in Migraine – an Epidemiological Risk Analysis). In nearly 300 subjects with migraine, the female subgroup was indeed found to have an excess of small scattered white matter changes on MR imaging compared with 140 controls matched for age, sex and other risk factors. Furthermore, the more frequent the migraines, the greater the number of lesions, suggesting some cumulative lesioning effect from migraine attacks.

However, this study merely corroborated the imaging findings. It did not indicate whether or not they actually mean anything for patients. Therefore the CAMERA study followed up its patients, measuring changes in lesion load and recording cognitive ability by IQ tests; the findings after nine years are presented in CAMERA-2, the subject of this review.

Journal Review

Around two-thirds of the original CAMERA-1 study subjects, with and without migraine, were followed up. In females, 77% of patients with migraine had worsening of a certain pattern of imaging abnormality called deep hemispheric white matter lesions, compared with 60% of female controls. One expects some progression simply with age, the mean age by this second study being 57 years. The greater progression in the migraine patients was nevertheless statistically significant (p=0.04). Progression of other types of brain lesion was not significantly different between female migraineurs and controls, nor was there any association in men between migraine and any kind of MRI lesion or its progression. Unlike the baseline findings from CAMERA-1, further progression in the number of white matter lesions was not associated with a higher frequency of migraine attacks.

Most importantly, the study failed to find any relationship between the presence or absence of MRI lesions and cognition. However, overall I would personally take these findings as leaving me “a little less worried than I was before” rather than “reassured”. This is because of the statistical detail.

The authors chose to analyse the cognitive (and fine movement task) data by lumping all the migraine and non-migraine subjects together and then dividing them into the worst fifth by lesion load and the best four fifths. Using a linear regression model, they found that, after correcting for prior educational level, age and sex, there was a trend towards worse cognition in the smaller high lesion load group compared with the larger low lesion load group, but this did not reach significance (p=0.07).
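
To make the analysis concrete, here is a minimal sketch (in Python, with hypothetical variable and file names; the paper’s actual model specification may have differed) of the kind of adjusted linear regression described above:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per subject, migraine and non-migraine groups pooled.
# 'high_lesion_load' is 1 for the worst fifth by lesion load, 0 for the other four fifths.
df = pd.read_csv("camera2_followup.csv")  # hypothetical file name

# Linear regression of a composite cognitive score on lesion-load group,
# adjusting for educational level, age and sex, as described in the review.
model = smf.ols("cognitive_score ~ high_lesion_load + education + age + sex", data=df).fit()
print(model.summary())

# The coefficient on high_lesion_load estimates the adjusted difference in cognition
# between the high and low lesion-load groups; its p-value (reported as 0.07 in the
# paper) tests whether that difference could plausibly be zero.
```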

However, there is a difference between saying, “the lack of statistical significance means there is no evidence for an effect of lesion load on cognition”, and “the lack of statistical significance means there is positive evidence for no effect of lesion load on cognition”. This difference is often lost on journalists and publicists. Although the statistics cannot prove that cognition is worse with higher lesion load, with that p-value I for one would like to be in the low lesion load group!
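
As a purely illustrative calculation (the numbers below are invented, not taken from the paper), a borderline p-value of this kind corresponds to a confidence interval that includes zero but also includes differences large enough to matter:

```python
from scipy import stats

# Invented numbers: an adjusted group difference and its standard error chosen so that
# the two-sided p-value comes out at roughly 0.07, as reported in CAMERA-2.
effect, se = -1.8, 1.0                   # e.g. cognitive-score points, high vs low lesion load
p = 2 * stats.norm.sf(abs(effect / se))  # two-sided p-value, about 0.07
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se

print(f"p = {p:.2f}, 95% CI = ({ci_low:.2f} to {ci_high:.2f})")
# The interval spans zero (hence "not significant"), yet most of it lies on the side
# of worse cognition: absence of evidence is not evidence of absence.
```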

They then analysed the all-important migraine issue by bringing whether or not the subjects had migraine into the high versus low lesion load cognition model, and they found that having migraine did not influence this (purported lack of) effect (p=0.3). But if the effect is really borderline rather than absent, might the migraine influence be too?

There was also a very clearly non-significant comparison (p=0.9) showing that the migraine patients overall did not have worse cognitive scores than non-migrainous controls, which is reassuring, though I think this was a straight comparison rather than one correcting for possibly higher original cognition or educational level.

Finally, high lesion load in the CAMERA-1 study nine years earlier did not predict worsened cognition at the time of CAMERA-2. In other words, it seems to be the age-related subsequent accumulation of lesions, rather than the original migraine-associated lesions, that possibly matches with poorer cognition. (Remember, nearly as many non-migrainous subjects as migraine patients had progression in white matter changes over the nine years.)

While these two latter points are somewhat reassuring, we still do not get a clear answer to the question, “In the subset of female migraine patients with high lesion load, did their cognition deteriorate more from nine years earlier than that of the migraine patients with low lesion load or that of the controls with low lesion load?”

Conclusions

Returning to the original clinical scenario, given all the whys and wherefores I don’t think we can draw any firm conclusions from this study to provide reassurance to patients with migraine. Yes, migraineurs have more progression in lesions than expected for age. No, these lesions are not associated with ongoing frequency of migraine attacks, and no, they are not found to be associated with impaired cognition nine years later.

One must also place studies in their context. Reviewing this paper prompted me to look further into the literature. In fact there is a reasonable body of recent evidence from long-term follow-up of migraine patients in general that there is no progressive cognitive impairment. This therefore provides further support for the argument that the MRI lesions seen in migraine do not have this clinical significance.

Nevertheless, I still cannot be confident that in no migraine patient is there any significance to their lesion load, beyond that associated with other coincidental risk factors such as diabetes. I think further follow-up of this study cohort would be helpful. For example, another ten years later when the subjects will on average be in their sixties, will there be any greater deterioration in the already-measured cognitive scores in the subset of migraine patients with more highly progressive lesions than in non-migraine patients with more highly progressive lesions? More importantly, are the high lesion load patients with migraine becoming clinically demented, or suffering increased strokes or progressive gait impairment?

I can only say that, working retrospectively from my own clinical experience, an excess risk of stroke and other vascular diseases is not something I have particularly observed in patients who had migraine when they were younger, unlike the situation in cigarette smokers and diabetics. On the other hand, in the elderly population, the occurrence of migraine attacks does seem to be a marker of vascular disease. Perhaps it is the age of the patient with migraine that is the key, and the slightly mixed findings of the study reflect that they have selected a rather mixed-age cohort.

Link to Scientific Review of this topic.


Journal Club review: Risk Factors in Critical Illness Myopathy during the early course of Critical Illness – a Prospective Observational Study

Summary for General Readers

As discussed in the accompanying primer, I chose to review a research article (Weber-Carstens et al., 2010) that looked at both risk factors for the development of critical illness myopathy and a new diagnostic test for it.

The premise of the test is this: traditionally, both nerve and muscle diseases are investigated electrophysiologically by inserting a tiny needle into a muscle and recording the electrical potential that occurs across the muscle when the nerve to the muscle is stimulated by a small electrical current applied through the skin over the nerve (this is only a little uncomfortable even for a wide awake patient). If there is a shrinkage in the recorded potential due to damage, there are other clues that indicate whether it is likely to be the nerve or the muscle that is the problem. But in an unconscious patient who may have two overlapping pathologies as described above, we need any extra information we can get. The new test stimulates the muscle directly, not the nerve, without needing voluntary co-operation on the part of the patient, and records the muscle membrane excitability. Thus it will be abnormal in a myopathy (e.g. a critical illness myopathy) but normal in a neuropathy (e.g. if the patient was in an intensive treatment unit (ITU) for Guillain Barre syndrome or coincidentally had diabetic neuropathy).

The study followed 40 patients who had been admitted to ITU and who had been broadly selected as being at high risk because they had persistently poor scores on basic life-functions (e.g. conscious level, blood pressure, blood oxygenation levels, fever, urine output). They looked at all the parameters that could put patients at risk of developing critical illness myopathy and then analysed these against the muscle membrane excitability test measurements. It was found that 22 of the patients showed abnormalities on this test, and these patients did indeed have more weakness and require a longer ITU stay, suggesting they had critical illness myopathy. In terms of factors that would predict development of myopathy, there was an important correlation between abnormal muscle membrane test findings and a certain blood test (raised interleukin 6 level) that indicates systemic inflammation or infection. Other (possibly overlapping) correlations included the overall disease severity, the overt presence of infection, a marker indicating resistance to the hormone insulin (IGFBP1), the requirement for adrenaline (called epinephrine in the US) type stimulants and the requirement for heavier sedation.

The study’s strengths are that it highlights an important area of patient management that may often be somewhat neglected, it seems thoroughly conducted with a convincing result, and it not only describes a new test but shows how it may be clinically useful and validates it against the patients’ actual clinical outcome. I felt that a possible missed opportunity was relying solely upon the notoriously insensitive Medical Research Council (MRC) strength assessment system. At the levels they were recording (around MRC grades 2 to 4), the scale does perform a little better, and it at least reflects something that is clinically relevant. Values for the actual numbers of patients who were weak enough for recovery to be delayed, in the test-positive versus test-negative groups, would have been helpful. A quantitative limb strength measure (when the patient later wakes up more fully) or a measure of respiratory effort might also have been useful. Finally, one cannot take the proportion of patients with critical illness myopathy on this test as a prevalence figure (though the authors do not purport to do this). This is because a positive test result does not necessarily indicate a clinically significant myopathy, as mentioned above, and because the patients were already selected as being severe cases. A study looking at unselected ITU patients would be interesting; for example, would there be certain risk factors for myopathy even in patients who were otherwise generally less critically ill?

This question brings me to another point that I think may be important. After reviewing the article, further reading of the wider literature on critical illness myopathy led me to understand that there are three distinct pathological types (meaning appearances under microscopy and staining), but to a variable extent they may all be caused by the catabolic state of the ill patient. A catabolic state means a condition where body tissues are broken down for their constituent parts to supply glucose for energy or amino acids to make new protein. In a critically ill patient, the physiological response is to go “all out” to preserve nutrition for vital organs, such as the brain, the heart and the internal organs, in the expectation that there will be little or no food intake. Especially if the patient has fever or is under physiological stress, there is also an increased demand for nutrition. So the body breaks down the protein of its own tissues for its energy supply, and the most plentiful source for this “meat”, as with any meat we might eat, is… muscle. My accompanying journal club review goes beyond the research article to look at measures to limit or correct this “self-cannibalistic” tendency in ITU patients.

But related to the issue described above regarding selection of patients are some intriguing questions. What if the same phenomenon occurred to a lesser extent in other patients who were sick but not severely enough to need transfer to ITU? What would be the effect if a patient were in a chronic catabolic state already because they were half-starved as the consequence of a neurological problem that affected the ability to swallow, or if they already had a muscle-wasting neurological condition?

It is possible, for example, that this could have a major impact on care of patients suffering from acute and not so acute stroke. Identifying and specifically treating those whose weakness is not only due to their stroke but to a superadded critical illness myopathy induced by the fact that they are generally very unwell, susceptible to infection and poorly nourished due to swallowing problems could have a significant positive influence on rate of recovery and final outcome.

Scientific Background

Introduction

Critical illness myopathy is a relatively common complication experienced by patients managed in intensive care, occurring in 25-50% of cases where there is sepsis, multi-organ failure or a stay longer than seven days. I chose a research article on this condition for online journal club review because I had previously assumed the condition was rare, and knew little about it until a patient of mine was identified as having it, which prompted some background reading. The study I have reviewed focuses on diagnosis and on risk factors that predict its development. As a Neurologist I was particularly concerned with the difficulty of diagnosis when the reason for the patient requiring ITU management in the first place is a primary neuromuscular disorder. In other words, the critical illness myopathy is a superadded cause for their weakness. First, I describe some of the general background on this seldom-reviewed (by me at any rate!) condition.

Epidemiology

The exact incidence of critical illness myopathy, even in the well-defined situation of ITU, is unclear and varies between studies, perhaps reflecting different case mixes and difficulty distinguishing it from critical illness polyneuropathy. Indeed, in some cases myopathy and neuropathy may coexist. An early prospective study by Lacomis et al. (1998) found electromyographic (EMG) evidence of myopathic changes in 46% of prolonged stay ICU patients. When looking at clinically apparent neuromyopathy, De Jonghe et al. (2002) found an incidence of 25%, with 10% of the total having EMG and muscle biopsy evidence of myopathic or neurogenic changes. In a review by Stevens et al. (2007), the overall incidence of critical illness myopathy or neuropathy was 46% in patients with a prolonged stay, multi-organ failure or sepsis. A multi-centre study of 92 unselected patients found that 30% had electrophysiological evidence for neuromyopathy (Guarneri et al., 2008). Pure myopathy was more common than neuropathic or mixed types and carried a better prognosis, with three of six such patients recovering fairly acutely and a further two within six months.

Investigation

In a patient with limb weakness in an intensive care setting there should be a high level of suspicion for critical illness neuromyopathy. Nerve conduction studies (NCS) and EMG may help to distinguish critical illness polyneuropathy, with more distal involvement and large polyphasic motor units on EMG, from critical illness myopathy, with more global involvement, normal sensory nerve conduction and small polyphasic units.

However, there remain potential difficulties. First, EMG is easier to interpret when an interference pattern from voluntary contraction can be obtained, but this might prove impossible with a heavily sedated or comatose patient. Second, when the patient’s primary condition is neurological, such as in Guillain Barre syndrome, myasthenia, myopathy or motor neurone disease, it may be difficult to distinguish NCS and EMG abnormalities of these conditions from those of superadded critical illness.

In cases of suspected critical illness myopathy, the most definitive investigation is muscle biopsy. Histologically, it manifests in one of three ways, and these may be distinguished from neurogenic changes or other myopathic disease.

Subtypes of Critical Illness Myopathy: Minimal Change Myopathy

The first subtype is minimal change myopathy. There is increased fibre size variation, some appearing atrophic and angulated as they become distorted by their normal neighbours. Type II fibre involvement may predominate, perhaps because fast twitch fibres are more susceptible to fatigue and disuse atrophy. There is no inflammatory response and thus serum creatine kinase is normal.

Clinically, it may be apparent only as an unexpected difficulty weaning from ventilation, and the EMG changes may be mild, making muscle biopsy more critical.

The condition may lie on a continuum with disuse atrophy, but made more extreme by a severe catabolic reaction induced by sepsis and systemic inflammatory responses triggering multi-organ failure (Schweickert & Hall, 2007). Muscle is one such target organ; ischaemia and electrolyte and osmotic disturbance in the critically ill patient trigger catabolism by releasing glucocorticoids and cytokines such as interleukins and tumour necrosis factor. For example, interleukin 6 promotes a high-affinity binding protein for insulin-like growth factor (IGF), down-regulating the latter and thereby blocking its role in glucose uptake and protein synthesis. This is paralleled by a state of insulin resistance. Muscle may be particularly susceptible to catabolic breakdown, being a ready “reserve” of amino acids, released by proteolysis to maintain gluconeogenesis for other vital tissues in the body’s stressed state (Van den Berghe, 2000). A starved patient may lose around 75 g/day of protein, while a critically ill patient may lose up to 250 g/day, equivalent to nearly 1 kg of muscle mass (Burnham et al., 2003). Disuse, exacerbated iatrogenically by sedatives, membrane stabilisers and neuromuscular blocking drugs, may impair the transmission of myotrophic factors and further potentiate the tendency to muscle atrophy (Ferrando, 2000).
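
As a rough check on that last equivalence (using the ballpark assumption, not taken from the paper, that wet skeletal muscle is roughly 20 to 25% protein, the remainder being largely water):

\[
\frac{250\ \text{g of protein per day}}{0.20\text{ to }0.25\ \text{g of protein per g of muscle}} \approx 1.0\text{ to }1.25\ \text{kg of muscle per day}
\]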

Subtypes of Critical Illness Myopathy: Thick Filament Myopathy

Patchy myosin filament loss in thick filament myopathy

The second histological subtype is thick filament myopathy. There is selective proteolysis of myosin filaments, seen as smudging of fibres on Gomori trichrome light microscopy and directly on electron microscopy. Since myosin carries the ATPase moiety, this loss is apparent on light microscopy as a specific lack of ATPase staining of both type I and type II fibres. Clinically, patients may have global flaccid paralysis, sometimes including ophthalmoplegia, and difficulty weaning from the ventilator. The CK may be normal or raised. Thick filament myopathy appears to have a similar pathophysiology to minimal change myopathy, but may be especially associated with high-dose steroid administration and neuromuscular blocking agents, particularly vecuronium.

Subtypes of Critical Illness Myopathy: Acute Necrotising Myopathy

The third subtype is a more aggressive myopathy, with prominent myonecrosis, vacuolisation and phagocytosis. Weakness is widespread and the CK is generally raised. Its aetiology may relate to the catabolic state rendering the muscle susceptible to a variety of additional, possibly iatrogenic, toxic factors. It may lie on a continuum with, and progress to, frank rhabdomyolysis.

Management

There are a number of steps in managing critical illness myopathy.

  • First, iatrogenic risk factors should be identified and avoided where possible (see list above).
  • Second, appropriate nutritional supplementation may be helpful, but objective evidence for this is sparse. Parenteral high-dose glutamine supplementation may improve overall outcome and length of hospital stay (Novak et al., 2002), and since critical illness myopathy is so common, at least some of this benefit may come from partly reversing the catabolic tendency in muscle. Other amino acid supplements and antioxidant supplements (e.g. glutathione) could have similar effects but have not been adequately trialled. There is again no conclusive proof in favour of androgen or growth hormone supplements, and in the latter case there may be adverse effects (Takala et al., 1999). Tight glucose control with intensive insulin therapy reduces time on ventilatory support, and may protect against critical illness neuropathy, but the effect on myopathy is not clear (van den Berghe et al., 2001).
  • Finally, early physiotherapy encouraging activity may be helpful, as shown in a randomised controlled trial (Schweickert et al., 2009), perhaps preventing the amplification of catabolic effects by lack of activity.

Journal Review

The research article reviewed here (Weber-Carstens et al., 2010) describes a study of a relatively new electrophysiological test for myopathy, namely measurement of muscle membrane electrical excitability in response to direct muscular stimulation. An attenuated response on this test indicates a myopathic process, unlike a reduced conventional compound muscle action potential, which could reflect either neural or muscular pathology. Furthermore, while an EMG interference pattern depends on some ongoing voluntary background muscle activity, this test can be performed on a fully unconscious patient. The study uses the test to explore the value of various putative clinical or biochemical markers, recorded early in the patient’s time on ITU, that might subsequently predict the development of critical illness myopathy.

There were 40 patients selected for study on the basis that they had high (poor) Simplified Acute Physiology Score (SAPS-II) values for at least three days in their first week on ITU. It was found that 22 of these subsequently had abnormal muscle membrane excitability. As was also shown in a previous study, the abnormal test values in these patients corresponded to a clinical critical illness myopathy state, in that they were weaker than the others on clinical MRC strength testing and took significantly longer to recover as measured by ITU length of stay.

The main finding was that multivariate Cox regression analysis pointed to blood interleukin 6 level as an independent predictor of the development of critical illness myopathy, as was the total dose of sedative received. However, the predictive value of this correlation on its own was modest. In an overall predictive test combining a cut-off IL-6 level of 230 pg/ml or more with a Sequential Organ Failure Assessment (SOFA) score of 10 or more at day 4 on ITU, the observed sensitivity was 85.7% and specificity 86.7%. There were also other, potentially co-dependent, predictive risk factors, including markers of inflammation, disease severity, catecholamine use and IGF binding protein level. Interestingly, higher-dose steroids, aminoglycosides and neuromuscular blocking agents were not associated with critical illness myopathy in this sample.
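
For readers unfamiliar with these terms, the short sketch below shows how sensitivity and specificity are derived from a two-by-two table for the combined IL-6/SOFA cut-off. The counts are invented purely to reproduce the quoted percentages; the actual patient numbers are in Weber-Carstens et al. (2010).

```python
# Invented illustrative counts (not the paper's data), chosen to give the quoted
# sensitivity (85.7%) and specificity (86.7%) for the combined cut-off of
# IL-6 >= 230 pg/ml and SOFA >= 10 at day 4.
true_positive  = 12   # cut-off exceeded, and the patient developed critical illness myopathy
false_negative = 2    # cut-off not exceeded, but the patient developed it anyway
true_negative  = 13   # cut-off not exceeded, and the patient did not develop it
false_positive = 2    # cut-off exceeded, but the patient did not develop it

sensitivity = true_positive / (true_positive + false_negative)  # fraction of cases detected
specificity = true_negative / (true_negative + false_positive)  # fraction of non-cases correctly excluded

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```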

Opinion

The study is clearly described and carefully conducted. The electrophysiological test appears to have real value, and is perhaps something that should be more widely introduced as a screening test before muscle biopsy, given the latter’s potential complications. The test can also be performed at a relatively early stage on a completely unconscious patient, so interventions to address the problem can be made in a more timely manner. Certainly I am going to discuss the feasibility of this test with my neurophysiology colleagues.

As the authors point out, the fact that they only recorded blood tests such as interleukin levels on two occasions per patient may have meant that they missed the true peak level in some patients – the predictive value might otherwise have been stronger. I would have liked to see a more explicit link between their muscle membrane excitability measure and clinically relevant weakness. They show a reduction in mean MRC strength grade from around 4 to 2, which is clinically meaningful at these strength levels, but objective strength testing or respiratory effort measurements would have been advantageous, as would the actual numbers of patients who were clinically severely weakened rather than just those with abnormal electrophysiology.

I think further study on unselected patients is important, even if it means that perhaps only 22 out of 100 rather than 22 out of 40 will have abnormal electrophysiology. This is because it might not only be those patients selected for the study on the basis of persistently poor physiology scores who could develop critical illness myopathy. A predictive marker in otherwise low risk patients might prove even more useful.

By way of general observation rather than opinion on this research, and extending the argument on investigating less critically ill patients, I have wondered whether critical illness myopathy might in fact occur in acutely unwell patients who do not reach ITU at all. There are many neurological and other conditions that predispose to catabolic states, such as chronic infection or inflammation, pre-existing disuse atrophy, steroid therapy, or chronic malnourishment due to poor care or poor or unsafe swallowing before the deterioration that required acute hospital care. Even patients without pre-existing disease, such as those who have suffered acute stroke, may subsequently be susceptible to a catabolic state due to aspiration, other infection, immobility or suboptimal nutrition. One can speculate that large numbers of patients with stroke, multiple sclerosis relapse or other acute deteriorations requiring neurorehabilitation may have significantly impaired or delayed recovery due to unrecognised superadded critical illness neuromyopathy. Certainly in stroke, important measures found to improve outcome, such as early physiotherapy and mobilisation, early attention to nutrition, treating infection and good glycaemic control, happen to be among the key elements in treating critical illness myopathy. More directed and aggressive management along these lines in the subgroup of these patients who have markers for critical illness myopathy might further accelerate improvement and achieve a better final outcome.


Primer on Critical Illness Myopathy for General Readers

Neurology in Critical Care

Despite the fact that Critical Care and Neurology are each relatively “glamorous” medical disciplines, neurological disease in the critical illness setting receives relatively little attention. However, if one is in the business of intervening to make major improvements to patients’ outcomes (which we should be), then perhaps Neurologists as a group should focus a little more on this clinical setting.

There are two ways in which neurological diseases impact on critical care, typified by a patient management setting such as an intensive treatment unit (ITU) or high dependency unit.

  • First, a number of neurological diseases constitute the primary reason why patients need critical care. Examples vary from stroke, the most common cause of disability in developed countries, to Guillain Barre syndrome, myasthenia gravis, inflammatory encephalopathies and rare metabolic diseases. Some of these conditions have the potential to remit spontaneously or with treatment and so if the patient can be “tided” over a critically ill period successfully, the eventual prognosis may be excellent. Optimal management of such patients may therefore make a huge difference to patient outcome.
  • Second, even when the primary condition is not neurological, the critically ill patient may suffer a number of secondary neurological complications which may then become a major factor limiting outcome. These include delirium and hallucinations, nerve pressure palsies, critical illness neuropathy and critical illness myopathy; the last of these is the focus of this post.

Critical Illness Myopathy

A myopathy simply refers to any disease of the muscles, while a neuropathy refers to disease of the nerves whose function is to transmit movement signals to the muscles or sensory signals back to the brain. For reasons that are not entirely clear, but which we will speculate upon, the muscles (more commonly) and the nerves are susceptible to damage in any patient undergoing intensive care; a myopathy occurs in 25-50% of cases where there is sepsis, multi-organ failure or a stay longer than seven days. At worst this may result in lasting disability; even at best it may still significantly delay weaning off the ventilator and return to mobility. This has cost implications as well as implications for the extra suffering experienced by such patients.

The reasons why I wanted to conduct a journal review on this topic, for which this is the accompanying primer, are:

  • I had incorrectly assumed that critical illness myopathy was very rare until I had cause to research it in relation to one of my patients and I wonder if some colleagues might be under a similar misapprehension.
  • I wanted to explore any treatment options for this common and important condition.
  • I wanted to see if there were risk factors that would predict the likely development of critical illness myopathy before patients develop it, and how to diagnose it accurately when they do.
  • In reference to the latter, I was particularly concerned with the difficulty of diagnosis when, as may hardly be unexpected if one is a Neurologist, the primary condition requiring the patient to have intensive care is itself neurological. How may we determine, for example, whether a patient’s failure to wean from ventilation or to recover muscle strength is due to their Guillain Barre syndrome, or to a secondary critical illness neuromyopathy?

More Background Information

There is a website providing information and support for patients and relatives with problems related to critical care called ICU Steps.


Journal Club Review: A Double-Blind, Delayed-Start Trial of Rasagiline in Parkinson’s Disease

Summary for General Readers

Given that levodopa was first introduced to treat Parkinson’s disease in the 1960s (see the accompanying background information on Parkinson’s disease for general readers), it is surprising that a major placebo-controlled study of its long-term neurotoxic or neuroprotective effects did not take place until four decades later. At the turn of the 21st century, the fashionable view was that levodopa therapy primed the development of dyskinesia and on-off fluctuations; it was seen almost as a necessary evil in treating Parkinson’s disease, to be delayed as far as possible into the illness.

Then came the ELLDOPA (Earlier vs Later LevoDOPA) study, which confirmed the accepted view that levodopa did lead later in therapy to dyskinesia, but more importantly showed that treating a patient adequately over nine months with efficacious medication left them in a better clinical state than those starved of medication, even after the treatment was stopped for two weeks. In other words, treated patients were better even when the drug was temporarily “washed out” of their systems. Did this mean that the treatment was somehow slowing the deterioration of the disease? Not according to a parallel brain scan study; radioactive labelling of the surviving nerve endings of the degenerating dopaminergic nerve cells revealed that patients who had received levodopa had worse scans than those who had received nothing, despite being clinically better off.

To many “jobbing” neurologists (which is what some call those who spend their time just managing patients on a practical basis rather than leading opinion), this simply suggested that such imaging is perhaps not such a reliable marker of disease progression, and confirmed their suspicions that fears over the dangers of levodopa therapy had been over-played. They would have seen many of their patients do really quite well on levodopa therapy, improving significantly over their prior untreated state and remaining better than that level for a long time without complications, especially if they had been dosed cautiously. Keeping a patient under good control, thus maintaining their activities of daily living as well as possible, might easily leave that patient in a better state even after a temporary withdrawal than one left untreated to become chronically disabled. This did not necessarily imply neuroprotection.

However, debate remained intense over the possible neuroprotective effect and over the study’s methodology. A series of other neuroprotection studies were carried out. The one reviewed here, called ADAGIO – an acronym which, if you can believe it, comes from “Attenuation of Disease progression with Azilect GIven Once daily” – is an example of one that employs an elegant design called “delayed start”. (Azilect is the trade name of rasagiline.)

The problem of studying a neuroprotective effect of a drug that also helps symptoms is that, when the only way you can measure the disease is by symptom severity, you don’t know if the patients are better because their disease course has improved or if they simply feel better from symptom control. The drug “masks” the state of the disease. The obvious solution, which was employed by the ELLDOPA study, is to stop the drug temporarily so that the treated and untreated groups are back on a level playing field. But another solution is to delay the start in one group compared to another. At the end of the study, both groups are on the same treatment, but one group had enjoyed the treatment for a longer time, and therefore had more time over which to have the cumulative neuroprotective effect. (One assumes symptomatic benefits are relatively short-lasting.)

The ADAGIO study investigated the drug rasagiline, a monoamine oxidase inhibitor which works by preserving more dopamine within the synapse, so making surviving dopamine nerve cells work harder (see the Primer in Parkinson’s Disease for details). It was found that those patients given the drug earlier were indeed better off by the end of the study, presumably because they had had a longer time receiving neuroprotection.

But doubling the normal dose of the drug left the earlier-treated patients not better off but actually worse off, and this was not simply because of symptomatic side effects of the higher dose. It is therefore not surprising that the conclusion over neuroprotection was muted, and that the debate still continues.

It may be that there are no “short-cuts” to studying neuroprotection. Fortunately, in most patients PD progresses slowly over many years. A neuroprotective agent should therefore be given the chance to work over 10+ years to measure its benefit, and patient groups on or not on the agent should be on similar best symptomatic therapy throughout, just as you would do if you were using the neuroprotective agent in real life.

In the meantime, what do we do? There is merit in the argument that we should give every PD patient the “good” dose of rasagiline, because the study suggested neuroprotection. When the results come out from a 10 year study, it will be too late for today’s patients. But many neurologists do not do this.

The first reason is economic. In health care economies that are free at source, like that in the UK, costs are limited by a model that requires proof and quantification of efficacy (though there are always “political” exceptions). Clearly, there is some scientific evidence for neuroprotection from rasagiline, but it is a judgement call whether this is enough to extrapolate that patients will be better off after 10 years on the drug because of its neuroprotection than those on other treatment regimes. In these grey areas, the drug is essentially competing with a number of other agents of uncertain cost:benefit ratio and with varying strengths of claim. Even in other health economies, and indeed in advertising in general, there are strict rules about what claims may be made about a product.

The second reason for caution over wholesale use of an agent for neuroprotection is historical. In some quarters, until around twenty years ago, there was wholesale use of selegiline, a drug similar to rasagiline, as a neuroprotective agent. This was reversed by a study (DATATOP) that suggested increased mortality from the drug. Many patients were dismayed at this news, and when they were taken off the drug they were even more dismayed, because it is actually quite good as a symptomatic agent. The mortality findings have since been refuted in their turn, and selegiline is back to being used as one of a number of reasonable choices for symptom control. Of course this cannot be directly extrapolated to rasagiline, but there is natural concern over losing the trust of another generation of neurologists and patients.

That is why it may be prudent to steer a middle course. Levodopa is not desperately neurotoxic, and it is good for controlling symptoms. On the other hand, it does cause dyskinesia, and the doubts over neuroprotection are such that most would not use it until symptoms warranted it. Similarly, rasagiline has a good role in symptom control, and is officially recommended for such use (even in “rationed” health economies), but many neurologists are cautious about using it specifically for a special long-term neuroprotective benefit.

Scientific Background

After the uncertainty surrounding the disease-modifying effects of selegiline, where the final conclusion is that it is probably neither neuroprotective nor a cause of increased mortality, a number of studies have in recent years re-explored the neuroprotective or neurotoxic properties of symptomatic therapies for Parkinson’s disease. The ELLDOPA (Earlier vs Later LevoDOPA) study (Fahn et al., 2005) took treatment-naive patients with early disease and measured Unified Parkinson’s Disease Rating Scale (UPDRS) scores during 40 weeks of treatment with either levodopa or placebo. Measurements were then taken after a two-week washout period off medication. Patients had the expected dose-dependent improvement on treatment, and the expected deterioration off treatment, but they were still significantly better than those who had been on placebo throughout. Did this mean that levodopa had been partially neuroprotective over those 40 weeks? Functional imaging performed on a subgroup in the same study gave the opposite picture. There was worse deterioration in the treated group as measured by beta-CIT SPECT dopamine transporter levels. The study therefore cast doubt on the notion that such functional imaging is a reliable biomarker of disease progression. However, a number of uncertainties remain concerning interpretation of the study’s findings: i) there could be compensatory transporter up-regulation in untreated patients, ii) transporter levels might otherwise not be an accurate marker of neurodegeneration, iii) the washout period might not have been long enough to remove residual symptomatic benefit, and iv) some patients in this study, having no functional imaging abnormalities, might not in fact have had Parkinson’s disease.

The TEMPO study (Parkinson Study Group, 2002), conducted at around the same time, employed a different design to look at the possible neuroprotective effects of the monoamine oxidase B (MAO-B) inhibitor rasagiline. Instead of a washout at the end, there was a delayed start in one group at the beginning. Thus, in the first phase one group was given placebo and the other rasagiline. In the second phase the placebo patients were given rasagiline, and the treated patients carried on with their existing rasagiline.

Journal Review

The study reviewed here, the ADAGIO study (Olanow et al., 2009), employed the same design for the same drug and used three hierarchical statistical tests comparing delayed with immediate start. The measure of disease severity was total UPDRS (i.e. motor aspects, non-motor aspects and disability all combined). First, after the initial improvement over the 12-week wash-in period in both treated and placebo groups following commencement of therapy, the slope of subsequent deterioration in the rasagiline group from week 12 to the delayed start point at week 36 had to be less steep than in the placebo group. Second, the final UPDRS scores at the end of the delayed start period at 72 weeks had to be better in the initial start group than in the delayed start group. Finally, the slope of deterioration in the initial treatment group during the delayed start period, from the end of the second wash-in at 48 weeks to the 72-week end point, had to be no worse than in the delayed start group. In other words, initial deterioration had to be slowed in earlier-treated patients; they needed to remain better off than delayed-treated patients even after the latter group had started treatment; and there needed to be no suggestion that the initial treatment patients were catching up, in terms of disease progression, with delayed-treated patients such that they would eventually have become as badly symptomatic as the latter had the trial gone on longer.
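
To make the logical structure of those three criteria explicit, here is a simplified sketch (Python; the variable names, simple least-squares slopes and margin value are my own illustration, whereas the trial itself used formal mixed-model slope estimates with significance and non-inferiority testing):

```python
import numpy as np

def slope(weeks, updrs):
    """Least-squares slope of total UPDRS against time, in points per week."""
    return np.polyfit(weeks, updrs, 1)[0]

def adagio_hierarchy(early, delayed, margin=0.15):
    """early / delayed: hypothetical dicts mapping week -> mean total UPDRS in each arm.
    margin: non-inferiority allowance for the third criterion (illustrative value)."""
    wk1 = [w for w in sorted(early) if 12 <= w <= 36]
    wk3 = [w for w in sorted(early) if 48 <= w <= 72]

    # 1. Weeks 12-36: the early-start (rasagiline) arm must worsen more slowly than the placebo arm.
    test1 = slope(wk1, [early[w] for w in wk1]) < slope(wk1, [delayed[w] for w in wk1])

    # 2. Week 72: the early-start arm must end with a lower (better) total UPDRS score.
    test2 = early[72] < delayed[72]

    # 3. Weeks 48-72: the early-start slope must be non-inferior to the delayed-start slope,
    #    i.e. no sign that the early group is now deteriorating faster and losing its advantage.
    test3 = slope(wk3, [early[w] for w in wk3]) <= slope(wk3, [delayed[w] for w in wk3]) + margin

    return test1 and test2 and test3
```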

All three of these statistical criteria were met – but only for 1 mg rasagiline (the current standard dose), not 2 mg. The authors commented that there appeared to be no difference in side effects or in drop-out rates between doses. They suggested that a greater symptomatic benefit might mask the neuroprotective effect in mildly affected patients, and found that the criteria were indeed met in a post-hoc analysis of the worst-affected patients. In other words, the mildly affected delayed start patients improved so well initially that, by comparison, there was little neurodegeneration for the drug to act upon.

But I note that the slopes of deterioration were nevertheless significant in this study in both early and delayed phases. The problem was that at the end of the 72 weeks, the higher strength initial start patients had more deterioration than lower strength initial start patients (3.5 UPDRS points vs 2.8 points) and yet the higher strength delayed start patients had less deterioration than lower strength delayed start patients. The 2 mg initial start end result was basically aberrantly poor.

Given this caveat, the study concluded that there is a possible neuroprotective effect of rasagiline at the 1 mg dose, but described concerns with the study design. The authors established that drop-out of placebo patients because they were suffering too badly on no treatment was not a factor, and acknowledged the trade-off that a longer initial phase would yield more potential for neuroprotection but also more placebo drop-outs due to uncontrolled symptoms.

Opinion

Rasagiline has undoubted clinical efficacy (though more modest than levodopa), a good side effect profile and an advantage over the similar agent selegiline in possibly being safer to use concomitantly with certain antidepressants, though it is still considered “prudent” not to prescribe them together. Its long duration of action makes it a good choice for reducing nocturnal and early morning symptoms.

Regarding the neuroprotective effect, in my opinion the 2 mg dosing issue remains a major problem of interpretation. A neurotoxic effect of higher doses seems unlikely. Given the different metabolism and body habitus of different subjects, and general pharmacological behaviour, it is very unlikely that there should be such a narrow range of “special” dose.

I am not convinced that the UPDRS is a true interval scale – in other words, that a certain change at one level is the equivalent of the same change at a more severe level. This is important because the analyses in this study have to assume this linearity. Yes, there are studies that indicate UPDRS linearity over time. For example, one study showed a linear 3-point annual decline on treatment, but only after treatment had “bedded in” for six months (Guimaraes et al., 2005). But is disease progression itself always linear over time? Clinical experience sometimes suggests otherwise.

So it might just have happened that more patients on 2 mg initial start, while adequately matched in terms of initial clinical severity, were at a stage of disease where they were teetering on the edge of a steeper slope of clinical deterioration. And by the same token, how can we be sure that the delayed start 1 mg patients were not similarly “unlucky”? After all, statistics is only probability and if there are a lot of unknown variables in a complex study there is a fair chance that one of them may throw up an aberration.

Rehabilitationists are very aware that it is much harder to regain lost function than to maintain function. Therefore the likely slope of disease severity versus function when deteriorating is not the same as that when improving; this might be called hysteresis. Could the same apply, to a lesser extent, to different rates of deterioration, e.g. allowing deterioration to occur unchecked and then trying to slow it at a later, more resistant, stage?

My arguments are just conjecture, and perhaps I am wrong about the possible non-interval behaviour of the UPDRS scale. But I have yet to hear an adequate explanation of why doubling the rasagiline dose removes the statistically significant neuroprotective benefit by the end of the study.

A difficulty with this and other study designs, as mentioned above, is that there is a trade-off between the neuroprotection period and the drop-out rate; a long neuroprotection period may show a bigger effect, but too many placebo patients might drop out of the trial because they were going too long without symptomatic treatment. In addition there is a trade-off regarding the initial disease severity. The ADAGIO patients had to be already diagnosed, with two of tremor, rigidity and bradykinesia, and could then have remained untreated for up to 18 months before baseline. Many of my patients would be desperate for treatment by then! But take patients who are too mild, and there is a risk that many will not have true Parkinson’s disease; that may have been what happened with the normal functional imaging in some of the ELLDOPA study patients.

One sympathises with the difficulties in designing a trial to test the neuroprotective properties of a symptomatic agent, and the ADAGIO study was very carefully designed to address these concerns. But perhaps there is no short cut to a study design where a putative agent is added at an early stage to ongoing best symptomatic treatment (at least six months of such treatment, as indicated by the study on predictability of UPDRS behaviour), and continued through in parallel with best symptomatic treatment for 10-20 years. Studies have looked at how much UPDRS change constitutes a clinically important difference (Shulman et al., 2010). The minimal clinically important difference, according to external criteria of disease severity, constitutes a change of at least 4.1 points; the putative neuroprotective effects in ADAGIO did not approach this level. Given that Parkinson’s disease (fortunately) has such a long time course, we would hardly expect that a few months of neuroprotection would result in anything more. Unless we do a trial of this 10-20 year duration, we remain wholly reliant upon extrapolation rather than demonstration.

 


Primer on Parkinson’s Disease for General Readers

Pathology

Immunohistochemistry for alpha-synuclein showing positive staining (brown) of an intraneural Lewy-body in the Substantia nigra in Parkinson’s disease. (Photo credit: Wikipedia)

Parkinson’s Disease (PD) is a relatively common condition affecting around 1% of all individuals aged over 60, and increasing towards 5% of those over 80. It is characterised by neurodegeneration, a “wearing out” of certain groups of nerve cells in the brain, in this case the dopamine secreting cells of a small area deep within the brain called the substantia nigra. To the naked eye, this degeneration is apparent as a visible pallor of this normally darkly coloured area, and under a microscope characteristic proteinaceous collections called Lewy bodies are seen within the nerve cells.

Main Symptoms of Parkinson’s Disease: The Triad of Bradykinesia, Rigidity and Tremor

Circuits within the Basal Ganglia involved in Parkinson’s disease (Photo credit: Wikipedia)

Dopamine is an example of a neurotransmitter, a chemical “messenger” released from one nerve ending to the adjoining ending of another nerve to allow the transmission of a signal from one to the other. The lack of dopamine-driven connections from the substantia nigra results in failure of signalling downstream through a network of nerves in functionally linked areas collectively called the basal ganglia. The particular function of these signals may be to “turn up the volume” on various aspects of brain function, especially those controlling movement. Thus PD is characterised by a general slowness and paucity of movement called bradykinesia. There is a parallel failure to “turn down the volume” on other brain functions, namely those that increase muscle activity in the resting state, and this results in rigidity. The dopaminergic loss in general also upsets the fine balance of interconnected brain signalling and this may allow the undesirable spread of “background noise” synchronised rhythmic nerve firing – rather like removing the dampers that prevent a mechanical structure from vibrating uncontrollably at its resonant frequency. It is this rhythmic activity spreading down through nerve pathways to the muscles that results in Parkinsonian tremor.

Other Common Symptoms of Parkinson’s Disease

There are other abnormalities of function in PD that are traditionally regarded as secondary, but which in some patients are the dominant problem. The same failure that results in bradykinesia may result in subtle cognitive deficits such as “slowness” of thought and a reduced ability to focus the brain on the task in hand, especially if multiple tasks have to be performed simultaneously. Internally initiated tasks become relatively more difficult, so patients are more reliant on external triggers or instructions. This is illustrated by the difficulty in initiating a step while walking, which can be partially remedied by a visual target to step towards or a sound to cue the timing of the step.

Finally, to a variable extent, the degeneration of PD may spread beyond the substantia nigra. This sometimes results in cognitive deficits that are unfortunately not so subtle as those described above, but instead constitute a frank dementia that can be associated with hallucinations. There may also be a failure of autonomic functions, namely the parts of the nervous system that control automatic activities like blood pressure maintenance, bladder and bowel function. As a result, patients may have a tendency to faint or suffer constipation or urinary difficulties.

Drug Treatment of Parkinson’s Disease

Excellent treatment is available for the symptoms of PD, partly because the abnormalities of function so specifically reflect a dopaminergic deficit. These treatments by and large are aimed at correcting this deficit by:

  • Increasing production from the remaining dopaminergic nerves by “flooding” them with the dopaminergic substrate levodopa. This is combined with various additional agents to stop it being broken down in the body before it gets to the brain. Examples are the branded products Madopar, Sinemet and Stalevo.
  • Making the same amount of dopamine go further by inhibiting its breakdown in the synapse (the connection between nerve cells). Examples are selegiline and rasagiline.
  • Mimicking the action of dopamine by a drug that acts directly like a dopaminergic neurotransmitter. These are called dopamine agonists and examples are apomorphine, ropinirole, pramipexole and rotigotine.

In fact, these three actions on the dopaminergic system fall into categories that encompass most pharmacological agents acting on any neurotransmitter system. PD therapeutics is thus a classic model for understanding of neurological therapeutics in general.

All drugs have their side effects, but there are particular side effects unique to those used to treat PD. In a way, the drugs are victims of their own success. Since the deficit is so specifically dopaminergic, and the spread of dopaminergic signalling normally rather generalised, dopaminergic drugs flooding into the brain from the bloodstream do remarkably well to control symptoms. However, when the underlying degeneration of the condition has progressed such that there are hardly any normal dopaminergic neurones remaining, it is not surprising that symptom control becomes very brittle – a drug “bathing” the whole substantia nigra can hardly achieve the same level of control as a precise measure of dopamine specifically triggered from one individual nerve cell to another. Brittle control means that the drugs do not last very long (“wearing off”) or sometimes not at all (“dose failure”). The basal ganglia may be overstimulated following dosing, leading to an excess of movement over which the patient loses control (“dyskinesia”). Transitions from the untreated “off” state to the treated “on” state or to a dyskinetic state and back again may be very sudden and unpredictable (“on”-“off” fluctuations).

In addition, the dopaminergic system is not really entirely specific to the basal ganglia. For example, the limbic system, which controls mood and complex behavioural functions, also uses dopamine as a neurotransmitter and this is the reason why anti-dopaminergic drugs are used successfully to treat psychosis and schizophrenia. The corollary of this is that the dopamine-promoting drugs used to treat PD may make a susceptible individual more likely to suffer symptoms of psychosis. Unfortunately, due to the degeneration of PD sometimes involving areas of the brain other than the substantia nigra, certain patients with PD have this particular susceptibility!

The consequence of this is a therapeutic dilemma; in these susceptible individuals the dopamine-stimulating drugs taken to treat their physical symptoms can bring on hallucinations, psychosis and other behavioural problems called impulse control disorders (e.g. gambling, hypersexuality). One would normally treat such symptoms with an antipsychotic drug that blocks dopamine, but this would make the physical Parkinsonian symptoms worse! In recent years, to the great relief of patients, carers and physicians alike, there have been advances in atypical antipsychotics that can treat such symptoms with relatively little worsening of the Parkinsonism (e.g. sulpiride, clozapine (which requires frequent blood monitoring) and quetiapine). In addition, greater understanding of these problems by physicians has led to better recognition, more balanced dopaminergic drug regimes and better avoidance of other contributory drugs. Nevertheless, dopaminergic psychosis remains one of the most difficult aspects of PD to manage.

Surgery for Parkinson’s Disease

Before levodopa and other modern drug therapies were developed, the main treatment of PD was surgical. A lesion (basically a hole) would be created deliberately in a certain part of the brain to counterbalance the existing Parkinsonian lesion resulting from the dopaminergic deficit. Unfortunately, complication rates were high and the procedures were literally “hit and miss” with respect to targeting an effective area to make the lesion.

But with increasing recognition of the limitations of drug treatments, and with enormous advances in brain imaging and in surgical targeting, there has been a revival of PD surgery since the 1990s. The most commonly performed surgical treatment now employed does not involve actual lesioning but ongoing electrical stimulation through electrodes surgically implanted deep in the brain and connected to a controller and battery sited under the skin, like a heart pacemaker. This stimulation actually functions in the same way as a lesion – it blocks signals passing through the particular brain area, but the key difference is that it is reversible and can be adjusted to suit the patient and minimise side effects.

The key point about these surgical treatments is that they are not a cure and that they are not innately “better” than medications. Clinically, as well as physiologically, “pain” is not proportional to “gain” – going through a major surgical procedure will not get you permanent symptom relief and freedom from drugs. In fact the main procedure, subthalamic nucleus stimulation, only works in a patient if levodopa also works in that patient. Its role is in providing additive treatment without additive side effects, and a treatment relatively free of dose fluctuations. Thus it is (or should be) mainly used in patients who respond to levodopa but who suffer brittle control and certain side effects.

Neuroprotection in Parkinson’s Disease

No matter how good dopaminergic drugs might be, they are directed at symptom control, not at the underlying condition. Since the 1980s there has been much research on the possibility that existing or new anti-Parkinsonian drugs may in addition have a neuroprotective action – in other words, that they actually protect the nerve cells from the disease process that otherwise results in relentless ongoing degeneration.

A journal discussion on neuroprotection is the subject of a related blog post.

Experimental Treatments

I will not discuss these in detail at the time of writing (January 2013), as by definition they are not the mainstay of management. They include various stem cell lines and stem cell delivery strategies, new dopaminergic and non-dopaminergic drugs, and new delivery systems for existing drugs.

Patients and their relatives often worry that they might somehow be missing out by not having these experimental treatments. Rest assured that if there was a new treatment that was already shown to be fantastic and far better than levodopa, I would be shouting about it as loudly as would any tabloid newspaper!

More Background Information

Other information on PD can be obtained from charitable organisations such as Parkinson’s UK.
