National Health Service History

Geoffrey Rivett



Chapter 4 

1978-1987 - Clinical advance and financial crisis

Chapter contents

 

Medical Progress

Medical education and staffing

Primary Care

Nursing

Hospital Services

NHS restructuring (1984)

References

Management review - Griffiths


Chronology: the fourth decade

1978
Background: Winter of discontent
NHS events: Alma-Ata declaration: Health for All; first test-tube baby; Medical manpower - the next 20 years

1979
Background: Conservative government (1st term); Margaret Thatcher Prime Minister
NHS events: Royal Commission on the NHS reported; Patients first (a priorities document); industrial action

1980
Background: Kennedy’s Reith lectures
NHS events: Black Report on inequalities in health; WHO announce eradication of smallpox; Flowers and LHPC Reports on London; compulsory vocational training for GPs; Körner steering group established on information; magnetic resonance imaging (MRI); Clegg report on nursing pay; Panorama programme on brain death

1981
Background: Humber Bridge opens
NHS events: Care in action (a priorities document); cost improvement programmes; Primary health care in inner London (Acheson)

1982
Background: Falklands War; Barbican Centre opened in London; Tylenol deaths from cyanide sabotage
NHS events: first reported case of AIDS; Körner Reports on information; industrial action; mandatory GP vocational training; NHS restructuring (abolition of areas); Warnock inquiry

1983
Background: Conservative election victory (2nd term); compact discs; seat belts compulsory
NHS events: NHS management enquiry (Griffiths); UKCC established; Mental Health Act; pay review body established for nurses; Binder Hamlyn report on cash limits for family practitioner services completed, but not published

1984
Background: Miners’ strike; British Telecom privatisation; Band Aid concert for Ethiopian famine; Data Protection Act
NHS events: Warnock Report; implementation of general management function; limited prescribing list for GPs; Stanley Royd salmonellosis outbreak

1985
Background: Word processors increasingly common
NHS events: Enthoven’s review of NHS; FPCs gain independent status; WHO (Europe) Targets for health for all; first laparoscopic cholecystectomy

1986
Background: Chernobyl nuclear disaster; British Gas privatisation; stock market ‘big bang’
NHS events: BSE identified in cattle; Project 2000; primary health care Green Paper; Neighbourhood Nursing: Cumberlege Report

1987
Background: Conservative election victory (3rd term); Black Monday stock market crash; King’s Cross tube station fire
NHS events: Promoting better health (White Paper); Achieving a balance (medical manpower); financial crises; Health Education Authority

Thirty years on

The thirtieth anniversary of the NHS in 1978 brought self-congratulatory noises from the Department of Health and Social Security (DHSS).1 The medical profession took a different view and dissociated itself from celebrations. 

In 1948 the NHS may have been an example to the rest of the world, but 30 years later it measures poorly against many alternative methods of providing health care, and its medical and nursing staff are disillusioned and depressed. Yet only ten years ago the same staff were enthusiastic and optimistic. There is nothing wrong with the concept of the NHS . . . What has gone wrong?2 

Two experienced commentators, Sir Francis Avery Jones, a clinician, and Professor Rudolf Klein, an academic, thought NHS reorganisation had put too great a distance between administrators and clinicians, breaking up the partnership and trust between those working in and those running the service. Economists, civil servants and administrators with no recent clinical contact had written three major documents of the previous decade, the Report of the Resource Allocation Working Party, the Priorities document (Priorities for health and personal social services), and The way forward. The combination of an administration remote from practical realities and abrasive labour relations had made the NHS vulnerable to financial stresses, when all over the world medical services had been struggling to reconcile economic stagnation with a period of remarkable technical and pharmacological innovation. A major cause of low morale was the dangerous delays in decision-making that NHS reorganisation had produced. An environment free from internal dissension and outside interference was needed. 

There was increasing scepticism about the idea of an all-embracing welfare state. Not even prosperous economies (and Britain’s was not that) could slake medicine’s insatiable thirst for resources and skilled staff. ‘There has been a lot of wild talk recently about the NHS being in danger of collapse through lack of funds,’ said David Ennals, Secretary of State. ‘The fact is that current spending in real terms has gone up every year under this government.’3 Staff were not convinced. The managerial response to low morale seemed to be to demand better information systems. The cult of arithmetic waxed. The workforce was counted, cash was limited, budgets were set and indicators of performance were calculated. Less attention seemed to be paid to the organisation and development of clinical services. The direct access between staff and management enjoyed pre-reorganisation was sadly missed.4 Large-scale organisations were increasingly seen as out-of-date monuments to the optimistic belief in rational planning that dominated the 1960s and early 1970s. The reorganisations of 1974 and 1982 epitomised the change. In 1974 the emphasis was on the centralisation of planning and the centre could reasonably claim credit for growth. In 1982 the emphasis was on decentralisation of responsibility; governments were well advised to diffuse the blame for bad news.5 

Overshadowing the health service was the financial pressure after the oil crisis. Spending on the NHS had previously grown faster than the economy. In the earlier decades, staff believed that if they did not get the money they wanted one year, they would do so in the next. The reduction of growth in real terms, from 3-4 per cent in earlier decades to less than 1 per cent, now meant that some dreams would never come true. The Labour government had tried to constrain NHS costs by incomes policies and cash limits, and to shift more resources to the care of people who were elderly, mentally ill or mentally handicapped. It proved difficult to change spending patterns at a local level. The search for a new national solution began.

Social change 

Public mood had swung away from unquestioning admiration of science and technology. Ian Kennedy’s Reith Lectures in 1980 were a watershed in public perception of medicine. Kennedy suggested that there should be a new relationship between doctor and patient, with people taking greater responsibility for their lives, challenging the power that doctors exercised.6 Nuclear energy was seen as a threat, the car was evil and jet aircraft were noisy and polluting. Films were concerned with doom, disaster and the paranormal. Much could no longer be taken for granted; violence might be random and meaningless.

On 29 September 1982, 12-year-old Mary Kellerman of Elk Grove Village, Illinois, woke at dawn and went into her parents’ bedroom complaining of a sore throat and a runny nose. Her parents gave her one Extra-Strength Tylenol capsule. At 7 a.m. they found Mary collapsed on the bathroom floor; she was taken to hospital immediately but pronounced dead, and doctors initially suspected a stroke. Elsewhere in the Chicago suburbs a 27-year-old postal worker was found lying on the floor, his breathing laboured, his blood pressure dangerously low and his pupils fixed and dilated. The paramedics rushed him to hospital, but he too died. Firefighters discussing four bizarre deaths noticed that all the victims had taken Tylenol, and a hospital doctor wondered about cyanide poisoning. The police retrieved suspect bottles and a day later it was confirmed that capsules, all from one batch, contained 65 mg of cyanide, some 10,000 times the lethal dose. There were seven deaths, mainly around Chicago. Thirty-one million bottles of Tylenol were recalled, but the factory seemed blameless; bottles on store shelves appeared to have been tampered with. Johnson & Johnson introduced money-off coupons and tamper-proof packaging.7 Public confidence soon returned, but the poisoner was never found. Over the following months there were many copy-cat incidents, some fatal.

Patients and health care were not immune from evil. Tylenol entered the textbooks and became a classic example of industrial – and health care – crisis management. Johnson & Johnson’s top management put customer safety first, ahead of the company’s profit and other financial concerns. 

The BMJ felt that there was a flight from science. Increasingly patients were being treated by alternative medicine: meditation, acupuncture, ginseng and a galaxy of special diets. Much of the appeal of alternative medicine lay in the setting in which it was given. Practitioners gave their patients time, courtesy and individual attention, and they listened. Healing was not necessarily the same as curing, and a compassionate healer who did nothing to arrest the disease process could still relieve symptoms.8 Whatever the merits of alternative medicine, those seeking it were seldom cranks; they were well-informed people seeking a solution to an unresolved long-term problem. They had not lost confidence in conventional medicine. Young doctors showed more interest in the techniques than their seniors.9 Nevertheless, for medicine there was a downside. While the media gave massive publicity to the hazards and side effects of orthodox medicine, the proponents of alternative medicine such as chiropractic did not apply the same standards of proof when assessing their favoured alternatives. The British Medical Association (BMA) Board of Science and Education examined how far it was possible to assess the effectiveness of alternative medicine. Although not totally impossible it was nearly so, partly because with some therapies no two patients were treated alike. While alternative medicine comforted many, and some might be ‘healed’, the responsibility of the medical profession, said the BMA, lay with types of care that could be assessed scientifically.10 

Television was now deeply involved in health service affairs. Since Your life in their hands was first screened in 1958, ever more programmes had been produced, sometimes sensitive and deserving acclaim, occasionally a travesty of medicine. Ian McColl, Professor of Surgery at Guy’s, advised the BBC on request, and the requests were frequent. Most doctors now accepted that the public needed information to form a view of important but undecided medical issues, and to cooperate in treatment. However, producers did not seem to feel that a balanced approach necessarily mattered.11 When Channel 4 was set up the BMJ urged the new programme makers to increase public awareness about the influence of life styles on health, the limitations of what medicine could do and the need to debate medical priorities.12 In October 1980 BBC’s Panorama broadcast a programme on brain death. It centred on four American patients, said to have been declared brain dead, who subsequently recovered. In none of the cases were the criteria for certifying brain death, set out by the Royal Colleges,13 satisfied even approximately. The Director General of the BBC, Ian Trethowan, was told in advance that damage would be done to the renal transplant programme and patients would die as a result. The BBC edited out the comments of British doctors whom they interviewed. The Secretary of State told Parliament it had been a disturbing broadcast and his department had received torn-up donor cards from people worried by what they had seen. In a single night Panorama virtually destroyed trust between television and the medical profession. Transplantation numbers remained static for two years. While the BBC proposed to return to the topic subsequently, it and the Royal Colleges failed to agree on the arrangements for a reply. The BMA and the Colleges held their own press conference in an attempt to allay public anxiety.14 

Rudolf Klein said that it was easy to forget one startling fact. Throughout its history the NHS had enjoyed popular support. The NHS was probably the most popular institution in Britain.15 Its finances might be precarious, its staff on the edge of revolt and its facilities threadbare. Yet whenever pollsters asked the public, four out of five declared themselves satisfied, a figure remarkably steady over the decades. The contrast between public support for the health service and increasing cynicism about other national institutions such as Parliament was striking. However, general satisfaction was combined with specific grievances, such as the organisational routines of hospitals and the personal attitudes of staff. There was also a generational effect. People who grew up in the pre-NHS era had lower expectations. Dissatisfaction was therefore likely to increase.16 

The NHS and the private sector

The introduction of the NHS had greatly reduced the role of private health care, and what little persisted was essentially in the hospital sector. Following Barbara Castle’s forays, however, health care had moved hesitantly and haphazardly towards a mixed economy. The assumption that health care policy could be equated with what was happening in the NHS was no longer valid.17 Private care was performing two main functions. First was the elective (non-emergency) treatment of acute self-limiting illness, paid for predominantly by insurance, mostly in the south where the NHS itself was best funded, and undertaken by consultants also working in the NHS. Secondly, there was the long-term care of elderly people in residential and nursing homes, paid for partly by the individuals concerned but increasingly by social security: private provision of publicly financed care. There was, according to Rudolf Klein, no clear explanation for the growth; was it ‘overspill’, an excess of demand over supply creating a private sector? Were the attractions of private health care making it the preferred pattern when payment was no great problem? Could the blame be laid at Labour’s door, whose wages policy provided an incentive for employers to offer health insurance, and whose assault on private practice had lessened the commitment of some consultants to the NHS? The General Household Survey in 1982 showed that 7 per cent of both sexes had some form of private insurance. The number of operations performed privately was also rising, to 17 per cent in the Oxford region in the early 1980s.18 Previously, private work was often carried out in the evenings or at weekends. Increasingly it was undertaken during the normal working day, nearly all by consultants working for the NHS, which created an awkward relationship seldom found in the commercial world or the public service, although accepted by Bevan from the beginning.19 

Private health care was particularly common in some surgical specialties, such as ophthalmology, heart disease and orthopaedics. Waiting times for an NHS outpatient appointment in these specialties were usually lengthy, and further time was spent waiting for admission. The NHS workload of surgeons who also engaged in private practice varied widely, and the specialties with the longest waiting times were also those with the highest earnings from private practice. Two-thirds of private work was undertaken by 20 per cent of NHS consultants, and doubling their income was comparatively easy in some fields. Many reasons for NHS waiting lists could be quoted. There might be a shortage of consultant staff, although the local surgeons were sometimes loath to see an additional colleague appointed. Shortage of money might mean the curtailment of operating sessions. Often the only way for patients to avoid a long wait was to pay, when the problem disappeared. 

Medical progress

Health promotion and Alma-Ata

Developing countries could not even start to emulate the patterns of health care common in the West. Increasingly they looked to primary health care, the use of semiskilled workers based in the community, and collaboration between different sectors: agriculture, water, sanitation and education. In September 1978 the World Health Organization (WHO) and UNICEF called a conference at Alma-Ata in the USSR. The resulting declaration stressed that primary care was the route to Health for All, which was achievable by the year 2000 and could be attained at affordable cost.20 The definition of health was idealistic: health was a state of complete physical, mental and social wellbeing, not simply the absence of disease or infirmity. The Alma-Ata declaration pointed to unacceptable gross inequality of health status, the right of people to participate in the planning and implementation of their health care, and the need to switch expenditure from armaments and conflicts to social and economic development, of which primary health care was an essential part. Primary health care was not primary medical care; it was far broader. It was universal, based on homes and families rather than clinics, provided according to need, culturally acceptable with an accent on health promotion, housing and education, and involving the community in the planning process. It demanded redistribution of resources, between and within nations, radical change in medical priorities and the passing of power from the professional to the community. The European countries did not immediately recognise the ‘health for all’ movement as relevant to them; they saw it largely as a call to the richer countries to provide greater help to the third world. 

The ‘new’ public health, based on these ideas, was in some respects a rediscovery of old traditions. Previously health promotion had been conducted in an earnest and worthy way. Now it became a mass movement with various schools of thought. The nature and style of health promotion broadened from disease prevention by providing information, and programmes with clear objectives and outcomes that could be measured, to community-based intervention based on alliances and pressure for legislative activity. Some argued that, alongside simple intervention such as immunisation, educating public opinion was essential and legislation would then follow. Others felt that legislative action, regulation and changes in taxation could be introduced irrespective of a public demand for them. Most believed that government, health promotion agencies, the media, educational institutions, local authorities, health authorities and industry all had a role to play, together and individually. Money, coordinated action, programme planning, research and evaluation were needed. Changes in life style were needed, particularly in smoking, diet, exercise, alcohol consumption, sexual activity and behaviour on the roads. Nationwide health promotion strategies were called for.21 Sometimes it took a disaster to shift public attitudes. Restrictions were increasingly placed on smoking in public places after the disastrous fire at King’s Cross underground station in 1987, when 31 people died. 

The Americans produced Health for the year 2000 shortly afterwards. In 1985 the European office of the WHO published Targets for health for all. In 1986 the Ottawa charter for health promotion, the outcome of a joint conference organised by the European regional office of WHO and the Canadian Public Health Association, set out a broad conceptual policy for the direction that health promotion and ‘the new public health’ might take.22 The WHO launched its ‘Healthy Cities’ project in 1986, which aimed to build up a strong lobby for public health at local and city level. Early participants in Britain were Glasgow, Liverpool and the London Borough of Camden. The ‘Healthy Cities’ project reflected the increasing importance of the green movement, and health promotion was becoming increasingly politicised. Should organisations concerned with health promotion continue to restrict themselves to education or be more active in promoting healthy life styles, arguing for changes in society and the social, economic and legislative environment desirable for healthy living? Should health education encompass the socioeconomic factors relating to health?

Richard Wilkinson, a community medicine student at Nottingham, researched the widening social class differences in death rates. His dissertation was published in 1976 and picked up by the media, including the New Statesman. It was seen by David Ennals who, in 1977, commissioned the Black Report on inequalities in health; at the time there was a broad consensus that the welfare state was a good thing, even if worryingly expensive. Published three years later, the Report showed that the association of health and socioeconomic status was not trivial; the standardised mortality rate was more than twice as high in social class V as in social class I. The association was universal. Wherever there was social disparity there was disparity in health, and disparity was to be found in a wide range of conditions from obesity to accident rates, arthritis and stroke.23 By the time of publication in 1980 things had changed and the Conservative government issued the report as duplicated copies of the typescript, without a press conference. The report was allowed to mature undisturbed, although it was updated in 1987 by the Health Education Council.23 By then, government had become unhappy with the HEC. The Health Education Authority (HEA) replaced it in 1987. A body less independent of government, it was given the task of health education on AIDS. 

Strong though British primary health care was, it had not been particularly successful in the incorporation of health promotion. Social workers were seldom integrated into primary health care teams. Priority was not always given to people with the greatest need, and patient participation was rare. Alma-Ata challenged professionals to be more ‘patient-centred’ and government to give higher priority to primary care.24 Nevertheless, save in public health circles, Alma-Ata was barely mentioned throughout the management changes to come. 

The quality and effectiveness of health care

The Maxwell six25

  •  Access to services

  •  Relevance to need

  •  Effectiveness

  •  Equity

  •  Social acceptability

  •  Efficiency

International concern with the rising cost of health care, and increasing awareness that not all treatment was helpful, was leading to closer examination of what professionals were doing. Robert Maxwell at the King’s Fund proposed six criteria that defined health care quality.25 Maxwell’s ‘six’ were drawn in part from American sources, for example the work of Donabedian and the US Joint Commission on Hospital Accreditation. They proved influential in Britain because they encompassed population aspects as well as those relating to individuals. 

Retrospective review had led to the improvement of some forms of care, for example the confidential enquiry into maternal deaths. In 1979/80 the Association of Anaesthetists undertook a study based on over one million operations in five regions, associated with over 6,000 deaths within six days of surgery. The report by J N Lunn and W W Mushin showed that, although anaesthesia was remarkably safe, mistakes and avoidable deaths did occur.26 Trainee anaesthetists might be left unsupervised, and monitoring instruments might be inadequate or not used. In 1982 the Associations of Surgeons and of Anaesthetists set up a confidential enquiry into peri-operative deaths (CEPOD) related to operations in the Northern, South Western and North East Thames regions. Immunity from prosecution was obtained from the DHSS and all deaths that occurred within 30 days of any operation were studied. Probably the most vigorous self-appraisal ever undertaken by the profession, it was financed by the King’s Fund and the Nuffield Provincial Hospitals Trust. There were about 4,000 deaths among 555,000 operations. Widely differing standards of care were found and several problems were apparent. Surgeons might be operating outside their field of expertise, or there might be inappropriate surgery on patients known to be dying. Consultants were not always involved in serious decisions their trainees were making about patients, and some trainees were going far beyond their competence. The quality of hospital notes might be poor. Few deaths were reviewed as a routine. Sometimes elderly and sick patients were subjected to long operations when already in a poor medical condition.27 Two regions began a prospective review of neonatal deaths. 

Walter Holland, at St Thomas’, followed Rutstein in looking at conditions in which it was generally accepted that appropriate and timely intervention could prevent death, and at the variation in mortality in different districts for ten conditions, including cancer of the cervix, tuberculosis, high blood pressure and asthma. Substantial variations that persisted over time were apparent. In one district it was found that the screening process for cancer of the cervix failed to reach high-risk individuals; in another there was failure to follow up abnormalities.28

In the USA workers examined the process of health care, variations in practice and the number of procedures undertaken. John E Wennberg, a US epidemiologist, showed what is now commonplace but was then a surprise: medical care varied greatly from place to place in cost, quality and outcomes. Where patients lived determined how well or badly they were treated, and how much it would cost. Working from Dartmouth in New Hampshire, he showed that apparently similar groups of people in Vermont and Maine were treated for conditions such as enlargement of the prostate, carotid stenosis and coronary artery disease at widely varying rates, and even when far more operations were done there seemed to be no apparent difference in the outcome for the patient.29 Similar variations were found in the UK by Klim McPherson. Wennberg believed that the different rates occurred because different decisions were being made about the need for aggressive treatment. Doctors were not equally well informed, and were motivated by factors other than pure science. Patients were seldom given enough information to make a rational choice and their preferences were not always sought. The more doubt there was about the indications for treatment, the wider the variation from clinician to clinician. Wennberg believed that some procedures were of little worth and that if they were abandoned the increasing cost of health care would lessen, and rationing would probably not be required. 

To underpin decisions on priorities, measurements of outcome rather than process were required. Systems to assess health status were developed in the USA and in the UK. Questionnaires, sometimes self-administered, took account of pain, disability and emotional factors. They could be used on a regular basis to track the effect of clinical care. An economic perspective led to the development of the ‘quality-adjusted life year’ (QALY), which attempted to combine life expectancy and quality of life in a single measure. Devised in the USA by the Office of Technology Assessment, the QALY was popularised in the UK by Williams and Maynard, economists at the University of York.30 A year of healthy life was taken to be worth one; the value was lower if health was poorer or life expectation shorter. It might be possible to cost treatment that changed the QALY and so produce a ‘cost per QALY’. For example, advice to give up smoking was cheap to give and, although comparatively few people took it, enough gave up smoking to generate a substantial benefit. Complex surgery might rate poorly, for the costs were high and life expectancy might not change dramatically. These techniques challenged the clinical freedom to carry out any treatment, however costly and however slim the possibility of success.31 QALYs did not solve the problems facing clinicians. How did one value death, or the quality of life enjoyed by people with widely disparate conditions: needing hip replacement or renal dialysis, or suffering from dementia? At a crude national level QALYs might provide a new insight, but to doctors caring for patients it was like comparing apples and oranges. 
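A rough worked illustration may help to show the arithmetic; the figures below are hypothetical, chosen only to show the principle rather than drawn from the York work. Suppose a treatment costing £6,000 gives a patient four extra years of life at a quality weighting of 0.75:

$$\text{QALYs gained} = \text{extra years of life} \times \text{quality weighting} = 4 \times 0.75 = 3$$

$$\text{cost per QALY} = \frac{\text{cost of treatment}}{\text{QALYs gained}} = \frac{\pounds 6{,}000}{3} = \pounds 2{,}000$$

On this reckoning a cheap intervention producing even a modest gain spread across many people, such as anti-smoking advice, could compare very favourably with expensive surgery that added little to survival or to quality of life.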

Robert Brook, Medical Director of the Rand Corporation in the USA, described the appropriateness of clinical practice as ‘the next frontier’ in clinical development.32 The Rand Corporation had long been interested in whether different patterns of health care organisation, or different forms of treatment, improved patients’ health.33 Everyone agreed that new drugs should be tested before their introduction. A similar consensus developed over surgical procedures. The phrases ‘health technology’ and ‘technology assessment’ were coined to cover new types of treatment and their scientific assessment. David Eddy, of Duke University, wrote about the creation of clinical guidelines.34 Paul Ellwood’s consultancy firm, Interstudy, developed questionnaires on patient health status. In his Shattuck Lecture on ‘outcomes management’ in 1988 he brought these ideas together.35 Guidelines, outcome management and evidence-based medicine (as the concept later became known in the UK) were much the same idea. As health costs rose, consumer groups became more powerful and widely varying patterns of practice persisted, could management remain on the sidelines? In the USA audit and quality assurance were generally introduced by management and backed by sanctions. When so much was being spent on care that was of doubtful efficacy, management had an incentive to examine the processes and the outcomes. This approach was not to the liking of the British medical profession, which preferred an educational approach. Government chose to keep out of the professional minefield. In 1948 the profession had been given an assurance that it would be free from outside intervention in clinical work, and British doctors were cautious about medical audit with its implied threat to clinical freedom. If the professionals wanted no outside interference, said the BMJ, would they ensure that patients had no need to be concerned about the quality of care? Jargon obscured the simple idea that doctors should look at their day-to-day work to see if they could improve it.36 

Regular clinical review of routine work was not regarded as part of the day-to-day activity of a doctor. Don Berwick, who ran quality assurance at the Harvard Community Health Plan, a health maintenance organisation (HMO)* in Boston, argued that it should be, and that clinicians should be educated and encouraged, not policed.37 In the USA the Agency for Health Care Policy and Research (AHCPR), an agency within the US Public Health Service, was well financed to develop a wide-ranging programme of evaluative research to produce treatment guidelines and stimulate research on the effectiveness of established treatment. Well-established operations, such as transurethral resection of the prostate, might have a complication and re-operation rate far higher than had been thought. Priority was therefore given to major problems common in health services, which involved many people and cost much money, rather than rare conditions at the forefront of medicine. 

[*HMOs, a US system of health care delivery first emerging in the second world war but becoming popular in the 1980s, were increasingly seen as an interesting organisational development. They provided an integrated health service for ‘members’, usually on a local basis and financed through capitation payments. Varying in pattern, they might own their own facilities or contract for them. They aimed to offer quality care more cheaply by restricting the choice of doctors, providing secondary care only in selected hospitals, encouraging clinical guidelines and sometimes placing an accent on primary health care. They competed with each other and with fee-for-service medicine. ]

The drug treatment of disease

Increasingly, new drugs were produced by techniques that manipulated DNA. The first drug for human use produced by genetic engineering, human insulin, reached the market in 1982. New drugs were often designed to act on DNA or intercalate with it. Interferon, initially discovered in 1957 as a protein that interfered with viral infection, was the focus of much research. It proved to be a group of compounds with several varieties (alpha, beta and gamma) that were produced in small quantities by recombinant methods. Although they caused regression in some types of tumours, their side effects limited the dose that could be given and interferon never was to cancer what penicillin had been to bacterial infection.38 A new antiviral drug of remarkably low toxicity, acyclovir, was introduced in the early 1980s, active against the herpes simplex virus that causes cold sores and the varicella-zoster virus. It was immediately applied to eye infections, cold sores and viral encephalitis.39 The pharmaceutical industry undertook less work on cardiovascular drugs, where there had previously been great activity, to concentrate on cancer chemotherapy. 

Diabetic control was improved by the introduction, in the late 1970s, of self-monitoring of blood glucose. It allowed patients to make spot checks before driving, exercising or sleeping, and enabled them to build up a profile of their blood glucose concentrations to establish the best insulin dose. Combined with continuous subcutaneous infusion or multiple daily injections, almost normal levels of blood glucose could be achieved.40 In dermatology the outlook for patients with psoriasis was improved by the introduction of ultraviolet light in combination with a skin sensitiser, and for those with acne by retinoid drugs derived from vitamin A. 

The relief of pain had long been part of a doctor’s role. However, patients’ analgesic requirements differed and the dosage had to be adjusted to match individuals. New forms of equipment such as infusion pumps allowed patients to control their own pain and proved to be safe and effective when used for postoperative and obstetric pain, coronary pain and pain in terminal disease. No longer did patients in discomfort have to wait until a doctor or nurse had time to ask the necessary questions and decide whether another dose was required.41 

The popularity of oral contraceptives peaked in the mid-1970s when about 3 million women were using them. Then usage fell a little, as women became aware of clinical studies showing complications and occasional deaths from thrombo-embolism. Lower-dosage pills restored some of their popularity. Increasingly, people turned to sterilisation, in particular vasectomy, as a safe and effective alternative.42 

In the 1960s the benzodiazepines had replaced barbiturates in the symptomatic treatment of minor neuroses and anxiety states. The public and the profession embraced them with enthusiasm, and consumption continued to increase during the 1970s. Then it was noticed that some patients tended to ask for them for unduly long periods and they were shown unequivocally to produce pharmacological dependence. It was accepted that they were usually unsuitable for anything more than short-term use and their use began to decline.43 

The establishment of the Committee on Safety of Drugs in 1962 had helped to keep unsafe drugs off the market. Sometimes, however, because adverse reactions were uncommon they became apparent only when drugs were in wide use. Opren (benoxaprofen) was an example. There were concerns about its safety from an early date. The manufacturers, however, promoted it as a useful drug in the rheumatic diseases: it suppressed inflammation and was effective in controlling symptoms when given only once a day. In 1982 eight elderly women were reported to have developed jaundice, and six died. At that time over 500,000 people in Britain had taken it. Other reports of adverse reactions followed and the drug was quickly withdrawn. The BBC programme Panorama suggested that the company had made deceptive claims, obscured important information and had not acted quickly enough when the drug evidently caused problems; faults in the approval procedure were uncovered.44 The need for post-marketing surveillance was becoming clear. But this was costly and difficult: how did one establish a control group? For which drugs would it be most important? Perhaps those for disorders that were not life-threatening and for which reasonably safe alternatives were already available.45 Not only might an individual drug have side effects, but there was also a danger of interaction between powerful remedies.46 Two drugs might alter each other’s absorption, metabolism or excretion. Drugs might be additive, and be potentiated by alcohol. So complex were the interactions that wall charts, cardboard slide-rules and computer systems were developed to alert the doctor or pharmacist to dangers. 

Radiology and diagnostic imaging 

Much of the achievement of high technology radiology was the result of advances in microprocessors and processing power. CT scanning revolutionised investigative practice, improved diagnostic accuracy and rapidly became the method of choice for imaging the brain. A new phenomenon originally identified in 1945, nuclear magnetic resonance, was also applied to imaging. It did not use ionising radiation but strong magnetic fields and radiofrequency pulses. The hydrogen protons of water and fat were imaged, their concentration and settling down behaviour when stimulated determining the contrast of the images. Computing systems, central to the display of images, had already been developed for CT scanners, and magnetic resonance imaging (MRI) could piggyback on the technology. Workers in Nottingham and Aberdeen, aided by EMI, showed its potential and in 1978 a contract was placed for the development of the first serious clinical instrument. It was installed at the Hammersmith Hospital in 1980, at a time when EMI was seeking to leave the field of imaging. Unlike CT scanning, which was immediately successful, there were many teething problems with MRI and a phase of disillusion in the UK, if not in the USA. The resolution of the images was poor, there were problems with contrast so that some tumours could not be seen, and the speed of the scans was in no way comparable with CT scanning. There were even doubts as to its safety, for example in epilepsy. However, in 1981 the first patient studies began and the first series of patients was published from the Hammersmith in 1982. After that, development was rapid. Many workers contributed to the success of MRI and there is no doubt that the British teams were the first to produce good usable pictures. They established the basic principles, which have changed little over the years. In 1983 there were five clinical MRI systems in the world, of which four were in Britain.47 The technique was non-invasive, doing patients no harm although the noise and the isolation in the scanner were found by some people to be frightening. Development of MRI depended largely on improvements in the technology of powerful magnets and was incredibly rapid. Scans were soon at least comparable with the quality of CT images. The new system excelled in the head and spine, distinguishing the brain’s white and grey matter better than any previous system and improving diagnostic accuracy. Varying the pulse sequence enabled blood vessels to be displayed. MRI was clearly destined to be a further ‘quantum leap’.48 In joint disease it seemed likely to replace arthroscopy and arthrography. There was an expectation that CT scanning would be replaced by MRI, but CT scanning itself improved and provided much faster, simple and reliable images all over the body with less error from movement. 

Another new approach, positron emission tomography (PET), used radioactive atoms that emitted positrons. These could be introduced into compounds such as the sugars that are metabolised by the body, and injected into the blood stream. The PET scanner could then measure the gamma rays being emitted, and create an image of the tissues and the chemical changes that were taking place. This technique was rapidly applied to the study of brain disorders. 

By 1980 the application of computing to digitised images was changing the face of diagnostic imaging. In 1985 Professor David Allison, at the Hammersmith, knowing of experimental work elsewhere, raised the possibility of creating a film-less department of imaging. He began to interest the DHSS, charitable trusts and manufacturers in the idea. 

Infectious disease 

Between 1983 and 1985 the DHSS reviewed the Public Health Laboratory Service (PHLS), an essential part of the country’s protection against infectious disease. The review recommended that the responsibility for the administration and funding of peripheral laboratories be passed to health authorities. The government consulted on the recommendation and accepted the arguments in favour of an integrated laboratory and epidemiological network as a protection for the public health. 

Emerging diseases 

The pattern of infectious diseases was changing. Traditional diseases such as diphtheria, poliomyelitis and smallpox were less common or had even disappeared. Brucellosis, dysentery, measles, tetanus and tuberculosis had also declined.49 Others were emerging. Pathogens such as Campylobacter, Cryptosporidium, enteropathogenic E. coli and Norwalk-like viruses (responsible for winter vomiting disease) had new opportunities. Giardiasis producing diarrhoea, and the emergence of typhoid strains showing resistance to antibiotics, added to the problems.50 A worldwide perspective had to be taken. Even cholera might on occasion reappear in the UK, after an absence of many years. Nowhere was further away than a 36-hour flight, as business and leisure travel increased. Demographic patterns influenced infectious disease, with rapidly expanding, young urbanising populations in the developing countries, and ageing ones in the West. There were serious public and political concerns about food-borne and waterborne disease. Innovative ways of processing food introduced new hazards. In the past much of a nation’s food had been produced locally; now, with an open market in food manufacture, faults in one country could lead to outbreaks throughout Europe. An outbreak of Salmonella poisoning in 1982 was traced to small chocolate bars imported from Italy. After a public warning, 3 million bars were recalled and the outbreak quickly came to an end. Another Salmonella outbreak in 1987 was traced to small sticks of German salami, popular with children.51 The cost of such outbreaks was considerable. Poultry was becoming an increasingly common food, but birds reared intensively were readily infected with Salmonella. In the mid-1980s a particular strain, Salmonella enteritidis phage type 4, acquired the ability to pass through the hen to infect the developing egg. Human infections with this strain increased rapidly. 

In the early 1980s cross-infection in hospital by methicillin-resistant Staphylococcus aureus (MRSA) was becoming increasingly serious. New strains were resistant to many antibiotics and revealed an increased ability to spread within and between hospitals. In August 1984 an outbreak of salmonella food poisoning at Stanley Royd Hospital, a large psychogeriatric hospital in Wakefield, Yorkshire, claimed the lives of 19 patients and led to a public enquiry.  Investigations showed the poor quality of hygiene in hospital kitchens, a widespread problem often known to management but sometimes not remedied. New kitchens were expensive, there were other priorities, and the NHS could claim Crown immunity, to the irritation of environmental health inspectors. The Stanley Royd outbreak also revealed management failures and the lack of anyone clearly identified as responsible for the handling of outbreaks.52 

In the mid-1980s the incidence of meningococcal septicaemia rose in the UK and, although fluctuating year on year, remained high. Chiefly affecting the young, it had a fatality rate of around 10 per cent; there were 1,000 to 2,000 cases annually, of whom 150 to 200 might die. People no longer expected a healthy child to sicken and die rapidly from an infection, and cases attracted national publicity. Legionnaires’ disease became better recognised, the infecting organism was identified, and cases were now regularly reported. One outbreak in 1981, occurring among men working on a power station site, was traced to a water system in a cooling tower. A larger one in April 1985 occurred at the Stafford District General Hospital (DGH). It affected 101 patients, 28 of whom died. Again it was related to the design of the air-conditioning system and shortcomings in the maintenance of water-spray cooling systems. In November 1986 a new disease in cattle, bovine spongiform encephalopathy (BSE), was identified by the Central Veterinary Laboratory. The first case was found in the herd of West Sussex farmer Peter Stent, who had contacted vets after he found one of his cows behaving in an abnormal way. It was not clear whether it was transmissible. The conclusion was that BSE was a ‘prion’ disease like scrapie and that it could have been caused by infected animal carcasses or offal processed into cattle feed. Epidemiological studies were begun. 

Malaria 

In the 1970s mosquitoes were becoming insecticide-resistant and the number of cases of malaria rose substantially. In 1980 there were 1,670 cases with nine deaths, a number that fell slightly with improving mosquito control in the Indian subcontinent. Usually the disease appeared in travellers within a month of their return to the UK, but occasionally the delay would be much longer. The tropical disease hospitals in Liverpool and London saw few cases in the early stages, and GPs, faced by patients with an unexplained fever, were sometimes slow to make the diagnosis, particularly if malarial prophylaxis had been taken.53 

Sexually transmitted disease and AIDS 

The incidence of gonorrhoea fell steadily during the decade, more than halving to about 20,000 cases. The decline in syphilis was even steeper, from more than 2,500 to fewer than 200 cases. Cases of herpes infection, Chlamydia and genital warts increased in number and, until the arrival of AIDS, were the major cause for public concern. Genital herpes, in particular, received enormous attention. Emotive articles in the press suggested that herpes sounded the death knell of an individual’s sex life.54 The increasing size of the problem, the incurable and untreatable nature of the condition, neonatal infection and the association with carcinoma of the cervix were all discussed. 

AIDS

The disease of the decade was acquired immune deficiency syndrome (AIDS).55 It was unusual in that from the outset it was highly politicised, and the policies adopted owed much to the activities of those initially most affected, the gay community, and its network of friendships. An uncommon form of pneumonia in five homosexual men was reported in the USA by the Centers for Disease Control in 1981, rapidly followed by reports of cases in the UK and an increase in a previously rare form of cancer, Kaposi’s sarcoma. As experience was gained, it was appreciated that infection was seldom diagnosed immediately. Many had poor immunological resistance to infection before developing severe illness. AIDS was part of a spectrum of disease and produced a wide range of symptoms, including neurological defects. The gay community in the UK learned rapidly from the experience of friends in the USA, organised itself to obtain government help, spread the message about reducing the number of partners and safer sexual behaviour, and developed systems to support sufferers. In 1982 the Communicable Disease Surveillance Centre (CDSC) began to monitor death certificates. From 1983 other groups were recognised as at risk: recipients of blood transfusion, intravenous drug abusers, people with haemophilia, Haitians and children of infected parents. The cause was not known until 1983, when the human immunodeficiency virus (HIV) was identified in France. BBC TV’s Horizon ran a programme about the problem in New York, but in contrast to the size of the epidemic in the USA there were only 15 cases in the UK in 1983 and 74 in 1984. Lacking effective treatment, public health measures were the only way the spread of infection might be reduced. There was no evidence that the virus was spread by casual or social contact, but the abandonment of promiscuity, homosexuality and drug abuse, while it might have been effective, hardly seemed a practical control measure.56 

The safety of blood and blood products became an issue, and perhaps the worst treatment disaster in the history of the NHS. Initially there was scepticism about the extent of the risk. Many blood products were imported from the USA from commercial organisations that recruited paid donors, some in prison and others drug users: potentially infected populations. People with haemophilia, who had gained greatly from treatment with Factor VIII, were now afraid not only of AIDS but also that withdrawal of treatment could take them back to the early 1960s, when the disease produced joint damage and pain, and greatly shortened life expectancy. From 1984 the UK improved its own blood product preparation, heat treatment was used to eliminate transmission from blood products, and from October 1985 transfusion centres routinely screened donors for HIV. By this time, however, thousands of people had been infected; a later report (by Lord Archer) put the number of haemophiliacs infected with hepatitis C at 4,620, of whom 1,243 were also infected with HIV. Within 20 years almost 2,000 had died. Many campaigned to discover whether Government had been too slow to react, and whether Government, though acting in good faith, had underestimated what was later seen as a high risk. Other countries, for example Canada and Ireland, had similar problems but instituted compensation far more rapidly.

The early recognition that people from Haiti were frequently sick was later followed by the realisation that the disease was frequent in the Congo, and that after the Belgian withdrawal from that country technicians from Haiti had often been recruited. They, like aid workers, were often young and sexually active.

A test for the virus was developed in 1984. It was discovered that there was a high incidence of AIDS in Central Africa, and people who had been sick in the late 1970s could be identified retrospectively as suffering from AIDS.57 The test was used to screen blood donations in 1985. Not until the middle of that year were heat-treated, and therefore safer, blood products available. In the early 1980s large quantities of cheap heroin arrived in Edinburgh. The police arrested drug dealers and confiscated needles; the result was that drug abusers simply shared needles and by 1985 half the drug abusers tested were HIV positive. An outbreak among adolescents at a school near Edinburgh, in 1984/5, showed how appalling were the consequences for these young patients and the babies a few of them bore. AIDS was a catalyst in refocusing drug abuse policy on minimising harm. The homosexual and the drug-using cultures were different, although some links were formed. 

AIDS involved just about every contentious aspect of human behaviour and, given the voyeurism of the press, individuals would be regarded as unimportant compared with the story that they could tell. Merely by contracting the disease, against their own wishes, the early cases might become public figures about whom the press felt people had a right to know. Anxiety, even hysteria, came to surround the disease as sufferers lost their jobs, were evicted from housing, children of patients were expelled from their schools and one with AIDS as a result of treatment for haemophilia became the centre of media attention.58 The press covered cases in which doctors with AIDS continued at work and guidelines were developed by the medical profession and the DHSS to safeguard the public. An injunction was granted against the News of the World, banning it from revealing the identity of two doctors undergoing treatment.59 Much that was learnt about the disease was the result of open discussion with gay men. When these were patients it was essential to maintain confidentiality, even though it meant narrowing the number of professionals with knowledge of individual cases. Confidentiality might be seen not only as a personal issue but also as a public health one: only by safeguarding confidentiality could essential information on the epidemiology of AIDS be obtained. 

Between 1981 and 1985 policy was developed from below; little was known about the disease and most of that came from the gay press, gay men and patients. Key people in the UK ensured that the disease had less impact in the UK than in many places, in particular the CMO Donald Acheson, who ate and slept AIDS from 1985 onwards. He was central to the process of listening to the clinical specialties involved and the gay community, developing their ideas and relaying them to ministers, forcing them to take it seriously. By 1984 Acheson was referring to AIDS as the greatest challenge in communicable disease for many decades. He established and oversaw an Expert Advisory Group on AIDS, ensuring that government had the best advice available and could move rapidly when prepared to do so. By 1985/6 AIDS was generally recognised as a major issue and collective fear developed. Nobody knew what would happen next, and what clinical or ethical problems would emerge. Some believed that AIDS should be treated like other grave communicable diseases, for example by notification, a view opposed on the ground that this would prevent sufferers from seeking help. Acheson used TV and the circulation of 20 million leaflets to minimise harm, preaching safe sex rather than no sex. Gay pressure groups painted AIDS as a human rights issue. While doctors regularly tested patients for other diseases without fully discussing all possibilities, patient consent here was necessary and pre-test counselling became almost mandatory. Informing sexual partners of infectivity was left to the patient. Prevalence studies were more difficult to mount, for epidemiology could not be conducted without raising the human rights issue. Important issues were also raised for the blood transfusion service. 

The number of patients, who mostly lived in London, was in no way comparable to those in the USA or Africa. Over the first few years the number of cases doubled every year, but by the end of the decade the numbers were rising less fast. By September 1986 more than 500 cases had been reported in Britain compared with 30,000 in the USA, where many were children, the offspring of drug-abusing parents. The pattern of the epidemic, not just in the UK but worldwide, was determined in part by sexual habits, the numbers of contacts and the prevalence of the disease among particular groups. In Africa it crossed rapidly into the heterosexual population; in the UK it did so to a far smaller extent. A liberal and scientific consensus developed. One early result was a government publicity campaign on ‘safe sex’ in explicit terms that Whitehall would not normally contemplate. In 1986/7 the government, strongly urged by Donald Acheson, Kenneth Stowe (the Permanent Secretary) and Robert Armstrong (Cabinet Secretary), launched a major, sustained and consistent publicity campaign, the TV adverts using a tombstone theme and subsequently an iceberg. The message concentrated on minimising risk, the danger of ignorance and the fact that AIDS could affect everyone. The gay community was not directly targeted, in part to avoid increasing public feeling that AIDS was a ‘gay plague’ and in part because this was the advice of the advertising agency. The following year the campaign concentrated on the danger to drug users of sharing syringes, and was deliberately designed to shock. One poster showed a body in a plastic bag; another a bloodstained syringe. TV companies were encouraged to make their own documentaries, widening the information available to the public and dealing with questions such as needle exchange schemes for drug abusers. The Daily Telegraph asked its readers for indulgence, saying that by its nature the epidemic could only be discussed and countered in terms more explicit than normal.60 HRH The Princess of Wales took a personal interest in people who were dying, and by her presence reduced the fears that normal social contact was risky. The first major breakthrough in treatment came in 1986 when Wellcome introduced zidovudine (Retrovir). Trials showed that it prolonged life, stopped weight loss and increased the wellbeing of patients with AIDS. Drug treatment, like the disease itself, became a political issue. 

Genetic medicine 

Knowledge in clinical genetics exploded as it began to be possible to map the fine structure of human genes. How genes controlled the structure of a single protein could be defined in molecular terms. Monoclonal antibodies, discovered in the mid-1970s, were specific for one antigen and produced from a pure single-cell culture line. They revolutionised the study of immunity and opened the possibility of many new diagnostic tests, and perhaps even therapy. Genetic disorders accounted for a substantial fraction of human disease. Single-gene defects such as Huntington’s chorea, cystic fibrosis, phenylketonuria, thalassaemia and haemophilia were rare, but severe in their effects. Chromosomal abnormalities such as Down’s syndrome were more common, and there were even more conditions such as spina bifida and congenital heart disease with a genetic component. The first practical application was prenatal diagnosis for congenital and genetic defects. Methods included amniocentesis, visual examination of the fetus by endoscopy, measurement of alpha-fetoprotein in maternal serum and removal of placental tissue for examination (chorionic villus sampling).61 Diagnoses as early as 8-10 weeks made it possible to consider the likely outcome when deciding whether a pregnancy should be terminated. There was a tantalising possibility of replacing a missing enzyme or a defective gene, for example by destroying the bone marrow by irradiation and replacing it with healthy marrow from a compatible relative.62 

Gastroenterology 

Technology and pharmacy drove developments in gastroenterology. Video-chip cameras and better endoscopes made the assessment of stomach, duodenal and colonic disease swifter and easier for both doctor and patient. The diagnosis and treatment of benign tumours of the colon that might later become malignant could be carried out on a day-patient basis. There was resistance to the new technology among the older specialists, and sizeable endoscopic units were slower to develop in the UK than in other countries. 

From the 1950s onwards, the number of deaths and admissions for gastric and duodenal ulcers had been falling. Changes were taking place in what had been one of the commonest causes of admission to hospital. It was hard to know whether diagnosis was now more accurate, treatment better, or changing social conditions and diet responsible. H2 antagonists relieved symptoms so effectively that some people were given them as a ‘diagnostic test’: if a patient’s condition improved after taking an H2 antagonist, radiology or endoscopy was considered unnecessary.63 Then workers in Perth suggested that peptic ulcer was an infectious disease caused by bacteria. In 1983 Robin Warren (a pathologist) and Barry Marshall (a physician) reported the presence of bacteria in the stomach wall and suggested that there might be a causal link between them and peptic ulcer, gastric cancer and other bowel diseases. When the theory was presented at a conference in Brussels it was regarded as preposterous, and Marshall gained a dubious notoriety. Scientists set out to prove him wrong, and could not do so. Believing that antibiotics might be capable of curing the infection, Marshall swallowed the bacteria himself, rapidly becoming sick.64 

Inflammatory bowel disease, such as chronic Crohn’s disease, could be treated by artificial nutrition, which varied from supplementing the normal diet to intravenous feeding and could improve the general state of health substantially.65 

Surgical workload and surgical progress

Top 20 general surgical and urological operations in 1978 

Operation                         Number
Appendicectomy                    70,480
Inguinal hernia                   63,650
Benign breast disease             37,100
Cholecystectomy                   36,310
All anal operations               35,160
Cystoscopy                        30,620
Varicose veins                    26,880
Malignant skin lesion             25,330
Circumcision                      21,920
Prostatectomy                     17,420
Mastectomy                        14,670
Orchidopexy                       11,580
Colectomy                         10,570
Rectal carcinoma                   9,240
Thyroidectomy                      8,500
Vagotomy                           8,280
Hydrocele                          5,730
Femoral hernia                     5,720
Amputation of leg                  4,250
De-functioning colostomy           3,940

Source: BMJ 1983.66  

Since the start of the NHS, surgery had been dividing into ever more subspecialties, but in district hospitals ‘general surgery’ remained central to surgical activity. The most commonly performed operations were long established. Roughly 645,000 general surgical operations were performed during 1978 in England and Wales.66 Sometimes, subspecialty expertise offered patients a substantially better outcome. For example, one man in ten would eventually need an operation for benign enlargement of the prostate: 80 per cent were done by general surgeons, who used a major abdominal procedure. There was an alternative method, resection by an instrument passed up the urethra, virtually painless, needing half the time in hospital and with a mortality less than half that of the open techniques. The argument for the specialist urological surgeon was now clear, although the technique was not easy to learn and it was no procedure for the occasional operator.67 

Day surgery had long been encouraged, to provide good care at less cost. Day wards did not need to be staffed at night or at weekends; a shorter time in hospital meant that more patients could be treated if theatres were available; and there were savings on ‘hotel facilities’. Its practicability had been established and many hospitals had excellent day surgical units. Yet even in hospitals committed to day surgery, the full potential was seldom exploited. In Southampton it was estimated that, excepting cardiac and neurological surgery, every surgical specialty needed day surgical facilities and that the proportion of cases suitable lay between 40 and 80 per cent.68 

Minimal access surgery 

Minimal access surgery (colloquially called ‘keyhole surgery’) was a major advance applicable to many of the more common procedures. It was the result of spectacular developments in the technology of operating instruments.69 Operative mortality and morbidity had been accepted as unavoidable for 150 years, but in the early 1980s it became apparent that less invasive methods could reduce complications and risks: reducing surgical trauma reduced morbidity and mortality. The first laparoscopic cholecystectomy was performed in 1985. Urologists had been in the forefront with transurethral prostatic resection. Between 1979 and 1983 there were radical changes in the treatment of kidney stones. First came their removal through tiny 1 cm tracks from the body surface, percutaneous nephrolithotomy.70 Secondly there was extracorporeal shockwave lithotripsy, a completely new form of treatment, developed by the German engineering firm Dornier. Focused shock waves, either sound or electrical, were passed through soft tissue to break the kidney stone into fragments. These then passed along the natural urinary passages to the outside. The first UK machine was installed in London at the Devonshire Hospital lithotripter centre in 1984. In the first 50 patients treated the average length of stay was 3.7 days; within two years 1,000 patients had been treated, with a high success rate.71 

Endoscopic appendicectomy had been performed in Germany and medical gastroscopists and colonoscopists were rapidly relieving surgeons of the responsibility of treating ulcers and polyps. Vascular surgeons were doing endoscopic endarterectomies, using lasers to treat coronary artery obstruction. Orthopaedic surgeons were undertaking intra-articular operations of the knee and many other joints. Neurosurgeons, ENT surgeons and gynaecologists were also adopting the new techniques. With the development of CT scanning and ultrasound, endoscopes could be passed into the bile ducts making it possible to deal with stones even in elderly and medically sick people. Some types of obstructive jaundice could also be treated.72 By 1987 it seemed possible to predict the elegant and less traumatic way in which surgery would develop in the next decade.73 How dangerous the techniques might be in unskilled hands was not, at first, appreciated. 

Microsurgery made possible the successful reattachment of an amputated limb. At first the scope of the technique was limited by the size of the blood vessels that could be joined reliably. Developments in optical technology, micro-instruments, sutures and needles made it possible for surgeons to join small vessels and nerves, so that finger reattachment became practicable. Internal fixation would stabilise bones, joints might need repair, vessels and nerves were joined, and adequate skin cover obtained. Such surgery was extremely demanding of time and practicable only in specialised units. The younger the patient, the better the result.74 

Orthopaedics and trauma

Although fitting seat belts to new cars was compulsory, people were under no obligation to use them. The evidence that belts would save lives was not seriously questioned, but there were objections on grounds of civil liberty. From 1983, however, wearing belts became compulsory. All regions now had major accident plans. The absence of such plans in the early years of the health service had been responsible for confusion at the time of the Harrow and Lewisham crashes. Regular training exercises paid dividends at Manchester airport in 1985: a fire occurred on a Boeing 737 at take-off, with 137 people on board. Toxic smoke inhalation was a major problem; 52 died on the aircraft and 85 escaped. Wythenshawe Hospital was rapidly warned, the consultant in charge was there within minutes, triage began and as patients arrived they were handled systematically.75 

The growing need for surgical treatment of fractured neck of the femur and for arthritis of the hip, reaching almost epidemic proportions, overwhelmed the wards allocated to the trauma service and spilled into the beds needed for general surgery and elective orthopaedics. By 1987 total hip replacements for arthritis numbered 35,000 per year, and total knee replacements 10,000. In the 1950s there had been a substantial failure rate with total hip replacement, but by the 1970s it was recognised as one of the outstanding surgical successes of the previous 20 years.76 Charnley’s own cases, now counted in their thousands, showed that less than 1 per cent a year needed revision because of loosening. Acrylic cement used to glue the new head into the shaft of the femur was well accepted. Operative complications - infection and pulmonary embolism - were few and patients were discharged in days rather than weeks. Initially most patients were elderly and many had to wait; the first operation lasted them all their lives. Increasingly, however, younger and more energetic patients were operated on, and in them the revision rate within five years might be as high as 25 per cent. A repeat operation took twice as long and patients required longer in bed afterwards. More than 100 different patterns of hip replacement became available, varying widely in price. Although the Charnley hip was one of the first to be used in large numbers, after 25 years none of the newer designs had been shown to match it.77 Even so, the high molecular weight polyethylene of the Charnley cups wore slowly, at about 0.1 mm per year; revision operations were still necessary and the cement used to fix the components was suspected.78 Other methods of fixation were tried, for example porous metal components into which bone cells might grow, and prostheses coated with bone salts before implantation. Then the possibility was suggested that the particles produced by polyethylene wear might be to blame. Some of McKee’s original metal-on-metal hip replacements were still giving good service and showing little sign of wear after many years. Surgeons began to experiment again with metal components, now excellently engineered.

Arthroscopy became increasingly important in orthopaedics, particularly for disorders of the knee. Minimal access surgery was applied to the treatment of lumbar disc prolapse. Using an operating microscope, the surgeon could deal with the disc through a 2 cm incision and see inside the disc space. The operation could be completed in half an hour, and most patients could leave hospital within two or three days instead of two or three weeks.79 

Cardiology and cardiac surgery 

Cardiac ultrasound began to be a useful clinical tool in the mid-1970s. The development in the 1980s of cross-sectional echocardiography revolutionised non-invasive diagnosis, particularly for congenital heart disease, and was capable of supplanting cardiac catheterisation for most purposes apart from coronary angiography. It produced good anatomical images, and the introduction of pulsed, continuous wave and colour Doppler flow mapping improved the knowledge of heart function, providing accurate spatial information about the velocity of blood flow within the heart and major vessels.80 

It was clear that much heart disease could be traced to smoking. Advances in therapy were merely repairing the effects of a preventable disease. The Royal College of Physicians (RCP) refined estimates of the relationship of diseases to smoking: smoking was causing 100,000 deaths annually, and a third of deaths in middle age.81 Cigarette smoking fell steadily in the 1970s and early 1980s, especially in men; for example, the proportion of adults who smoked fell from 51 per cent of men and 41 per cent of women in 1974 to 36 and 32 per cent, respectively, in 1984. Thereafter, though, the decline became slower, particularly in younger adults. A clear relationship existed between price and consumption. Passive smoking also appeared to increase the risk of disease. Pressure for the introduction of widespread screening of blood cholesterol levels was, however, resisted by the Standing Medical Advisory Committee: evidence did not exist to justify the cost of screening in terms of any benefits that might result. 

The treatment of angina improved with the introduction of beta-blocking drugs (e.g. propranolol), which increased the capacity for pain-free exercise. An important new class of drugs, the calcium channel-blocking agents, was introduced for angina. These drugs reduced the strength of cardiac muscle contraction and the work the heart did, altering heart rhythm and dilating blood vessels. They were soon used for abnormal cardiac rhythms and for high blood pressure as well.82 

The ability to resuscitate people who were suddenly and severely ill, provided that breathing and circulation could be maintained, lay behind the creation of a new discipline, the paramedics.83 There was no evidence that doctors were any better at preserving life in these emergencies. The US city of Seattle was early to develop the idea, basing paramedics with the fire service because, by its nature, a fire service has few routine commitments and fire stations are well distributed. The Seattle paramedics received 1,600 hours’ training and could administer any of the 40 drugs they carried. Because roughly a third of the community was also trained in cardiopulmonary resuscitation, the efficacy of the whole service was increased. In Belfast and Brighton coronary ambulances had been introduced, and it was shown that, with advanced life support skills, many victims of heart attacks could be saved.84 Hampton, in Nottingham, was critical of services such as that in Belfast in which doctors were used. If a GP had been called first, an accurate diagnosis would be made but time would have passed; by then the most dangerous moments were over and the risk of death before admission to hospital was low, so Hampton thought a mobile unit unnecessary. Progress could not be made on the basis of a few special vehicles. A complete restructuring of the ambulance service was needed, separating the 10 per cent of real emergency work from the 90 per cent that was no more lifesaving than a good taxi or bus service. He believed that all emergency vehicles should carry a defibrillator and be staffed by crews who had received advanced training.85 Some ambulance services, such as those in Nottingham, introduced this for some of their crews, and in 1984 a national training programme for paramedics was adopted on government recommendation.86 Groups of GPs also banded together, equipped themselves properly and worked with the ambulance service to provide early help and resuscitation to road accident victims (BASICS, the British Association of Immediate Care Schemes). 

In the early 1960s, pilot schemes had shown that fibrinolytics (clot-busters), though expensive, could be used safely in patients with acute myocardial infarction. A succession of clinical trials suggested that mortality could be reduced by their early administration, immediately on admission to hospital or, if there was going to be a delay, by the GP.87 Oral anticoagulants, popular in the 1950s, had suffered a decline in use because of problems with serious episodes of bleeding and doubts about their efficacy. As methods of controlling the dosage improved, anticoagulants were reassessed to see if there was any benefit from their long-term administration. Exercise regimens were introduced as people recovered from a heart attack.88 

Effective drugs had been available for the treatment of high blood pressure since the 1950s. Because of their uncomfortable side effects, though, only the most severe cases were treated until better drugs became available; by the 1970s they were. There was debate about the advantages of treating people with mild hypertension, the level at which treatment should begin and the extent to which pressure should be reduced. With 175 general practices, the MRC set up a trial to examine the benefits of treating mild hypertension. It found a reduction in the incidence of strokes from both diuretics and beta-blockers, with a much smaller effect on heart attacks.89 In the ten years since the trial began there had been further advances in the treatment of raised blood pressure, but the question of whether to treat had now been settled. 
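
The arithmetic behind that debate can be sketched with a purely illustrative calculation (the event rates below are invented for illustration and are not the MRC trial’s results): when the untreated risk is low, many patients must be treated to prevent a single event. The number needed to treat (NNT) is the reciprocal of the absolute risk reduction:

\[ \text{NNT} = \frac{1}{r_{\text{untreated}} - r_{\text{treated}}} \]

If, say, treatment reduced an annual stroke rate from 0.2 per cent to 0.1 per cent, the relative risk would be halved, yet the NNT would be 1/0.001 = 1,000 patient-years of treatment for each stroke prevented, which is the kind of calculation that framed the argument about whether, and at what level, to treat mild hypertension.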

If a patient had chronic angina from coronary artery disease that did not respond to drug treatment, and the vessels were anatomically suitable, there was no longer any dispute that surgery - coronary artery bypass grafting - was effective, and the operation secured a firm foothold. In 1978 the units in the Thames regions did 1,720 operations and the numbers steadily rose, making exceptionally heavy demands on nursing staff. With elective operation there was a better than 95 per cent chance of surviving the operation, with a 90 per cent chance of improvement; 70-80 per cent of patients were cured of their symptoms.90 There was also a place, not clearly determined, for emergency surgery after acute myocardial infarction.91 The number of coronary artery bypass grafts undertaken nationally rose from 2,297 in 1977 to 6,008 five years later.92 Some units operated on few patients and a joint report of the Royal Colleges of Physicians and Surgeons recommended that, to maintain expertise, centres investigating and operating on the heart should have at least three cardiac surgeons each performing not fewer than 200 open heart operations a year.93 Coronary artery bypass grafting was the topic of the first consensus conference in the UK, organised by the King’s Fund. Economic studies of the procedure were presented. Alan Williams, from York, costed the operations and calculated the cost per quality-adjusted life year (QALY) gained. Particularly in severe cases of angina it compared well with valve replacement for aortic stenosis and the insertion of pacemakers for heart block. It was probably less cost-effective than hip replacement.94 In 1976 another technique was developed, percutaneous transluminal (‘balloon’) angioplasty. The coronary arteries were displayed radiologically (coronary arteriogram) and a fine double-lumen balloon catheter was passed down the coronary artery to the site of the obstruction. The balloon was then inflated and the atheroma squashed, increasing the blood flow to the heart.95 Lasers and high speed revolving cutters could also be introduced into the coronary arteries. These were, however, difficult techniques, best performed by experienced operators. Angioplasty might result in acute and abrupt occlusion, when emergency open heart surgery would be needed at increased risk. In one patient in three there was restenosis, which meant that angioplasty might need to be repeated. 
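
As a sketch of how a cost-per-QALY figure of this kind is derived (the numbers below are hypothetical, chosen only for illustration, and are not Williams’s published estimates), the net cost of a procedure is divided by the quality-adjusted life years it produces:

\[ \text{cost per QALY} = \frac{\text{net cost of treatment and follow-up}}{\text{life-years gained} \times \text{average quality weight}} \]

For example, an operation costing £4,000 that gave an average of five extra years of life at a quality weight of 0.9 would cost £4,000 / 4.5, roughly £890 per QALY; a procedure producing fewer, or poorer-quality, extra years would score correspondingly worse. It is on this common scale that bypass grafting could be compared with valve replacement, pacemaker insertion and hip replacement.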

In 1982 the first total artificial heart was implanted in Salt Lake City, Utah, with the goal of permanently sustaining life; earlier attempts had been to provide a brief bridge to a heart transplant. The patient, a dentist whose heart was rapidly failing, received an implanted device driven by an external pump and lived for 112 days before dying of organ failure unrelated to the implant. The technology was sound, though the surgical technique and the anticoagulation regimen were untested. It had been shown that an artificial heart could sustain life for a long period. 

Organ transplantation 

In 1978 Roy Calne confirmed the potency of cyclosporin A (discovered in 1976 at the Sandoz laboratories) in the prevention of rejection. When it became commercially available in 1983 it largely replaced azathioprine.96 With the improved drug regimens, patients who previously had no hope could expect a good chance of restoration to near normality. The capacity to perform transplants was limited both by the size of the units and by the limited supply of organs. 

Heart transplants, suspended after the early failures, were resumed in 1979 at Papworth by Terence English, and in 1980 at Harefield by Magdi Yacoub, a brilliant and charismatic surgeon who worked round the clock and inspired great devotion among his staff. Both centres were equipped for advanced cardiac surgery, with sufficient medical, nursing and technical personnel, and support in pathology, immunology and microbiology. There were three reasons for resumption. First, the Stanford Medical Center had produced convincing evidence that heart transplantation could be an effective form of treatment. Secondly, there had been a change in the public’s attitude towards the concept of brain death, which made heart donation easier. Thirdly, there had been improvements in preserving hearts between removal and reimplantation.97 Private donations of £300,000 to both units met part of the costs. An evaluation undertaken in 1984 showed a three-year survival of 54 per cent and a cost of about £12,700 for the operation and six months’ postoperative care. The programme expanded, further units being established in Newcastle and Manchester with the ultimate aim of providing the service on a regional rather than a national basis. By 1987 combined heart-lung transplantation, first reported in 1982, was also moving out of the experimental phase.

Liver transplantation, though a great ordeal for a patient who was already sick and often grossly malnourished, became steadily more effective. About 70 per cent of children and 60 per cent of adults would be alive a year later. Bone marrow transplantation was also increasingly successful, though costly. The centres treating patients were few and almost entirely in London. Sir Douglas Black, the DHSS’s chief scientist, recommended further centres in the regions to make the service more widely available. About three patients a week were having a transplant in 1982, most commonly for leukaemia, both myeloid and acute lymphoblastic in type. A few people with aplastic anaemia, or with a liability to infection from a deficient immune system, were also treated.98 

Renal replacement therapy 

Transplant operations performed in NHS hospitals in the UK, 1979-1988 

Year     Kidney    Heart    Heart/lung    Liver
1979     842       3
1980     988       25
1981     905       24
1982     1,033     36                     21
1983     1,144     53       1             20
1984     1,443     116      10            51
1985     1,366     137      37            88
1986     1,493     176      51            127
1987     1,485     243      72            172
1988     1,575     274      101           241
Total    12,244    1,087    272           720

Source: DHSS. On the state of the public health 1988.99  

By the early 1980s young patients with renal failure and no complicating factors had an excellent outlook. Most felt well and many were in full-time employment. The pattern of treatment that emerged under the restraints of the NHS was strikingly different from that in other countries. The UK strategy was to restrain hospital dialysis, using home dialysis followed by renal transplantation, the single most important therapy in numerical terms from 1977.100 Implicit was the attitude that people incapable of performing independent dialysis, or who were unlikely to receive a transplant, would be rejected by the units, which were hard-pressed and short of money. Physicians in the selection and referral chain sensed this, so high-risk patients such as elderly people and those with diabetes tended to be excluded. Increasingly, though, it was shown that people in these categories too had reasonable survival rates.101 As the transplantation programmes grew, so did public concern about the criteria for determining death. A conference of the medical Royal Colleges had agreed these, but the Panorama programme on ‘black Monday’ led to a fall for several months in the number of kidneys available from patients diagnosed as ‘brain dead’.102

There were other ethical problems. How could the state keep a controlling hand on public expenditure when it concerned matters of health? How could the profession safeguard the interests of individual patients when, because of restricted resources, one patient’s transplant was another’s hip replacement? Antony Wing showed that, although the acceptance rate for treatment up to the age of 45 was much the same in Britain, France, Germany and Italy, far fewer older patients were accepted in Britain than in the other countries. There was no rule about acceptance; that was just what happened when doctors had to ration scarce resources. A research study revealed that British and American nephrologists had different standards of acceptance for treatment. Were British doctors acting against the interests of patients by rejecting many because of shortage of resources? Or were US doctors motivated in part by financial factors, accepting more patients for treatment and choosing the most profitable form of care - regular dialysis in a hospital unit?103 In 1980 the use of continuous ambulatory peritoneal dialysis (CAPD) became widespread. Instead of removing the patient’s blood and passing it through an external dialysis system, dialysis fluid was introduced into the patient’s abdominal cavity and then removed along with the waste products that, in health, would have been removed by the kidneys. No large capital expenditure was required for this form of therapy, and there was a rapid and radical change in the upper age limit of patients accepted for treatment.104 By the end of the decade twice as many patients were on CAPD as on haemodialysis at home. Units increasingly ran an integrated approach, offering patients CAPD, haemodialysis or renal transplantation, as appropriate. In December 1984 the government announced, for the first time, a target for the number of new acceptances annually: 40 new patients per million per year. Britain remained far behind Europe where, because of the existence of insurance-based health care, targets were never required.

Neurology and neurosurgery

Advances in biochemistry, immunology and molecular genetics improved the recognition an