ADCs: Origins and Obstacles

Terry Chapman, University of Bath PhD Student, Bath ASU Researcher

Introduction

This is the fourth article in a series of five blog posts putting ADCs in the spotlight. It describes the origins of ADC development, and the obstacles faced along the way, from the beginning of the 20th century to today. If you are unfamiliar with ADCs, please read the introductory post ‘ADCs: What are they and why do they matter?’ before this one. Look out for ‘ADCs: Pipeline & Progress’ coming soon.

Conjugating toxic compounds to antibodies was first proposed by Paul Ehrlich at the beginning of the 20th century, but scientific understanding and available technology were not sufficient to synthesise ADCs. Initial work by Köhler and Milstein [1] in the 1970s paved the way for other scientists to begin selectively culturing monoclonal antibodies. Additional key concepts required for a successful ADC were learned during the 1980s and 1990s by investigating why the first generation of ADCs did not work as expected.

Figure 1 – Doctors: Ehrlich, Köhler & Milstein (black and white portraits)

Key Concepts

Early ADCs used murine or chimeric antibodies. These are recognised as foreign by the immune system, which mounts an immune response against them. Many of the ADCs are consequently cleared from the bloodstream before they can deliver their payloads, reducing both the concentrations attainable and the duration of efficacious therapy. The immune response also causes side effects such as rash, flu-like syndrome, systemic inflammatory response syndrome or anaphylaxis. To avoid unacceptable levels of immunogenicity, the mAbs used in ADCs should ideally be humanized or human. The monoclonal antibody rituximab and the ADC brentuximab vedotin are, however, chimeric antibodies and, although weakly immunogenic, are still usable.

Figure 2 – Ascending from left to right, increasing murine sequence content in a mAb, from fully human to fully murine

Initial attempts at creating ADCs targeted receptors that were not selective enough for tumours, resulting in unacceptable toxicity. Another problem was the use of linkers that were insufficiently stable, leading to high levels of payload dissociation while the ADC was still in circulation and, in turn, non-selective cytotoxicity. Linker instability was still a problem in the 2000s: gemtuzumab ozogamicin (GO) releases 50% of its payloads over 48 hours in circulation. Due to this design flaw, GO increased the fatality rate compared to alternative therapies. Licensed ADCs now use linker chemistry that is stable extracellularly and cleaved or degraded intracellularly.

Figure 3 – Common attachment chemistries to cysteine and lysine residues

First generation ADCs were envisaged to deliver conventional chemotherapeutics to tumour sites. Because of the complex internalisation process, however, ADCs attain only low concentrations of the administered warheads at tumour sites, and conventional agents are not therapeutically effective at the intracellular concentrations ADCs generate. To compensate, second generation ADCs use payloads, such as mertansine, that are up to 4000 times more toxic than conventional chemotherapeutics such as doxorubicin.

Looking Forward

Key concepts such as mAb immunogenicity, linker stability and payload potency have been learned from first generation and early second generation ADCs. Trastuzumab emtansine embodies the currently held key concepts, combining a potent payload, a stable linker that degrades intracellularly and a specific internalising target receptor. It is demonstrably superior to other existing therapies for metastatic HER2+ breast cancer. The development of ADCs has been an arduous task with abundant obstacles, but the outcome is a new and effective drug class.

References

  1. Köhler G, Milstein C. Continuous cultures of fused cells secreting antibody of predefined specificity. Nature. 1975 Aug 7;256(5517):495-7.