
November 8, 2021

by 20/15 Visioneers

September 2021

Contents

1. Executive summary

2. Preface

3. Historical perspective

4. Main actors in the HTS play

4.1 Targets on screening

4.2 Assay modalities and technologies

4.3 Bioreagents

4.4 Compound collections and chemical diversity

5. Automation trends

5.1 Robots

5.2 Smart and Cloud Labs environments

6. Data Sciences and Information Technologies

7. Social commitment in HTS

7.1 Public-private partnerships

7.2 Sustainable screening

8. A look to the future: AI and HTS

9. Conclusions

10. Bibliography

1. EXECUTIVE SUMMARY

High throughput screening (HTS) has been a paramount tool in the drug discovery process over many decades. After years of maturation, the discipline has evolved from the initial “game of numbers” principle, based on the belief that the larger the number of samples being analyzed, the higher the probability of success, to a more information-driven strategy. Enzymes (kinases included) and GPCRs remain the most popular target classes in the discipline, but novel biological systems like targeted protein degradation (TPD) have been incorporated into the HTS arena, opening the possibility of modulating physiological processes previously considered inaccessible to therapeutic intervention. Innovative chemical strategies such as DNA-encoded library technology (DELT) are being widely used to expand the chemical diversity of the molecules being screened, and molecular biology tools have enabled expression systems suited for the preparation of multiprotein complexes at an adequate scale for screening purposes. Those advances are facilitating the modulation of targets that remained elusive until now. Flexibility in the design of robotic instruments, with modular systems contributing to a fluidic layout of HTS labs, allows for rapid implementation of novel assays in a flexible environment, distant from the rigid factory-like settings so common in the first decade of this century. This expansion of HTS boundaries comes along with a democratization of HTS that makes it more accessible to research groups with financial constraints. For instance, Cloud Labs enable the use of expensive equipment at reasonable pay-for-use costs. Likewise, initiatives to promote public-private partnerships facilitate massive data access to research groups and foster fruitful collaborations among them. Artificial intelligence tools are starting to enable the massive screening of molecules with only a limited number of real assay points, intertwining virtual and physical HTS.
The internet of things enables more powerful and real-time quality control of operations. New liquid handlers facilitate the miniaturization of assays, generating precious savings that will also reduce the environmental impact of HTS, an area of concern that has recently been the subject of intense efforts. Finally, improved data management has proved essential to maximizing the proficiency of the HTS process, with tools to harmonize and unify data structure being developed. Notably, such tools will be critical to make the data available for data science and analytics like artificial intelligence that will clearly drive the HTS of the future. Overall, as HTS has evolved, it continues to be a cornerstone in drug discovery, and will likely remain so for years to come.

Figure 1: How HTS advances have positively impacted issues traditionally associated with the discipline.

2. PREFACE

With their focus on finding new molecules for therapeutic purposes, pharmaceutical companies embraced high throughput screening (HTS, i.e., the simultaneous evaluation of a vast number of molecules in biological assays expressing a given functionality, with the goal of identifying modulators of such functionality) several decades ago as a key strategy to identify new chemical matter. Starting from modest efforts intended for screening hundreds or thousands of molecules, technology evolution transformed the discipline into ultra-high throughput screening at the dawn of the 21st century, when millions of molecular entities were screened for every target in a short period of time. This sudden revolution transitioned in the last two decades to a calm evolution where quantitative changes were replaced by qualitative ones. Novel strategies in different fields have converged (and are converging) to slowly transform the HTS scenario, as it becomes more mature, to increase its success rate. In this document we attempt to summarize the current status of HTS, highlighting the opportunities to further improve its efficiency with a view to future directions for the discipline. It will focus mainly on target-based HTS, leaving phenotypic screening to be discussed in a separate document. Likewise, it will not cover genomic screening, an emerging therapeutic approach that has been reinvigorated with recent CRISPR advances (an excellent review on this topic can be found in a recent article1).


3. HISTORICAL PERSPECTIVE

In the last two decades of the 20th century a number of discoveries caused a significant leap forward in the drug discovery process. Until then, drug discovery was based primarily on serendipity and empirical observations in rudimentary physiological models, using animal tissues at first, and assays based on cell cultures and purified proteins later. These latter assays, though more efficient than the animal tissue models, were burdensome and only allowed the evaluation of a few tens of chemical compounds per week. Therefore, the number of opportunities to develop novel therapeutics was rather limited. But advances in recombinant DNA technologies during the 1980s fostered the availability of recombinant proteins that could be used as targets in assays intended to find ligands capable of modulating their function, either in cell-free or in cell-based models. This was the seminal point that gave rise to high throughput screening as a discipline in the following decade.

Advances in fluorescence-based reagents in the 1990s enabled the development of more efficient assays that could be performed in 96-well plates, thereby allowing the simultaneous screening of many molecules. Likewise, the low volume of the assays caused precious savings in chemical compounds, moving from spending 5-10 mg per assay tube, to 10-100 µg per assay well. The first published articles visible in PubMed with the term “high throughput screening” date from 19912-4, using radioactive tracers to monitor ligand binding. However, most of the HTS work performed in pharmaceutical companies was not disclosed at that time since it was perceived as a competitive advantage and therefore its results were considered highly sensitive information. Actually, HTS campaigns were first initiated in the 1980s with a throughput that grew from less than 1,000 compounds screened per week in 1986, to nearly 10,000 in 19895. As throughput increased, corporate compound collections typically containing 10,000-20,000 samples were too small to suffice for the nature of the HTS endeavor, conceived as a game of numbers: the larger the number of samples being screened, the higher the chances to find an interesting molecule. In addition, those collections were usually a reflection of the previous activity of the company and often included poorly developable molecules such as dyes. These facts contributed to limited chemical diversity in the set of compounds being screened, hence precluding the chances to find novel chemotypes. However, the explosion of combinatorial chemistry and parallel synthesis methods by the mid-1990s, and a new appreciation for developability properties of molecules brought on by Lipinski6 in 1997, changed the landscape completely, and in the early 2000s compound collections scaled up to include at least half a million different molecules.
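The compound savings from miniaturization are easy to quantify. A rough back-of-envelope sketch using the figures above (the collection size and the per-sample midpoints are hypothetical illustrative values, assuming one sample per compound):

```python
# Back-of-envelope compound consumption for a full-collection screen.
# Figures from the text: ~5-10 mg per assay tube vs ~10-100 ug per
# 96-well assay well; the midpoints below are illustrative assumptions.
n_compounds = 500_000        # early-2000s corporate collection size

mg_per_tube = 7.5            # midpoint of 5-10 mg
mg_per_well = 0.055          # midpoint of 10-100 ug, expressed in mg

kg_tubes = n_compounds * mg_per_tube / 1e6   # mg -> kg
kg_wells = n_compounds * mg_per_well / 1e6
print(f"tube-based screen:  {kg_tubes:.3f} kg of compound")
print(f"plate-based screen: {kg_wells:.4f} kg of compound")
```

Kilograms of compound per screen versus a few tens of grams: this two-orders-of-magnitude difference is what made full-collection screening economically viable.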

The aforementioned advances in fluorescence chemistry gave rise to a plethora of reagents suitable for all forms of fluorescent readouts: fluorescence intensity (FLINT), fluorescence polarization (FP), Förster resonance energy transfer (FRET), time-resolved fluorescence (TRF), fluorescence intensity distribution analysis (FIDA) and confocal fluorescence lifetime analysis (cFLA). All these technologies enabled the development of highly efficient assay formats in homogeneous “mix and read” mode, avoiding cumbersome washing steps and therefore increasing considerably the screening throughput. Those advances led to a further decrease in assay volumes, transitioning from 96-well plates in the mid-1990s (dealing with 100-200 µL assays) to 384-well plates (10-50 µL assays) at the end of the decade and 1536-well plates (3-10 µL assays) in the early 2000s. This was accompanied by significant progress in automation, with the development of more proficient robotic systems which made low-volume liquid handling more reliable. The founding of the Society for Biomolecular Screening (SBS, currently the Society for Laboratory Automation and Screening, SLAS) prompted the adoption of microplate standards that were paramount to harmonize the development and manufacturing of new instrumentation, from liquid handlers to plate readers and incubators, and helped their integration into large robotic platforms. The bottleneck shifted from the laboratory to the information technology (IT) desks as new tools capable of dealing with thousands of data points were demanded. The first versions of these tools were initially prepared as rudimentary, customized systems developed within each pharma company (usually as sophisticated macros operating in Excel) and later as more robust commercial LIMS platforms capable of capturing, processing and integrating all the information in corporate databases in a data integrity-compliant manner.
Simultaneously, new quality control criteria were implemented to monitor and ensure assay performance, avoiding unwanted waste of costly reagents, and sound statistical criteria started to be used for the appropriate selection of hits.
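Among the statistical quality-control criteria that became standard in HTS, the best known is the Z′ factor (Zhang et al., 1999), which compares the separation between positive and negative controls to their variability. A minimal sketch, using hypothetical control-well signals:

```python
import statistics

def z_prime(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.

    Values above ~0.5 are conventionally taken as evidence that an
    assay separates its controls well enough for HTS.
    """
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical raw signals from the control wells of a single plate
pos_controls = [98, 102, 101, 99, 100, 103]   # e.g., uninhibited reaction
neg_controls = [5, 4, 6, 5, 7, 4]             # e.g., fully inhibited reaction
print(round(z_prime(pos_controls, neg_controls), 2))  # → 0.9
```

Monitoring Z′ plate by plate is what allows a campaign to be stopped before costly reagents are wasted on a drifting assay.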

These advances contributed to the industrialization of HTS, which evolved into “ultra-HTS” (uHTS), defined as the ability to screen more than 100,000 compounds every day against every single target. The availability of the human genome sequence7 in 2001 nourished the identification of novel therapeutic targets, and all these facts triggered the definitive explosion of HTS in the pharmaceutical industry during the 2001-2005 period. And, as happens with any sudden explosion of technology, this generated a wave of heightened expectations that shortly afterwards led to disappointment, as some of the unfounded (and somewhat naive) expectations did not pay off, at least in the short term. Most of the reproaches against HTS were elegantly discussed and refuted in a Nature Reviews Drug Discovery article8 published in 2011.

The fruits delivered by HTS are now becoming more apparent. It is very difficult to trace the origin of drugs already on the market or in late clinical trials to find out if they originated from a HTS campaign, but surrogate approaches are demonstrating that the contribution of screening strategies to the discovery of novel molecular entities has increased in the last few years and therefore the discipline seems to be paying off, as observed in Figure 2. In the 1990-99 period, 244 novel molecular entities (NMEs) were approved by the FDA and 105 of them (43%) were new in their scaffold and shape (i.e., they were not fast followers, nor inspired by known chemotypes). In the 2000-09 period, this percentage grew modestly to 51% (84 out of the 164 NMEs approved) but the number of these novel molecules decreased (from 105 to 84). However, in the 2010-19 period the percentage expanded to reach 67% (164 out of the 245 NMEs approved) and the absolute number rose sharply (from 84 to 164), with screening believed to be responsible for this growth9.

Figure 2: Evolution of drug discovery outcomes in the last three decades. NMEs, novel molecular entities. The term “innovative molecules” refers to those not inspired by known chemotypes, thus being new in their scaffold and shape. The percentage of innovative molecules is calculated with respect to the number of NMEs.

As often happens with hype cycles, the disappointment following the hype was superseded by the degree of maturation and productivity that HTS has reached in the last few years. This maturation is reflected in the adoption of convergent, flexible strategies and the renunciation of previous myths and dogmas that were common in the early 2000s. For instance, diversity screens are combined with fragment-based screenings and rational drug design, complementing each other. Medium- or low-throughput assays, banned in screening platforms two decades ago as inefficient to screen large compound collections, are nowadays accepted using small subsets of the compound libraries or iterative approaches if those assays are a more valid reflection of the physiology of the target or if they are the only viable option, thus enabling the screening of targets previously considered intractable. These are just a few examples showing how HTS strategies have evolved and matured. In the following chapters a more detailed analysis of the different aspects of HTS will be elaborated, with some considerations about future trends. As already mentioned in the Preface chapter, most of these reflections will focus exclusively on target-based HTS, leaving the rich field of opportunities brought about by phenotypic screenings to be analyzed in a future document.



4. MAIN ACTORS IN THE HTS PLAY

4.1 Targets on screening

Given the sensitive nature of HTS campaigns in pharma companies, determining accurately the nature of the targets being screened in those companies is an almost impossible task since in many cases this information might not be fully disclosed. An approximation can be taken by looking at screening data publicly available in databases like PubChem BioAssay, with most of these data being deposited by academic and public institutions. Alternatively, HTS data published by pharma companies in the most relevant journal in the field (SLAS Discovery, formerly Journal of Biomolecular Screening) can be retrieved for this analysis, but this likely would be incomplete since not all companies publish comprehensive data from their HTS campaigns. Therefore, the information available in the PubChem BioAssay database has been used as the most feasible approach to evaluate current trends in the type of targets being screened.

Figure 3: Frequency distribution of target types (human and non-human). A: Distribution in HTS-related assays (N=332,549 assays) registered in the PubChem BioAssay database. Targets were grouped according to the classification of the IUPHAR/BPS Guide to Pharmacology. The group “enzymes” includes kinases and the group “receptors” comprises all types of receptors, including GPCRs and nuclear receptors. B: Distribution of targets among the FDA-approved drugs until December 2015 according to the Supplemental Information available in the original article from Santos et al.10. This frequency distribution is done on the total number of different targets (N=813), not on the number of drugs targeting them (i.e., each individual target is only counted once even if it is targeted by more than one drug). C: A closer look at the distribution in B, focused only on small molecule drugs approved in the 2011-2015 period. See main text for further explanation.

An exploration of this dataset, done by using the “Classification Browser” tool and arranging the information by target type according to the IUPHAR/BPS Guide to Pharmacology, reveals that more than 300K HTS-related assays (primary HTS, secondary and confirmation assays, dose-response assays, and mechanistic biology assays for HTS hits) have been deposited in the database since 2004. Figure 3A shows how these assays are distributed by target class, with enzyme assays being the most frequent (49%, kinases being included in this group), followed by receptors (40%, including G-protein coupled receptors (GPCRs), nuclear receptors and other membrane receptors). Worth noting, some of the assays considered in this analysis aimed to identify or characterize toxicity issues, hence the significant presence of transporter assays (2%). It is interesting to see how this distribution is similar to that observed in the panel of targets of FDA-approved drugs until 2015 (Fig. 3B), although receptor assays are more poorly represented in the latter panel. This difference is likely due to the fact that this target class includes many GPCRs considered intractable until the late 1990s. However, a number of events in the 2000s increased their perceived tractability: the development of novel molecular biology tools capable of generating transient and stably transfected cell lines, thus making these targets amenable to screening campaigns; advances in crystallography techniques that uncovered the 3D structure of many of these proteins, therefore enabling rational drug design and virtual screening approaches; and the development of novel assay technologies. Among the latter, it is worth mentioning significant breakthroughs like 1) the discovery of fluorescent calcium indicators11 that led to the development of FLIPR™ technology, which revolutionized GPCR drug discovery in the late 1990s12, 2) the design of luminescence-based technologies (e.g., “enzyme fragment complementation” systems used to monitor β-arrestin recruitment as a generic method for all GPCR types, or the formation of cAMP in Gs receptors) and 3) the development of non-radioactive, FRET-based, ligand-binding assay formats. These advances caused a boost in GPCR-targeted HTS campaigns and consequently they are largely represented in Fig. 3A. These increased efforts in GPCR drug discovery are starting to pay off. A closer look at the FDA-approved drugs in the 2011-2015 period (Fig. 3C) shows that the presence of GPCR targets is significantly higher in this period than in the whole historical series (21% vs 11%), i.e., a significant part of the approved GPCR-targeted drugs is concentrated in recent years.
With this encouraging success, and considering the relevance of GPCRs in cell physiology, it is likely that this target class will continue to be prevalent in the drug discovery portfolios of many pharmaceutical companies, and therefore continuous improvement in assay formats is likely to occur. In particular, it is expected that cheaper and more accessible instruments to monitor intracellular calcium release (the preferred methodology when screening for Gq receptors) will become available, especially now that public institutions, with more restricted budgets, are increasingly active in HTS. Indeed, less expensive alternatives to FLIPR™ like FlexStation™ from Molecular Devices (San Jose, CA) are available, and it is foreseeable that this trend will continue and expand in the coming years.
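Frequency distributions like those in Figure 3 are straightforward to reproduce once each assay record is annotated with a target class. A minimal sketch with hypothetical annotations (the real analysis uses the PubChem Classification Browser export, which is not reproduced here):

```python
from collections import Counter

# Hypothetical per-assay target-class annotations, mimicking the kind
# of table exported from the PubChem BioAssay Classification Browser.
assay_classes = (
    ["enzyme"] * 49 + ["receptor"] * 40 + ["ion channel"] * 5
    + ["transporter"] * 2 + ["other"] * 4
)

counts = Counter(assay_classes)           # tally assays per target class
total = sum(counts.values())
for target_class, n in counts.most_common():
    print(f"{target_class:12s} {100 * n / total:5.1f}%")
```

The same tally-and-normalize step underlies both panels of the figure; only the record set (assays vs approved-drug targets) changes.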

Also included in the receptor group are many nuclear receptors that were extensively pursued as drug targets in the early 2000s. Interest in this target class has somewhat declined in the last decade. Nevertheless, there are many proficient assay platforms allowing the development of high throughput assays for nuclear receptors, most of them functional and based on the transcription event triggered by the receptor, leading to the expression of a reporter gene that is easy to monitor in a high throughput-friendly format.

Enzymes are the largest target class being screened and the one with the largest representation in the target set of FDA-approved drugs. Worth mentioning, kinases have been included within this group in the analysis shown in Fig. 3. As happened earlier with GPCRs, kinases were the subject of an explosion in drug discovery efforts in the first decade of the present century, especially for cancer drug discovery. However, many other non-kinase enzymes are responsible for the large impact of this target class. Indeed, a closer look at the group of enzyme targets of FDA-approved drugs shows that only 16% of them were kinases (including non-protein kinases of human and microbial origin) and the remaining 84% were other enzymes. Nonetheless, and consistent with the explosion mentioned above, many of the drugs targeting human protein kinases were approved in the 2011-15 period; likewise, kinases are largely represented in the target sets of HTS-related assays within the PubChem BioAssay database: 43% of the screened enzymes are kinases. Among the non-kinase enzymes there are many proteases, phosphodiesterases, reductases, dehydrogenases, phospholipases and, interestingly, a large number of enzymes involved in the transfer of methyl groups, consistent with the burst in epigenetics drug discovery programs since 2005. Unlike GPCRs, where the preferred assay technologies involve the measurement of intracellular calcium release, there is no universal method to be used with enzymes. Some groups of enzymes (kinases, phosphatases, NAD(P)H-dependent dehydrogenases), sharing common substrates or products, may benefit from a common assay format, but this is not applicable to all enzymes. Therefore, bespoke assays for individual enzymes are not unusual.
On the other hand, in the hit selection and optimization phases of many drug discovery programs, especially in enzyme-targeted ones, there is a growing trend to consider not only the thermodynamics of drug-target interactions (measured through surrogates of the equilibrium constant: EC50 and the degree of effect at a given drug concentration) but also their kinetics (i.e., off-rates and target residence times)13. Consequently, it would not be surprising to observe a trend towards assay formats enabling continuous, on-line readouts instead of formats exclusively allowing end-point detection, since the former are more suitable for the acquisition of kinetic data.
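The complementarity of thermodynamics and kinetics can be made concrete with the textbook relations Kd = koff/kon and residence time τ = 1/koff. The sketch below uses two hypothetical hits with identical affinity but 100-fold different kinetics, which an end-point readout cannot tell apart:

```python
def k_d(k_off, k_on):
    """Equilibrium dissociation constant K_d = k_off / k_on (in M)."""
    return k_off / k_on

def residence_time(k_off):
    """Target residence time tau = 1 / k_off (in seconds)."""
    return 1.0 / k_off

# Two hypothetical hits with identical affinity (K_d = 10 nM) but
# very different kinetics; rate constants are illustrative values.
hit_a = {"k_on": 1e4, "k_off": 1e-4}   # slow on, slow off: tau ~ 2.8 h
hit_b = {"k_on": 1e6, "k_off": 1e-2}   # fast on, fast off: tau = 100 s

for name, h in (("hit A", hit_a), ("hit B", hit_b)):
    print(name, f"K_d = {k_d(h['k_off'], h['k_on']):.0e} M,",
          f"tau = {residence_time(h['k_off']):.0f} s")
```

Since hit A occupies the target for hours after free drug is cleared, continuous kinetic readouts (rather than end-point ones) are needed to surface this difference during hit triage.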

Ion channels are the third most frequently considered target class in drug discovery. Since the patch-clamp methods typically used in classical pharmacology studies for these targets are not amenable to high throughput efforts, alternative assay formats were developed. One of the most frequent approaches includes the use of ligand-binding assays, but such assays are not considered truly functional, and they are biased towards the identification of compounds binding to the same site as the probe ligand. Functional alternatives were developed, and the most popular one includes the use of voltage sensor probes (VSPs), which consist of FRET probe pairs that are disrupted upon membrane depolarization. Likewise, ion-specific fluorescent probes (like those used for GPCRs) are used to monitor changes in intracellular ions, e.g., calcium for calcium channels and for non-selective cation channels. Efflux assays are also used in several drug discovery programs, although to a much lower extent than those mentioned above since they use radioactive tracers (usually 86Rb+) or demand low-throughput atomic absorption spectroscopy instruments to detect non-radioactive rubidium; in addition, they offer low temporal resolution. However, efforts in recent decades to adapt the gold-standard patch-clamp technology to high-throughput settings are maturing with the development of planar patch clamps14. IonWorks™ from Molecular Devices was the first platform to be developed (superseded by its newer versions IW Quattro™ and IW Barracuda™) and new instruments are currently available incorporating the precision and accuracy of manual electrophysiology: PatchXpress™ (Molecular Devices), IonFlux™ (Fluxion Biosciences; Alameda, CA), Qube™ and QPatch HT/HTX™ (Sophion; Ballerup, Denmark) and Patchliner™ and SynchroPatch (Nanion; Munich, Germany) are among those.
Although their throughput is vastly higher than that of manual patch-clamp, it is still low when compared to other HTS-friendly technologies. However, they are very useful when screening small compound libraries or for screening campaigns using iterative approaches (like those described in chapter 8) to analyze a lower number of compounds. Still, the price of this instrumentation is nearly prohibitive for many medium or small companies, which will have to rely on alternative methods. It is anticipated that prices will decrease in the near future, and the predictable trend for ion channel screening will likely include a combination of these technologies.

Most of the targets considered in drug discovery programs were selected with the intention of modulating cellular processes deemed critically involved in the onset of pathological conditions. The deciphering of cell signaling pathways in the 1980s and 1990s preceded the interest in GPCRs, kinases and nuclear receptors playing critical roles in those pathways. In the 2000s, unveiling the importance of epigenetics in modulating cell physiology fostered the interest in many targets involved in the process, and therefore a large number of enzymes acting as “writers” (DNA methyltransferases, lysine histone methyltransferases), “erasers” (histone deacetylases, lysine demethylases) and “readers” (bromodomains) of epigenetic changes have been included in early-stage pharma drug portfolios in the last 10-15 years. This effort has partially paid off with the first and second waves of FDA approvals for drugs targeting epigenetic targets, namely DNA methyltransferases (DNMTs) and histone deacetylases (HDACs). Six drugs were approved across the two waves, all of them for the treatment of hematological malignancies: the first wave delivered azacitidine (2004) and decitabine (2006), targeting DNMTs, as well as vorinostat (2006) and romidepsin (2009), targeting HDACs; the second wave delivered belinostat (2014) and panobinostat (2015), both targeting HDACs. Since then, more efforts have focused on other epigenetic targets, but several issues are hampering further success. One of them is the observed poor translation of in vitro results into in vivo efficacy data, the main reason being that most epigenetic targets work in multiprotein complexes that are not being included in the design of the in vitro assays (this point and its possible solutions will be discussed in section 4.3).
In addition, concerns about toxicity usually arise during the development of epidrugs, due both to their poor selectivity for the intended target and to the interplay among epigenetic targets and chromatin-related proteins. Finally, lack of sufficient target validation also contributes to the poor translational success of many epigenetic drug discovery programs. Nonetheless, several epidrug candidates are currently in clinical trials, and efforts within the scientific community to overcome the challenges described above will surely increase success in this field.

Targeted protein degradation (TPD) is the latest approach on the drug discovery radar and the one leading the current trends at the dawn of the 2020s. The main conceptual difference in this case is that TPD is not only being considered as a physiological process to be modulated for therapeutic purposes but rather as cellular machinery to be exploited to modulate protein targets that were neither druggable by small molecule ligands nor accessible by biological agents. Basically, TPD is a strategy intended to label specific target proteins for proteasomal degradation. It requires finding a suitable ligand for the target protein that can be conjugated through a suitable linker to a signal molecule that recruits a ubiquitin E3 ligase. The resulting molecule, called a PROTAC (“PROteolysis TArgeting Chimera”), is a heterobifunctional small molecule formed by a linker and two warheads: one warhead binds to the target protein, and the other recruits the E3 ligase. In this way, the target protein can be ubiquitinated and degraded by the proteasome. The first successful use of PROTACs was published in 2001 using a peptide conjugated to a covalent ligand of the protein targeted for degradation15. Since then, the field has rapidly evolved to the design of non-peptide small molecules acting as sensors for recruiting E3 ubiquitin ligases. The discovery of the E3 ligase cereblon as the target for thalidomide and analogues, and the clinical success of these molecules in treating multiple myeloma, has propelled a significant amount of work in the field. Such efforts have mostly tried to exploit the E3 family in the search for the ideal ligase cognate for each target, while looking for the most appropriate ligands. This is a nascent field predicted to evolve and grow in the next few years, and it will benefit from efficient HTS strategies.
Since new E3 ligases will likely be discovered (as of 2019, only 7 out of circa 600 E3 ligases in the human genome had been exploited for TPD16), novel ligase sensors will be needed, and hence suitable binding assays will have to be developed. Likewise, new assay systems will have to be designed to screen for the ligands most efficient at inducing degradation. Companies like Amphista (London, UK), Lycia (San Diego, CA), C4 Therapeutics (Watertown, MA), Captor (Wroclaw, Poland), Arvinas (New Haven, CT) or Cullgen (San Diego, CA) are actively working in the field using their own platforms and designing assays to investigate the efficiency of the hits found, considering not only the affinity for the intended target and the cognate E3 ligase, but also the extent and duration of the induced degradation. It is therefore predictable that the pharmaceutical industry will devote more resources to the field and significant progress will be made in upcoming years. Notably, as outlined below in section 4.4, TPD is a perfect match for DNA-encoded library technology (DELT, explained in the same section), and this fact will undoubtedly lead to successful outcomes.


4.2 Assay modalities and technologies

In the previous section, some assay modalities were discussed concerning particular target classes. However, it is worth mentioning that target-agnostic technologies will likely influence the evolution of HTS in the near future. In particular, biophysical, label-free technologies are endowed with features that make them attractive from a HTS perspective, as they do not need artificially created ligands or labelled molecules that may be slightly but significantly different from natural ligands. Such technologies would therefore enable the development of assays that resemble more closely the physiological function of the target. In addition, label-free technologies are not susceptible to interferences from labeled reagents that usually lead to unwanted false positives and negatives. Therefore, they do not need orthogonal secondary assays to eliminate such false positives, resulting in valuable savings in time and costs.

Among the most widespread label-free assay formats amenable to HTS, it is worth mentioning those based on waveguide, impedance or surface plasmon resonance (SPR). The latter was one of the first technologies to be developed for drug discovery in the mid-1990s, although its use for screening purposes is relatively recent, since appropriate instruments have only lately become available. Though they are different technologies with different underlying events, SPR and waveguide are based on changes in the properties of reflected light following a change in the mass of a target previously immobilized on a suitable surface. Analogously, impedance-based methods register changes in the electrical conductivity of an electrode array located at the bottom of a well containing cells, once such cells have undergone morphological changes as a consequence of treatment with a given compound (Figure 4). Besides enabling the development of “less artificial” assays, these technologies provide biophysical data that can illuminate the mode of action of identified hits, thus offering a means for hit prioritization. Despite these benefits, the implementation of these technologies in the HTS arena has not been as widespread as was predicted ten years ago. Several factors can explain the low uptake, the most important being their lower throughput compared to other HT technologies, the cumbersome preparation of reagents and plates (which also impacts throughput) and the high costs of instruments and, especially, consumables. Nonetheless they are still used in secondary screenings, as described below for thermal-based methods.

Figure 4: Fundamentals of traditional label-free technologies used in HTS. A: Surface plasmon resonance. B: Waveguide. C: Impedance. Changes in mass (A, B) or in morphology (C) due to biological interactions (lower panels) induce a change in the reflection angle (A), the reflected wavelength (B) or the electrical impedance (C).

Following these pioneering technologies, other approaches have become part of the screening armamentarium. Mechanistic assays based on shifts in thermal stability upon ligand binding are frequently considered in drug discovery programs: isothermal titration calorimetry (ITC), cellular thermal shift assay (CETSA) and microscale thermophoresis (MST) are becoming increasingly popular (the latter is included in this group although, strictly speaking, it is not a label-free technology). However, these are also low-throughput methods and are therefore more likely to be used for screening small libraries or in fragment-based screening exercises, given the useful biophysical information they provide.

To date, the most successful label-free technology for HTS purposes is mass spectrometry (MS). Initially, MS was coupled to multiplexed liquid chromatography to increase throughput, but this approach still required cycle times of more than 30 s per sample, unacceptably long for HTS purposes. Switching to solid phase extraction increased throughput roughly threefold, which was still considered low. However, the development of highly efficient MALDI systems such as the RapifleX® MALDI PharmaPulse® (Bruker; Billerica, MA), capable of working with low volumes in 1536-well plates, has dramatically reduced cycle times to 0.5-1 s per sample, and this technology is now being successfully applied in HTS campaigns17. Still, interference caused by salts is a significant problem when using MS in HTS. The emergence of self-assembled monolayers for MALDI (“SAMDI”) has overcome this important issue. SAMDI (developed by SAMDI Tech; Chicago, IL) utilizes biochip arrays bearing self-assembled monolayers (SAMs) of alkanethiolates on gold, allowing assays to be run under virtually any condition: once the assay is completed, the reaction mixture is transferred to the SAM biochip, where the analyte of interest is captured and the rest of the reaction mixture is washed away. Finally, a suitable matrix is applied to the biochip and the sample is analyzed in a MALDI instrument18. The only limitation for running screening assays with SAMDI is that the analyte has to be captured on the SAM biochip, but a vast number of immobilization chemistries are available and can readily be developed for most analytes.
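To see why cycle time dominates the practicality of MS-based screening, the figures quoted above can be turned into rough campaign durations. The following sketch uses assumed round numbers (a 1-million-compound library, continuous acquisition, no plate-handling overhead), so the results are illustrative upper bounds rather than vendor specifications:

```python
# Back-of-the-envelope throughput arithmetic for the MS cycle times quoted
# in the text: ~30 s/sample (multiplexed LC-MS), ~10 s/sample (solid phase
# extraction, a threefold improvement) and ~1 s/sample (MALDI systems).

def samples_per_day(cycle_time_s: float, hours_per_day: float = 24.0) -> int:
    """Maximum samples read per day at a fixed per-sample cycle time."""
    return int(hours_per_day * 3600 / cycle_time_s)

def days_to_screen(library_size: int, cycle_time_s: float) -> float:
    """Days of continuous acquisition needed for a library of given size."""
    return library_size * cycle_time_s / 86400  # 86400 s in a day

LIBRARY = 1_000_000  # assumed size of a large screening collection

for label, cycle_s in [("LC-MS  (~30 s)", 30.0),
                       ("SPE-MS (~10 s)", 10.0),
                       ("MALDI  (~1 s) ", 1.0)]:
    print(f"{label}: {samples_per_day(cycle_s):>7,} samples/day, "
          f"{days_to_screen(LIBRARY, cycle_s):6.1f} days for 1M compounds")
```

At ~30 s per sample a million-compound campaign would take the better part of a year of uninterrupted instrument time, whereas at ~1 s per sample it drops to under two weeks, which is why the MALDI-based systems made MS viable as a primary screening readout.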

Recent advances allow the acquisition of two-dimensional arrays of hundreds of thousands of MS spectra from cells and tissue samples, enabling MS imaging, which is already being used for screening19. The successful implementation of MS for HTS is being followed by other spectroscopic modalities such as nuclear magnetic resonance and Raman spectroscopy, and it is no longer unusual to find reports of HTS campaigns run with these technologies. These advances offer a promising field for the near future, expanding the arsenal of screening technologies and enabling the HTS of targets that are difficult to analyze with conventional technologies.

4.3 Bioreagents
The provision of bioreagents is one of the bedrocks of a successful HTS campaign. As explained in the introduction, advances in recombinant DNA technology in the 1980s made available constructs that allowed the preparation of large quantities of recombinant proteins with a high degree of purity, as well as transiently or stably transfected cell lines either overexpressing the target of interest or showing a target-driven specific phenotype, thus providing suitable readouts for high throughput assays. At the beginning of this century it was quite common for big pharma to maintain in-house resources and large production facilities equipped with bioreactors and protein purification instrumentation to prepare their own reagents. There is now a growing trend to outsource the preparation of bioreagents, a decision that is more common in small biopharma and academia. Classical suppliers of life science reagents (Sigma Aldrich, Merck, Promega, VWR, TebuBio, etc.) can provide bespoke bioreagents at reasonable prices depending on scale. Likewise, small companies offering similar services have flourished in many countries. It is therefore not difficult to find a solution for bioreagent provision at any scale, from large HTS facilities in big pharmaceutical companies to small groups in biotech startups or academic consortia. Furthermore, many commercial suppliers offer tailored cell lines expressing selected targets, ready to use for screening assays after a few optimization steps. Companies like Promega (Madison, WI) or Eurofins DiscoverX (Fremont, CA) maintain rich catalogues of cell lines for screening modulators of the most in-demand targets according to scientific trends. A good example these days is the availability of cell lines for a wide range of targets in immuno-oncology.

That said, the availability of recombinant proteins or transfected cell lines should never be taken for granted. Improvements in assay technologies and miniaturization trends in HTS have reduced the amount of bioreagent needed, but some proteins remain difficult to express in a functional state and in sufficient quantities for HTS purposes. Nevertheless, advances in molecular biology have reduced these challenges to a minimum, and savvy molecular biologists usually find a way to overcome expression and/or production issues. Such advances currently allow scientists to express multiprotein complexes instead of individual entities, a significant leap forward that allows experimental in vitro systems to reflect disease pathophysiology more accurately.

As outlined in section 4.1 for epigenetic targets, many cellular functions depend on the activity of multiprotein complexes performing in a coordinated way, with individual components modulating the activity of others. Though the presence of large multiprotein complexes in cellular systems has been known for some time, it has recently been discovered that there are many small protein complexes (of no more than five components) and, interestingly, many disease target proteins are part of these small complexes20. Given the prevalence of disease targets in such complexes, monitoring their activity in isolation seems an excessively simplistic approach that may fail to find the right therape