Everyone wants better medicine. Throughout history, cultures have identified and codified a vast multitude of medicinal herbs, substances, and other living creatures, with many being deified in an ode to their value and cultural significance. Over the years, the art of drug design emerged and, not unlike other art, began as an imitation of nature; the synthesis of quinine, for instance, was a high point in the annals of medical history. As science spread, researchers began probing the medicinal properties of particular chemicals and chemical groups, and the growing understanding of their varied modes of action gave rise to the field of drug design.

Over time, it became clear that the spatial geometry of a molecule often matters as much as its chemical composition, opening the floodgates of the vast world of synthetic drug design. The role of computers began here: digital techniques grouped under Computer-Aided Drug Design (CADD) expedite laborious database searches for matching 3D structures, helping us discover much smaller molecules of comparable geometry. With the number of known organic molecules growing steadily every day, we need computing tools that can rummage through databases to find appropriate candidates for drug design.
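To make the database-search pattern concrete, here is a minimal sketch of similarity screening of the kind CADD pipelines automate. Real tools match 3D shapes and pharmacophores; this toy uses flat feature sets and the Tanimoto coefficient purely to illustrate the idea, and every molecule name and feature below is made up.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two feature sets: |A∩B| / |A∪B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical query molecule and database, described by the
# structural features each contains (all names illustrative).
query = {"aromatic_ring", "hydroxyl", "amine"}
database = {
    "mol_A": {"aromatic_ring", "hydroxyl", "ketone"},
    "mol_B": {"aromatic_ring", "hydroxyl", "amine", "ether"},
    "mol_C": {"alkane", "ketone"},
}

# Rank candidates by similarity to the query, best first.
hits = sorted(database.items(),
              key=lambda kv: tanimoto(query, kv[1]),
              reverse=True)
for name, features in hits:
    print(name, round(tanimoto(query, features), 2))
# mol_B 0.75
# mol_A 0.5
# mol_C 0.0
```

Even this crude ranking shows why the search is laborious at scale: each query must be compared against every entry, and real libraries hold billions of compounds.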

Pharma’s dependence on database searching makes it a prime candidate for quantum computing, whose algorithms could catalyze CADD. But before we delve into that, let us take a closer look into the labyrinthine world of quantum computing (QC), hoping to accrue a working knowledge of its methods.

QC: The rudiments

The rise of quantum mechanics is arguably the greatest development in science over the better part of the last century. It has been key to solving some of the great challenges of modern physics, but its application in IT comes as a surprise to many. Simply put, quantum mechanical innovations, both theoretical and practical, have helped reimagine the smallest unit of information in computing: the ‘bit’.

To better understand this, we need to look at the story of computers and the transistors that form their core. Since around the turn of the 21st century, engineers have been shrinking transistors drastically, with feature sizes around 14 nm common today. At this scale, transistors begin to prove unreliable as electrons exhibit quantum mechanical behavior, leaking across barriers via quantum tunneling and threatening a technological dead-end for conventional computation.

However, scientists found a way around this roadblock with the revolutionary quantum computer model. Here, the binary bits are replaced by qubits which, while still measured as 0 or 1, can exist in a superposition of quantum states. A qubit can be realized by any two-level quantum system, such as the spin of an electron or a single photon whose horizontal and vertical polarizations stand in for the classical values 0 and 1. Courtesy of the bizarre rules of the quantum realm, a qubit need not be in either state exclusively; it can exist in a blend of both simultaneously until observed (as illustrated by the Schrödinger’s cat thought experiment), owing to the property of superposition mentioned above. It is much like a photon passing through a polarizer (the observation): afterwards, only one orientation remains. As a result, a register of superposed qubits can represent vastly more states than the same number of classical bits; a combination of 20 qubits in superposition spans 2^20 = 1,048,576 basis states at once, though a measurement ultimately collapses the register to just one of them, so quantum algorithms must exploit this richness before the collapse.
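The state-counting arithmetic above is easy to verify. The short sketch below computes 2^n for a 20-qubit register and then writes out, for a 2-qubit register, the equal superposition's amplitudes and the resulting measurement probabilities; the amplitude value 1/2 is the standard equal-superposition choice, shown here for illustration.

```python
import itertools

# A register of n qubits is described by 2**n amplitudes, one per
# basis state; for n = 20 that is over a million, yet a measurement
# still returns only a single 20-bit outcome.
n = 20
num_basis_states = 2 ** n
print(num_basis_states)  # 1048576

# Equal superposition of 2 qubits: four basis states "00".."11",
# each with amplitude 1/2, hence measurement probability 1/4.
amp = 0.5
state = {"".join(bits): amp
         for bits in itertools.product("01", repeat=2)}
probabilities = {b: a ** 2 for b, a in state.items()}
print(probabilities)  # each of the four outcomes has probability 0.25
```

Note that the probabilities sum to 1, as they must: superposition spreads amplitude across outcomes, it does not create extra certainty.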

The computational process must be juxtaposed with its classical counterpart to understand what we’re dealing with: while a normal logic gate maps one definite set of inputs to a single output, a quantum gate acts on superposed (and possibly entangled) qubits, transforming all of their amplitudes in a single step; a final measurement then collapses the register to binary values. For certain problems this translates into dramatic speedups, such as the quadratic speedup Grover’s algorithm offers for unstructured database search, with ramifications across pharmacology, from discovery to clearance and marketing.
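The gate-level contrast can be sketched in a few lines. The following toy statevector simulation applies a Hadamard gate, a standard single-qubit quantum gate, to the state |0⟩: one application updates both amplitudes at once, creating an equal superposition, and a second application interferes the amplitudes back to |0⟩, something no classical gate on a single definite bit can do.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state [a0, a1]."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

state = [1.0, 0.0]          # start in |0>
state = hadamard(state)     # now (|0> + |1>) / sqrt(2)
probs = [a ** 2 for a in state]
print(state)  # [0.7071..., 0.7071...]
print(probs)  # [0.5, 0.5] -> a measurement collapses to 0 or 1

# A second Hadamard interferes the amplitudes back to |0>:
print(hadamard(state))  # [1.0, 0.0] up to floating-point rounding
```

This interference between amplitudes, not raw parallelism alone, is what well-designed quantum algorithms exploit before the final measurement.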

The R&D ramifications of QC
The scope of QC in pharma is not limited to, but lies predominantly in, drug discovery: understanding structure-property relationships, identifying stereoisomers, designing matching proteins of smaller molecular weight, and determining the spatial structures of target molecules. One area where it may have a pivotal impact is the assessment of how proteins fold, a process whose possible conformations multiply into an astronomical number of permutations and combinations. Quantum simulations promise not only to predict molecular structures with unprecedented speed but also to provide highly accurate models. Some of the most intense research in this area now centers on QC models for target identification and hit generation, with mixed results so far. And while converting some of these real-world problems into quantum mechanical ones is onerous, it is as clear as day that quantum computers, in the right hands, could power technology to its next stage.
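The combinatorial burden of protein folding can be made concrete with a back-of-the-envelope count in the spirit of Levinthal's paradox: if each residue could adopt, say, three backbone conformations (an illustrative assumption, not a measured figure), the number of possible folds grows exponentially with chain length.

```python
# Illustrative Levinthal-style count: conformations per residue is
# an assumed round number, chosen only to show the exponential blow-up.
conformations_per_residue = 3

for chain_length in (10, 50, 100):
    total = conformations_per_residue ** chain_length
    print(f"{chain_length} residues -> {total:.3e} conformations")
```

Even a modest 100-residue chain yields on the order of 10^47 conformations, which is why exhaustive classical search is hopeless and why faster simulation methods, quantum or otherwise, are so sought after.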

For now, it may suffice to say that the future of QC in pharma reposes in a superposed state of probabilities, and only time can open the box to reveal what it holds for the progress of medicine.