Robotic System Could Reduce the Cost of Drug Discovery


Developing a new drug is expensive and time-consuming. One major challenge is the sheer number of experiments needed to prove a drug is safe. The thousands of experiments required for each potential treatment represent a huge investment of time and money. Robotic systems may hold the key to reducing development costs while also speeding up the process.

Eliminating unnecessary repetition of experiments would bring development costs down. This may now be possible, thanks to encouraging results from researchers at Carnegie Mellon University, who applied robotic system technology to their experiments.

To develop a new drug, researchers must narrow an almost infinite number of possible experiments down to the minimum required to prove a drug is safe. Multiple experiments are necessary in order to determine:

  • how a drug will interact with proteins
  • what mutations might result
  • what side effects are possible

The researcher must either test exhaustively to check for all eventualities or use targeted testing.

Targeted testing has limitations because it requires a considerable body of data before a researcher can safely include or exclude individual experiments. The idea behind robotic systems is that a computer chooses which experiments to run based on observed data and on what it has learned from earlier test outcomes. This is a form of active learning, and it could vastly increase the efficiency of experimentation through targeted choices.
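As an illustration only (not the Carnegie Mellon team's actual algorithm), here is a minimal Python sketch of an active-learning loop: a toy learner with a limited budget repeatedly picks the untested drug–protein pair it is most uncertain about, "runs" that experiment against a hypothetical ground truth, and predicts the rest from what it has observed. All names and the prediction rule are invented for the sketch.

```python
import random

random.seed(0)

# Hypothetical ground truth: does drug d affect protein p? (hidden from the learner)
N_DRUGS, N_PROTEINS = 8, 8
truth = {(d, p): (d + p) % 3 == 0 for d in range(N_DRUGS) for p in range(N_PROTEINS)}

observed = {}   # experiments actually run so far
budget = 20     # far fewer than the 64 possible combinations

def predict(d, p):
    """Naive prediction: majority outcome seen so far in this drug's row or protein's column."""
    votes = [o for (dd, pp), o in observed.items() if dd == d or pp == p]
    if not votes:
        return False  # no evidence yet: default guess
    return sum(votes) > len(votes) / 2

def most_uncertain():
    """Pick the untested pair whose row/column evidence is most evenly split."""
    def uncertainty(pair):
        d, p = pair
        votes = [o for (dd, pp), o in observed.items() if dd == d or pp == p]
        if not votes:
            return 1.0  # completely unknown pairs are maximally informative
        frac = sum(votes) / len(votes)
        return 1.0 - abs(frac - 0.5) * 2  # 1.0 = evenly split, 0.0 = unanimous
    untested = [k for k in truth if k not in observed]
    return max(untested, key=uncertainty)

# Active-learning loop: choose, run, learn, repeat
for _ in range(budget):
    d, p = most_uncertain()
    observed[(d, p)] = truth[(d, p)]  # "run" the chosen experiment

# Evaluate predictions on the pairs that were never tested
untested = [k for k in truth if k not in observed]
accuracy = sum(predict(*k) == truth[k] for k in untested) / len(untested)
print(f"ran {len(observed)} of {len(truth)} experiments, accuracy {accuracy:.0%}")
```

The key design point, which carries over to the real system, is that each round's choice depends on everything observed so far, so the budget is spent on the most informative experiments rather than on exhaustive coverage.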

Carnegie Mellon researchers created a robotically driven system that accurately predicts the outcomes of experiments by observing the effects of a large number of drugs on a large number of proteins. Not only did the robotic system prove accurate, but it needed to run only 29% of the possible combinations to reach its conclusions. This mix of accuracy and economy opens the door to faster, cheaper experiments.

In the first trials, the research team at Carnegie Mellon developed a system using known data (rather than unproven experimental data) to test the robotic system’s reliability. The results were so encouraging that the team has now moved on to letting the program independently choose which experiments to run and then hand them off to liquid-handling robots working in conjunction with a specialized microscope.

They had the system predict the potential interactions between 96 drugs and 96 different mammalian cell clones without executing every possible combination (9,216 in total). In the first round, the robotic system had to spot changes in the phenotypes of host cell cultures across 96 experiments. It then used this information to predict the likely effects on other cell cultures and to decide which experiments to run next. The goal was to spot prevailing trends and avoid unnecessary duplication of experiments.

As the automated system progressed through more rounds, it identified more patterns in how the drugs caused phenotypic changes in the host cell cultures. At the end of the experiment, the researchers analyzed the data and found that the robot’s predictions were 92% accurate despite having run only 29% of the possible combinations.
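To put those headline figures in concrete terms, a quick calculation (assuming the 29% is taken over the full 9,216-combination grid described above) shows how many experiments the system avoided:

```python
total = 96 * 96            # all drug x clone combinations: 9,216
run = round(0.29 * total)  # experiments actually executed: about 2,673
skipped = total - run      # experiments avoided: about 6,543
print(f"{run} run, {skipped} skipped, out of {total} possible")
```

In other words, roughly two thirds of the laboratory work was replaced by prediction.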

Based on these results, robotic systems have the potential to decide which experiments are necessary to accurately predict patterns of drug behavior. By limiting waste and unnecessary experimentation, the cost of drug development could come down, with obvious benefits to the end users: the patients who desperately need these treatments.
