Abstract: Modern medicine has given us effective tools to treat some of the most significant and burdensome diseases. At the same time, it is becoming ever more challenging to develop new therapeutics: clinical trial success rates hover in the mid-single-digit range; the pre-tax R&D cost to develop a new drug (once failures are incorporated) is estimated to exceed $2.5B; and the rate of return on drug development investment has been declining linearly year over year, with some analyses estimating that it will hit 0% before 2020. A key contributor to this trend is that the drug development process involves multiple steps, each of which entails a complex and protracted experiment that often fails. We believe that, for many of these phases, it is possible to develop machine learning models that help predict the outcomes of these experiments, and that such models, while inevitably imperfect, can outperform predictions based on traditional heuristics. The key will be to train powerful ML techniques on sufficient amounts of high-quality, relevant data. To achieve this goal, we are bringing together cutting-edge methods in functional genomics and lab automation to build a bio-data factory that can produce relevant biological data at scale, allowing us to create large, high-quality datasets that enable the development of novel ML models. Our first goal is to engineer in vitro models of human disease that, via the use of appropriate ML models, provide good predictions regarding the effect of interventions on human clinical phenotypes. Our ultimate goal is to develop a new approach to drug development that uses high-quality data and ML models to design novel, safe, and effective therapies that help more people, faster, and at lower cost.