Why ChatGPT and the like will never gain acceptance in drug discovery

June 13, 2023

Large language models (LLMs) continue to dominate the AI conversation and are being hyped for just about anything. I genuinely wouldn’t be surprised to find an AI-based company that plans my outfits and decides which socks match my mood. Whilst some of these applications will be amazing, others will be under-thought-through, over-engineered solutions to problems that don't even exist. One thing they all have in common, though, is a lack of transparency about the training data used for these models. In many cases, the data sources are kept under wraps, prompting lawsuits (Getty Images, for example) and raising red flags about potential biases, inaccuracies, or ethical issues. This secrecy doesn't just hinder external validation and reproducibility of AI-generated insights; it also leaves us questioning the trustworthiness of the AI models themselves.

But wait, there's more. Before LLMs, hallucinations were only really talked about in very small subsections of the population; now everyone is acutely aware of AI models generating outputs that seem coherent but are actually factually incorrect or unrelated to the input. These hallucinations can be misleading and may result in misguided decisions, especially when researchers rely on AI-generated predictions for critical tasks like drug discovery.

So, what's the solution?

Enter explainable AI (XAI). Explainable AI models don't just provide accurate predictions; they also offer transparent, interpretable, and understandable explanations for their decisions. With XAI, we can address the challenges posed by large language models like ChatGPT, fostering trust, transparency, and informed decision-making in various industries, including drug discovery.

In the realm of drug discovery, AI-powered algorithms have the potential to accelerate the development of novel therapies and provide valuable insights into molecular interactions. Embracing explainable AI can help us harness the full potential of AI-driven insights while mitigating the risks associated with large language models. It's time to shine a light on these black boxes and ensure that AI serves as a responsible and effective tool in our pursuit of life-changing therapies.

So, what is Explainable AI?

Explainable AI refers to the development of AI models that are not only accurate but also provide clear, interpretable, and understandable explanations for their predictions. This transparency allows researchers and stakeholders to better comprehend the models’ decision-making processes, leading to increased trust, improved model validation, and informed decision-making.

Why is Explainable AI Important in Drug Discovery?

  • Trust and Transparency: In the drug discovery process, making informed decisions is crucial. Explainable AI helps researchers gain trust in AI-generated predictions by providing transparent, interpretable, and rational explanations. This enables researchers to confidently act on AI-driven insights, enhancing the overall efficiency and effectiveness of the drug discovery process.
  • Regulatory Compliance: Regulatory agencies often require justifiable evidence for the safety and efficacy of drug candidates. Explainable AI can provide the necessary rationale for AI-generated predictions, facilitating regulatory approval and ensuring compliance with industry standards.
  • Model Validation and Improvement: Explainable AI allows researchers to understand the underlying reasons for a model's predictions, which can help identify potential biases or shortcomings in the model. This, in turn, enables continuous improvement of the AI model, ensuring robust and reliable predictions.
  • Collaboration and Communication: Effective communication between multidisciplinary teams is essential in drug discovery. Explainable AI allows researchers from diverse backgrounds, such as biology, chemistry, and data science, to understand AI-generated insights and collaborate effectively, fostering innovation and driving the drug discovery process forward.

How should I Implement Explainable AI in Drug Discovery?

  • Model Selection: Choose AI models that strike a balance between accuracy and interpretability. For example, decision trees, linear regression, and Bayesian networks are more easily interpretable than deep learning models (see the first sketch after this list).
  • Feature Importance Analysis: Identify and rank the most influential features or molecular descriptors contributing to the AI model's predictions. This can help researchers understand the underlying relationships between molecular structures and their properties or bioactivities (the first sketch below illustrates this too).
  • Visualisation Techniques: Employ visualisations that illustrate the AI model's decision-making process, such as partial dependence plots or heatmaps, to help researchers intuitively grasp the model's rationale (second sketch below).
  • Local Explanation Methods: Use techniques like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) to generate instance-specific explanations, offering insights into individual predictions (third sketch below).
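
To make the first two points concrete, here is a minimal sketch of training an interpretable model and ranking descriptor importances with scikit-learn. The descriptor names and the toy dataset are purely hypothetical placeholders; in a real project you would compute descriptors for your own compound library (with RDKit or similar) and use measured activities.

```python
# Minimal sketch: model selection + feature importance ranking.
# All descriptor names and data below are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a table of molecular descriptors and a measured activity.
descriptors = ["mol_weight", "logP", "tpsa", "h_bond_donors", "rotatable_bonds"]
X = pd.DataFrame(rng.normal(size=(500, len(descriptors))), columns=descriptors)
y = 0.8 * X["logP"] - 0.5 * X["tpsa"] + rng.normal(scale=0.1, size=500)  # toy activity

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tree ensemble: reasonably accurate while still exposing per-descriptor importances.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank descriptors by their contribution to the model's predictions.
importances = pd.Series(model.feature_importances_, index=descriptors)
print(importances.sort_values(ascending=False))
```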
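For the visualisation point, a partial dependence plot shows how the model's predicted activity changes as one descriptor varies while the others are averaged out. This sketch reuses the hypothetical `model` and `X_train` from the example above.

```python
# Minimal sketch: partial dependence plots for two (hypothetical) descriptors.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# How does predicted activity respond to logP and TPSA, on average?
PartialDependenceDisplay.from_estimator(model, X_train, features=["logP", "tpsa"])
plt.tight_layout()
plt.show()
```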
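Finally, a local explanation: SHAP assigns each descriptor a signed contribution to one specific prediction, so a chemist can see which properties pushed a particular compound's predicted activity up or down. This is a sketch only, assuming the `shap` package is installed and reusing the hypothetical model and data above; LIME would be used in a broadly similar way.

```python
# Minimal sketch: instance-specific explanation with SHAP for one compound.
import shap

explainer = shap.TreeExplainer(model)        # exact, fast SHAP values for tree ensembles
shap_values = explainer.shap_values(X_test)  # one attribution per descriptor per compound

# Attributions for the first test compound, largest effects first.
first = dict(zip(X_test.columns, shap_values[0]))
for name, value in sorted(first.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:>15}: {value:+.3f}")
```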

What's Next? Are we ready for the Future of Drug Discovery?

Explainable AI is the driving force changing the drug discovery landscape, shattering the barriers between human comprehension and intricate AI models. By offering unprecedented transparency, nurturing trust, and turbocharging decision-making, explainable AI empowers researchers to unleash the untapped potential of AI-driven insights, propelling us toward groundbreaking therapies that will transform lives. If you are ready to join the vanguard of scientific discovery and harness the power of explainable AI in your research, reach out to the experts at Ignota Labs today, and together, let's reshape the future of drug discovery!

Contact Us

The Bradfield Centre,
184 Cambridge Science Park Rd,
Milton, Cambridge CB4 0GA