What’s the safe way to use AI in research?
Ever since artificial intelligence (AI) entered the mainstream, companies and institutions have been using it for research, either openly or (worryingly) without disclosure. But do the opportunities outweigh the risks? What's the safe way to use AI in research? And how can labs and individual researchers stay ethical, academically rigorous and legally sound when using this technology? Let's explore the concerns, and how they might be overcome, in today's piece.
Why is AI being used in research?
AI can handle tasks that are repetitive or involve recognising patterns far more quickly than a person can. That means, when you use it to enhance your research projects, you can streamline data analysis or run predictive modelling for scientific disciplines much faster. As such, it's being used in simulations, code generation and modelling across STEM research applications the world over. The problems arise when researchers use public or open-source AI tools without the correct guardrails in place.
What are the risks of unrestricted AI use?
Public and open-source AI tools are prone to producing misinformation and fabricated data because of the kinds of sources they're trained on. Unlike a properly gated tool such as Ask Apollo, tools like DeepSeek and ChatGPT are trained on resources such as Common Crawl and Wikipedia, which anyone can access. In other words, they're built on the general internet, with all its many variations in credibility, so any falsehoods in that source data can surface as incorrect information in the results. There's also the issue of data privacy and security: it's unclear who has access to the queries you type or the proprietary information you put into these public systems. Lastly, AI-generated results can carry inherent bias, because there's often bias in the models or training data themselves, along with a lack of transparency about why the AI tool produced a particular result.
How to make AI use in research safer
So, how can you make AI use in research safer? Well, one way is to deploy an internal AI, like Ask Apollo, within your firewalled network environment. Then you can train it only on clean data sets and have your human experts check its responses for accuracy. Disclose your AI usage in your methodology and acknowledgements so it can be subject to external peer scrutiny as well. And keep logs of all your AI inputs and outputs for transparency purposes (a minimal logging sketch appears below). When researchers use proprietary AI as an assistant, and then apply their own analysis and reasoning, we find the output is consistently better. Have a look at what it can do for brain tumours, skin cancer and more.
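To illustrate the logging point, here is a minimal sketch (in Python, with hypothetical file and model names, not a feature of any particular platform) of how a lab might keep an append-only record of every prompt sent to an in-house model and every response it returns, so the interactions stay auditable.

```python
# Hypothetical example: append-only audit log for AI prompts and responses.
# The log path and model label are illustrative placeholders.
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_audit_log.jsonl")  # one JSON record per line

def log_ai_interaction(prompt: str, response: str, model: str = "internal-llm") -> None:
    """Record a prompt/response pair with a timestamp and content hashes."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
        # Hashes let reviewers later verify the logged text hasn't been altered.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage after each call to your in-house model:
# log_ai_interaction("Summarise trial results for cohort A", "...model response text...")
```

A simple record like this, kept alongside the disclosure in your methodology, gives peer reviewers a verifiable trail of exactly what the AI was asked and what it returned.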
Ready to talk about how one of our ML or AI platforms can support your organisation and improve your research outcomes? Simply get in touch with a member of our team today.