Ethical, human-focused AI development
Our Apollo platform starts with one simple prompt: “What are you trying to predict?” From there, researchers, producers and innovators can leverage the power of artificial intelligence to solve some of the world’s biggest problems, from novel drug treatments to farm yield management. But why does it matter that we start with a single question? And should users be expected to code and configure these systems themselves? Answers to questions like these all form part of an ethical, human-focused AI approach that we believe should be the standard for AI development, both now and in the future.
What are the core tenets of ethical, human-focused AI development?
Let’s dive into our ethos around creating artificial intelligence and machine learning tools:
Explainable AI
Explainable AI, or XAI, has two sides. The first is helping the expert who uses it understand what to do to get their desired output; Apollo does this by opening with a simple question. The second is making sure the results are understandable and traceable, so that same expert can fact-check them. Counterintuitively, and unlike in many other fields, showing how the sausage is made can actually increase confidence in AI and its output. So when scoring skin cancer risk, for example, an explainable AI like ours will state the basis of its recommendation and even heat-map the slides for extra confidence.
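To make the heat-mapping idea concrete, here is a minimal occlusion-sensitivity sketch in Python, one common way a model’s score can be traced back to regions of an image. It is purely illustrative: Apollo’s internals are not public, so the predict_malignancy scorer below is a hypothetical stand-in for a trained model, and the toy image simply stands in for a slide.

```python
import numpy as np

def predict_malignancy(image: np.ndarray) -> float:
    # Hypothetical stand-in scorer: in practice this would be a trained model.
    # Here we pretend the "lesion" lives in the bright region of the image.
    return float(image.mean())

def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Score drop when each patch is masked; a bigger drop means that region
    mattered more to the prediction."""
    base = predict_malignancy(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            masked = image.copy()
            masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.0
            heat[i, j] = base - predict_malignancy(masked)
    return heat

# Toy "slide": a bright lesion on a dark background.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
print(occlusion_heatmap(img).round(3))  # highest values over the lesion
```

Overlaying a map like this on the original slide is what lets the clinician see, at a glance, whether the model is looking at the lesion or at an irrelevant artefact.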
Ethical AI
Innovations in AI shouldn’t come on the backs of creatives or developing nations. With a proprietary solution, you have assurance over code provenance and the quality of the training data used. You also won’t be benefiting from stolen data or from LLMs built under inhumane conditions. When you work with someone like Agxio, you won’t be adding to the recently reported “new digital sweatshops across nations like Kenya, India, the Philippines and Venezuela [who] engage millions of workers in data [labelling] – a [labour-intensive] process essential to training AI algorithms”.
Useable AI
Finally, we believe that subject matter experts should not have to become programmers themselves to make the best use of modern technology. AI, or at least the platforms we make, prevents skill bleed by augmenting human creativity and insight without forcing specialists to learn complex systems just to practise their specialism or use today’s tools. AI should support human creativity and ingenuity across areas as diverse as medicine, research, policy and agriculture, without replacing the people behind them or requiring a tonne of upskilling.
Ethical, human-focused AI development might be our approach, but we think it should be the standard. If you’d like to investigate use cases for innovative platforms like Apollo, please get in touch today.