1. To improve the safety of medical AI systems, a lifecycle approach spanning development through implementation is essential.
Input Phase: A data controller should audit the data used to train and test the AI to ensure it is legally acquired and ethically sourced, avoiding any infringement of data privacy laws. This includes verifying the quality and integrity of the data and ensuring it is representative and free from biases that could compromise the AI’s performance. Part of such an audit can be automated, as in the sketch below.
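As a minimal sketch of what a representativeness check might look like, the following compares the demographic mix of a training set against a reference population. The column name, reference shares, and tolerance are illustrative assumptions, not prescribed by the text.

```python
# Sketch of a representativeness audit for a training dataset.
# The column name "ethnicity", the census shares, and the 5-point
# tolerance are hypothetical placeholders.
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str,
                         reference: dict[str, float],
                         tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share in the data deviates from the
    reference population share by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    flagged = {}
    for group, expected_share in reference.items():
        gap = observed.get(group, 0.0) - expected_share
        if abs(gap) > tolerance:
            # positive gap = over-represented, negative = under-represented
            flagged[group] = gap
    return flagged

# Example: flag any group whose share differs from the census by > 5 points.
census = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
# flagged = audit_representation(training_df, "ethnicity", census)
```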
Process Phase: AI researchers should adopt transparent model-development practices, ensuring their methods are original and do not infringe on others’ intellectual property. Open documentation throughout development is crucial for scientific integrity and reproducibility.
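One concrete form such documentation could take is a per-run manifest recording the random seed, environment versions, and a hash of the training data. The manifest format below is an illustrative assumption, not a standard.

```python
# Sketch of a reproducibility manifest written alongside each model run.
import hashlib
import json
import platform

import numpy as np

def write_run_manifest(data_path: str, seed: int, out_path: str) -> None:
    """Record the seed, environment versions, and a training-data hash
    so that a model run can be reproduced and audited later."""
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    manifest = {
        "seed": seed,
        "python_version": platform.python_version(),
        "numpy_version": np.__version__,
        "training_data_sha256": data_hash,
    }
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(manifest, f, indent=2)

# write_run_manifest("train.csv", seed=42, out_path="run_manifest.json")
```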
Implementation Phase: Once deployed, the AI system’s performance must be evaluated continuously, especially among minority populations and in under-resourced regions where training data may be scarce. This helps ensure the AI does not favor resource-rich populations, thereby maintaining equity in healthcare delivery. Additionally, the AI system should be designed to assist rather than replace human judgment, to prevent misdiagnosis in critical cases. Doctors also need ethical training to use the AI as a supportive tool rather than a definitive diagnostic source, so that they understand its limitations and retain responsibility for patient care.
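A minimal sketch of such subgroup monitoring follows: it computes sensitivity (recall) per subgroup from a log of deployed predictions. The column names, the "region" grouping, and the alert threshold are hypothetical assumptions for illustration.

```python
# Sketch of post-deployment subgroup monitoring, assuming a log of
# predictions with hypothetical columns "y_true", "y_pred", and "region".
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_sensitivity(log: pd.DataFrame, group_col: str,
                         floor: float = 0.85) -> pd.Series:
    """Sensitivity (recall) per subgroup; values below `floor`
    warrant human review of the model's behavior in that group."""
    scores = pd.Series({
        name: recall_score(group["y_true"], group["y_pred"])
        for name, group in log.groupby(group_col)
    })
    low = scores[scores < floor]
    if not low.empty:
        print(f"Review needed, sensitivity below {floor}:\n{low}")
    return scores

# scores = subgroup_sensitivity(prediction_log, "region")
```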
2. Transparency: Clear, understandable explanations of how the AI system works are essential, including what data it was trained on and which features drive its predictions.
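As one way to surface which features drive predictions, the sketch below uses model-agnostic permutation importance from scikit-learn. The model and synthetic data are stand-ins; the text does not prescribe a particular explanation method.

```python
# Sketch of feature-importance reporting via permutation importance.
# The classifier and synthetic dataset are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature
# degrade the model's score? Larger drops = more influential features.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f}")
```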
– Regulation: Compliance with all relevant healthcare regulations and standards is necessary to ensure the AI system is legally and ethically sound.