Abstract
Recent advances in the science and technology of artificial intelligence (AI) and the growing number of deployed AI systems in healthcare and other services have called attention to the need for ethical principles and governance. We define and provide a rationale for principles that should guide the commission, creation, implementation, maintenance, and retirement of AI systems as a foundation for governance throughout the lifecycle. Some principles are derived from the familiar requirements of practice and research in medicine and healthcare: Beneficence, nonmaleficence, autonomy, and justice come first. A second set of principles follows from the creation and engineering of AI systems: Explainability of the technology in plain terms; interpretability, that is, plausible reasoning for decisions; fairness and absence of bias; dependability, including "safe failure"; provision of an audit trail for decisions; and active management of the knowledge base to remain up to date and sensitive to any changes in the environment. In organizational terms, the principles require benevolence, aiming to do good through the use of AI; transparency, ensuring that all assumptions and potential conflicts of interest are declared; and accountability, including active oversight of AI systems and management of any risks that may arise. Particular attention is drawn to the case of vulnerable populations, where extreme care must be exercised. Finally, the principles emphasize the need for user education at all levels of engagement with AI and for continuing research into AI and its biomedical and healthcare applications.