Our Commitment to Using AI Safely, Responsibly, and Equitably
To ensure the responsible use of artificial intelligence (AI) at Stanford Health Care, we collaborate with a broad group of stakeholders—including clinicians, researchers, ethicists, policymakers, and members of the patient community.
Our Guiding Principles
Patient-Centered, Always
AI is a tool to support your care team, not a substitute for it. We adopt technologies that help our clinicians focus more on you—improving communication, enabling more time for meaningful interactions, and enhancing decision-making.
Safe, Responsible Innovation
Before any new AI tool is used in our care delivery system, it undergoes rigorous clinical evaluation and oversight. We only implement technologies that meet the highest standards for safety, effectiveness, and reliability.
Your Privacy Matters
Protecting your personal health information is non-negotiable. We adhere to strict data privacy and security protocols to ensure your information stays safe.
Clear Communication
We’re committed to taking the mystery out of AI and helping you understand how it is used throughout your care.
Our Frameworks
Before any new AI tool is adopted at Stanford Health Care, it is assessed by our governance group using a Stanford-developed evaluation framework we call “FURM”: Fair, Useful, and Reliable AI Models. This framework, which is available to other health care systems, is used to evaluate the usefulness of each AI tool under consideration, its ethical impacts, and its ongoing monitoring needs.
Explore More Articles on Our Governance, Frameworks, and Commitment to Safety and Privacy
Institutional Leadership
Learn more about how Stanford Medicine is leading the development and use of trustworthy AI in health care.