This project addresses one of the prompt questions from the OpenAI grant program relating to AI model behavior.
The objective of this project is to build trust between users and AI-assisted healthcare, particularly in the context of AI diagnosis tools. We aim to investigate and develop an AI diagnosis tool that offers accurate, transparent, and personalized diagnoses, supported by robust evidence and expert consensus, thereby bridging the trust gap between users and AI systems.
How can we build trust between users and AI-assisted healthcare by designing an AI diagnosis tool that provides accurate, transparent, and personalized diagnoses?
Business Goal
1. Foster a symbiotic relationship between healthcare providers and patients by leveraging our platform to improve service efficiency and expand access to medical care.
2. Enable healthcare professionals to efficiently access and comprehend patients' medical histories, optimizing their time and effort.
User Goal
Enhance personal health management by offering a 24-hour service that reduces wait times, alleviates anxiety, and streamlines communication and diagnosis.
1. Build an empathetic and trustworthy AI-powered healthcare platform that offers immediate assistance and guidance to users.
2. Establish a network with local healthcare facilities to ensure seamless healthcare coordination and support.
Because we have limited experience with AI technology in healthcare, our first goal was to rapidly learn the essential industry knowledge. We conducted a competitor analysis of prominent existing AI-driven healthcare applications to examine the current landscape, and throughout the process we identified four main issues.
In conclusion, we need to build trust between medical AI tools and users by providing access to accurate, transparent, and personalized diagnoses. Meanwhile, the role of AI in healthcare should remain primarily supportive and assistive: it cannot substitute for professional medical diagnosis and should operate under human supervision.
How can we build trust between users and AI-assisted healthcare by designing an AI diagnosis tool that provides accurate, transparent, and personalized diagnoses?
Based on the insights and feedback we collected and analyzed from our research, we decided to focus on developing five main features: communication, accuracy, personalization, community, and liability. Together, these features tackle the three main concerns of users.
Based on the research findings, we created a user flow and wireframe for our AI Family Doctor tool.