Research
Making AI Transparent & Trustworthy
PhD work in explainable AI, bridging machine learning accuracy with real-world interpretability in sports and health.
My ongoing PhD research bridges two critical gaps: between machine learning accuracy and real-world interpretability, and between technical innovation and human responsibility. Through doctoral research at Universidad San Jorge, I'm developing methodologies for making AI transparent and trustworthy — applied in both healthcare and sports science.
Explainable AI for Biomechanical Analysis
XAI for Clinical Pre-diagnosis
Gait analysis + interpretable ML for biomechanical assessment
Research Domains
Explainable AI
Making ML models transparent for clinical professionals. SHAP for quantifying feature contributions, LIME for local interpretability, Grad-CAM for visualizing important regions in medical images.
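To make the idea of quantifying feature contributions concrete, here is a minimal, self-contained sketch of exact Shapley values (the quantity SHAP approximates) for a tiny model. The "clinical risk" model, its weights, the patient features, and the baseline values are all hypothetical placeholders, not part of the actual research.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's average marginal contribution
    to f over all coalitions, with absent features set to the baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight for coalitions of this size
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "risk score" over three gait features.
weights = [0.8, -0.5, 0.3]
model = lambda feats: sum(w * v for w, v in zip(weights, feats))

x = [1.2, 0.4, 2.0]          # one subject's feature values (illustrative)
baseline = [1.0, 1.0, 1.0]   # population means (illustrative)
print(shapley_values(model, x, baseline))
```

For a linear model the result reduces to w_i * (x_i - baseline_i), which makes the brute-force output easy to sanity-check; libraries like SHAP exist precisely because this exhaustive computation is exponential in the number of features.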
Biomechanical Analysis
Analyzing human movement patterns to detect pathology and assess clinical risk. Gait analysis, kinematic pattern recognition, feature extraction from motion capture data, anomaly detection.
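As a toy illustration of anomaly detection on extracted gait features, the sketch below flags strides whose duration deviates strongly from the subject's own distribution using a simple z-score rule. The stride durations, threshold, and function name are illustrative assumptions, not the methods used in the research.

```python
from statistics import mean, stdev

def flag_anomalous_strides(stride_times, z_threshold=3.0):
    """Return indices of strides whose duration is more than
    z_threshold standard deviations from the subject's mean."""
    mu, sigma = mean(stride_times), stdev(stride_times)
    return [i for i, t in enumerate(stride_times)
            if sigma > 0 and abs(t - mu) / sigma > z_threshold]

# Illustrative stride durations in seconds; index 5 is an outlier.
strides = [1.02, 0.99, 1.01, 1.00, 0.98, 1.60, 1.01, 1.00, 0.99, 1.02]
print(flag_anomalous_strides(strides, z_threshold=2.0))  # → [5]
```

In practice one would use richer kinematic features and more robust detectors, but the z-score rule shows the basic pattern: model the subject's normal variability, then surface deviations for clinical review.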
Clinical Decision Support
Developing AI that enhances rather than replaces clinical judgment. Understanding clinician workflows, designing interfaces that support rather than supplant expertise, validating against clinical standards.
Healthcare AI Ethics
Responsible development in sensitive contexts. Fairness and bias detection, regulatory compliance, patient privacy and data governance, accountability frameworks.
Key Findings
Explainability Increases Adoption
Healthcare professionals are significantly more likely to act on AI recommendations when they understand the reasoning behind them.
One Technique Isn't Enough
Different stakeholders benefit from different explanation types. Clinicians, patients, and regulators each need tailored approaches.
Context Matters Enormously
The same model can be trustworthy in one clinical context and problematic in another. Explanations must be tailored to workflows.
XAI Reveals Hidden Biases
Explainability techniques help identify when models rely on clinically irrelevant features or show systematic demographic biases.
AI in Sports & Performance
Building on my research in explainability and healthcare AI, my current focus applies these principles to sports performance and athlete development. This involves biomechanical analysis for injury prevention, data-driven training optimization, multidisciplinary athlete monitoring systems, and interpretable AI that coaches and medical staff actually trust.
The goal is technology that empowers the people behind high performance — coaches, sports scientists, physiotherapists, and the athletes themselves — with AI they can understand and act on.
Research partnerships?
Open to collaborating on AI, explainability, sports technology, and applied research.
Email Me →