

  • CAFE Research Group
    • aims to develop and apply methods for gathering and extracting meaningful information from complex, real-world data in cybersecurity and other domains, transforming that data into actionable intelligence while ensuring the security, accountability, fairness, and explainability of AI-based automated decisions.
    • aims to provide experiential learning opportunities to students and to advance the state of research through collaboration with academic and industry partners.
  • Research Domains
    • Cybersecurity – Our research explores methods to enhance the explainability and interpretability of AI-based decision systems in the context of cybersecurity. We are keenly interested in external knowledge infusion, such as domain knowledge and social values, to improve the decision and explanation process.
    • Social Good – We strive to leverage data, science, humanities, and AI to address humanitarian and environmental problems, making a positive impact on people’s lives worldwide.
    • Health Informatics – Our goal is to leverage social networks, medical data, and AI to provide unprecedented insights into diagnostics, treatment, and patient outcomes.
      Prospective Ph.D. students interested in bioethical issues related to CRISPR (such as human selection), healthcare disparities, biases and ethical concerns in healthcare- and biology-specific datasets and AI algorithms, or improving the explainability and interpretability of healthcare- and biology-specific AI models can apply directly through the CCIB (Center for Computational and Integrative Biology) to express their interest in joining our research group. Our lab is always open to current undergraduate and graduate students for computational research and collaboration.
  • Research Topics
    • Zero-day Exploit and Digital Forensics – We explore methods to make AI-based decision systems and their outcomes faster, more explainable, and more interpretable in order to defend against and mitigate zero-day attacks. Additionally, we are interested in the digital forensics of malware and ransomware behavior.
    • Explainable AI – We investigate methods to make AI-based decision systems and their outcomes more accountable, explainable, and interpretable. Toward this goal, we are interested in external knowledge infusion (e.g., domain knowledge, social values) into the decision and explanation process. We also aim to develop novel data engineering methods and knowledge infusion techniques.
    • Fair AI – We explore methods to make AI-based decision systems and their outcomes fairer and more accountable, particularly by avoiding negative impacts on marginalized groups. To this end, we are also investigating the infusion of external knowledge (e.g., domain knowledge, laws) into the decision process.


Through these research endeavors, we aim to contribute to the responsible and impactful advancement of Artificial Intelligence and Machine Learning, shaping a future where these technologies benefit society and help address pressing global issues.