Projects

Causal Fairness for Machine Learning

AI bias has become a significant concern among researchers and the public. Our team addresses this challenge through causal fairness, on the premise that causal effects are essential for measuring and mitigating bias in AI systems. Our pioneering work in causal and counterfactual fairness evaluates what would have happened under different circumstances, for example had a protected attribute taken a different value, to help ensure that AI models produce equitable outcomes. In collaboration with the University of Arkansas, we are extending this research to causal fairness in dynamic and non-IID (not independent and identically distributed) settings, advancing fairness research in complex and evolving environments.
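
For reference, the standard counterfactual fairness criterion from the literature (Kusner et al., 2017) captures this idea formally; the formulation below is that general definition, not a description of our specific methods. A predictor Y-hat is counterfactually fair if its distribution is unchanged when the protected attribute A is counterfactually set to a different value:

  \[
    P\bigl(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\bigr)
      = P\bigl(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\bigr)
      \quad \text{for all } y \text{ and } a',
  \]

where X denotes the observed features, U the exogenous background variables of the underlying causal model, and the subscript an intervention that sets A to a or a'.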

Responsible AI in Healthcare

This project focuses on developing responsible and ethical AI models to support lung cancer diagnosis. In collaboration with radiologists from Prisma Health, we aim to create AI solutions that are fair, equitable, and explainable, ensuring that the technology benefits all patient populations without bias. By prioritizing transparency, the project will help clinicians understand the AI's decision-making process, fostering trust and improving patient outcomes. This initiative is generously funded by Prisma Health and the South Carolina EPSCoR program.

Robust Learning for Computer Vision

This project focuses on improving semantic segmentation of off-road terrains by leveraging hyperspectral cameras and advanced techniques in causal inference. Specifically, we aim to enhance segmentation performance by synthesizing nighttime pseudo-RGB views, which provide critical visual information in low-light conditions. By coupling hyperspectral data with causal inference methods, we can generate more accurate representations of off-road environments, ensuring better performance in challenging conditions. This approach opens new possibilities for robust and reliable terrain analysis, especially in complex, dynamic environments. This project is supported by the United States Army CCDC Ground Vehicle Systems Center.
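
By way of illustration only (the function name, band centers, and Gaussian weighting below are hypothetical assumptions, not our actual pipeline), one simple way to collapse a hyperspectral cube into a pseudo-RGB view is to take weighted averages of the bands around nominal red, green, and blue wavelengths:

  import numpy as np

  def hyperspectral_to_pseudo_rgb(cube, wavelengths_nm,
                                  centers_nm=(640.0, 550.0, 460.0),
                                  bandwidth_nm=40.0):
      """Collapse a hyperspectral cube (H, W, B) to a pseudo-RGB image (H, W, 3).

      Each output channel is a Gaussian-weighted average of the spectral bands
      near a nominal red/green/blue center wavelength. The centers and bandwidth
      are illustrative defaults, not calibrated values.
      """
      cube = cube.astype(np.float32)
      wavelengths_nm = np.asarray(wavelengths_nm, dtype=np.float32)  # shape (B,)
      channels = []
      for center in centers_nm:
          # Gaussian weights over bands, peaked at the nominal channel center.
          w = np.exp(-0.5 * ((wavelengths_nm - center) / bandwidth_nm) ** 2)
          w /= w.sum()
          channels.append(np.tensordot(cube, w, axes=([2], [0])))  # (H, W)
      rgb = np.stack(channels, axis=-1)
      # Normalize to [0, 1] for display.
      rgb -= rgb.min()
      rgb /= max(rgb.max(), 1e-8)
      return rgb

In practice, the band weights could instead be learned jointly with the segmentation network; the fixed weighting above is only a starting point for producing viewable nighttime imagery.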

Responsible and Efficient LLMs

In collaboration with researchers at the University of Maryland, our team is focused on developing responsible and efficient large language models. We aim to create LLMs that not only deliver high efficiency but also prioritize fairness. By addressing challenges such as bias mitigation and resource optimization, we are advancing LLM technology to ensure it is both ethically sound and computationally efficient, contributing to more sustainable and equitable AI applications.

AI and Cybersecurity Education

Our team is dedicated to developing a comprehensive curriculum and hands-on labs in the fields of AI and cybersecurity. These unique educational materials are designed to provide students with both theoretical knowledge and practical skills, preparing them for real-world challenges in these rapidly evolving fields. To explore the latest labs and resources, please check out the AI-Cybersecurity Lab.

We gratefully acknowledge the generous support from

  • the National Science Foundation,
  • the South Carolina EPSCoR program,
  • Prisma Health,
  • the United States Army CCDC Ground Vehicle Systems Center.