
Our Research


Finder, Evaluator, Explainer, Generator (FEEG): A Bloom’s taxonomy-based query classification framework for LLMs and generative AI

This study introduces FEEG, a Bloom’s Taxonomy-inspired query classification framework that improves LLM reliability by categorizing queries based on intent, enhancing response predictability, accuracy, and human-AI interaction in RAG-based generative systems.


Generative AI systems, particularly those using large language models (LLMs) with Retrieval-Augmented Generation (RAG), often struggle with inaccuracies and hallucinations despite advances in prompt engineering and retrieval optimization. This study shifts focus toward query quality and proposes a taxonomy-driven approach to classify queries based on their cognitive and functional intent.

Building on Bloom’s Taxonomy, the authors introduce the Finder, Evaluator, Explainer, Generator (FEEG) framework, a Taxonomical Query Classifier (TQC) designed to differentiate queries by their dominant intent. Through a geoscience domain experiment and a structured RAG-LLM pipeline, the study demonstrates measurable performance variation across query categories using a Weighted Composite Accuracy Score (WCAS). The findings highlight that generative queries often outperform explanatory and evaluative ones, underscoring the importance of systematic query classification in improving AI reliability, transparency, and human-AI interaction design.
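The abstract does not spell out the WCAS formula, but the general shape of a weighted composite accuracy score can be sketched as follows. The component names and weights below are hypothetical placeholders, not the paper's actual scoring scheme.

```python
# Illustrative sketch (not the paper's exact formula): a Weighted Composite
# Accuracy Score combining several accuracy components with weights.

def wcas(scores, weights):
    """Combine component accuracy scores (0..1) into one weighted composite.

    `scores` and `weights` are dicts keyed by component name; weights are
    normalized internally so they need not sum to 1.
    """
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical component scores for one query category (e.g. "Generator"):
components = {"factual": 0.92, "relevance": 0.85, "completeness": 0.78}
weights = {"factual": 0.5, "relevance": 0.3, "completeness": 0.2}
print(round(wcas(components, weights), 3))  # → 0.871
```

Computing such a composite per FEEG category is what allows the cross-category performance comparison the study reports.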


Generative Artificial Intelligence Use in Healthcare: Opportunities for Clinical Excellence and Administrative Efficiency

Generative Artificial Intelligence (Gen AI) has transformative potential in healthcare to enhance patient care, personalize treatment options, train healthcare professionals, and advance medical research.


This paper examines various clinical and non-clinical applications of Gen AI. In clinical settings, Gen AI supports the creation of customized treatment plans, generation of synthetic data, analysis of medical images, nursing workflow management, risk prediction, pandemic preparedness, and population health management. By automating administrative tasks such as medical documentation, Gen AI has the potential to reduce clinician burnout, freeing more time for direct patient care. Furthermore, applications of Gen AI may enhance surgical outcomes by providing real-time feedback and automating certain tasks in operating rooms. The generation of synthetic data opens new avenues for disease model training and simulation, enhancing research capabilities and improving predictive accuracy. In non-clinical contexts, Gen AI improves medical education, public relations, revenue cycle management, and healthcare marketing. Its capacity for continuous learning and adaptation enables it to drive ongoing improvements in clinical and operational efficiencies, making healthcare delivery more proactive, predictive, and precise.


Online gambling forums as a potential target for harm reduction: an exploratory natural language processing analysis of a reddit.com forum

Globally, there has been a rapid increase in the availability of online gambling. As online gambling has increased in popularity, there has been a corresponding increase in online communities that discuss gambling.


The movement of gambling, and of communities interested in gambling, to online spaces presents new challenges for harm reduction. The current study analyses a forum hosted on a popular website (reddit.com) to determine its suitability as a data source to inform gambling harm reduction in online spaces.

The current study provides an exploratory analysis of 1,141 unique posts and 11,668 comments collected from the online forum r/onlinegambling. The dataset covers posts and comments from August 5, 2015, to October 30. Natural language processing (NLP) techniques were used to identify common terms and phrases, identify topics with high rates of participant engagement and perform a sentiment analysis of posts and comments.
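The steps above (common-term identification and sentiment tagging) can be sketched with a minimal stand-in. The term lists and scoring below are toy assumptions for illustration, not the study's actual NLP models.

```python
# Illustrative sketch of the pipeline described: term counting plus a
# minimal lexicon-based sentiment pass over forum posts. The lexicons are
# hypothetical stand-ins, not the study's actual resources.
from collections import Counter

POSITIVE = {"win", "won", "great", "congrats"}
NEGATIVE = {"lost", "scam", "avoid", "rigged"}

def top_terms(posts, n=3):
    """Most common tokens across all posts (crude whitespace tokenization)."""
    tokens = [w for post in posts for w in post.lower().split()]
    return Counter(tokens).most_common(n)

def sentiment(post):
    """Label a post by counting lexicon hits; ties fall to neutral."""
    words = set(post.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = ["Won big today congrats to me", "Avoid this rigged site", "Deposit limits help"]
print([sentiment(p) for p in posts])  # → ['positive', 'negative', 'neutral']
```

Real analyses would use a trained sentiment model and proper tokenization; the sketch only shows the input/output shape of each step.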

Sentiment analysis results showed that the majority of posts and comments were positive, but there were substantial numbers of negative and neutral content. Positive content was often congratulatory and focused on winning, neutral posts more commonly focused on practical advice, and negative posts were more commonly concerned with avoiding operators perceived as illegitimate by forum participants.


Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations.

The rise of big data and AI has increased information complexity, challenging human understanding. The Adaptive Cognitive Fit (ACF) framework shows how AI-enhanced representations improve performance in such environments.


Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information and consequently affect human performance. Extant research in cognitive fit, which preceded the big data and AI era, focused on the effects of aligning information representation and task on performance, without sufficient consideration of information facets and the attendant cognitive challenges.

Therefore, there is a compelling need to understand the interplay of these dominant information facets with information representations and tasks, and their influence on human performance. We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary for these complex information environments. To this end, we propose and test a novel “Adaptive Cognitive Fit” (ACF) framework that explains the influence of information facets and AI-augmented information representations on human performance. We draw on information processing theory and cognitive dissonance theory to advance the ACF framework and a set of propositions. We empirically validate the ACF propositions with an economic experiment that demonstrates the influence of information facets, and a machine learning simulation that establishes the viability of using AI to improve human performance.


The Rise of Artificial Intelligence Phobia! Unveiling News-Driven Spread of AI Fear Sentiment using ML, NLP and LLMs

Analyzing 70,000 AI-related news headlines, this study finds persistent fear-based language driving public anxiety and misconceptions. It urges responsible media coverage and stronger AI education.


This study examines how the use of alarmist and fear-inducing language by news media contributes to negative public perceptions of AI. Nearly 70,000 AI-related news headlines were analyzed using natural language processing (NLP), machine learning (ML), and large language models (LLMs) to identify dominant themes and sentiment patterns. The theoretical framework draws on existing literature that posits the power of fear-inducing headlines to influence public perception and behavior, even when such headlines represent a relatively small proportion of total coverage.

This research applies topic modeling and fear sentiment classification using BERT, LLaMA, and Mistral, alongside supervised ML techniques. The findings show a persistent presence of emotionally negative and fear-laden language in AI news coverage. This portrayal of AI as dangerous to humans or as an existential threat profoundly shapes public perception, fueling AI phobia that leads to behavioral resistance toward AI, which is ultimately detrimental to the science of AI. Furthermore, this can have an adverse impact on AI policies and regulations, leading to a stunted growth environment for AI. The study concludes with implications and recommendations to counter fear-driven narratives and suggests ways to improve public understanding of AI through responsible news media coverage, broad AI education, democratization of AI resources, and the drawing of clear distinctions between AI as a science versus commercial AI applications, to promote enhanced fact-based mass engagement with AI while preserving human dignity and agency.
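The paper's fear-sentiment classification uses BERT, LLaMA, Mistral, and supervised ML; as a hedged illustration of the task's input/output shape only, a keyword-scoring baseline might look like this. The marker list and threshold are assumptions, not the study's models.

```python
# Toy baseline for headline fear classification, illustrating the shape of
# the task. The real study uses BERT, LLaMA, Mistral, and supervised ML;
# this keyword scorer is only an illustrative stand-in.
FEAR_MARKERS = {"threat", "dangerous", "destroy", "apocalypse", "fear", "extinction"}

def fear_score(headline):
    """Fraction of a headline's words that are fear markers."""
    words = headline.lower().replace("!", "").split()
    return sum(w in FEAR_MARKERS for w in words) / max(len(words), 1)

def classify(headline, threshold=0.1):
    return "fear" if fear_score(headline) >= threshold else "neutral"

headlines = [
    "AI poses existential threat to humanity",
    "New AI model improves weather forecasts",
]
print([classify(h) for h in headlines])  # → ['fear', 'neutral']
```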


Unlocking Business Value with Generative AI! Economic Value Assessment for Chatbots and Gen AI ROI Discovery

This study examines the real economic value of generative AI and chatbots by analyzing research, industry reports, and pricing data. It offers insights into productivity, cost savings, and ROI to guide policymakers, businesses, and researchers.


While industry sources are reporting projected or anticipated productivity gains and cost savings, a dependable analysis of these claims remains absent. This paper reviews extant research, economic studies, industry reports, and other pertinent information to evaluate the impacts of generative AI and chatbots, especially on return on investment (ROI) measures. We offer an analytical view of the financial impacts of generative AI, especially chatbots, with cases and pricing information from generative AI service providers. Our goal is to provide policymakers, business leaders, and researchers with an exploratory understanding of how business value can be unlocked with generative AI. This is an abbreviated version of the paper to meet the page limit of the conference.
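The ROI arithmetic underlying such an assessment is simple; the figures below are hypothetical, not drawn from the paper's cases or pricing data.

```python
# Simple ROI arithmetic of the kind a financial impact analysis rests on.
# The dollar figures are hypothetical illustrations only.

def roi(annual_benefit, annual_cost):
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (annual_benefit - annual_cost) / annual_cost

# Hypothetical chatbot deployment: $120k in productivity benefit versus
# $40k in subscription and operations cost.
print(f"ROI: {roi(120_000, 40_000):.0%}")  # → ROI: 200%
```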


Would You Please Like My Tweet?! An Artificially Intelligent, Generative Probabilistic, and Econometric Based System Design for Popularity-Driven Tweet Content Generation

This study develops an automated decision support system for social media managers that uses econometrics, ML, and Bayesian models to predict engagement and generate high-impact Tweet content, addressing the “blank screen” problem.


Recent advances in this form of artificial intelligence have been suggested to allow content creators and managers to shift their tasks from creation toward editing, thus overcoming a common problem: the tyranny of the blank screen. In this research, we address this topic by proposing a novel system design that will suggest engagement-driven message features as well as automatically generate critical and fully written unique Tweet message components, with the goal of maximizing the probability of relatively high engagement levels. Our multi-methods design relies on the use of econometrics, machine learning, and Bayesian statistics, all of which are widely used in the emerging fields of Business and Marketing Analytics. Our system design is intended to analyze Tweet messages for the purpose of generating the most critical components and structure of Tweets. We propose econometric models to judge the quality of written Tweets by way of engagement-level prediction, as well as a generative probability model for the auto-generation of Tweet messages. Testing of our design demonstrates the need to take into account the contextual, semantic, and syntactic features of messages, while controlling for individual user characteristics, so that generated Tweet components and structure maximize the potential engagement levels.


Cultivation of Human Centered Artificial Intelligence: Culturally Adaptive Thinking in Education (CATE) for AI

This study introduces the CATE-AI framework to make AI education culturally adaptive and inclusive. By integrating human behavior theories and human-centered AI principles, it enhances understanding while reducing confusion, AI-phobia, and resistance among diverse learners.


It is necessary to teach AI topics on a mass scale. While there is a rush to implement academic initiatives, scant attention has been paid to the unique challenges of teaching AI curricula to a global and culturally diverse audience with varying expectations of privacy, technological autonomy, risk preference, and knowledge sharing. Our study fills this void by focusing on AI elements in a new framework titled Culturally Adaptive Thinking in Education for AI (CATE-AI) to enable teaching AI concepts to culturally diverse learners. Failure to contextualize and sensitize AI education to culture and other categorical human-thought clusters can lead to several undesirable effects, including confusion, AI-phobia, cultural biases toward AI, and increased resistance toward AI technologies and AI education. We discuss and integrate human behavior theories, AI applications research, educational frameworks, and human-centered AI principles to articulate CATE-AI. In the first part of this paper, we present the development of a significantly enhanced version of CATE. In the second part, we explore textual data from AI-related news articles to generate insights that lay the foundation for CATE-AI and support our findings. The CATE-AI framework can help learners study artificial intelligence topics more effectively by serving as a basis for adapting and contextualizing AI to their sociocultural needs.


Artificially Intelligent Readers: An Adaptive Framework for Original Handwritten Numerical Digits Recognition with OCR Methods

This study presents an adaptive AI-based OCR framework using CNNs to accurately recognize handwritten digits, handling personalized handwriting and limited datasets through custom augmentation and model improvements.


OCR applications, using AI techniques for transforming images of typed text, handwritten text, or other forms of text into machine-encoded text, provide a fair degree of accuracy for general text. However, even after decades of intensive research, creating OCR with human-like abilities has remained evasive. One of the challenges has been that OCR models trained on general text do not perform well on localized or personalized handwritten text due to differences in the writing style of alphabets and digits. This study discusses the steps needed to create an adaptive framework for OCR models, exploring a reasonable method to customize an OCR solution for a unique dataset of English-language numerical digits developed for this study. We develop a digit recognizer by training our model on the MNIST dataset with a convolutional neural network and contrast it with multiple models trained on combinations of the MNIST and custom digits. Using our methods, we observed results comparable with the baseline and provided recommendations for improving OCR accuracy for localized or personalized handwritten text. This study also provides an alternative perspective to generating data using conventional methods, which can serve as a gold standard for custom data augmentation to help address the challenges of scarce data and data imbalance.
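The custom-augmentation idea above can be sketched in miniature: generate shifted variants of a digit image to enlarge a scarce personalized dataset. The 3x3 "image" and the specific shifts are illustrative assumptions; real pipelines operate on MNIST-sized arrays with richer transforms.

```python
# Sketch of custom data augmentation for handwritten digits: produce shifted
# variants of an image to enlarge a scarce dataset. Images are tiny
# lists-of-lists here purely for illustration.

def shift_right(img, pad=0):
    """Shift every row one pixel right, padding the left edge."""
    return [[pad] + row[:-1] for row in img]

def augment(img):
    """Return the original image plus two shifted variants."""
    down = [img[-1]] + img[:-1]  # one-pixel downward shift (rows wrap)
    return [img, shift_right(img), down]

digit = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
variants = augment(digit)
print(len(variants), variants[1][0])  # → 3 [0, 0, 1]
```

Training on MNIST plus such augmented custom digits is the kind of combination the study contrasts against the MNIST-only baseline.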


MOMCare with AI: A Dual Embedding based RAG-LLM Chatbot for Postpartum Depression

MOMCare is a chatbot using AI and medical-specific mechanisms to provide empathetic, accurate support for mothers with postpartum depression, demonstrating safe and effective mental health interventions.


According to the World Health Organization (WHO), around 13% of women experience postpartum mental health disorders, with rates rising to nearly 20% in developing countries. Postpartum depression (PPD) affects many women worldwide, but because of social stigma and the lack of accessible mental health support, it often goes undiagnosed or untreated. This paper presents MOMCare, a chatbot designed to support mothers navigating the challenges of PPD. MOMCare has a retrieval-augmented architecture with an end-to-end pipeline from data preprocessing to response generation. It employs hybrid classification, a dual embedding system, a dual verification guardrail, and a medical domain-specific reranking mechanism to generate empathetic and relevant PPD responses.

This refined design of Retrieval-Augmented Generation (RAG) ensures fast and factual responses by reducing noise in retrieval and providing abundant context to gpt-3.5-turbo. MOMCare was evaluated using both automated and human metrics. Results show strong performance in both evaluations, which underlines the potential for chatbot interventions in the postpartum mental health domain. The system is robust enough to take in new data and create a conversation generation pipeline that includes new information. Expanding the knowledge base using the conversation history with users is also in development. The MOMCare chatbot and its features were built on sound ethical principles of healthcare and Artificial Intelligence (AI) and present a strong design emphasis on safety and fairness. Note: This is the accepted manuscript of a paper accepted for publication in the Springer proceedings (Smart Innovation, Systems and Technologies series) of the 10th International Conference on Information and Communication Technology for Intelligent Systems (ICTIS 2025), held in New York on May 23, 2025. The final version will be published on SpringerLink.
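The dual embedding retrieval step described above can be sketched as blending similarity scores from two embedding spaces before ranking. The vectors, spaces ("general" and "medical"), and blend weight below are illustrative assumptions, not MOMCare's actual embeddings or reranker.

```python
# Hedged sketch of a dual-embedding retrieval step: similarity scores from
# two embedding spaces are blended, then documents are ranked. All vectors
# and weights are toy values for illustration only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def dual_score(query_vecs, doc_vecs, alpha=0.6):
    """Blend similarity from a general and a medical embedding space."""
    general = cosine(query_vecs["general"], doc_vecs["general"])
    medical = cosine(query_vecs["medical"], doc_vecs["medical"])
    return alpha * medical + (1 - alpha) * general

query = {"general": [1.0, 0.0], "medical": [0.0, 1.0]}
docs = {
    "doc_a": {"general": [1.0, 0.0], "medical": [0.0, 1.0]},  # matches both spaces
    "doc_b": {"general": [1.0, 0.0], "medical": [1.0, 0.0]},  # general only
}
ranked = sorted(docs, key=lambda d: dual_score(query, docs[d]), reverse=True)
print(ranked)  # → ['doc_a', 'doc_b']
```

A domain-specific reranker would then reorder the top candidates before they reach the LLM; weighting the medical space more heavily (alpha above) is one plausible way to bias retrieval toward clinically relevant passages.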


COVID-19 Public Sentiment Insights and Machine Learning for Tweets Classification

This study analyzes COVID-19 Twitter sentiment, tracking fear trends and comparing ML methods, with Naïve Bayes achieving 91% accuracy. It offers insights into pandemic fear and guidance for textual sentiment analysis.


There is a tremendous need to address and better understand COVID-19’s informational crisis and gauge public sentiment, so that appropriate messaging and policy decisions can be implemented. In this research article, we identify public sentiment associated with the pandemic using Coronavirus-specific Tweets and R statistical software, along with its sentiment analysis packages. We demonstrate insights into the progress of fear-sentiment over time as COVID-19 approached peak levels in the United States, using descriptive textual analytics supported by necessary textual data visualizations. Furthermore, we provide a methodological overview of two essential machine learning classification methods, in the context of textual analytics, and compare their effectiveness in classifying Coronavirus Tweets of varying lengths. We observe a strong classification accuracy of 91% for short Tweets with the Naïve Bayes method. We also observe that the logistic regression classification method provides a reasonable accuracy of 74% with shorter Tweets, and both methods showed relatively weaker performance for longer Tweets. This research provides insights into Coronavirus fear-sentiment progression, and outlines associated methods, implications, limitations and opportunities.
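The Naïve Bayes method referenced above can be sketched in a few lines: per-class word counts with Laplace smoothing, and a log-probability comparison at prediction time. The training examples are toy stand-ins, not the study's Tweet corpus.

```python
# Minimal multinomial Naive Bayes for short-text classification, illustrating
# the method the study applied to Coronavirus Tweets. Training data here is a
# toy stand-in, not the study's corpus.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns (priors, per-class counts, vocab)."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        priors[label] += 1
        for w in text.lower().split():
            counts[label][w] += 1
            vocab.add(w)
    return priors, counts, vocab

def predict(text, priors, counts, vocab):
    """Pick the label maximizing log prior plus smoothed log likelihoods."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("stay safe wash hands", "neutral"), ("scared of the virus", "fear"),
        ("virus fear everywhere", "fear"), ("hands washed stay home", "neutral")]
model = train(docs)
print(predict("fear of the virus", *model))  # → fear
```

Naïve Bayes's strength on short texts, as the study's 91% figure suggests, comes from these per-word likelihoods remaining informative even with very few tokens per document.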


ESHRO: An Innovative Evaluation Framework for AI Driven Mental Health Chatbots

This study presents ESHRO, a framework for evaluating mental health chatbots on empathy, safety, and effectiveness, demonstrated using the ELY Chatbot to enhance AI-driven mental health support.


Although mental health issues are prevalent, access to mental health support remains limited for many people, a challenge exacerbated by the pandemic (Lattie, 2022). In recent years, AI chatbots have emerged as a potential avenue to overcome these obstacles. With the rise of the development and use of such mental health support chatbots, it has become essential to have evaluation frameworks that ensure these chatbots consistently provide empathetic, safe, and effective responses to users.

For this purpose, this paper introduces ESHRO, an innovative evaluation framework to analyze the LLM-generated responses on five critical metrics: Empathy, Safety, Helpfulness, Relevance, and Overall Quality. By incorporating multidimensional metrics and integrating both automated and human evaluation, ESHRO overcomes many limitations of existing frameworks. Moreover, to showcase its application, we developed ELY Chatbot, an AI-driven mental health chatbot developed to deliver emotional support and motivation. We utilized the ESHRO framework to evaluate it. The ESHRO framework demonstrates the potential to improve evaluations of mental health chatbots. The paper concludes by discussing limitations and highlighting opportunities for future research, ultimately paving the way for safer, more empathetic, and more impactful mental health solutions.
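One way to picture ESHRO's multidimensional scoring is averaging rater scores per metric and reporting a weighted composite. The equal weights and 1-5 ratings below are assumptions for illustration, not the framework's published scoring rules.

```python
# Illustrative scoring sketch for a multidimensional evaluation like ESHRO's
# five metrics. The rating scale and weights are assumptions for
# illustration, not the framework's actual rules.
METRICS = ("empathy", "safety", "helpfulness", "relevance", "overall")

def evaluate(ratings, weights=None):
    """Average 1-5 ratings per metric, then report a weighted composite."""
    weights = weights or {m: 1.0 for m in METRICS}
    per_metric = {m: sum(r[m] for r in ratings) / len(ratings) for m in METRICS}
    composite = sum(per_metric[m] * weights[m] for m in METRICS) / sum(weights.values())
    return per_metric, composite

ratings = [  # two hypothetical raters scoring one chatbot response
    {"empathy": 4, "safety": 5, "helpfulness": 4, "relevance": 4, "overall": 4},
    {"empathy": 5, "safety": 5, "helpfulness": 3, "relevance": 4, "overall": 4},
]
per_metric, composite = evaluate(ratings)
print(per_metric["empathy"], round(composite, 2))  # → 4.5 4.2
```

Combining such human ratings with automated scores per metric is how a framework can surface, say, a response that is helpful but insufficiently safe.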


When Machines Create! Envisioning Our Future With the Transformative Power of Generative AI

Modern generative AI, widely accessible and practical, drives innovation and value creation when combined with frameworks like ACF, while emphasizing human-centric use and managing associated risks.


Artificial intelligence (AI) can be viewed as a model of human intelligence’s capabilities, at least in part. In this sense, AI ‘machines’ have been generative since the field's inception in the 1950s, and we should not have been surprised by what we are now seeing in the form of “generative AI” (gen AI) applications, but we are! The recent widespread appreciation of the generative aspects of AI applications stems from the ease of availability (all that is needed is a connected browser on any device!) of such AIs to the masses, the increased speeds at which gen AI outputs are being churned out, and the impressive usefulness of such rapidly created output. Gen AI has achieved fast-food status on a consumer level, and it can be industrialized, commoditized, and woven into the socioeconomic fabric of human society. Combined with the power of strategic human-enhancive AI architectures such as adaptive cognitive fit (ACF), we can anticipate gen AI to help unleash iterations of rapid and complex advancements, with purported benefits that will be treated as hyper-value creation opportunities and hitherto obscure risks (Samuel, et al., 2022). The focus should eventually shift to ACF and similar architectures, which will help nurture a society that supports mass-human ascendancy over AIs, as opposed to the converse.


Emoji Augmented AI Chatbot (EACh): Improving NLU and NLG for Social Communications in RAG-LLMs with Emoji Awareness

This study presents EACh, an emoji-augmented AI chatbot using RAG architecture to interpret and generate emojis accurately, enhancing conversational relevance and emotional alignment in digital communication.


Emojis have become integral to digital communication, conveying nuanced emotional cues that enrich textual meaning (Bai et al., 2019). However, current large language model (LLM) chatbots struggle with emojis: they often misinterpret (or fail to recognize) the meaning of emojis, making their replies irrelevant (Delobelle et al., 2019; Xie et al., 2025). In our experiments, LLMs demonstrate a mere 55% accuracy with emoji classification. To address this, we propose an emoji-augmented AI chatbot (EACh) incorporating natural language understanding (NLU) and natural language generation (NLG) for social communications with emoji-awareness, by developing a design that enhances the LLM’s ability to interpret emojis and generate appropriate emojis.

Our approach utilizes a Retrieval-Augmented Generation (RAG) architecture to incorporate emoji-specific knowledge into the LLM’s reasoning (Jiang et al., 2023). EACh will perform two functions: Emoji interpretation (when a user message contains an emoji, the system retrieves its entry from an emoji-specific knowledge database to assist the model in accurately interpreting its intended meaning and emotional context) and Emoji generation (when formulating a response, the model queries the database for an emoji that aligns with the desired tone or sentiment of the message, ensuring it is contextually appropriate). Sentiment and emotion analysis, along with generative modeling, have been critical facets of NLU (Samuel et al., 2020; Garvey et al., 2021).
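The two functions described above can be sketched with a tiny in-memory stand-in for the emoji knowledge database; the entries and tone labels are illustrative assumptions, not EACh's actual knowledge base.

```python
# Sketch of the two EACh functions: looking up an emoji's meaning from a
# knowledge base, and picking an emoji matching a desired tone. The tiny
# dictionary is an illustrative stand-in for the emoji-specific RAG database.
EMOJI_KB = {
    "😂": {"meaning": "laughing hard", "tone": "joy"},
    "😢": {"meaning": "sadness or sympathy", "tone": "sad"},
    "🔥": {"meaning": "excellent or exciting", "tone": "excited"},
}

def interpret(message):
    """Return knowledge-base entries for any emojis found in the message."""
    return {ch: EMOJI_KB[ch] for ch in message if ch in EMOJI_KB}

def pick_emoji(tone):
    """Choose an emoji whose annotated tone matches the desired tone."""
    return next((e for e, v in EMOJI_KB.items() if v["tone"] == tone), "")

print(interpret("that show was 🔥"))
print(pick_emoji("sad"))  # → 😢
```

In the full RAG design, the retrieved entry would be injected into the LLM's context rather than returned directly, so the model can ground its reply in the emoji's intended meaning.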


Wildfire Generative AI Chatbot Track: Artificial Intelligence (AI)

This study presents an AI-powered wildfire chatbot that uses big data and machine learning to assess risks, provide real-time guidance, and support disaster preparedness for residents and first responders.


By harnessing data analysis and machine learning, AI can detect high-risk areas, predict fire behavior, and provide early warnings (Western Fire Chiefs Association, 2024). The NOAA reported significant drought and heat with wildfires that devastated several western states from 2020 to 2022. Each of those years exceeded the 1.2 million acres burned in any year since 2016 (NOAA, 2023). In 2023, the devastating Lahaina fire in Maui killed 114 and left about 850 missing, while the Olinda and Kula fires burned 1,081 and 202 acres, respectively (WFCA, 2023). On January 7, 2025, deadly wildfires in Los Angeles killed 29 people, including residents defending their homes. The Palisades fire burned 23,448 acres and destroyed over 6,800 structures, while the Eaton Fire reached 14,021 acres and wiped out 10,491 structures (Stelloh et al., 2025).

AI assistants like Alexa provide weather forecasts, and a wildfire AI chatbot could similarly assess wildfire risk and offer emergency guidance using AI techniques, big data, and high-performance computing (Samuel et al., 2022). This paper presents an efficient solution using a wildfire AI chatbot to help local firefighters, law enforcement, and the public detect wildfire threats and access critical information during such emergencies for evacuation, home protection, preparing a disaster supply kit, developing a family communication plan, and resource allocation (Habitat for Humanity, 2025). The research develops a personalized chatbot for wildfire risk classification that enhances community preparedness, empowers residents with tailored safety measures, and supports first responders during emergencies using AI. Users can ask the chatbot any wildfire-related questions, and the chatbot will process the content and context to provide helpful and accurate information. The goal is to design an intelligent system that understands user needs and delivers tailored information, empowering residents to take timely safety measures.
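The risk classification idea can be pictured with a toy rule-based sketch; the input variables and thresholds are illustrative assumptions, not the chatbot's trained model.

```python
# Toy rule-based sketch of wildfire risk classification of the kind such a
# chatbot performs. Thresholds and inputs are illustrative assumptions, not
# the system's actual trained model.

def risk_level(humidity_pct, wind_mph, days_since_rain):
    """Score three conditions and map the total to low/moderate/high."""
    score = 0
    score += 2 if humidity_pct < 20 else 1 if humidity_pct < 35 else 0
    score += 2 if wind_mph > 25 else 1 if wind_mph > 15 else 0
    score += 2 if days_since_rain > 30 else 1 if days_since_rain > 14 else 0
    return ("low", "moderate", "high")[min(score // 2, 2)]

# Hypothetical conditions: dry air, strong winds, a month without rain.
print(risk_level(humidity_pct=15, wind_mph=30, days_since_rain=35))  # → high
```

A production system would replace these hand-set thresholds with a model trained on historical fire, weather, and terrain data, and pair the classification with location-specific guidance.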