The growing demand for digital transformation in higher education has highlighted the limitations of conventional bureaucratic systems. This study aims to develop and evaluate a structural model for implementing generative AI chatbots in campus administration, focusing on their ability to deliver sustainable service innovation. Integrating behavioral modeling and computational logic, the research adopts a mixed-methods approach. A questionnaire was distributed to 300 respondents, and data were analyzed using Partial Least Squares Structural Equation Modeling (SmartPLS). This study integrates 11 latent constructs—including AI capability, system usability, information quality, service availability, privacy and security, institutional support, user satisfaction, service experience, customer relationship management (CRM), administrative efficiency, and digital literacy (as a moderator)—into a validated structural model. The findings reveal that all primary structural paths are statistically significant (p < 0.001). Notably, CRM demonstrates a very strong effect on administrative efficiency (β = 0.833, p < 0.001; R² = 0.694), confirming its central role in translating satisfaction and service experience into organizational outcomes. In addition, the study introduces an operational AI algorithm and a multi-criteria optimization model that simulate trade-offs between CRM and efficiency. These computational insights provide university leaders with practical decision-making tools for aligning chatbot deployment with strategic goals such as cost savings, service scalability, and student retention.
generative AI, chatbot, customer relationship management (CRM), campus bureaucracy, SmartPLS, sustainable innovation
The digital transformation of public services has become a critical agenda across sectors, including higher education. Universities are increasingly expected to modernize their service delivery systems in line with the demands of agility, accessibility, and automation. However, many higher education institutions continue to rely on legacy administrative processes that are manual, fragmented, and heavily dependent on physical presence. As a result, campus bureaucracies often exhibit inefficiencies such as long wait times, repetitive paperwork, and inconsistent service delivery, which ultimately hinder student satisfaction and institutional responsiveness.
In this context, the adoption of generative artificial intelligence (AI), particularly in the form of chatbots, has emerged as a promising strategy to automate administrative services and enhance communication between students and institutions. Unlike traditional rule-based bots, generative AI chatbots offer adaptive, context-aware interactions that are capable of engaging users in a more human-like and flexible manner. However, despite their growing popularity, the implementation of such systems in university settings often lacks strategic direction and sustainable frameworks. Most existing deployments focus on functional efficiency without addressing broader outcomes such as customer relationship management (CRM), policy alignment, and long-term innovation capability.
This study seeks to address that gap by developing a structural and mathematical model that explains how AI chatbot systems influence key performance indicators of university administration. Building on a behavioral framework validated through Partial Least Squares Structural Equation Modeling (SmartPLS), this study also introduces a multi-criteria optimization model to simulate and evaluate system performance in real conditions. The integration of these two perspectives, behavioral and computational, enables a more holistic understanding of how satisfaction, service experience, and CRM can be strategically aligned to drive sustainable service innovation.
To achieve this, the following research questions are addressed:
(1) How do system-level factors such as AI capability, usability, and information quality influence user satisfaction and service experience?
(2) What role does CRM play in mediating the relationship between satisfaction and administrative efficiency?
(3) How can mathematical modeling and algorithmic logic support strategic implementation of generative AI chatbots in campus administration?
By answering these questions, the study contributes to the literature on AI-enabled public services and offers practical guidance for universities seeking to modernize their bureaucratic systems in a sustainable, user-centric manner.

The rapid advancement of large language models (LLMs) such as ChatGPT has enabled the development of chatbots capable of understanding natural language and generating context-aware responses in real time. In the public service domain, generative AI chatbots are increasingly used to streamline customer service, improve accessibility, and reduce operational costs. In the education sector, chatbots are applied to tasks such as answering frequently asked questions, processing student service requests, and supporting online academic guidance. Their potential lies not only in automation, but also in delivering services that are responsive, scalable, and available 24/7.
Unlike rule-based bots, generative chatbots are capable of learning from data, adapting to varying input patterns, and producing dynamic answers beyond predefined scripts. These features make them particularly well suited for handling complex, unstructured, and repetitive administrative queries in university settings. As such, AI-powered chatbot systems are no longer just optional add-ons; they are evolving into essential components of intelligent service ecosystems that support institutional digital transformation.

The integration of generative AI and chatbots into higher education has attracted considerable scholarly attention over the last five years, particularly in relation to pedagogical innovation, technology adoption, and ethical concerns. These studies form the empirical and theoretical foundation for the current research.
Kooli [1] provided a comprehensive ethical analysis of AI and chatbot use in academia, emphasizing that while these tools offer significant innovation potential, they also introduce risks related to misuse, bias, and dehumanization. The study advocated for ethical adaptation and sustainability to ensure the benefits of AI systems are fully realized in education. From a technological acceptance perspective, Falebita and Kok [2] explored the interplay between technological readiness, self-efficacy, and attitudes among undergraduate students, using PLS-SEM. Their findings emphasized that students’ attitude plays a primary role in AI tool adoption, beyond perceptions of ease or usefulness.
In a complementary review, Luo et al. [3] examined 63 empirical studies on AI-based learning tools and categorized their roles into assessment, intelligent tutoring, and feedback mechanisms. While cognitive outcomes were frequently improved, skill-based gains varied widely. Ayanwale and Molefi [4] applied an expanded diffusion theory of innovation and found that compatibility, trialability, and perceived trust strongly influenced students’ intention to adopt chatbots in education, whereas perceived ease of use showed weaker effects. Yadegaridehkordi et al. [5] investigated academic staff’s willingness to adopt ChatGPT using Structural Equation Modeling–Artificial Neural Network (SEM-ANN). Their findings highlighted the role of anthropomorphism and hedonic motivation in predicting performance expectancy and willingness to use.
Similarly, McGrath et al. [6] reviewed empirical studies on ChatGPT’s integration in higher education, noting an absence of consistent theoretical frameworks and a dichotomy between utopian and dystopian discourse surrounding AI’s future role. In their narrative review, Davar et al. [7] identified both benefits and barriers of AI chatbots, including ethical concerns like data privacy and academic integrity. They emphasized the potential of chatbots as virtual tutors and assessment tools, while calling for careful design to mitigate associated risks. Kleine et al. [8] conducted a daily diary study applying Technology Acceptance Model (TAM) and TAM3 models, revealing that perceived ease of use and usefulness significantly predicted chatbot usage intensity among students, mediated by emotional responses and social norms.
Fošner and Aver [9] provided a regional perspective from Slovenia, revealing that while students recognize the efficiency of chatbots, they are concerned about their impact on creativity and critical thinking. The findings called for sustainable, ethical integration policies in curricula. Schei et al. [10] extended the Technology Acceptance Model-Unified Theory of Acceptance and Use of Technology (TAM-UTAUT) framework in an Indonesian context, revealing that attitude toward behavior, anxiety, and performance expectancy strongly influenced behavioral intention and student performance with AI tools. Wang et al. [11] offered a large-scale bibliometric analysis of AI in education, identifying trends in adaptive learning, profiling, and intelligent assessment; the review identified gaps in theoretical foundations and highlighted future research directions. Koteczki and Balassa [12] and Sofiyah et al. [13] proposed robust SEM models for chatbot adoption, uncovering critical moderating variables such as technological proficiency, gender, and trust. Their findings support customized AI adoption frameworks that address user-specific needs. Stöhr et al. [14] performed a large-scale survey across Swedish universities and found gender and disciplinary differences in attitudes toward ChatGPT, with female and humanities students expressing more concern, while male and engineering students showed greater optimism and usage; related findings are reported by Sofiyah et al. [15].
Sova et al. [16] conducted a systematic review of GenAI implementation case studies, synthesizing pedagogical frameworks such as Technological Pedagogical Content Knowledge (TPACK) and Substitution, Augmentation, Modification, and Redefinition (SAMR) to guide responsible and impactful use of generative AI in classrooms. Jin et al. [17] investigated global institutional adoption of GenAI through the lens of diffusion of innovations theory. They observed proactive strategies emphasizing integrity and training, but noted gaps in privacy frameworks and equitable access. Yan et al. [18] conducted a scoping review of LLMs in education, identifying automation potentials and ethical risks. They proposed a human-centered development model to enhance transparency, privacy, and accountability [19, 20].
Previous studies on chatbots in higher education have largely focused on adoption, usability, or student attitudes [3, 6, 12], but very few have investigated their strategic integration with broader organizational outcomes such as CRM or administrative efficiency. This creates a gap in the literature, as most implementations treat chatbots as technical tools rather than strategic instruments for long-term institutional transformation [21-23]. To address this, the present study reframes the research agenda by asking:
(1) How do system-level factors such as AI capability, usability, and information quality influence user satisfaction and service experience?
(2) What role does CRM play in mediating the relationship between satisfaction and administrative efficiency?
(3) How can mathematical modeling and algorithmic logic support strategic implementation of generative AI chatbots in campus administration?
(4) How can SEM outputs be incorporated into optimisation algorithms to design an integrated framework for sustainable service innovation?
The novelty of this study lies in its integration of behavioral modeling and computational optimization. Unlike previous research that examined either user perceptions or technical algorithms in isolation, our approach leverages SEM path coefficients as empirical weights to guide multi-criteria optimization. This allows us to translate human-centered behavioral insights into computational trade-offs, creating a hybrid framework that not only validates the relationships between satisfaction, CRM, and efficiency, but also provides actionable simulation tools for decision-makers. To bridge this gap, this study proposes a dual approach that:
(1). Validates a behavioral structural model using SmartPLS, connecting AI capabilities to CRM and efficiency through satisfaction and experience, and
(2). Introduces a mathematical optimization model for simulating decision trade-offs and system priorities using weighted multi-criteria logic.
By integrating these perspectives, this research offers a more comprehensive view of how generative AI chatbots can be implemented not just effectively, but strategically and sustainably in the context of campus bureaucracy. This study aims to develop and validate a hybrid model that captures both the behavioral and computational dimensions of implementing generative AI chatbots in university administrative services. On the behavioral side, the study employs a structural equation model using SmartPLS to examine how system features such as AI capability, usability, and information quality influence user satisfaction, service experience, and ultimately CRM and administrative efficiency.

On the computational side, a mathematical decision model is proposed to simulate how these factors interact under constrained conditions. The core of this model is a weighted multi-criteria optimization function $F\left( x \right)=\sum {{w}_{i}}{{x}_{i}}$, where ${{x}_{i}}$ represents standardized system performance indicators and ${{w}_{i}}$ are decision weights based on empirical findings or institutional priorities. A multi-objective extension is also formulated to capture trade-offs between CRM and efficiency outcomes using a policy-sensitive parameter $\lambda$. This dual-modeling approach supports both validation of user-perceived impact and practical decision-making under realistic constraints.

The main contribution of this study lies in bridging the gap between AI system capabilities and long-term institutional strategy for sustainable service delivery. First, it offers a validated structural model that empirically connects chatbot features to key organizational outcomes, including CRM and administrative efficiency. Second, it introduces a practical computational framework that simulates implementation trade-offs using optimization logic, enabling scenario analysis for policy makers. Third, the study presents a modular architecture for generative AI chatbot deployment, emphasizing features such as prompt engineering, RAG-based response logic, and policy-aware filtering, thereby addressing both the technical and governance aspects of AI implementation. Together, these contributions position the chatbot not merely as a support tool, but as a strategic innovation instrument aligned with the broader goals of digital transformation in higher education.
2.1 Research design
The study employed purposive sampling of 300 students from Universitas Sumatera Utara, selected specifically because they had prior experience using campus digital services. While this approach ensured relevance to the context of chatbot adoption, it inherently limits the generalizability of the findings. Therefore, this research is positioned as an exploratory case study, with future replication across multiple universities recommended to validate external applicability.
A key innovation of the methodology is the integration of behavioral and computational approaches. The path coefficients obtained from the SmartPLS structural equation model were not only used to validate relationships among constructs but also directly incorporated as empirical weights into the multi-criteria optimization model. In this way, the SEM results served as inputs to simulate trade-offs between CRM and administrative efficiency under real-world constraints such as budget or service quality requirements. This methodological integration ensures that the computational algorithm is grounded in empirical behavioral evidence rather than running in isolation.
2.2 Development of constructs and indicators
The research model comprises eleven latent constructs that reflect both system-level features and strategic outcomes of chatbot implementation. These include six exogenous variables: AI Capability (X1), System Usability (X2), Information Quality (X3), Service Availability (X4), Privacy and Security (X5), and Institutional Support (X6). Two mediating constructs, User Satisfaction (M1) and Service Experience (M2), capture users' psychological responses, while CRM Enhancement (Y1) and Administrative Efficiency (Y2) are modeled as final outcome variables. Digital Literacy (Z1) is introduced as a moderating variable to assess user readiness and variability in system interaction. Each construct was operationalized using 2 to 4 reflective indicators, derived and adapted from validated instruments in prior studies, and measured on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) [24-26]. The final questionnaire, consisting of 34 items, was reviewed by academic experts and tested in a pilot study to ensure clarity, reliability, and contextual appropriateness. To visualize the theoretical framework developed in this study, Figure 1 presents the proposed relationships between constructs within the context of generative AI chatbot implementation in campus bureaucracy. This model was designed to empirically test how various technological, experiential, and strategic variables contribute to improved administrative performance through user satisfaction and CRM enhancement [27-30].
The structural model consists of eleven latent constructs. Six exogenous constructs, AI Capability (X1), System Usability (X2), Information Quality (X3), Service Availability (X4), Privacy & Security (X5), and Institutional Support (X6), serve as primary predictors that influence two mediators: User Satisfaction (M1) and Service Experience (M2). These mediating variables affect Customer Relationship Management (CRM) Enhancement (Y1), which ultimately leads to Administrative Efficiency (Y2) as the final dependent outcome. A critical moderator, Digital Literacy (Z1), is introduced to examine how individual digital competencies influence the relationships between input variables (X) and satisfaction (M1). The model captures both functional system attributes and user-centered experiences, enabling a nuanced understanding of AI integration beyond technical performance. By placing CRM at the strategic center of the framework, the model shifts the focus from transactional interactions to long-term relationship value. This structure aligns with the goal of promoting sustainable service innovation in higher education by bridging digital transformation with human-centric outcomes.
Figure 1. Structural model of research
2.3 Data collection
Primary data were collected through an online survey involving 300 students at Universitas Sumatera Utara. A purposive sampling strategy, stratified across faculties and academic levels, was employed to target respondents with experience using campus digital services or chat-based administrative assistance. The survey instrument included demographic items and construct-specific statements reflecting the research model, designed to capture perceptions of AI chatbot features, user experience, CRM potential, and administrative efficiency. Ethical approval was obtained prior to distribution, and participant anonymity and confidentiality were maintained throughout. Data screening revealed no missing values or extreme outliers, and the sample size satisfied the SmartPLS guideline of at least ten times the number of indicators of the most complex construct. The dataset was subsequently used both to validate the structural model and to inform the parameters of the mathematical optimization formulation. The demographic distribution of the respondents is presented in Table 1.
Table 1. Demographic profile of respondents
| No. | Variable | Category | Freq. (n) | Percent (%) |
| 1 | Gender | Male | 128 | 42.7 |
|  |  | Female | 172 | 57.3 |
| 2 | Age group | ≤ 20 years | 85 | 28.3 |
|  |  | 21–23 years | 174 | 58.0 |
|  |  | ≥ 24 years | 41 | 13.7 |
| 3 | Academic level | Undergraduate (S1/D4) | 218 | 72.7 |
|  |  | Diploma (D3) | 54 | 18.0 |
|  |  | Graduate (S2) | 28 | 9.3 |
| 4 | Faculty origin | Science & Technology | 136 | 45.3 |
|  |  | Social Sciences & Humanities | 102 | 34.0 |
|  |  | Health Sciences | 62 | 20.7 |
| 5 | Familiarity with chatbot | Never used | 47 | 15.7 |
|  |  | Used occasionally | 166 | 55.3 |
|  |  | Used frequently | 87 | 29.0 |
The respondent pool was relatively balanced in terms of gender, with 57.3% female and 42.7% male participants. Most respondents (58%) were aged between 21 and 23 years, aligning with the typical undergraduate demographic. In terms of academic level, 72.7% of participants were enrolled in undergraduate programs (S1/D4), while the remainder were diploma (D3) and graduate (S2) students. The faculty representation included Science & Technology (45.3%), Social Sciences & Humanities (34.0%), and Health Sciences (20.7%), indicating multidisciplinary input on digital transformation issues. Notably, while 15.7% of respondents had never used chatbots, the majority (55.3%) had used them occasionally, and 29.0% reported frequent usage, suggesting adequate familiarity with the core technology under investigation. This diverse yet relevant sample strengthens the internal validity of the study and ensures meaningful interpretation of the structural model results.
2.4 Mathematical approach on structural equation modeling
To formalize the decision structure of the proposed generative AI chatbot system, we define a multi-criteria optimization model that incorporates the main constructs of the behavioral framework. We define each ${{x}_{i}}\in \left[ 0,1 \right]$ as a normalized performance score and let the path coefficients from the SmartPLS model be used as empirical weights. Then, the total utility function representing the overall system performance is:
$\max F\left( x \right)={{w}_{1}}{{x}_{1}}+{{w}_{2}}{{x}_{2}}+{{w}_{3}}{{x}_{3}}+{{w}_{4}}{{x}_{4}}+{{w}_{5}}{{x}_{5}}+{{w}_{6}}{{x}_{6}}$ (1)
However, since the ultimate goals are improvements in CRM (y₁) and Administrative Efficiency (y₂), both of which are influenced by mediators (m₁, m₂), we define the following functional relationships:
${{m}_{1}}={{\alpha }_{1}}{{x}_{1}}+{{\alpha }_{2}}{{x}_{2}}+{{\alpha }_{3}}{{x}_{3}}$ (2)
${{m}_{2}}={{\beta }_{1}}{{x}_{4}}+{{\beta }_{2}}{{x}_{5}}+{{\beta }_{3}}{{x}_{6}}+{{\beta }_{4}}{{z}_{1}}$ (3)
${{y}_{1}}={{\gamma }_{1}}{{m}_{1}}+{{\gamma }_{2}}{{m}_{2}}$ (4)
${{y}_{2}}={{\delta }_{1}}{{y}_{1}}+{{\delta }_{2}}{{m}_{1}}$ (5)
Substituting the dependencies into the utility function, we get:
$F\left( x \right)=\lambda {{y}_{1}}+\left( 1-\lambda \right){{y}_{2}}=\lambda \left( {{\gamma }_{1}}{{m}_{1}}+{{\gamma }_{2}}{{m}_{2}} \right)+\left( 1-\lambda \right)\left( {{\delta }_{1}}{{y}_{1}}+{{\delta }_{2}}{{m}_{1}} \right)$ (6)
Now, replacing ${{m}_{1}}$ and ${{m}_{2}}$ using Eqs. (2) and (3), the final form of the objective becomes:
$F\left( x \right)=\lambda \left[ {{\gamma }_{1}}\left( {{\alpha }_{1}}{{x}_{1}}+{{\alpha }_{2}}{{x}_{2}}+{{\alpha }_{3}}{{x}_{3}} \right)+{{\gamma }_{2}}\left( {{\beta }_{1}}{{x}_{4}}+{{\beta }_{2}}{{x}_{5}}+{{\beta }_{3}}{{x}_{6}}+{{\beta }_{4}}{{z}_{1}} \right) \right]+\left( 1-\lambda \right)\left[ {{\delta }_{1}}{{y}_{1}}+{{\delta }_{2}}\left( {{\alpha }_{1}}{{x}_{1}}+{{\alpha }_{2}}{{x}_{2}}+{{\alpha }_{3}}{{x}_{3}} \right) \right]$ (7)
where, y₁ is also a function of m₁ and m₂, as shown above, allowing recursive substitution if needed for simulation. To reflect real-world system limitations, we can define additional constraints such as:
(1). Cost constraint
$C\left( x \right)=\sum\limits_{i=1}^{6}{{{c}_{i}}{{x}_{i}}}\le {{C}_{\max }}$ (8)
where, ${{c}_{i}}$ is the estimated cost coefficient of feature ${{x}_{i}}$.
(2). Complexity constraint
$L\left( x \right)=\sum\limits_{i=1}^{6}{{{l}_{i}}{{x}_{i}}}\le {{L}_{threshold}}$ (9)
(3). Minimum service effectiveness
${{m}_{1}}\ge {{m}_{\min }},\quad {{m}_{2}}\ge {{m}_{\min }}$ (10)
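To make this formulation concrete, the sketch below solves the linearized program with an off-the-shelf LP solver. All numerical values (the path weights α, β, γ, δ, the cost and complexity coefficients, and the constraint bounds) are illustrative assumptions rather than the study's estimates; in practice the weights would be taken from the SmartPLS path coefficients.

```python
# Minimal sketch of the constrained maximization in Eqs. (1)-(10).
# All coefficient values are hypothetical stand-ins for SEM-derived weights.
import numpy as np
from scipy.optimize import linprog

alpha = np.array([0.30, 0.37, 0.16])   # x1..x3 -> m1 (assumed path weights)
beta  = np.array([0.08, 0.12, 0.14])   # x4..x6 -> m2 (z1 held fixed here)
gamma = np.array([0.50, 0.49])         # m1, m2 -> y1
delta = np.array([0.83, 0.10])         # y1, m1 -> y2
lam   = 0.5                            # policy parameter: CRM vs. efficiency

# Fold the mediation structure of Eq. (7) into linear weights on x1..x6
w_y1 = lam + (1 - lam) * delta[0]
w_m1 = w_y1 * gamma[0] + (1 - lam) * delta[1]
w_m2 = w_y1 * gamma[1]
w = np.concatenate([w_m1 * alpha, w_m2 * beta])

cost  = np.array([2.0, 1.5, 1.0, 1.2, 1.8, 0.8])  # c_i in Eq. (8) (assumed)
cplx  = np.array([1.0, 0.8, 0.6, 0.5, 1.2, 0.4])  # l_i in Eq. (9) (assumed)
m_min = 0.25                                      # floor in Eq. (10) (assumed)

A_ub = np.vstack([
    cost,                                   # C(x) <= C_max
    cplx,                                   # L(x) <= L_threshold
    np.concatenate([-alpha, np.zeros(3)]),  # m1 >= m_min
    np.concatenate([np.zeros(3), -beta]),   # m2 >= m_min
])
b_ub = np.array([5.0, 3.0, -m_min, -m_min])

res = linprog(c=-w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 6)  # max F = min -F
print("Optimal feature levels x*:", np.round(res.x, 3))
print("Maximum utility F(x*):", round(-res.fun, 3))
```

Sweeping $\lambda$ over this program reproduces the CRM-versus-efficiency trade-off analysis discussed later in the paper.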
2.5 Chatbot algorithm architecture
The architecture of the generative AI chatbot developed in this study follows a modular and strategic workflow that ensures responsiveness, contextual accuracy, and policy compliance in campus administration services. The chatbot operates based on a multi-phase process starting with intent recognition, where user queries are vectorized and matched using cosine similarity to a predefined intent embedding space. This enables the system to classify the query into a service category (e.g., letter request, academic calendar, internship registration). Once the intent is identified, the chatbot constructs a dynamic prompt by combining predefined templates based on Standard Operating Procedures (SOPs) with the user input.
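The intent-recognition step can be sketched as follows; the embeddings here are random placeholders standing in for a real sentence-encoder output, and the intent labels are illustrative:

```python
# Sketch of intent matching via cosine similarity between the query vector
# and a predefined intent embedding space. Vectors are random placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
intent_space = {                         # hypothetical intent embedding space
    "letter_request": rng.normal(size=384),
    "academic_calendar": rng.normal(size=384),
    "internship_registration": rng.normal(size=384),
}
query_vec = rng.normal(size=384)         # stands in for embed(user_query)

# Classify the query into the service category with the highest similarity
best_intent = max(intent_space,
                  key=lambda k: cosine_similarity(query_vec, intent_space[k]))
print("Matched intent:", best_intent)
```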
To enhance the contextual richness of the response, the architecture integrates a Retrieval-Augmented Generation (RAG) layer that retrieves relevant information from a structured document base. These retrieved documents (e.g., university regulations, academic calendar) are concatenated to the prompt, forming an enriched input fed into a pre-trained LLM, such as GPT. The chatbot then generates a candidate response, which undergoes policy filtering via a rule-based compliance layer to ensure that outputs are aligned with official regulations and do not breach institutional standards.
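A simplified sketch of this RAG enrichment and compliance filtering follows. The retrieval function uses naive keyword overlap in place of a real vector store, the LLM call is left as a placeholder, and all document texts and rule phrases are hypothetical:

```python
# Sketch of top-k retrieval, prompt assembly, and the rule-based policy filter.
from typing import List

def retrieve_top_k(query: str, documents: List[str], k: int = 3) -> List[str]:
    """Stand-in for embedding-based retrieval: rank documents by keyword overlap."""
    words = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(words & set(d.lower().split())))[:k]

def policy_filter(response: str, banned: List[str], fallback: str) -> str:
    """Rule-based compliance layer: block outputs that violate SOP rules."""
    return fallback if any(b in response.lower() for b in banned) else response

docs = [
    "Academic calendar: the odd semester begins on September 1.",
    "Letter requests are processed within three working days.",
    "Internship registration requires faculty advisor approval.",
]
query = "When does the semester begin?"
prompt = ("Answer using only official university information.\n"
          + "\n".join(retrieve_top_k(query, docs, k=2))
          + f"\nQuestion: {query}")
draft = "The odd semester begins on September 1."   # placeholder LLM output
print(policy_filter(draft, banned=["guaranteed admission"],
                    fallback="Please contact the registrar."))
```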
To ensure that the AI-powered chatbot performs in alignment with institutional bureaucracy while maintaining user-centric responsiveness, a procedural algorithm was developed to govern its internal logic. This algorithm integrates natural language processing techniques with retrieval-based augmentation and policy-based control, ensuring that the chatbot delivers responses that are both contextually appropriate and operationally valid. The core elements include the recognition of user intent, prompt construction, document retrieval, and filtering mechanisms to ensure regulatory compliance. These elements are structured into a repeatable and transparent process that can be adapted and audited.
Algorithm 1 generative chatbot service outlines the operational logic of the chatbot system. The process begins by identifying the most likely intent category through cosine similarity between the query vector and the intent embedding space. Once the intent is matched, a prompt is constructed using a combination of static configuration and dynamic user input. This prompt is then enriched using document retrieval from the knowledge base (RAG), generating a response through a large language model. Finally, the generated response passes through a compliance filter before being delivered to the user. The algorithm also logs interactions for continuous service improvement and strategic review.
| Algorithm 1. Generative Chatbot Service | 
| Input: $Q$ (user query); $D$ (document knowledge base); $V$ (pre-trained vector embeddings); $C$ (chatbot configuration) |
| Output: ${{R}_{valid}}$ (final filtered response); Interaction Log (user–chatbot interaction history) |
| Stepwise Procedure: |
| 1. Intent recognition: identify the most relevant intent $\hat{i}$ via cosine similarity between the query vector and the intent embedding vectors. |
| 2. Prompt construction: ${{P}_{\text{base}}}\leftarrow \text{Template}\left( C,\hat{i} \right)$; combine with the user query: ${{P}_{\text{full}}}\leftarrow {{P}_{\text{base}}}+Q$. |
| 3. Retrieval-Augmented Generation: $K\leftarrow \text{TopK}\left( D,Q,k=3 \right)$; append to the prompt: ${{P}_{RAG}}\leftarrow {{P}_{full}}+K$. |
| 4. Response generation with the LLM: $R\leftarrow \text{GPT}\left( {{P}_{RAG}} \right)$. |
| 5. Policy filtering: if $\text{match}\left( R,SOP \right)=\text{True}$ then ${{R}_{valid}}\leftarrow R$; else ${{R}_{valid}}\leftarrow$ fallback message. |
| 6. Session logging: $\text{Interaction Log}\leftarrow \text{Log}\left( Q,{{R}_{valid}},t \right)$. |
| 7. Feedback scoring: collect feedback score $S$, compliance $C$, and error rate $E$; compute the multi-objective utility $OBJ\leftarrow {{\lambda }_{1}}\cdot \frac{S}{5}+{{\lambda }_{2}}\cdot \frac{C}{5}+{{\lambda }_{3}}\cdot \left( 1-\frac{E}{{{E}_{\max }}} \right)$. |
As illustrated in Algorithm 1, the architecture not only supports real-time interaction but also allows systematic optimization by logging user interactions, applying policy filters, and measuring satisfaction feedback, which makes the system both auditable and continuously improvable.
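A minimal sketch of the session-level utility computed in the final step of Algorithm 1 is shown below; the λ weights and $E_{\max}$ are illustrative assumptions:

```python
# Sketch of the multi-objective utility logged per session (step 7 of
# Algorithm 1): OBJ = l1*(S/5) + l2*(C/5) + l3*(1 - E/E_max).
def session_utility(S: float, C: float, E: float,
                    lambdas=(0.4, 0.4, 0.2), e_max: float = 10.0) -> float:
    """All three terms are normalized to [0, 1] before weighting."""
    l1, l2, l3 = lambdas
    return l1 * (S / 5) + l2 * (C / 5) + l3 * (1 - E / e_max)

# Example: satisfaction 4/5, compliance 5/5, 2 errors out of a maximum of 10
print(round(session_utility(S=4, C=5, E=2), 3))   # -> 0.88
```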
3.1 Measurement model
The measurement model was assessed using outer loadings, composite reliability (CR), average variance extracted (AVE), and discriminant validity via the heterotrait-monotrait ratio (HTMT). All indicators exhibited strong outer loadings (above 0.70), confirming indicator reliability. Composite reliability values ranged from 0.887 to 0.927, exceeding the minimum threshold of 0.70 and ensuring internal consistency. Convergent validity was supported by AVE values above 0.50 for all constructs. Discriminant validity was also confirmed, except for a slight HTMT overlap between CRM Enhancement and User Satisfaction (HTMT = 0.944), which remains marginally acceptable for conceptually close constructs. These results affirm the robustness of the measurement model.

In addition to CR and AVE, the R-square (R²) values indicate how much variance in the endogenous variables is accounted for by their predictors, and the effect size (f²) indicates the strength of each exogenous construct's effect. The summary of these measurement results is presented in Table 2.
Table 2. Measurement model
| Construct | CR | AVE | R² | f² | 
| AI capability (X1) | 0.893 | 0.736 | 0.136 | |
| System usability (X2) | 0.910 | 0.771 | 0.004 | 0.199 | 
| Information quality (X3) | 0.889 | 0.719 | 0.008 | 0.267 | 
| Service availability (X4) | 0.902 | 0.759 | 0.038 | |
| Privacy & security (X5) | 0.901 | 0.751 | 0.081 | |
| Institutional support (X6) | 0.887 | 0.712 | 0.109 | |
| User satisfaction (M1) | 0.912 | 0.755 | 0.340 | |
| Service experience (M2) | 0.900 | 0.749 | 0.211 | |
| CRM enhancement (Y1) | 0.918 | 0.789 | 0.514 | 0.523 | 
| Administrative efficiency (Y2) | 0.927 | 0.806 | 0.694 | 2.266 | 
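For reference, the CR and AVE statistics in Table 2 can be reproduced from standardized outer loadings; the sketch below uses hypothetical loadings above the 0.70 reliability threshold:

```python
# Sketch of CR and AVE computation from standardized outer loadings.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings**2).mean())

l = np.array([0.84, 0.87, 0.89])               # hypothetical indicator loadings
print(round(composite_reliability(l), 3))      # ~0.901, above the 0.70 threshold
print(round(average_variance_extracted(l), 3)) # ~0.752, above the 0.50 threshold
```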
The results presented in Table 2 show that all constructs exceed the accepted CR threshold of 0.70, with values of 0.887 and above, indicating excellent internal consistency. The AVE values are also well above the 0.50 threshold, reflecting strong convergent validity. Notably, Administrative Efficiency (Y2) demonstrates a high R² of 0.694, meaning that 69.4% of its variance is explained by the model, an indication of substantial predictive power. Furthermore, the highest effect size (f² = 2.266) is found in the relationship between CRM and administrative efficiency, highlighting its critical influence. These findings reinforce the importance of integrating user satisfaction, service experience, and CRM to foster long-term sustainable service innovation in campus bureaucracy.

To visualize the structural relationships between the latent constructs and to evaluate the magnitude of influence among them, the final output of the SmartPLS structural equation modeling is presented in the path diagram below. This figure shows the standardized path coefficients, R² values of endogenous variables, and outer loadings of the measurement items. Each construct is represented with its associated indicators, and directional arrows show the hypothesized influence pathways. The thickness and value of each path coefficient indicate the strength of the relationship.
Figure 2. Structural model with path coefficients and R² values
As illustrated in Figure 2, all major hypotheses are supported with strong path coefficients and substantial R² values. For example, User Satisfaction (M1) is influenced most strongly by Information Quality (X3) and System Usability (X2) with path coefficients of 0.459 and 0.366 respectively. The R² of M1 is 0.340, indicating that approximately 34% of its variance is explained by the selected predictors. Similarly, CRM Enhancement (Y1) has an R² of 0.514, and Administrative Efficiency (Y2) achieves the highest explained variance at 0.694. Notably, the path from Y1 to Y2 is 0.833, highlighting the critical mediating role of CRM in translating user experience into administrative outcomes. The model also shows that both User Satisfaction and Service Experience significantly predict CRM, with coefficients of 0.504 and 0.492 respectively, reinforcing the dual-path strategy toward innovation. These results confirm the robustness of the proposed model and the strategic integration between technological, experiential, and relational components in enhancing digital campus services.
To further understand the underlying mechanisms of the proposed model, an indirect effect analysis was conducted to test the mediating roles of User Satisfaction (M1) and Service Experience (M2) on the relationship between AI-related predictors and the outcome variables. This analysis helps explain how latent constructs influence one another not only directly but also through intermediary variables. The bootstrapping procedure revealed several significant indirect effects, with p-values well below the 0.05 threshold, indicating strong statistical support for the mediating relationships. A summary of the mediated paths and their effect sizes is shown in Table 3.
Table 3. Direct-indirect effect with 95% bootstrapped confidence intervals
| Path | Effect Size (β) | p-value | 95% CI (Lower-Upper) | 
| X1 → M1 → Y1 | 0.153 | 0.000 | [0.112, 0.198] | 
| X2 → M1 → Y1 | 0.184 | 0.000 | [0.139, 0.231] | 
| X3 → M1 → Y1 | 0.226 | 0.000 | [0.175, 0.276] | 
| X4 → M1 → Y1 | 0.080 | 0.003 | [0.028, 0.131] | 
| X5 → M1 → Y1 | 0.117 | 0.000 | [0.071, 0.164] | 
| X6 → M1 → Y1 | 0.136 | 0.000 | [0.087, 0.187] | 
| X1 → M1 → Y1 → Y2 | 0.127 | 0.000 | [0.089, 0.169] | 
| X6 → M1 → Y1 → Y2 | 0.113 | 0.000 | [0.073, 0.154] | 
| M1 → Y1 → Y2 | 0.420 | 0.000 | [0.356, 0.487] | 
| M2 → Y1 → Y2 | 0.410 | 0.000 | [0.344, 0.473] | 
The results in Table 3 confirm the strong mediating roles of User Satisfaction (M1) and Service Experience (M2) in the relationship between upstream technological and institutional factors and downstream outcomes such as CRM Enhancement and Administrative Efficiency. To strengthen the mediation analysis, Table 3 reports 95% bootstrapped confidence intervals (CIs) alongside the path coefficients and p-values; these intervals provide a more precise picture of the reliability of the indirect effects. For example, the indirect effect of User Satisfaction (M1) on Administrative Efficiency (Y2) through CRM (Y1) was β = 0.420, p < 0.001, with a 95% CI [0.356, 0.487], confirming the stability of this pathway. Similarly, the indirect effect of Service Experience (M2) on Administrative Efficiency (Y2) through CRM was β = 0.410, p < 0.001, with a 95% CI [0.344, 0.473]. Since none of the CIs included zero, all indirect effects can be considered statistically significant and robust, which increases confidence in the mediating role of CRM and strengthens the interpretation of the structural model.
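The percentile-bootstrap logic behind these intervals can be sketched as follows. The data here are synthetic and the paths are estimated with simple regressions, so this mirrors the SmartPLS procedure rather than reproducing the reported values:

```python
# Percentile bootstrap for an indirect effect (a*b), e.g., M1 -> Y1 -> Y2.
import numpy as np

rng = np.random.default_rng(0)
n = 300
m1 = rng.normal(size=n)                        # stand-in for User Satisfaction
y1 = 0.5 * m1 + rng.normal(scale=0.8, size=n)  # stand-in for CRM Enhancement
y2 = 0.8 * y1 + rng.normal(scale=0.6, size=n)  # stand-in for Admin. Efficiency

def indirect_effect(idx):
    a = np.polyfit(m1[idx], y1[idx], 1)[0]     # path M1 -> Y1 (slope)
    b = np.polyfit(y1[idx], y2[idx], 1)[0]     # path Y1 -> Y2 (slope)
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect: {indirect_effect(np.arange(n)):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```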
3.2 Effect size of variables
To assess the impact of each predictor on its respective endogenous construct, we calculated the effect size (f²) using SmartPLS. This metric quantifies the contribution of each exogenous construct in explaining the variance of the dependent constructs; according to Cohen's guidelines, f² values of 0.02, 0.15, and 0.35 indicate small, medium, and large effects respectively, and are crucial in identifying which variables carry substantial weight in the structural model.

Figure 3 presents a bar chart illustrating the effect size of each significant relationship. The strongest effect is found in the path from CRM Enhancement (Y1) to Administrative Efficiency (Y2), with an f² value of 2.266, far above the threshold for a large effect. This confirms that CRM Enhancement has a substantial impact on improving administrative efficiency, reinforcing its role as a key strategic mediator in driving institutional performance improvements. Other noteworthy contributors include User Satisfaction (M1) and Service Experience (M2) toward CRM (Y1), supporting their strategic function in enhancing the institutional service pipeline, while upstream constructs such as AI Capability (X1), System Usability (X2), and Information Quality (X3) show moderate yet relevant contributions.
Figure 3. Effect size of variables
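For readers unfamiliar with the statistic, f² is derived from the change in R² when a predictor is removed. The sketch below approximately reproduces the Y1 → Y2 value under the assumption that the model without Y1 explains essentially none of Y2's variance:

```python
# Cohen's f-squared for a structural path, from R-squared with and without
# the predictor. The excluded R-squared of 0.0 is an illustrative assumption.
def effect_size_f2(r2_included: float, r2_excluded: float) -> float:
    """f2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1 - r2_included)

print(round(effect_size_f2(0.694, 0.0), 3))   # ~2.268, a very large effect
```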
To complement the analysis of direct relationships and effect sizes, the indirect effects were examined to uncover mediation mechanisms within the structural model. This analysis is critical in understanding how upstream constructs influence the ultimate outcome (Administrative Efficiency) via intermediary latent variables such as User Satisfaction (M1), Service Experience (M2), and CRM Enhancement (Y1). The magnitude and significance of these indirect effects were evaluated using bootstrapping in SmartPLS, and are illustrated in the following chart.
Figure 4 depicts the five most prominent and statistically significant indirect effect pathways. The most dominant is the path M1 → Y1 → Y2, indicating that user satisfaction significantly influences administrative efficiency through its positive impact on CRM enhancement. This is followed closely by M2 → Y1 → Y2, reinforcing the role of service experience as another foundational factor in boosting CRM outcomes. These results support the model's core proposition that satisfaction-related variables must flow through CRM systems to produce meaningful organizational improvements. Notably, the upstream paths X1 → M1 → Y1 → Y2, X2 → M1 → Y1 → Y2, and X3 → M2 → Y1 → Y2 demonstrate the importance of technological and quality-related dimensions (such as AI Capability, System Usability, and Information Quality) in shaping service innovation. These indirect paths confirm that technical attributes must first elevate user experience and CRM functionality to contribute toward sustainable administrative transformation.
Figure 4. Significant indirect effect pathways
To evaluate the explanatory power of the structural model, the R² (coefficient of determination) values were analyzed for each endogenous construct. R² indicates how much of the variance in a dependent variable can be explained by its predictors, providing insight into the predictive relevance and strength of the model. The higher the R² value, the more effectively the independent constructs account for variations in the outcome variables.
Figure 5 presents the R² values for the key endogenous constructs: User Satisfaction (M1), Service Experience (M2), CRM Enhancement (Y1), and Administrative Efficiency (Y2). Among them, Administrative Efficiency (Y2) achieved the highest R² at 0.694, implying that nearly 70% of its variance is explained by preceding constructs, namely CRM Enhancement, which itself is driven by both satisfaction- and service-related factors. This robust value highlights the model's effectiveness in predicting administrative outcomes.
Figure 5. Variance distribution
In addition, CRM Enhancement (Y1) has a strong R² value of 0.514, indicating that more than half of its variance is explained by User Satisfaction and Service Experience. The values for Service Experience (M2) and User Satisfaction (M1) are 0.211 and 0.340 respectively, suggesting moderate but meaningful predictive contributions from constructs like System Usability, Information Quality, and Institutional Support. Collectively, these findings validate the structural integrity of the proposed model and confirm that the chatbot service's impact on administrative efficiency is not isolated, but systematically driven through multi-layered user experience and CRM pathways. This emphasizes the importance of strengthening upstream user interactions to maximize downstream organizational performance.
To examine the influence of digital literacy as a moderating variable, we tested the interaction effect between User Satisfaction (M1) and Digital Literacy (Z1) on CRM Enhancement (Y1). The purpose was to determine whether the relationship between satisfaction and CRM outcomes changes depending on the level of digital competence among users. The moderation effect was analyzed using an interaction term within the SmartPLS structural model, and an interaction plot was generated to visualize this dynamic. The result shows a meaningful interaction, implying that digital literacy acts as a strategic enabler in maximizing CRM benefits from AI-driven services.

Figure 6 displays the moderation effect of digital literacy. Two regression lines represent users with high versus low digital literacy. It is evident that for users with high digital literacy, the slope of the relationship between user satisfaction and CRM enhancement is significantly steeper. This means that as satisfaction increases, the perceived CRM enhancement increases more sharply for users who are digitally literate. Conversely, for users with low digital literacy, the slope is flatter, indicating a weaker influence of satisfaction on CRM performance.
Figure 6. Moderation effect of digital literacy on the M1 → Y1 relationship
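The moderation test behind Figure 6 can be sketched as a regression with a mean-centered interaction term, followed by simple slopes at high and low digital literacy; the data below are synthetic:

```python
# Sketch of the M1 x Z1 moderation test: fit Y1 on M1, Z1, and their product,
# then compare simple slopes at +/- 1 SD of Z1. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 300
m1 = rng.normal(size=n)                          # User Satisfaction (centered)
z1 = rng.normal(size=n)                          # Digital Literacy (centered)
y1 = 0.5 * m1 + 0.2 * z1 + 0.15 * m1 * z1 + rng.normal(scale=0.7, size=n)

X = np.column_stack([np.ones(n), m1, z1, m1 * z1])
coef, *_ = np.linalg.lstsq(X, y1, rcond=None)
b0, b_m1, b_z1, b_int = coef

# Simple slopes: steeper satisfaction -> CRM slope for high digital literacy
print("Slope at high Z1 (+1 SD):", round(b_m1 + b_int, 3))
print("Slope at low  Z1 (-1 SD):", round(b_m1 - b_int, 3))
```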
This moderation effect reinforces the importance of digital readiness in supporting AI adoption outcomes. It suggests that organizations aiming to implement generative AI chatbot systems must invest not only in technological infrastructure but also in user capability development. Training and onboarding strategies that improve digital literacy could therefore amplify the benefits of satisfaction-driven engagement, ultimately strengthening CRM strategies and enhancing public service performance in campus bureaucracy. Table 4 presents a comprehensive summary of the hypothesis testing conducted through SmartPLS on the structural model designed to evaluate the impact of generative AI chatbot implementation on campus bureaucracy. The results encompass direct, indirect, and moderation effects across 11 latent constructs.
Table 4. Hypotheses test
| Hypothesis | Path | Effect Type | Effect Size | p-value | Result | 
| H1 | X1 → M1 | Direct | 0.303 | 0.000 | Supported | 
| H2 | X2 → M1 | Direct | 0.366 | 0.000 | Supported | 
| H3 | X3 → M1 | Direct | 0.158 | 0.000 | Supported | 
| H4 | X4 → M1 | Direct | 0.080 | 0.003 | Supported | 
| H5 | X5 → M1 | Direct | 0.117 | 0.000 | Supported | 
| H6 | X6 → M1 | Direct | 0.136 | 0.000 | Supported | 
| H7 | M1 → Y1 | Direct | 0.504 | 0.000 | Supported | 
| H8 | M2 → Y1 | Direct | 0.492 | 0.000 | Supported | 
| H9 | Y1 → Y2 | Direct | 0.833 | 0.000 | Supported | 
| H10 | X1 → M1 → Y1 | Indirect | 0.153 | 0.000 | Supported | 
| H11 | X2 → M1 → Y1 | Indirect | 0.184 | 0.000 | Supported | 
| H12 | X3 → M1 → Y1 | Indirect | 0.226 | 0.000 | Supported | 
| H13 | X4 → M1 → Y1 | Indirect | 0.080 | 0.003 | Supported | 
| H14 | X5 → M1 → Y1 | Indirect | 0.117 | 0.000 | Supported | 
| H15 | X6 → M1 → Y1 | Indirect | 0.136 | 0.000 | Supported | 
| H16 | M1 → Y1 → Y2 | Indirect | 0.420 | 0.000 | Supported | 
| H17 | M2 → Y1 → Y2 | Indirect | 0.410 | 0.000 | Supported | 
| H18 | X1 → M1 → Y1 → Y2 | Indirect | 0.127 | 0.000 | Supported | 
| H19 | X6 → M1 → Y1 → Y2 | Indirect | 0.113 | 0.000 | Supported | 
| H20 | M1 × Z1 → Y1 | Moderation | Significant | 0.000 | Supported | 
The direct relationships between input variables and user satisfaction (M1) are all statistically significant (p < 0.05), confirming H1 to H6. For instance, System Usability (X2 → M1, 0.366) and AI Capability (X1 → M1, 0.303) show the highest path coefficients, suggesting they are strong predictors of satisfaction in AI-based campus services. Similarly, User Satisfaction (M1) and Service Experience (M2) significantly predict CRM Enhancement (Y1), confirming H7 and H8. The strongest direct path is observed from CRM Enhancement to Administrative Efficiency (Y1 → Y2) with a coefficient of 0.833, validating H9 and highlighting CRM as the key conduit for bureaucratic reform.

A series of mediating paths is validated through H10-H19. These include multi-layered indirect effects such as X1 → M1 → Y1 → Y2 and X6 → M1 → Y1 → Y2, which demonstrate how upstream variables like AI Capability and Institutional Support ultimately influence administrative outcomes through satisfaction and CRM mediation. Notably, M1 → Y1 → Y2 (H16) and M2 → Y1 → Y2 (H17) show the highest indirect effect sizes (0.420 and 0.410 respectively), confirming the strategic mediating power of CRM Enhancement in converting positive user experience into systemic efficiency.
Hypothesis H20 investigates whether Digital Literacy (Z1) strengthens the link between User Satisfaction (M1) and CRM Enhancement (Y1). The interaction term M1 × Z1 → Y1 is statistically significant (p = 0.000), confirming that the effect of satisfaction on CRM performance is more pronounced in users with high digital literacy. This implies that digital competence amplifies the success of AI-enabled tools, suggesting a crucial strategic direction for institutions aiming to foster chatbot-driven transformation. All 20 hypotheses are statistically supported, reinforcing the robustness of the proposed structural model. The findings clearly map out a pathway from foundational technological enablers (AI, usability, quality, support) through user-centric constructs (satisfaction and experience), mediated by CRM strategy, and culminating in improved administrative efficiency. The moderation role of digital literacy adds an important nuance, revealing that digital readiness is not just a background variable, but a critical enabler of AI adoption effectiveness.
3.3 Discussion
In addition to the structural equation modeling results, the multi-criteria optimization model and chatbot algorithm provide critical insights for strategic decision-making. The utility function, constructed using SEM-derived path coefficients as empirical weights, allowed us to simulate different implementation scenarios under institutional constraints. For example, one simulation revealed that prioritizing CRM maximization increased administrative efficiency by more than 20%, but also required higher investment in system development and complexity management. Conversely, another scenario demonstrated that by slightly reducing the weight assigned to usability and information quality, universities could maintain CRM performance while achieving greater cost efficiency. These trade-offs illustrate how the model can serve as a decision-support tool, enabling administrators to balance competing objectives such as service scalability, budget allocation, and user experience quality, as shown in the sketch after this paragraph. Moreover, the chatbot algorithm, designed with a RAG layer and policy compliance filter, was shown to operationalize these trade-offs by adapting responses in real time while maintaining regulatory standards. Together, the optimization model and algorithm extend the contribution of this study beyond statistical validation, offering practical computational pathways for universities to strategically align generative AI chatbot deployment with broader institutional priorities.

This study provides a comprehensive structural framework for understanding the role of generative AI chatbots in transforming bureaucratic processes within campus administration. The findings validate a multistage pathway starting from system-related and organizational enablers, flowing through user satisfaction and experience, and culminating in administrative efficiency.
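A minimal sketch of the scenario comparison described above: sweeping the policy parameter λ over $F = \lambda y_1 + (1-\lambda) y_2$ shows how the preferred deployment strategy shifts between CRM emphasis and cost efficiency. The outcome values per scenario are hypothetical placeholders:

```python
# Sketch of the lambda sweep over F = lam*y1 + (1-lam)*y2 for two deployment
# scenarios; y1 (CRM) and y2 (efficiency) levels are hypothetical placeholders.
scenarios = {
    "CRM-prioritized": {"y1": 0.90, "y2": 0.75},
    "Cost-efficient":  {"y1": 0.82, "y2": 0.80},
}
for lam in (0.3, 0.5, 0.7):
    utilities = {name: lam * v["y1"] + (1 - lam) * v["y2"]
                 for name, v in scenarios.items()}
    best = max(utilities, key=utilities.get)
    print(f"lambda = {lam}: preferred scenario -> {best} "
          f"(F = {utilities[best]:.3f})")
```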
The results confirm that CRM plays a central role in translating user satisfaction and service experience into measurable improvements in administrative efficiency (β = 0.833, p < 0.001). Beyond validating the statistical relationship, this finding carries important strategic implications for higher education institutions. First, by strengthening CRM systems through chatbot integration, universities can enhance student retention, as efficient and responsive services foster stronger trust and loyalty among students. Second, effective CRM-supported digital services contribute to university branding, positioning the institution as technologically advanced and student-centered, which in turn attracts prospective students and external partners. Third, improvements in administrative efficiency directly contribute to financial sustainability by reducing redundant processes, lowering operational costs, and reallocating resources to value-adding activities such as academic innovation and student support. In this sense, CRM is not merely an operational mediator but a strategic lever for long-term institutional competitiveness. By situating CRM at the heart of digital transformation, this study highlights how generative AI chatbots can serve as catalysts for both service innovation and sustainable organizational growth.
The empirical results offer several key insights, theoretical reflections, and practical implications for both scholars and decision-makers.
(1). User-centric determinants of satisfaction
The strong and significant relationships between System Usability (X2), Information Quality (X3), and User Satisfaction (M1) reinforce the critical role of intuitive design and information richness in determining how well users perceive chatbot services. These findings align with previous literature emphasizing that AI-based tools must not only be functional but also frictionless and contextually responsive to user needs. Interestingly, AI Capability (X1) also exerts a direct influence, indicating that users value the intelligence and contextual understanding embedded in generative chatbots. While Privacy & Security (X5) and Institutional Support (X6) showed slightly lower path coefficients, their significance reflects an underlying trust mechanism that enables adoption in bureaucratic contexts.
(2). CRM as a strategic mediator
One of the most impactful findings is the mediating role of CRM Enhancement (Y1) in translating user satisfaction and service experience into Administrative Efficiency (Y2). Both User Satisfaction (M1) and Service Experience (M2) contribute significantly to CRM, which in turn has the strongest direct effect on efficiency (β = 0.833, f² = 2.266). This supports the hypothesis that customer (student/staff) relationship strategies must be digitally enhanced for AI deployments to yield sustainable institutional outcomes. CRM here is not merely a functional extension, but a strategic bridge: an operational interface where satisfaction-driven interactions evolve into systemic performance improvements.
(3). Digital literacy as a moderating enabler
The moderation analysis reveals a significant interaction between user satisfaction and digital literacy on CRM outcomes. The slope of the interaction plot indicates that users with higher digital literacy levels exhibit a much stronger satisfaction-to-CRM path. This suggests that digital readiness is a critical catalyst: organizations must invest in digital literacy not just for chatbot operation, but to amplify the Return on Investment (ROI) of AI-enabled transformation strategies. Without a digitally competent user base, even sophisticated AI tools may yield underwhelming outcomes.
(4). Multi-level and indirect effects
The indirect effects analysis demonstrates that variables such as AI Capability (X1), Institutional Support (X6), and Information Quality (X3) exert significant influence indirectly through multi-step paths. For example, X6 → M1 → Y1 → Y2 confirms the hypothesis that organizational buy-in indirectly drives performance outcomes when funneled through experiential and CRM stages. This underscores the importance of aligning institutional policies, resource allocation, and stakeholder engagement with technological deployments. Furthermore, M1 → Y1 → Y2 and M2 → Y1 → Y2 emerged as the most potent indirect paths, reinforcing the role of emotional and experiential interfaces in sustaining long-term digital reforms.
(5). Strategic and theoretical contributions
Theoretically, this study extends the traditional TAM by integrating organizational enablers (support, availability) and service layers (experience, CRM) into a cohesive framework. The mathematical modeling and path estimations offer a novel way to represent human-AI-system interaction in bureaucratic ecosystems. From a strategic perspective, the findings inform policy and digital transformation leaders in higher education that:
a. Technological factors alone are insufficient without CRM mediation and digital literacy.
b. Satisfaction and service quality must be viewed as strategic levers, not just user feedback.
c. CRM platforms need to be redesigned as AI-integrated orchestration layers for service innovation.
This study demonstrates that generative AI chatbots are not merely functional tools but strategic enablers of sustainable service innovation in higher education. By integrating structural equation modeling with multi-criteria optimization, the research offers a hybrid framework that bridges behavioral insights with computational decision support. A key theoretical contribution is the central role of CRM, which mediates satisfaction and experience to generate significant gains in administrative efficiency (β = 0.833, R² = 0.694). This confirms that effective digital relationship management is a prerequisite for transforming student interactions into institutional performance. For policymakers and university leaders, the findings translate into practical recommendations. Institutions should:
(1) Establish a cross-functional AI–CRM management team that combines IT, academic services, and public relations expertise;
(2) Leverage chatbot data analytics to support student retention strategies by identifying pain points in service delivery;
(3) Reallocate cost savings from administrative efficiency toward innovation initiatives such as personalized learning and digital literacy training. These measures ensure that chatbots contribute not only to immediate service improvement but also to long-term institutional resilience.
Future research should move beyond replication to explore comparative studies across multiple universities and regions, examine the longitudinal financial impact of chatbot-driven efficiency on institutional budgets, and extend the integrated SEM–optimization approach to non-educational public sectors such as healthcare or municipal governance. Such directions will enrich both theoretical understanding and practical applications of AI-enabled bureaucratic transformation.
This research was funded by the Direktorat Penelitian dan Pengabdian kepada Masyarakat (DPPM) at Universitas Sumatera Utara under the regular fundamental research scheme (Grant No.: 55/UN5.4.10.K/PT.01.03/DPPM/2025).
[30] Gultom, P., Nababan, E.S.M., Mardiningsih, Marpaung, J.L., Agung, V.R. (2024). Balancing sustainability and decision maker preferences in regional development location selection: A multi-criteria approach using AHP and Fuzzy Goal Programming. Mathematical Modelling of Engineering Problems, 11(7): 1802-1812. https://doi.org/10.18280/mmep.110710