Artificial intelligence as a moral mediator: emotional reciprocity driving happiness in hospitality
Artificial intelligence (AI) in hospitality is often portrayed as a cold, efficiency-focused tool, overlooking its potential to mediate emotional and ethical dynamics in the workplace. This study addresses the problem of how AI can ethically regulate emotional labor without dehumanizing work, and how emotional reciprocity contributes to workplace happiness. Using a quantitative, multigroup survey methodology, data were collected from 754 hospitality employees and 42 managers across hotels in Spain. Structural equation modeling examined the mediating role of AI-mediated emotional reciprocity (AI-MER) between emotional labor sustainability (ELS), shared prosperity (SP), human-centered leadership, and workplace happiness. Findings reveal that ELS is a foundational anchor enabling AI to mediate emotional reciprocity.
1 Introduction
Can artificial intelligence ethically regulate emotional labor without dehumanizing work? This question strikes at the core of hospitality research, which often frames AI as a productivity tool, a personalization engine, or a potential threat to employment. Yet hospitality is fundamentally built on emotional reciprocity—a dynamic exchange of care, warmth, and empathy between employees and guests. Despite AI’s growing presence in service delivery, scholars note that “emotionally responsive artificial intelligence is reshaping our interactions by simulating or interpreting human emotions, raising ethical issues around misrepresentation and cultural bias” (Chavan et al. 2025). To address these ethical and operational gaps, AI must be reconceptualized as a moral–emotional mediator, capable of detecting, redistributing, and ethically regulating emotional labor in ways that protect human dignity and enhance workplace harmony.
The first critical function of AI in this framework is recognizing emotional overload among hospitality employees. AI systems can analyze communication volume, sentiment, and interaction patterns to identify when emotional demands exceed sustainable thresholds. Scholars caution that “the ethical challenges of affective computing include risks to privacy, emotional manipulation, and bias when using automated systems to interpret sensitive human emotions” (Mohammad 2021). By embedding safeguards and transparent protocols, AI can make emotional strain visible without reducing employees to data points, creating the foundation for a humane approach to emotional labor governance (Ramanayake et al. 2023).
Recognition alone is not sufficient; AI must also redistribute emotional labor to prevent burnout and promote fairness. Workforce management systems can reassign tasks, rotate staff handling high-conflict interactions, or equitably allocate emotionally demanding rooms. Research shows that “in hospitality settings, forced reliance on self-service technologies can significantly diminish perceived social value of service encounters, increasing feelings of dehumanization” (Almokdad et al. 2025). Ethically designed AI, however, transforms workload management into a shared organizational responsibility, enhancing psychological safety, fairness, and sustainability.
A third key function is triggering organizational care responses. Once AI detects high emotional strain, it can prompt supervisors to implement supportive interventions such as recovery breaks, check-ins, or schedule adjustments. Ethical frameworks emphasize that “emotion recognition technologies carry the potential for both prosocial benefits and harms, requiring careful ethical guidelines and oversight to ensure responsible deployment” (Mohammad 2021). In practice, AI amplifies empathetic leadership rather than replacing it, ensuring that well-being is structurally embedded rather than left to chance.
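Taken together, the three functions above (recognition, redistribution, care triggering) amount to a rule-based monitoring loop. The sketch below illustrates that loop in Python; every threshold, field name, and suggested action is a hypothetical assumption for illustration, not a description of any system studied here:

```python
from dataclasses import dataclass

@dataclass
class ShiftIndicators:
    """Aggregated, non-intrusive per-shift signals (illustrative fields)."""
    employee_id: str
    interactions: int          # communication volume during the shift
    mean_sentiment: float      # aggregated interaction sentiment, -1..1
    high_conflict_tasks: int   # emotionally demanding assignments

# Assumed sustainability thresholds -- purely illustrative values
OVERLOAD_INTERACTIONS = 60
OVERLOAD_SENTIMENT = -0.3
OVERLOAD_CONFLICT_TASKS = 3

def emotional_overload(s: ShiftIndicators) -> bool:
    """Recognition: flag when emotional demands exceed sustainable thresholds."""
    signals = [
        s.interactions > OVERLOAD_INTERACTIONS,
        s.mean_sentiment < OVERLOAD_SENTIMENT,
        s.high_conflict_tasks >= OVERLOAD_CONFLICT_TASKS,
    ]
    # Require converging signals so no single data point labels an employee
    return sum(signals) >= 2

def care_actions(team: list[ShiftIndicators]) -> dict[str, str]:
    """Redistribution and care: prompt supervisors for overloaded employees."""
    return {
        s.employee_id: "rotate high-conflict tasks; schedule recovery break; check in"
        for s in team
        if emotional_overload(s)
    }
```

The key design choice, mirroring the text, is that the system only surfaces aggregated strain and prompts a human supervisor; it never decides or sanctions on its own.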
The ethical regulation of emotional labor also directly impacts guest satisfaction, extending beyond conventional personalization algorithms. Studies note that “the hospitality industry’s integration of AI is not only about efficiency or personalization but must contend with how people feel about AI’s ability to truly understand them” (Clergue 2025). By supporting employee emotional presence, AI fosters authentic guest interactions, where engagement and empathy, rather than scripted friendliness, drive service excellence.
It is important to clarify that AI systems in this study are not moral agents. AI does not possess ethical reasoning, moral consciousness, or autonomous decision-making capability. Rather, AI functions as a socio-technical operational mechanism, translating human-centered ethical norms embedded in organizational governance into actionable processes. Its role is mediatory and contingent, facilitating emotional labor visibility, recognition, and equitable redistribution, without originating ethical principles.
AI’s role in ethical labor governance connects to broader organizational outcomes such as shared prosperity and sustainable happiness. Literature stresses that “ethical frameworks for AI adoption increasingly emphasize human values and the need for human-centered design to ensure that technology serves stakeholders ethically and inclusively” (Popović et al. 2023). By recognizing, redistributing, and responding to emotional labor, AI not only protects employees but also strengthens systemic workplace happiness, operational fairness, and long-term organizational sustainability (Dobrosovestnova et al. 2022).
2 Conceptual framework built through managerial focus groups
To understand how artificial intelligence (AI), reframed as an ethical mediator of emotional reciprocity, shapes workplace happiness in hospitality, this study deliberately employed two sequential focus groups. The qualitative approach ensured that the conceptual framework emerged from managerial experience, rather than being imposed purely from theory. The first group was exploratory and abductive, uncovering the lived dynamics of AI in emotionally demanding hotel operations, while the second was confirmatory and integrative, refining and structurally ordering the key constructs. Together, they mapped a terrain where ethics, emotion, and technology intersect.
The exploratory focus group included 42 hotel managers representing 754 employees, randomly selected from roughly 1,200 hotels in the Ecostars database. This population ensured heterogeneity across hotel size, category, and operational complexity, while Ecostars provided a context where sustainability, operational responsibility, and ethical management were already salient. Discussions revealed a striking convergence: AI, emotional labor, leadership ethics, and equitable value distribution collectively point to workplace happiness as the central organizational outcome. Crucially, managers framed happiness not as fleeting mood or individual satisfaction, but as a systemic condition reflecting emotional sustainability, relational cohesion, and long-term well-being.
From these discussions, ten conceptual variables emerged, clustered into four domains. Emotional labor sustainability (ELS) captured the recognition, boundaries, and recoverability of emotional demands, preventing exhaustion. Ethical AI governance (EAIG) addressed transparency, fairness, and human oversight in AI deployment. Human-centered leadership (HCL) emphasized empathy, moral responsibility, and the translation of AI insights into supportive action. Shared prosperity (SP) described equitable distribution of economic, emotional, and well-being benefits across stakeholders. Additional variables enriched these domains: emotional equity (EE), organizational care capacity (OCC), guest emotional experience quality (GEEQ), and sustainable organizational prosperity (SOP), with AI-mediated emotional reciprocity (AI-MER) and workplace happiness (WH) occupying pivotal roles. AI-MER integrates emotional load visibility, redistribution, and AI-triggered care responses, making emotional strain governable rather than invisible. Managers consistently linked these constructs to WH as a collective, emergent outcome.
The second focus group moved from exploration to consolidation, testing which constructs were conceptually meaningful and structurally essential. Through iterative discussion, managers refined the framework into six core variables: ELS, EAIG, HCL, SP, AI-MER, and WH, embedding remaining variables within these domains. Participants emphasized that these constructs form a causal sequence: foundational conditions (ELS, EAIG, HCL, SP) enable AI to act as a moral–emotional mediator, converging in AI-MER, which operationalizes ethical intentions and translates them into organizational practice.
The findings suggest a provocative shift in understanding AI: it is not a neutral instrument for efficiency or personalization, but a co-creator of moral and emotional order, capable of making invisible labor visible, distributing responsibility fairly, and activating organizational care. WH, therefore, is not an individual attribute but a systemic property emerging from ethically mediated emotional reciprocity.
This research advances hospitality theory by positioning AI as a structural ethical agent, redefining leadership, labor sustainability, and shared prosperity as interdependent conditions for collective well-being, and demonstrating that technology can serve humanity without dehumanizing the workforce (Lucas et al. 2025).
Although EAIG, HCL, SP, and AI-MER are normatively aligned, they are analytically distinct because they operate at different functional levels within the socio-technical system. EAIG refers to structural governance safeguards regulating AI use; HCL captures behavioral leadership enactment; SP reflects distributive outcomes concerning equity and value allocation; and AI-MER represents the operational mediation process integrating AI into emotional labor recognition and reciprocity. While positively correlated, these constructs occupy differentiated structural, behavioral, distributive, and operational roles rather than reflecting a single generalized value climate.
This study adopts a socio-technical perspective informed in part by managerial focus groups, which may introduce a top-down framing of WH and ELS. We acknowledge that definitions of well-being, fairness, and reciprocity are shaped by organizational power structures and may not fully capture frontline worker experiences, particularly in contexts of algorithmic monitoring. Critical scholarship on algorithmic management and emotional surveillance highlights that AI systems can intensify control, reduce autonomy, and commodify emotional expression. Accordingly, the positive associations identified here should be interpreted as contingent upon ethical governance and supportive leadership, rather than as universal or unproblematic outcomes of AI deployment.
3 Literature review
3.1 Emotional labor sustainability (ELS): recoverability and boundedness of emotional effort
Hospitality work is saturated with emotional effort, yet organizations often treat it as an inexhaustible personal resource, silently expected from frontline employees. ELS challenges this assumption by framing emotional labor as a finite organizational asset that must be made visible, bounded, and recoverable over time (Grandey 2000; Brotheridge and Lee 2003). ELS reframes exhaustion not as individual weakness or moral failing, but as a systemic risk arising when emotional demands accumulate unchecked, revealing the ethical stakes behind service excellence (Hochschild 2022; Wen et al. 2019). Focus group discussions with managers consistently underscored that service quality becomes morally compromised if emotional effort is neither recognized, redistributed, nor allowed periods of recovery (Diefendorff et al. 2005; Lim and Moon 2024). In this light, ELS embodies the organizational capacity to prevent burnout, ensuring that emotional labor is neither normalized as silent sacrifice nor endlessly extracted in the pursuit of guest satisfaction (Kim et al. 2025). Operationally, ELS manifests through equitable task allocation, structured breaks, and leadership practices that actively sustain employees’ emotional energy, embedding long-term well-being into the fabric of hospitality operations (Yao et al. 2019).
3.2 Ethical AI governance (EAIG): transparency, accountability, and non-punitive oversight in AI deployment
AI in hospitality can be either a moral ally or a coercive overseer, depending on how it is governed. EAIG reframes technology not as a neutral efficiency tool, but as a principled infrastructure designed to uphold human dignity, transparency, and oversight (Floridi et al. 2018; Mittelstadt 2019). Rather than serving as a covert surveillance apparatus, EAIG ensures that AI operates with algorithmic fairness, explainability, and accountability, protecting worker autonomy and aligning operations with organizational values (Farina et al. 2024; Binns 2020; Jobin et al. 2019). Study participants consistently highlighted a sharp distinction: AI that is ethically governed supports employees and leadership, while poorly designed AI can erode trust and emotional well-being, turning innovation into intrusion. EAIG embodies the extent to which AI responsibly aggregates operational and emotional data without profiling individuals, enforces fair treatment, and embeds ethical oversight throughout design and deployment (Lucas et al. 2025; Raji et al. 2020). Far from being a technical accessory, governance is the moral precondition that allows AI to safeguard emotional labor rather than exploit it, ensuring that technological sophistication translates into humane, sustainable workplace practices.
3.3 Human-centered leadership (HCL): leadership translating ethical norms into supportive practices
Leadership in hospitality is not merely about efficiency or oversight—it is a moral force that shapes how technology and people interact. HCL elevates empathy, ethical responsibility, and relational awareness as non-negotiable foundations for organizational decision-making, ensuring that employee well-being and sustainable performance are central, not peripheral concerns (Boyatzis and McKee 2005; Goleman et al. 2013). In practice, HCL emerges when leaders translate insights from AI or operational data into supportive, non-punitive actions that foster fairness, emotional recovery, and psychological safety (Hoch et al. 2021). It thrives on inclusive decision-making, active listening, and the equitable allocation of emotionally demanding tasks, cultivating trust, engagement, and loyalty among staff (Wang et al. 2014). Its relevance becomes most striking when integrating AI systems: leadership must ensure that technology enhances human dignity rather than eroding relational cohesion (Rupp et al. 2018). By embedding ethical judgment, care, and relational insight into everyday operational practices, HCL functions as the critical mediating force that enables AI-mediated emotional reciprocity to translate into systemic well-being, shared prosperity, and enduring workplace happiness.
3.4 Shared prosperity (SP): equitable distribution of resources and emotional recognition
Organizational success is meaningless if it is hoarded by a few while others bear the cost of emotional and operational labor. SP reframes success as the equitable distribution of economic, social, and emotional value among all stakeholders, ensuring that employees, management, and the broader community all benefit from organizational achievements (Hossain et al. 2024). In hospitality, SP extends far beyond financial metrics, encompassing relational outcomes, emotional equity, and collective well-being (Aguinis and Glavas 2019; Ravina-Ripoll et al. 2024; Robina-Ramírez et al. 2025).
At its core, SP emphasizes fair workload allocation, recognition of employee contributions, and ethical management practices, creating conditions that prevent exploitation and foster long-term sustainability (Zubeltzu‐Jaka et al. 2018). It also embeds environmental and social responsibility, highlighting the interdependence between organizational prosperity, employee well-being, and sustainable business practices (Konieczny et al. 2023).
When economic rewards and emotional labor are shared equitably, SP cultivates loyalty, trust, and engagement (Robina-Ramírez et al. 2023). It establishes a virtuous cycle in which ethical AI governance and human-centered leadership reinforce collective flourishing. In this framework, prosperity is not an abstract outcome but a systemic, lived experience, where ethical decision-making, emotional reciprocity, and relational cohesion generate long-term resilience and workplace happiness.
3.5 AI-mediated emotional reciprocity (AI-MER): operational processes integrating ELS, EAIG, HCL, and SP through AI-supported recognition and redistribution of emotional labor
Artificial intelligence in hospitality is often imagined as a cold, efficiency-driven tool, yet it can assume a far more moral and relational role when designed intentionally. AI-MER transforms technology into an ethical infrastructure, capable of detecting, redistributing, and regulating emotional labor in ways that sustain fairness, relational balance, and employee well-being (Glikson and Woolley 2020; Davenport and Ronanki 2018). Rather than supplanting human empathy, AI-MER amplifies it, making invisible emotional effort measurable, identifying overload, and triggering organizational care responses (Cowls et al. 2019; Rahwan 2018).
AI is situated within a three-level framework, operating at the third level: (1) normative level—ethical principles such as fairness, dignity, and shared prosperity; (2) institutional level—governance structures and leadership practices embedding these principles; and (3) operational level—AI systems that translate institutional commitments into redistributive, recognition-based, and emotionally responsive processes.
In practice, AI-MER allows hospitality managers to monitor aggregated emotional indicators, dynamically redistribute high-demand tasks, and ensure employees receive recovery opportunities, all while preserving human oversight and dignity (Cheng et al. 2023). It bridges the seemingly incompatible domains of technological efficiency and ethical labor management, integrating the principles of ELS, HCL, and EAIG. By operationalizing emotional equity, AI-MER fosters workplace cohesion, prevents burnout, and enhances service quality through authentic, sustainable emotional engagement (Ravina-Ripoll et al. 2024).
More than a tool, AI-MER positions artificial intelligence as a moral–relational mediator, translating organizational values into actionable practices that balance guest satisfaction with long-term employee flourishing. It demonstrates that AI can be both powerful and principled, embedding fairness, care, and reciprocity into everyday hospitality operations. By making emotional labor visible and governable, AI-MER ensures that ethical considerations are not abstract ideals but tangible, operational realities, ultimately cultivating collective workplace happiness and shared prosperity. AI-MER functions as a mechanism through which human-centered leadership and institutional ethical norms—such as fairness, recognition, and transparency—are operationalized. Its ethical significance is entirely derivative and contingent upon the governance structures and leadership practices within which it is embedded.
The concept of AI as a ‘moral mediator’ is derived from socio-technical and institutional perspectives, where outcomes emerge from the interaction of human actors, governance structures, and technical systems (Floridi et al. 2018; Mittelstadt 2019; Rahwan 2018). Within this framework, AI shapes the operational translation of ethical commitments but does not generate normative content itself. In other words, AI participates in distributed agency, where moral outcomes are contingent upon human-centered leadership and governance, rather than intrinsic AI cognition. Consistent with our socio-technical framework, AI-MER captures the organizational enactment of AI rather than its internal computational properties.
3.6 Workplace happiness (WH): dependent variable reflecting sustained emotional well-being, relational cohesion, psychological safety, and perceived fairness
True organizational success cannot be measured solely in profits or efficiency—it is rooted in the emotional and relational well-being of employees. WH reconceptualizes well-being as a collective, systemic condition, where sustained emotional health, relational cohesion, and psychological safety are embedded in the fabric of the organization, rather than fleeting moods or individual satisfaction (Kahn and Fellow 2013; Pfeffer 2010). WH operates as both a moral and strategic imperative, encompassing ethical treatment, fair allocation of emotional labor, and leadership structures that actively support employees, while simultaneously driving long-term organizational performance (Ravina-Ripoll et al. 2021, 2024; Robina-Ramírez et al. 2026).
At the core of WH lies emotional sustainability, ensuring that emotional labor is recognized, bounded, and recoverable (Hochschild 2022); relational cohesion, reflecting trust, collaboration, and high-quality interpersonal connections (Dutton and Heaphy 2003); and psychological safety, which empowers employees to voice concerns and innovate without fear of reprisal (Edmondson 1999). Ethical governance safeguards fairness and dignity, equitable workload distribution prevents burnout, and leadership care translates organizational awareness into concrete, supportive action (Robina-Ramírez et al. 2025; Grant 2007).
WH emerges when these elements interact synergistically, generating collective well-being, organizational resilience, and sustainable prosperity. In the hospitality sector, this perspective underscores that employee happiness is not a luxury but a foundation: it underpins ethical operations, sustains high-quality service, and shapes guest experiences. By framing happiness as a systemic outcome rather than an individual trait, organizations can align emotional labor, leadership, and ethical AI governance to cultivate environments where both employees and guests thrive.
WH is the central outcome, encompassing sustained emotional well-being, relational cohesion, psychological safety, and perceived fairness. By focusing on WH, the analysis demonstrates how AI, as part of a socio-technical arrangement, mediates ethical processes without implying intrinsic moral agency. This ensures conceptual clarity and aligns the empirical investigation with the study’s theoretical framing.
4 Methodology
The organizations included in this study implemented AI-supported systems primarily in the areas of workforce analytics, scheduling optimization, performance dashboards, and sentiment or feedback analysis. These systems were not experimental prototypes but operational tools integrated into HR and service management processes for at least one year prior to data collection. AI functionalities included workload forecasting, emotional feedback aggregation, and decision-support dashboards for managers. Employees interacted with these systems either directly (e.g., feedback interfaces) or indirectly through managerial decisions informed by AI-generated analytics.
4.1 Sample and population
The sample comprised employees and managers, who occupy distinct roles in the emotional economy of hospitality. Employees are the primary producers of emotional labor; the database captures their lived experience of emotional intensity, overload, and recovery to assess whether AI systems protect dignity and well-being. Managers, by contrast, are organizational decision-makers; the sample provides them with aggregated, non-intrusive indicators that support ethical leadership, fair task redistribution, and care-based interventions. Thus, the same data serve two purposes: recognition and protection for employees, and ethical governance and responsibility for managers, without collapsing one role into the other.
The study was designed to unsettle purely descriptive approaches by deliberately weaving quantitative reach with qualitative interrogation. Data collection began with an online survey distributed via email to 1,200 hotel managers drawn from the Ecostars v3 registry. Participation was voluntary and yielded responses from 49 hotels and 787 employees. After rigorous screening, validation, and consistency checks, the final dataset was reduced to 42 hotels and 754 employees, strengthening analytical reliability.
4.2 Hypotheses and model
The model explains how ethical management of work, technology, and leadership jointly shapes employee well-being. The central research question explicitly addresses whether AI, functioning as a moral mediator through emotional reciprocity, contributes to sustained workplace happiness in hospitality organizations. All theoretical arguments, conceptual definitions, and empirical analyses are oriented around WH as the outcome, preventing any overstatement of AI’s moral role. ELS enables EAIG by making emotional effort visible, bounded, and recoverable. EAIG then supports HCL by providing transparent, fair, and non-punitive insights for decision-making. ELS, EAIG, and HCL each contribute to SP by promoting fairness, dignity, and equitable value distribution. Shared prosperity activates AI-MER, which operationalizes emotional equity through ethically governed AI practices. AI-MER leads to workplace happiness by sustaining emotional well-being, relational balance, and psychological safety across the organization.
AI-MER can be analyzed as a mediator because it operationalizes how ethical conditions translate into employee outcomes. AI-MER represents AI as an operational mechanism translating institutional ethical norms into actionable processes; it does not imply moral autonomy. The model allows testing direct and indirect effects, including serial mediation, showing how shared prosperity and ethical leadership generate WH through AI-MER (Fig. 1).
Fig. 1
Model. H1: AI-MER influences WH, H2: SP influences AI-MER, H3: ELS influences SP, H4: ELS influences EAIG, H5: EAIG influences SP, H6: EAIG influences HCL, H7: HCL influences SP
4.3 Indicators
The research unfolded through two strategically timed focus groups, each with a distinct intellectual purpose (Sánchez-Oro and Robina Ramírez 2020). The first focus group, involving managers from 11 randomly selected hotels, played an exploratory role. Rather than merely reviewing indicators, participants interrogated the deeper meaning of sustainability in emotionally intensive, AI-supported hospitality environments. Discussions revealed a shared conviction: issues such as AI use, emotional demands, leadership responsibility, and fair value distribution ultimately converge on one collective outcome—workplace happiness understood as an enduring organizational condition rather than an individual feeling. From this debate, an initial constellation of variables emerged, spanning emotional labor sustainability, ethical AI oversight, human-centered leadership, shared prosperity, emotional reciprocity, and well-being.
The second focus group adopted a confirmatory and integrative stance. Managers critically streamlined the initial 29 indicators, testing causal coherence and practical relevance. Through iterative dialogue and consensus building, the framework crystallized into six core constructs and 24 indicators (Sánchez-Oro and Robina Ramírez 2020). Survey items were measured using a 7-point Likert scale, allowing consistent assessment across dimensions. The resulting methodology moves beyond measurement, exposing how ethically governed emotional reciprocity—rather than technology alone—anchors sustainable happiness in hospitality organizations (Table 1).
Table 1 Indicators
These indicators capture how AI-supported acknowledgment among employees shapes teamwork, emotional sustainability, and job happiness, and how guest appreciation—interpreted through feedback systems and sentiment analysis—enhances employee motivation and meaning at work. At the organizational level, these indicators link ethically governed emotional reciprocity to service quality, employee retention, customer loyalty, and digital engagement, operationalizing AI-MER as the pathway through which ethical leadership and emotional sustainability generate collective workplace happiness.
The same indicators apply to both employees and managers but serve distinct purposes. Employees experience them subjectively, reflecting emotional effort, overload, and recovery needs. Managers interpret them in aggregated form, using them to guide ethical decisions, rebalance emotional labor, and activate care responses without compromising privacy or fairness.
4.4 Data processing
Data were analyzed using partial least squares structural equation modeling (PLS-SEM) in SmartPLS 4.1.0.3, which estimates the structural model by maximizing explained variance in the endogenous constructs (Hair et al. 2013). Particularly suitable for limited or non-normally distributed samples (Hulland 1999), this approach assessed the direct, indirect, and mediating effects of AI-enabled ethical mediation on workforce well-being and organizational sustainability. The cross-sectional survey data capture employees’ perceptions of organizational ethical practices and AI-mediated processes. While the structural model identifies statistically significant associations between AI-MER and WH, these results reflect human experience of ethically mediated processes rather than evidence that AI independently performs moral or ethical functions. The model highlights how ethically embedded AI contributes to perceived fairness, recognition, and emotional support, contingent on human-led governance.
Given the cross-sectional and self-reported nature of the data, the findings should be interpreted as statistically significant associations rather than causal effects. Although PLS-SEM identifies indirect relationships consistent with mediation, the design does not establish temporal sequencing or causal necessity. Accordingly, terms implying generation, production, or deterministic influence are avoided. The results indicate that AI-mediated emotional reciprocity is positively associated with WH within ethically structured organizational contexts, and future longitudinal or experimental research is required to substantiate causal directionality.
5 Results
In PLS-SEM, the external (measurement) model defines how latent constructs are measured by their observed indicators, assessing reliability and validity. The internal (structural) model specifies the hypothesized relationships between latent constructs, estimating path coefficients, directionality, and predictive effects. Together, the external model ensures constructs are sound, while the internal model tests the hypothesized theoretical relationships.
5.1 External model
Indicator reliability requires external loadings above 0.70 (Carmines and Zeller 1979). Table 2 shows all items meet this standard except AI-MER43, EAIG1, HCL1, HCL4, SP3, WH3, and WH4.
Table 2 Outer model loadings
Internal consistency of the measurement scales was evaluated using Cronbach’s alpha and composite reliability. Cronbach’s alpha exceeded the 0.70 benchmark (Nunnally and Bernstein 1994), and composite reliability values also surpassed 0.70 (Werts et al. 1974), confirming robust construct reliability.
Convergent validity was assessed through average variance extracted (AVE), with all constructs exceeding the 0.50 threshold (Fornell and Larcker 1981), demonstrating adequate shared variance. Detailed results for Cronbach’s alpha, Rho_A, composite reliability, and AVE are presented in Table 3.
Table 3 Validity and reliability
Full size table
Fornell and Larcker (1981) recommend that a construct’s square root of AVE exceed its correlations with other constructs to demonstrate discriminant validity. As presented in Table 4, this criterion is met for all constructs, confirming that each one is distinct and adequately differentiated from the others.
Table 4 Discriminant validity matrix (Fornell–Larcker criterion)
Full size table
Discriminant validity was further evaluated using the Heterotrait–Monotrait (HTMT) ratio (Henseler et al. 2015). All values fell below the 0.90 threshold, confirming that the constructs are adequately distinct and exhibit robust discriminant validity (see Table 5). Discriminant validity tests confirm that although the constructs are positively correlated, they remain empirically distinguishable.
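The HTMT ratio compares the average correlation between items of different constructs against the geometric mean of the average within-construct item correlations. A minimal sketch with a hypothetical four-item correlation matrix (two items per construct; not the study's data):

```python
import numpy as np

def htmt(R, items_i, items_j):
    """Heterotrait-monotrait ratio: mean absolute cross-construct item
    correlation divided by the geometric mean of the two within-construct
    mean correlations (Henseler et al. 2015)."""
    R = np.asarray(R)
    hetero = np.mean([abs(R[a, b]) for a in items_i for b in items_j])
    mono_i = np.mean([R[a, b] for k, a in enumerate(items_i) for b in items_i[k + 1:]])
    mono_j = np.mean([R[a, b] for k, a in enumerate(items_j) for b in items_j[k + 1:]])
    return float(hetero / np.sqrt(mono_i * mono_j))

# Hypothetical item correlation matrix: items 0-1 load on construct A, items 2-3 on B
R = [[1.00, 0.64, 0.36, 0.36],
     [0.64, 1.00, 0.36, 0.36],
     [0.36, 0.36, 1.00, 0.64],
     [0.36, 0.36, 0.64, 1.00]]
print(htmt(R, [0, 1], [2, 3]))  # well below the 0.90 cutoff
```

Here the ratio is 0.36 / 0.64 ≈ 0.56, comfortably under 0.90; values approaching 1 would indicate that two constructs are empirically indistinguishable.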
Table 5 Discriminant validity matrix (heterotrait–monotrait ratio criterion)
Full size table
These results provide strong evidence of discriminant validity for all study constructs (Henseler et al. 2015). Model fit was assessed using the standardized root mean square residual (SRMR), with values below 0.08 indicating a good fit between the proposed model and the observed data.
5.2 Structural model analysis
The R2 coefficient assesses the model’s explanatory and predictive strength (Chin 1998). Results reveal substantial explanatory power, with R2 values exceeding 0.168 for nearly all constructs (Table 6). Additionally, cross-validated redundancy measures confirm the model’s predictive relevance, demonstrating both robustness and practical applicability.
Table 6 Structural model results
Full size table
Table 7 demonstrates that all hypothesized relationships are highly significant, confirming the model’s robustness. AI-MER strongly predicts WH (H1: β = 0.641, t = 24.308, p < 0.001), showing that ethically guided AI translates emotional labor and organizational care into collective well-being. SP significantly influences AI-MER (H2: β = 0.709, t = 32.414, p < 0.001), indicating that equitable distribution of economic and emotional resources enables AI to mediate emotional exchange effectively. ELS impacts SP (H3: β = 0.250, t = 7.785, p < 0.001) and EAIG (H4: β = 0.649, t = 33.898, p < 0.001), highlighting the foundational role of sustainable emotional practices.
Table 7 Structural model results
Full size table
Ethical AI governance further supports SP (H5: β = 0.180, t = 4.533, p < 0.001) and human-centered leadership (H6: β = 0.504, t = 18.101, p < 0.001), while HCL strongly predicts SP (H7: β = 0.467, t = 16.164, p < 0.001). Together, these results reveal a coherent structural pathway: sustainable emotional labor, ethical AI, and human-centered leadership converge through AI-MER, fostering fairness, reciprocity, and ultimately workplace happiness, consistent with AI’s role as an ethical mediator in hospitality organizations. While the structural model shows statistically significant paths linking AI-MER to workplace happiness, these findings reflect structured associations rather than causal effects. Cross-sectional survey data and single-country sampling limit causal inference. The role of AI is evaluated through its perceived operational mediation under ethically governed organizational conditions, rather than as an independent producer of well-being.
The multigroup analysis examined whether the structural relationships in the model differed between managers and employees, providing insight into how artificial intelligence functions as an ethical mediator of emotional reciprocity across hierarchical levels in hospitality organizations. Tables 8 and 9 report direct and indirect effects, including path coefficients (β), t values, and significance levels (p values), allowing a clear comparison of magnitude and significance between the two groups.
Table 8 Direct effect
Full size table
Table 9 Specific indirect effect
Full size table
All hypothesized relationships are significant for both managers and employees, confirming the robustness of the model. AI-MER strongly predicts WH for managers (β = 0.665, t = 7.999, p < 0.001) and employees (β = 0.639, t = 23.621, p < 0.001), showing that AI-mediated ethical emotional exchange enhances collective well-being regardless of role. SP has a stronger effect on AI-MER for employees (β = 0.716) than managers (β = 0.601), suggesting that frontline staff experience AI’s ethical mediation more directly, likely because they are more exposed to day-to-day emotional demands and customer interactions.
EAIG positively influences HCL in both groups (managers: β = 0.582; employees: β = 0.501) and also impacts SP (managers: β = 0.240; employees: β = 0.180). The slightly higher values for managers on EAIG → HCL indicate that leaders perceive ethical AI as a tool to guide and structure leadership practices, whereas employees see its impact more in terms of fairness and resource distribution. Emotional labor sustainability (ELS) predicts EAIG (managers: β = 0.637; employees: β = 0.651) and SP (managers: β = 0.244; employees: β = 0.251), showing that employees may feel the effects of sustainable emotional labor more acutely, while managers focus on shaping policies that support these practices. HCL also strongly predicts SP in both groups (managers: β = 0.561; employees: β = 0.458), reflecting that ethical leadership contributes to perceived fairness and shared prosperity at all levels.
Indirect paths further highlight differences in perception and experience. Complex pathways, such as EAIG → HCL → SP → AI-MER → WH, are significant for both managers (β = 0.131, t = 2.594, p = 0.010) and employees (β = 0.105, t = 9.885, p < 0.001), demonstrating that ethical AI and leadership converge through AI-mediated emotional reciprocity to generate workplace happiness. Notably, employees show stronger indirect effects in paths linking ELS and SP to AI-MER (e.g., SP → AI-MER: β = 0.457 vs. 0.400 for managers), suggesting that they are more sensitive to the ethical redistribution of emotional labor facilitated by AI. Managers, on the other hand, show higher β values for EAIG → HCL → SP (0.326 vs. 0.229 for employees), indicating that they emphasize the structural and governance aspects of AI, translating ethical principles into actionable leadership and organizational fairness.
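In PLS-SEM, a specific indirect effect is the product of the path coefficients along its chain, so the group-level contrasts above can be reproduced (to within rounding) from the direct effects reported in Table 8. A quick arithmetic check using the reported betas:

```python
# Direct-effect path coefficients per group, as reported in Table 8
paths = {
    "managers":  {"EAIG->HCL": 0.582, "HCL->SP": 0.561,
                  "SP->AI-MER": 0.601, "AI-MER->WH": 0.665},
    "employees": {"EAIG->HCL": 0.501, "HCL->SP": 0.458,
                  "SP->AI-MER": 0.716, "AI-MER->WH": 0.639},
}

for group, b in paths.items():
    # Specific indirect effect = product of the betas along the chain
    serial = b["EAIG->HCL"] * b["HCL->SP"] * b["SP->AI-MER"] * b["AI-MER->WH"]
    simple = b["SP->AI-MER"] * b["AI-MER->WH"]
    print(group, round(serial, 3), round(simple, 3))
```

The products recover the reported specific indirect effects within rounding error: EAIG → HCL → SP → AI-MER → WH gives ≈ 0.130 for managers and ≈ 0.105 for employees (reported 0.131 and 0.105), and SP → AI-MER → WH gives ≈ 0.400 for managers and ≈ 0.458 for employees (reported 0.400 and 0.457); small discrepancies reflect rounding in the published direct paths. Note that the t values and p values themselves come from bootstrapping, not from this multiplication.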
These contrasts reveal nuanced perceptions between hierarchical levels. Employees, directly engaged in service delivery, experience the tangible effects of AI-MER such as reduced emotional overload and fair task distribution, which enhances their sense of well-being and motivation. Managers, in contrast, interact with the model more strategically, using AI to enforce ethical oversight, guide leadership practices, and maintain systemic fairness. While both groups benefit from AI-mediated emotional exchange, employees perceive its practical, day-to-day impact more strongly, whereas managers focus on its role in sustaining organizational structures and ethical governance.
Differences between managers and employees illuminate the dual role of AI: as a lived, practical experience for frontline staff and as a governance and leadership tool for managers, reinforcing its significance in fostering ethical, emotionally sustainable workplaces in hospitality organizations.
6 Discussion
Artificial intelligence in hospitality is often imagined as a cold, calculating tool—a dispassionate administrator of efficiency. Yet, the findings of this study challenge that stereotype. AI does not simply enforce rules; it becomes a moral mirror, reflecting the emotional realities of human work. ELS emerges as the deepest structural anchor, showing that ethical AI governance cannot be imposed from above or coded into abstract principles. Ethics in AI is not a policy manual; it is lived, breathed, and negotiated in the daily emotional interactions of employees. Unmanaged emotional labor, as Hochschild (2022) and Grandey (2000) warn, breeds exhaustion and moral compromise—but recognized and bounded, it allows AI to mediate ethically rather than exploitatively. AI systems in this study are not moral agents. AI does not possess ethical reasoning, moral consciousness, or autonomous decision-making capability. Rather, AI functions as a socio-technical operational mechanism, translating human-centered ethical norms embedded in organizational governance into actionable processes. Its role is mediatory and contingent, facilitating emotional labor visibility, recognition, and equitable redistribution, without originating ethical principles.
Yet, ethical mediation is not just about individual well-being; it thrives only in systems that deliver fairness visibly and tangibly. SP dominates as a driver of AI-mediated emotional reciprocity (AI-MER), proving that AI becomes moral only in environments perceived as inclusive and equitable. When value is shared and effort rewarded, employees become more receptive to AI as a partner in ethical practice. This insight overturns the notion of AI as a neutral observer: it is a co-creator of moral order, shaping workplace dynamics in ways that honor the emotional investments of staff, echoing Pfeffer (2010) and Aguinis and Glavas (2019).
From a technological standpoint, AI-MER operationalizes ethics in ways traditional policies cannot. By redistributing emotional demands, making strain visible, and activating care, AI transforms ethical principles into practical, measurable actions. Fairness, accountability, and care cease to be abstract ideals and become lived experiences embedded in sociotechnical infrastructures, aligning with Floridi et al. (2018) and Cowls et al. (2019). Here, AI is not replacing human judgment—it is a catalyst for ethical awareness, amplifying the moral signals that would otherwise be lost in the rush of daily hospitality operations.
The power of AI-mediated emotional reciprocity extends to leadership itself. The mediation pathways identified in our model (AI-MER linking EAIG, HCL, and ELS to WH) do not imply autonomous moral agency on the part of AI. Instead, the findings indicate that employees perceive higher workplace well-being when AI systems are embedded in ethically structured organizational contexts. AI’s role is therefore conditional and operational, assessed only through its influence on human experiences of fairness, recognition, and emotional support.
Far from replacing resonant leaders, AI enhances their capacity to cultivate systemic happiness. Emotional reciprocity is not a personal mood but a collective outcome, co-governed by ethical AI systems and empathetic human leaders. Workplace well-being emerges from the interplay of fairness, care, and recognition, all mediated by AI that enforces neither coercion nor indifference. In this light, the title is not merely descriptive, it is a provocation: artificial intelligence can be a moral mediator of ethical and emotional harmony, but only when it engages with the real, messy, and profoundly human currents of labor in hospitality.
The multigroup analysis provides compelling evidence that AI-MER fully mediates the relationships between the core organizational constructs (ELS, EAIG, HCL, and SP) and WH. Across both managers and employees, direct effects from these predictors to WH are largely non-significant, indicating that workplace happiness is not simply a product of leadership, fairness, or emotional labor alone, but emerges through AI-MER as an ethical, operational intermediary. This finding aligns with Cowls et al. (2019) and Floridi et al. (2018), who emphasize that AI achieves meaningful organizational impact only when it supports ethical processes rather than functioning autonomously, embedding fairness and care into operational routines. It also resonates with Rahwan (2018), who stresses the importance of society-in-the-loop mechanisms to ensure ethical mediation within organizations.
Indirect effects underscore the nuanced role of AI-MER. For employees, the path SP → AI-MER → WH is stronger (β = 0.457, t = 15.981, p < 0.001) than for managers (β = 0.400, t = 3.498, p < 0.001), reflecting employees’ direct exposure to equitable redistribution of emotional demands and recognition. Similarly, ELS influences WH via chains such as ELS → EAIG → HCL → SP → AI-MER → WH (β = 0.083–0.098, p < 0.05), supporting Brotheridge and Lee (2003) and Yao et al. (2019) on the importance of organizational support in preventing emotional strain. Employees perceive AI as a protective and fairness-enforcing tool, while managers interpret it as a governance mechanism to allocate emotional labor ethically, echoing Pfeffer’s (2010) emphasis on the human factor in sustainable organizations.
HCL and EAIG further highlight the meaning of mediation. HCL’s impact on WH operates indirectly through SP and AI-MER (β = 0.224 for managers; 0.209 for employees, p < 0.001), demonstrating that leadership’s empathic potential materializes fully when AI makes emotional effort visible and manageable. EAIG also influences WH indirectly via HCL → SP → AI-MER → WH (β = 0.131/0.105, p < 0.05), reflecting Binns (2020) and Mittelstadt (2019) on fairness, accountability, and responsible AI design. These mediated relationships illustrate that AI-MER is not a supplemental tool but a structural ethical mechanism, without which leadership, fairness, and labor practices alone would not generate systemic workplace happiness, aligning with Boyatzis and McKee (2005), Grant (2007), and Aguinis and Glavas (2019) on meaningful work and prosocial motivation.
The contrast between managers and employees further underscores mediation’s significance. Employees respond more sensitively to AI-mediated interventions due to their direct engagement with emotional workloads, whereas managers focus on ethical oversight and redistribution. This full mediation pattern confirms that AI-MER operationalizes ethical reciprocity, transforming leadership, labor sustainability, and shared prosperity into a collective, enduring organizational outcome. Echoing Diefendorff et al. (2005), Wen et al. (2019), and Kim et al. (2025), these results highlight that workplace happiness emerges through ethically orchestrated AI-mediated processes, rather than as a direct byproduct of leadership or labor practices, demonstrating that mediation is central to translating organizational intentions into lived well-being.
7 Conclusions
7.1 Theoretical conclusions
The contribution of this study lies in demonstrating that AI, when embedded within ethically structured, fairness-oriented, and emotionally sustainable organizational contexts, is statistically associated with higher levels of WH. AI functions as an operational mediator, and its perceived ethical impact is entirely contingent on human-led governance structures.
AI-enabled emotional analytics raise significant concerns regarding privacy, algorithmic bias, regulatory compliance, and the potential intensification of emotional surveillance. The visibility of emotional labor may function as recognition under ethically governed conditions, but it may also produce performative compliance, autonomy reduction, or inequitable assessment if safeguards are absent. Accordingly, AI-MER should not be assumed ethically neutral or inherently beneficial. Its positive association with WH in this study is contingent upon transparent governance, bias mitigation, regulatory alignment, and leadership practices that prioritize dignity and psychological safety.
Ethical AI is rooted in emotional realities, not abstract codes. The study demonstrates that the ethical function of AI in hospitality emerges from the management and recognition of emotional labor rather than from pre-programmed principles. ELS acts as a structural anchor, showing that AI mediates ethics effectively only when it is embedded in systems that recognize employees’ emotional contributions. Unmanaged emotional labor leads to fatigue, moral compromise, and disengagement, whereas bounded and recognized emotional work enables AI to support ethical decisions. This suggests that AI is not a neutral or detached technology; its ethical capacity is inseparable from the lived experiences and emotional dynamics of human actors. Ethical AI governance must therefore consider emotional labor as a foundational condition rather than a peripheral concern.
Fairness and shared value are prerequisites for moral AI mediation. The findings highlight SP as a dominant driver of AI-MER. AI becomes an ethical mediator only when employees perceive the organizational system as fair, inclusive, and equitable. Emotional reciprocity—employees’ willingness to respond positively to AI-mediated interventions—depends on trust that value and recognition are distributed justly. This theoretical insight underscores that moral AI does not operate in isolation but within social and organizational contexts where fairness and reciprocity are visible and meaningful. Ethical AI cannot be a stand-alone mechanism; it is inseparable from systemic justice and inclusive value distribution.
AI operationalizes ethics through relational and systemic mechanisms. The study illustrates that AI mediates ethics not merely as abstract principles but through tangible practices: making emotional strain visible, redistributing workloads, and amplifying caring behaviors. This highlights a shift from normative to operational ethics, where moral principles are translated into relational actions and organizational routines. AI does not replace human leadership but amplifies resonant leadership by institutionalizing ethical practices across teams. Consequently, workplace happiness and ethical behavior are collective, systemic outcomes rather than individual traits, demonstrating that AI can act as a moral mediator by shaping the structures and processes that govern emotional and ethical interactions.
7.2 Managerial practices
Integrate emotional labor assessment into AI systems Managers should embed mechanisms that monitor, recognize, and balance employees’ emotional labor within AI-driven workflows. By capturing emotional strain and providing actionable insights, AI can support fair redistribution of emotional tasks, preventing burnout and fostering ethical behavior. Recognizing emotional contributions through AI-informed dashboards or alerts ensures that ethical governance is grounded in real experiences rather than abstract policies.
Foster fairness and shared prosperity in organizational structures Managers must ensure that AI-mediated interventions are implemented in contexts perceived as equitable. Practices such as transparent performance metrics, inclusive decision-making, and visible rewards for emotional effort enhance trust in AI as a moral mediator. By aligning AI actions with shared organizational values, leaders strengthen employees’ willingness to engage with AI ethically and reinforce the perception of reciprocal fairness.
Leverage AI to support ethical leadership and team culture AI should be used to amplify, not replace, human leadership. Managers can deploy AI to monitor team dynamics, identify sources of emotional tension, and suggest interventions that promote care and reciprocity. Ethical AI practices include redistributing workloads, highlighting achievements, and facilitating collaboration in emotionally demanding contexts. These interventions help cultivate a culture of systemic well-being, where emotional reciprocity and organizational happiness emerge as collective outcomes.
7.3 Future research and limitations
A promising avenue for future research is to explore how AI-mediated emotional reciprocity interacts with diverse cultural and organizational contexts in hospitality. While this study highlights the ethical mediation of AI through ELS and SP, organizations differ widely in emotional norms, power structures, and fairness perceptions. Investigating how AI adapts to or is perceived in cross-cultural settings could provide insights into designing AI systems that are sensitive to local ethical expectations, employee relational patterns, and emotional expressions, enhancing both ethical governance and employee engagement globally.
The study focuses on hospitality organizations, and while ELS and SP provide strong explanatory power, their measurement relies on organizational perception and self-reporting. This introduces potential bias, as employees’ awareness, interpretation, and willingness to report emotional strain or perceptions of fairness may vary, possibly affecting the accuracy and consistency of AI-mediated ethical evaluation.
8 Conflict of interest
The authors declare no competing interests.