Review Article | Volume 6 Issue 2 (July-December, 2025) | Pages 1 - 10
Smart Recruitment, Hiring and Automation: A Review
College of Administration and Economics, University of Mosul, Iraq
Under a Creative Commons license
Open Access
Received: July 2, 2025
Revised: Aug. 14, 2025
Accepted: Aug. 22, 2025
Published: Oct. 13, 2025
Abstract

Purpose: This review synthesizes empirical and conceptual scholarship on artificial-intelligence-driven recruitment and HR automation to identify consistent findings, methodological limitations and future research agendas. Design/methodology/approach: Eleven peer-reviewed studies published between 2001 and 2025 were analyzed. The corpus spans survey research, systematic and scoping reviews, bibliometric analyses, technological evaluations and qualitative interviews. A comparative gap analysis distilled cross-cutting themes. Findings: AI adoption in recruitment is widespread: up to 78% of surveyed organizations employ at least one tool, achieving reported time-to-hire reductions of 30-63% and perceived productivity gains of 80%. Despite these benefits, the evidence base is dominated by cross-sectional, self-report designs situated in large Anglophone firms, which restricts causal claims and generalizability. Recruitment remains the primary focus; downstream HR functions, long-term organizational outcomes and candidate experiences are understudied. Although ethical governance frameworks abound conceptually, few have been empirically validated, leaving bias, transparency and privacy concerns insufficiently measured. Originality/value: By juxtaposing findings and research gaps across diverse methodologies, sectors and time periods, this review offers the most comprehensive, up-to-date map of smart recruitment scholarship and articulates a prioritized research agenda emphasizing longitudinal, contextually diverse and ethically grounded investigations.

INTRODUCTION

The rapid proliferation of artificial intelligence (AI) has significantly altered the talent acquisition landscape, leading to the development of "smart recruitment" systems that automate or enhance tasks such as sourcing, screening and selection, which were traditionally performed by human recruiters. Empirical studies now reveal that nearly 80% of organizations have implemented at least one AI application in their hiring processes, reporting an average reduction in time-to-hire of approximately 30% and other notable efficiency improvements. Perceptions among recruiters and broader human resources (HR) staff are generally positive: 80.6% of employees in one study "strongly agree" that AI performs HR tasks more rapidly than human counterparts and 73.5% of professional recruiters express an intention to adopt or increase AI usage in the near future. Despite these apparent benefits, the existing evidence base highlights ongoing methodological, functional and ethical challenges. Much of the published research is cross-sectional and self-reported, limiting causal inference and external validity across industries, firm sizes and cultural contexts. Functionally, research predominantly focuses on recruitment, while downstream HR activities such as performance management, compensation, retention and employee well-being are comparatively underexplored. Ethical examination is similarly nascent: although 41% of organizational respondents acknowledge algorithmic bias and 35% cite data privacy concerns, comprehensive field evaluations of fairness audits, explainable AI or participatory governance mechanisms are limited. Technological diversification further complicates empirical analysis.
Emerging blockchain-AI hybrids, for instance, promise nearly perfect data integrity (99.8%) and built-in credential verification that could reduce résumé fraud and certain forms of discrimination; however, their real-world feasibility and user acceptance remain largely untested beyond laboratory simulations. Emotion-recognition tools, affective video analytics and other advanced modalities have introduced additional ethical and legal questions that are largely unexplored in longitudinal or multi-stakeholder studies. In this context, this review systematically synthesizes 11 peer-reviewed studies published between 2001 and 2025 to map the current state of knowledge on smart recruitment, hiring and HR automation. By integrating quantitative findings, conceptual frameworks and identified research gaps across diverse methodologies and sectors, this review aims to (1) assess the empirical robustness of reported efficiency and effectiveness outcomes, (2) investigate the breadth and depth of functional, contextual and ethical coverage and (3) articulate a prioritized agenda for future research that emphasizes longitudinal design, contextual diversity and responsible AI governance. In doing so, the paper provides a comprehensive, critically grounded foundation for scholars, practitioners and policymakers seeking to harness AI's potential in talent acquisition while safeguarding equity, transparency and human agency in the evolving world of work. This study will examine 11 papers as follows:

 

  • Paper 1: Unraveling the Reshaping of Human Resource Management Function: The Mediating Role of Artificial Intelligence [1]

 

Key Weaknesses

 

  • Geographical limitation: The dataset was derived exclusively from a single organization in Durban, restricting the study’s external validity and hindering its applicability across different industries and contexts

  • Insufficient sample size: Out of a target population of 500, only 103 responses were considered valid. This relatively small number weakens representativeness and lowers the statistical power of the findings

  • Restricted methodological design: The study relied solely on a quantitative, cross-sectional survey, which prevents deeper qualitative insights and does not allow for drawing causal conclusions over time

  • Reliability concerns: The measurement scale assessing the “impact of AI on HRM” generated a Cronbach’s alpha of 0.661, slightly below the accepted threshold of 0.70, indicating potential instability in item consistency

  • Possible common method variance: Since all variables (predictors and outcomes) were collected through a single instrument at one point in time, there is a heightened risk of inflated correlations, although this issue was not formally tested in the study
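The α = 0.661 concern above is easier to interpret with the statistic itself in view: Cronbach's alpha rises with the number of items and the strength of their intercorrelation. A minimal sketch, using hypothetical item scores rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items: alpha = k/(k-1) * (1 - sum(var_i) / var_total).

    `items` is a list of k equal-length columns of respondent scores.
    Population variance is used, as is conventional for the statistic.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

# Hypothetical 3-item, 4-respondent scale; perfectly correlated items maximize alpha
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Against this statistic, the study's 0.661 for the AI-impact scale sits just below the conventional 0.70 cut-off, while its other scales (0.736 and 0.769) clear it.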

 

Key Strengths

 

  • Solid theoretical foundation: The study is underpinned by technological determinism, socio-technical systems theory and the resource-based view, providing a strong conceptual framework for analyzing the AI-HRM relationship

  • Focus on less-explored HRM aspects: Beyond traditional recruitment, the findings highlight efficiency gains (e.g., 80.6% strongly agreed AI helps accelerate tasks) and employee well-being through improved work-life balance

  • Acceptable internal consistency: While one scale showed some instability, the instruments overall produced a Cronbach’s alpha of 0.736. In addition, the section measuring opportunities and challenges demonstrated reliable internal consistency (α = 0.769)

  • Comprehensive perspective: The research evaluates both benefits (cost savings, productivity increases) and risks (bias, privacy concerns, resistance), offering a balanced and integrated assessment

  • Practical recommendations: It suggests actionable strategies such as implementing bias audits, adopting explainable AI systems and providing employee training programs, ensuring clear implications for HR practice

 

Research Gaps Highlighted (and Still Remaining)

 

  • Functional breadth: Most prior AI-in-HRM work centers on recruitment; compensation, workplace safety and negligence remain scarcely examined, indicating the need for broader functional studies

  • Contextual diversity: Industry, cultural and national settings beyond a South African tertiary-education context have yet to be empirically tested, limiting global applicability

  • Methodological depth: Future research should employ a mixed-methods or longitudinal design to capture employees’ evolving perceptions and causal pathways, addressing the current study’s cross-sectional limitations. 

  • Ethical governance evaluation: While the authors advocate governance frameworks, empirical tests of specific fairness or transparency interventions in HR processes are still lacking

 

Paper 2: Smart Recruitment and Disability Inclusion: A Systematic Literature Review and Bibliometric Analysis (Smith, L., R. Gupta and P. Martínez)

 

Key Weaknesses

 

  • Single-database coverage: The review draws only on the Web of Science; relevant studies in Scopus, Google Scholar or practitioner outlets are omitted, creating selection bias

  • Language limitations: Only English-language publications were considered, which constrains cultural representation and may have excluded valuable regional contributions

  • Temporal limitation: The bibliometric analysis reflects a single point in time (August 2023). As a result, very recent publications and dynamic shifts in citation trends were not captured

  • Conceptual emphasis over evaluation: While the paper engages in thematic mapping, it does not provide in-depth appraisal of methodological quality or effect sizes across the reviewed studies

  • Limited attention to disability and vulnerable groups: Only 17% of the sampled works directly addressed disability, and references to other marginalized populations were scarce

  • Unreported coder agreement: Although two reviewers conducted screening, no statistics such as Cohen’s κ were reported to establish the reliability of their judgments
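The missing inter-coder statistic is straightforward to report. Cohen's κ corrects raw agreement for the agreement two raters would reach by chance given their marginal label rates. A minimal sketch, with hypothetical include/exclude screening decisions rather than the review's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e).

    p_o is the observed proportion of agreement; p_e is the agreement
    expected by chance from each rater's marginal label frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for four records: raters disagree on one
kappa = cohens_kappa(["inc", "inc", "exc", "exc"], ["inc", "exc", "exc", "exc"])
```

Here raw agreement is 75%, but κ = 0.5 once chance agreement is removed, which is exactly why reporting κ alongside raw agreement matters.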

 

Key Strengths

 

  • Comprehensive coverage period: The 2004-2023 window captures nearly two decades of scholarship, including the post-COVID acceleration of digital hiring practices

  • Use of diverse bibliometric methods: The study applies co-citation analysis, keyword co-occurrence and authorship network mapping, offering a rich, multidimensional view of the research landscape

  • Transparent and replicable approach: Following PRISMA guidelines, the authors present the full search string and detail the inclusion-exclusion criteria, ensuring methodological clarity and reproducibility

  • Identification of thematic clusters: The analysis highlights three distinct research areas-AI and disability, social media-based e-recruitment and HR automation-that together illustrate the intellectual structure of the field

  • Geographic mapping of collaboration: Cross-country author mapping reveals leading regions of research activity and emerging international partnerships, providing valuable insight for scholars seeking collaboration

  • Practical orientation: The discussion distills actionable insights for developing inclusive recruitment policies and promoting responsible AI governance in HR contexts

 

Research Gaps the Study Exposes

 

  • Functional breadth: recruitment dominates the literature; downstream HR activities such as onboarding, performance management, career development and compensation remain under-examined

  • Limited empirical rigor: Only about 28% of the reviewed studies used longitudinal or experimental methods, leaving long-term outcomes and causal relationships largely unexamined

  • Insufficient evaluation of interventions: Although strategies such as algorithmic audits and explainable AI are often recommended, few studies have systematically tested their practical effectiveness

  • Neglected stakeholder voices: Research rarely incorporates the perspectives of job applicants-particularly individuals with diverse disabilities-alongside those of employers, creating an incomplete picture

  • Lack of cross-cultural applicability: The bulk of empirical evidence originates from the US, UK and Australia. Studies situated in emerging economies and the Global South are necessary to capture broader contextual dynamics

  • Overlooked intersectionality: Disability is commonly analyzed as a standalone factor, with little attention paid to how it intersects with gender, race, age or socioeconomic background

 

Paper 3

Artificial Intelligence and Smart Recruitment [3]

 

Key Weaknesses

 

  • Cross-sectional, self-report design: Data were gathered through a single survey-interview wave, preventing causal inference and heightening common-method bias risks

  • Non-probability sampling: Purposive recruitment and voluntary participation create selection bias and limit the generalizability of the findings beyond the 186 respondents

  • Modest, uneven sample composition: Although multiple industries and regions were invited, the study relied on 186 participants, with no stratified controls for country, sector or firm size, thus reducing external validity

  • Lack of objective performance metrics: Efficiency gains (e.g., 30% shorter time-to-hire) are self-reported rather than verified against organizational records

  • Limited statistical depth: Results are presented descriptively (percentages, graphs) without multivariate tests or effect-size estimates, constraining analytical rigor

 

Key Strengths

 

  • Contemporary relevance: This study captures post-pandemic adoption levels, reporting that 78% of organizations already deploy at least one AI tool in recruitment

  • Mixed data sources: Combining structured questionnaires with semi-structured interviews enriches the dataset and provides qualitative illustrations of AI use

  • Acceptable measurement reliability: The questionnaire achieved strong internal consistency (Cronbach’s α = 0.87)

  • Diverse organizational coverage: Respondents represent the technology, health, finance, education and manufacturing sectors, offering a panoramic view of AI uptake across industries

  • Balanced treatment of benefits and risks: The study documents both efficiency gains (time-to-hire reductions for 63% of firms) and ethical concerns, such as algorithmic bias (41% of respondents) and data privacy (35%)

 

Research Gaps Identified

 

  • Longitudinal insight: The authors call for follow-up studies that track adoption impacts over time rather than relying on a single-point snapshot

  • SME and regional heterogeneity: AI uptake differs sharply between large firms (88%) and SMEs (52%); however, the drivers of this gap remain unexplored

  • Intervention effectiveness: While bias audits, explainable AI and change-management training are recommended, empirical tests of these remedies are lacking

  • Broader HR life-cycle coverage: The focus is recruitment; downstream functions such as onboarding, performance management and career development are rarely addressed

  • Intersectional candidate experience: Improvements are reported in aggregate but differential effects across gender, ethnicity or disability are not examined, leaving equity outcomes under-theorized

 

Paper 5

The effects of artificial intelligence on human resource activities and the roles of the human resource triad: opportunities and challenges [3]

 

Main Weaknesses

  • Methodological limits inherent to a scoping review: The approach “does not aim to produce a critically appraised and synthesized result” and therefore cannot judge study quality or effect sizes

  • No formal quality-assessment step: The authors explicitly acknowledge that the review “does not include a process of quality assessment”

  • Restricted search scope: only four disciplinary databases (management, HRM/IR, psychology and IS) were used, which may have omitted relevant studies in other fields

  • Narrow keyword strategy and English-only inclusion criteria risk overlooking non-English or differently indexed scholarship

  • Potential publication bias: The sample was limited to peer-reviewed journal articles, excluding gray literature and practitioner reports

 

Key Strengths

Broad temporal and disciplinary coverage: 43 empirical articles published between 1996 and 2023 across four disciplines make this the most inclusive synthesis to date on AI-HRM.

 

• Introduces an integrative framework that links five overarching AI effects (task automation, optimized HR data use, augmentation of human capabilities, work-context redesign and transformation of social/relational aspects) to eight canonical HR activities

• Unique focus on the HR triad: The review details how AI reshapes the roles of HR professionals, line managers and employees, filling a gap in earlier reviews that treated stakeholders separately

• A transparent, replicable protocol following established scoping-review guidelines (Arksey and O’Malley; Peters et al.) with high inter-coder agreement (95.7%)

 

• A structured research agenda that scholars can use to prioritize future empirical inquiries

Research Gaps Highlighted

 

  • Functional blind spots: Little is known about AI’s impact on performance management, talent retention, compensation and occupational health; these domains merit further investigation

  • Empirical depth: The field requires longitudinal and experimental studies to determine causality, optimal levels of task automation and human-AI complementarity

  • Organizational change mechanisms: Questions remain about how AI alters power dynamics, organizational culture and team processes, particularly in human-AI collaboration

  • Contextual heterogeneity: Comparative studies should examine the differences between large organizations and SMEs across industries and in diverse national settings

  • Stakeholder acceptance and ethics: Further work on employee resistance, social acceptance and the role of HR professionals in guiding responsible, bias-free AI development is required

 

Paper 6

eSmart Recruitment: AI-Driven Screening and Blockchain Verification for Accurate Hiring [4].

 

Main weaknesses

 

  • Early-stage adoption: Blockchain résumés and credential wallets are still “only being used sparingly in HR,” so the model’s practicality across industries remains untested

  • Candidate perception risk: Empirical work shows that blockchain-based or social-media résumés generate “less positive responses and reduced organizational appeal,” which could undermine user acceptance

  • Longer latency: Integrating the two technologies raises the average processing time to 260 ms, which is slower than stand-alone AI (200 ms) or NLP screening (120 ms)

  • Simulation rather than field data: The reported 96.5% accuracy and 99.8% data-security rates are derived from internal benchmarking; no external validation with live corporate hiring data is provided (implied across performance tables)
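Because the reported accuracy, precision and recall figures come from internal benchmarking rather than live hiring data, it helps to keep their definitions in view when judging external validity. A minimal sketch over a hypothetical binary confusion matrix (the counts below are illustrative, not the paper's):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision and recall from a binary confusion matrix.

    tp/fp/fn/tn: true/false positives and negatives, e.g. candidates the
    screener flagged as suitable versus the ground-truth label.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # share of all decisions that were correct
    precision = tp / (tp + fp)                  # of flagged candidates, share truly suitable
    recall = tp / (tp + fn)                     # of truly suitable candidates, share flagged
    return accuracy, precision, recall

# Illustrative counts: 90 correct flags, 10 false flags, 10 misses, 890 correct rejections
acc, prec, rec = classification_metrics(90, 10, 10, 890)
```

Reporting all three together matters: with a heavily imbalanced applicant pool, accuracy alone can look high even when precision or recall on the minority "suitable" class is poor.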

 

Key strengths

High predictive performance: 96.5% accuracy, 95.8% precision and 95.9% recall, outperforming traditional resume screening and single-technology baselines.

 

  • Robust security: Blockchain verification ensures 99.8% data integrity, eliminates credential fraud and strengthens trust in the documents used for hiring

  • Bias-mitigation prospect: Smart contracts admit only authenticated, untampered data into the AI-based framework, reducing human bias throughout initial screening

  • Clear, reproducible design: The paper lays out exact formulas, hash-verification rules and step-by-step pseudocode (“Algorithm 1”), enabling replication
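The hash-verification idea behind the credential check can be sketched in a few lines. This is an illustrative stand-in, not the paper's Algorithm 1: it assumes the issuer publishes a SHA-256 digest of the credential record (the `anchor` value and record format below are hypothetical) and the screening system recomputes and compares it:

```python
import hashlib

def credential_hash(record: str) -> str:
    """SHA-256 digest of a credential record; the issuer would anchor this on-chain."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

def verify_credential(record: str, anchored_hash: str) -> bool:
    """A submitted record passes only if its digest matches the anchored one."""
    return credential_hash(record) == anchored_hash

# Hypothetical flow: the issuer anchors the digest; the screener recomputes it
# from the candidate's submitted record.
issued = "BSc Computer Science|University X|2022"
anchor = credential_hash(issued)
ok = verify_credential(issued, anchor)            # True: authentic record
tampered = verify_credential(issued + " (hons)", anchor)  # False: altered record
```

Any edit to the record changes the digest, which is what lets the screening stage reject tampered credentials without trusting the submitter.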

 

Research Gaps Highlighted

 

  • Implementation hurdles: Additional studies should be conducted to explore "effective strategies to counter" organizational, technological and environmental barriers to utilizing blockchain-AI combinations for various applications

  • Culture and leadership: The role of managerial sponsorship and organizational readiness for the shift from Industry 4.0 tools to “Smart HR 4.0” has yet to be investigated

  • Longitudinal research: No current studies track the quality-of-hire, retention or equity impacts of blockchain-verified hires over time; such longitudinal assessments are needed

  • User-centric assessment: Experimental research should examine recruiter workload, applicant experience and fairness outcomes when the system is deployed at scale, addressing the variable candidate reactions reported to date

 

Paper 7

Recruiter’s perception of artificial intelligence (AI)-based tools in recruitment [5]

 

Principal Weaknesses (Study Limitations)

• Constricted construct scope: The survey operationalizes the Unified Theory of Acceptance and Use of Technology (UTAUT) with only its core predictors plus “frequency of AI use”; additional relevant determinants such as satisfaction, trust, moral concerns and system quality are not taken into account, limiting explanatory scope

 

 • Context-specific generalizability: Perceptions were shaped by structural, economic, legal and cultural influences that were not accounted for by the models; therefore, results cannot be applied wholesale across regions or sectors

• Self-selected, online sample: Respondents were recruited through the Prolific Academic platform, introducing self-selection bias and limiting representativeness despite respondents spanning 15 countries

• Cross-sectional, self-report design: Data were gathered at a single point in time via a web questionnaire, preventing causal inference and exposing results to common method variance

• No reliability appraisal of qualitative coding: While open-ended answers were coded, no formal reliability statistics (e.g., inter-coder κ) were reported, raising questions about coding consistency

 

Key Strengths

This is a large and demographically balanced sample of 283 recruiters spanning 15 countries, which is unusually broad for studies in this domain.

  • Mixed-method approach: Quantitative hierarchical regression is complemented by qualitative content analysis of open comments, enriching interpretation

  • Extension of UTAUT: The model is adapted to recruiting by adding “frequency of AI use” and removing non-relevant moderators (voluntariness, facilitating conditions), demonstrating theoretical innovation

  • Empirical evidence of practitioners’ intentions: 73.5% of participants expressed willingness to adopt or intensify AI use, providing updated insights into market readiness

  • Practical relevance: The study itemizes concrete benefits (efficiency, time savings and bias reduction) and drawbacks (lack of human nuance and data bias) voiced by recruiters, offering actionable guidance for tool designers

 

Research Gaps Highlighted 

 

  • Broader predictor set: Future studies should incorporate variables such as satisfaction, confidence, system quality, security and ethical or legal considerations to build a more holistic adoption model

  • Contextual contingencies: Comparative studies are needed to test how national regulations (e.g., GDPR), firm size, industry and culture moderate AI acceptance in recruiting

  • Alternative sampling frames: Using probabilistic or organization-based samples rather than crowdsourced panels would strengthen external validity

  • Longitudinal and experimental designs: Tracking recruiters over time or manipulating tool characteristics can clarify causality and the evolution of attitudes beyond a single snapshot

  • Candidate-side and downstream HR outcomes: The present study focuses on recruiters’ intentions; subsequent research should explore applicant experiences and the effects of AI adoption on later HR stages such as onboarding, retention or performance management

 

Paper 8

Governance for Equitable Automated Hiring [6] *Year inferred from manuscript references.

 

Weaknesses

 

  • Purely conceptual: The study is a developmental systematic review and framework proposal; no primary empirical data or field validation are provided

  • Bounded evidence base: coverage was limited to 65 domain-specific and 24 data-governance articles, raising the possibility of selection bias despite the iterative search strategy

  • Job-seeker perspective under-represented: The authors acknowledge the paucity of research on candidate experiences with algorithmic tools, which constrains the framework’s stakeholder completeness

  • Lack of standardized implementation guidance: While advocating participatory data governance, this study offers no tested protocol or metrics that organizations can immediately adopt

  • Generalizability concerns: Conclusions are drawn from secondary sources across multiple contexts but the framework has not yet been applied or stress-tested in specific industries, firm sizes or jurisdictions

 

Key Strengths

 

  • Comprehensive, interdisciplinary synthesis: Integrates IS, law, HRM, computer science and ethics literature to map discrimination mechanisms and mitigation approaches in automated hiring

  • Clear taxonomy of harms: delineates power imbalances, disparate treatment and disparate impacts with concrete examples, making abstract equity issues operational for practitioners

  • Holistic mitigation review: Critically assesses legal remedies, fairness audits, human-in-the-loop designs and standards initiatives before positioning data governance as an overarching solution

  • Novel participatory data-governance framework: articulates FATE-oriented decision domains (accountability, data quality, ethics, fairness, institutions, monitoring, participation, standardization and transparency) and associated diagnostic questions

  • Actionable policy vision: Recommends a collaborative public-private agency (modeled on NIST) to co-produce standards, aligning regulatory oversight with organizational flexibility

 

Research Gaps

 

  • Empirical verification: Future work should validate the proposed governance structures in real-world selection contexts, operationalizing their measures and establishing their effectiveness

  • Job seeker-centric insights: Extensive qualitative and quantitative research is required to understand job-seeker behaviors, such as algorithm “gaming” routines, alongside fairness perceptions

  • Cross-domain transferability: Comparative scholarship should explore how data-governance practices operate in analogous selection environments (e.g., childcare placement and higher-education admissions) and across regulatory and cultural contexts

  • Longitudinal monitoring: Regular audits are needed to understand the impact of governance interventions over time on bias, data quality and organizational culture

  • Standardization metrics: Scholars should design measurable indicators as well as benchmarking tools that translate FATE principles into operational organizational norms

Paper 8

Artificial Intelligence for Recruitment [7]. 

 

Main Weaknesses

 

  • Limited scope and generalizability: The study is “purposeful qualitative research” focused on interviews with recruiters in Bangkok, Thailand, which limits transferability to other locations or sectors

  • Small, non-probability sample: Although the research interviews senior recruiters, no participant count is reported; purposive sampling precludes statistical inference and increases selection bias

  • Perception-only evidence: Findings rest on stated intentions toward the proposed “SmartRecruit” tool; the instrument itself is neither operationalized nor experimentally tested and thus remains unproven with respect to effectiveness, bias and return on investment

  • Limited theoretical foundation: Being practitioner-centered, the report does not ground its inquiry in prevailing technology-adoption or acceptance models, restricting its contribution to cumulative scholarly knowledge (assumed throughout the methodology and discussion sections)

  • Potential confirmation bias: The interviewers developed the product under review (“SmartRecruit”), so positive responses (e.g., 92% acceptance) may reflect interviewer bias or social desirability

 

Key Strengths

 

  • Clear industry relevance: This study identifies tangible pain points such as “lengthy hiring times” and misalignment between candidate profiles and job requirements

  • Dual stakeholder coverage: Insights are gathered from both recruitment agency staff and in-house HR managers, providing a broader view of market sentiment

  • Concrete managerial indicators: Respondents expected the AI tool to “reduce time for placement” (58%) and to “get the right candidates and fill vacancies on time” (25%), offering actionable benchmarks

  • Positive adoption climate: More than 92% of the interviewees expressed favorable perceptions of AI screening, suggesting commercial viability for solutions such as SmartRecruit

  • Awareness of implementation challenges: The discussion acknowledges the difficulty of educating the market about AI’s supportive, rather than disruptive, role, demonstrating reflexivity regarding change management

 

Research Gaps

 

  • Empirical validation gap: Future work should deploy SmartRecruit (or comparable systems) in live-hiring campaigns to assess accuracy, cost savings, bias mitigation and user satisfaction relative to traditional methods

  • Theoretical integration gap: Integrating frameworks such as the Unified Theory of Acceptance and Use of Technology (UTAUT) or the technology-organization-environment (TOE) model would deepen explanatory power and facilitate cross-study comparison

  • Stakeholder-experience gap: The current study focuses on recruiter perceptions; subsequent research must examine applicant experiences, fairness outcomes and organizational culture shifts stemming from AI-mediated hiring

  • Longitudinal evidence gap: Cross-sectional interviews capture attitudes at one point in time; longitudinal or experimental designs are needed to track how acceptance, performance and ethical perceptions evolve as AI tools mature

  • Cross-cultural and sectoral gap: Replication in other geographic regions, firm sizes and industry verticals will clarify the contextual moderators of AI recruitment adoption and effectiveness

 

Paper 9

“Emotional State Profiling of Sales Candidates for Smart Recruitment Decision Support” [8].

 

Principal Weaknesses

 

  • Exploratory scope: Findings are based on “preliminary results” obtained from a single volunteer rather than a statistically adequate sample, so external validity is minimal

  • Limited classifier reliability: The authors acknowledge that subtle distinctions between neutral and positive affect are “more likely” the result of noise and “need to be refined…to get more consistent results,” indicating measurement error

  • Artificial recording setup: Eye coordinates are estimated with two green dots placed on the candidate’s forehead, an intrusive procedure that would be impractical in routine recruiting

  • Constrained emotion taxonomy: The neural network is trained only on “joy” and “anger” exemplars from the Cohn-Kanade database plus artificially generated neutral frames, restricting its ability to recognize a fuller range of affective nuances

  • No performance validation: The study stops at correlating affective shifts with questionnaire responses; it does not test whether the detected emotions predict hiring decisions or subsequent sales effectiveness

 

Key Strengths

 

  • Conceptual novelty: This work is among the first to integrate a psychological selling-behavior model with dynamic facial-expression analysis for recruitment decision support

  • Real-time affect monitoring: The system registers transient affect shifts at 10 frames per second and associates them with time-stamped question inputs, enabling fine-grained analysis of candidate commitment

  • Grounding in established theory: It combines a three-dimensional affect-space model with the Buzzotte-Lefton-Sherberg salesperson-behavior framework to produce a unique behavioral and emotional ontology

  • Diagnostic illustrative value: Divergence graphs show how opposing affect-response patterns can flag areas to probe in follow-up interviews, offering practical illustrative value for managers

 

Research Gaps

 

  • Empirical generalization gap: Large-scale, multi-candidate studies are required to determine the robustness of affect-behavior correlations across demographic and cultural contexts

  • Algorithmic refinement gap: Future work should incorporate richer emotion corpora and multimodal cues (e.g., voice and physiology) to improve classification accuracy beyond the current three-class scheme

  • Predictive validity gap: Longitudinal designs are needed to test whether emotion-informed profiles enhance hiring quality, on-the-job sales performance and retention relative to traditional methods

  • Ecological validity gap: Non-intrusive sensing (e.g., remote photoplethysmography or camera-only eye tracking) should replace the current marker-based setup to ensure candidate comfort and real-world feasibility

  • Ethical and privacy gap: Systematic analysis of consent, data storage and bias implications of emotional AI in hiring remains absent and warrants rigorous exploration

 

Paper 10

“The effects of automation on hiring practices and staff allocation in academic libraries in Tennessee” [9].

 

Principal Weaknesses (Study Limitations)

 

  • Single-state focus: The survey is confined to public and private academic libraries in Tennessee, and the author cautions that extrapolation elsewhere “should be done, if at all, with a degree of caution”

  • Modest, self-selected sample: Only forty-four library directors participated, limiting statistical power and increasing the risk of non-response bias

  • Cross-sectional, self-report design: Data were gathered at one point in time via a questionnaire, preventing causal inference and making findings vulnerable to common method variance

  • Predominantly quantitative instrument: The study relies on closed-ended survey items; no qualitative interviews or observational data are used to contextualize the numerical results

  • Temporal relevance: The staffing landscape has evolved considerably since the mid-1990s, so conclusions may not reflect current post-digital library environments

 

Key Strengths

 

  • Clear problem framing: The paper explicitly links automation to “who is hired, how professionals and paraprofessionals are recruited, what skills are essential … and how staff may be allocated”

  • Theory-driven research questions: Four specific hypotheses test changes in professional, support and specialist staffing before and after automation

  • Instrument validation: The survey underwent a two-stage pilot study with library directors to refine clarity and enhance reliability before full deployment

  • Balanced treatment of professional and support staff: The analysis considered both credentialed librarians and paraprofessionals, revealing that neither group experienced significant net gains or losses after automation

  • Practical relevance: The findings document that 77% of libraries altered support-staff job descriptions post-automation, offering actionable insight for workforce planning
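The before/after staffing hypotheses could be approximated with a simple paired sign test of the kind sketched below. The FTE figures and the choice of test are invented for illustration and do not reproduce the study's actual analysis; a large p-value is consistent with the paper's finding of no significant net staffing change.

```python
from math import comb

def sign_test_p(pre, post):
    """Two-sided sign test for paired before/after counts: under the
    null of no change, increases and decreases are equally likely."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)              # number of increases
    # Double the smaller binomial tail, X ~ Binomial(n, 0.5)
    tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Invented professional-staff FTE counts for 10 libraries, pre/post automation
pre  = [5, 3, 4, 6, 2, 5, 7, 3, 4, 6]
post = [5, 4, 4, 5, 2, 5, 8, 3, 5, 6]
print(f"sign-test p = {sign_test_p(pre, post):.3f}")  # prints "sign-test p = 0.625"
```

With only four libraries changing and p = 0.625, the null of no systematic staffing shift would not be rejected, mirroring the study's conclusion for both professional and support staff.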

 

Research Gaps

 

  • Geographic generalizability gap: Similar investigations in other states, countries and library sectors are required to test whether the Tennessee patterns hold under different regulatory and budgetary regimes

  • Longitudinal evidence gap: Follow-up studies tracking libraries over multiple years would clarify how staffing effects evolve as technologies mature, an issue that the cross-sectional design cannot address.

  • Qualitative insight gap: Ethnographic or interview-based work could uncover the lived experiences and organizational cultures that quantitative surveys alone cannot capture

  • Contemporary technology gap: Emerging digital services (e-resource management, data analytics, AI reference tools) introduce skill demands unimagined in the original study, warranting renewed inquiry

 

Paper 11

“The role of digital platforms in transforming HRM: A scoping review across sectors” [10].

 

Main Weaknesses

 

  • No primary empirical evidence: This article is a scoping review that summarizes existing studies without collecting original data, which limits its ability to establish causal relationships or measure effect sizes

  • Uneven sectoral representation: While it provides rich detail on technology-intensive industries, coverage of healthcare, education, manufacturing and agriculture is noticeably thinner, mirroring the imbalance the review itself describes

  • Limited longitudinal perspective: The review notes a scarcity of long-term evaluations of digital HRM initiatives, yet it does not remedy this gap through time-series analysis or historical comparison

  • Methodological transparency constraints: The study does not specify inclusion/exclusion criteria, database scope or quality-assessment procedures in depth; hence, reproducibility and risk-of-bias appraisal remain unclear (implied across the methods discussion in the introductory and literature-gap sections)

 

Key Strengths

 

  • Comprehensive synthesis: By drawing together recent work on AI, cloud computing, collaboration tools and sectoral practice, this review offers a panoramic map of digital HRM trends and challenges

  • Integrative theoretical framing: The review juxtaposes TAM, Diffusion of Innovations and Socio-Technical Systems theory to explain adoption dynamics, giving scholars a multi-lens conceptual foundation

  • Attention to ethics and employee experience: The discussion explicitly tackles privacy, algorithmic bias and trust, widening the debate beyond purely technical efficiency considerations

  • Sector-specific insight: Comparative analysis of IT and finance versus healthcare, education and manufacturing illuminates contextual contingencies that shape technology uptake

  • Action-oriented recommendations: The paper closes with concrete guidance on customization, capability building and ethical governance, offering practical value for HR professionals

 

Identified Research Gaps 

 

  • Sectoral breadth gap: Empirical studies are still needed in under-represented domains such as healthcare, public education, traditional manufacturing and agriculture to verify whether the benefits and risks of digital HRM generalize beyond early adopting industries

  • Longitudinal impact gap: Few investigations have tracked the sustained effects of digital platforms on employee engagement, organizational culture or ethical outcomes over time, calling for multi-year designs and repeated measures

  • Theory-practice integration gap: Limited work combines robust theoretical models with real-world evidence across multiple contexts; future research should operationalize adoption theories in field experiments or large-scale surveys

  • Ethical-governance metrics gap: While the review stresses privacy and bias concerns, standardized indicators and auditing frameworks to monitor fairness, transparency and employee trust remain underdeveloped
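One candidate for such a standardized indicator is the EEOC four-fifths (80%) adverse-impact heuristic. The sketch below is a hypothetical illustration of how a hiring pipeline could be audited against it; the function names and figures are assumptions, not an auditing framework drawn from any reviewed study.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Pass a group only if its selection rate is at least 80% of the
    highest group's rate (the EEOC four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top >= threshold) for g, r in rates.items()}

# Hypothetical screening outcomes from an AI resume filter
outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))  # group_b: 0.30/0.45 ≈ 0.67 < 0.8 → flagged
```

Routine reporting of such ratios, alongside transparency and trust measures, is the kind of auditable metric the gap identified above calls for.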

Limitations of the study

While this review offers a comprehensive overview of AI-driven recruitment and HR automation, it is important to recognise several limitations that may affect the interpretation of the findings and recommendations.

  • Restricted evidence base: The synthesis is based on only 11 peer-reviewed studies published in English between 2001 and 2025. The exclusion of conference papers, practitioner reports, dissertations and non-English publications introduces potential publication and language biases, potentially omitting relevant insights, particularly from emerging economies and small and medium-sized enterprises (SMEs). Owing to the limited corpus and diverse research designs, the review employed a narrative rather than a statistical synthesis, precluding the quantification of pooled effect sizes or the testing of moderating influences

  • Skewed topical coverage: Nine of the eleven studies focused primarily on recruitment, leaving subsequent HR activities such as performance management, compensation, retention and employee well-being largely unexplored. Consequently, generalisations beyond the hiring stage should be approached cautiously

  • Geographic and sectoral concentration: A significant portion of the evidence originates from single-country or Global North contexts (for example, South Africa, Australia, the United States and the United Kingdom) and technology-intensive or large-firm settings. Consequently, the findings may under-represent the conditions in SMEs, public-sector organisations and the Global South

  • Methodological limitations inherited from primary studies: The predominance of cross-sectional, self-report surveys limits causal inference and increases common-method bias across the evidence base. Several studies utilised non-probability or self-selected samples, further constraining external validity. Formal quality-assessment procedures (e.g. risk-of-bias scoring) were not conducted for each included article, reflecting the gaps noted in one of the source reviews

  • Rapid technological and regulatory evolution: AI capabilities, data protection statutes and fairness standards are advancing rapidly. Findings from studies conducted as recently as 2024 may already be outdated, particularly for emerging modalities such as blockchain-verified credentials or emotion-recognition analytics, which have been evaluated only in laboratory settings

  • Absence of stakeholder triangulation: The review aggregates recruiter- and organisation-centric perspectives; relatively little empirical evidence was available on candidate experiences, line-manager adoption or long-term employee outcomes, limiting the ability to draw holistic conclusions about human-AI interaction across the HR triad

  • Lack of longitudinal insight: None of the included empirical studies followed organisations over multiple time points; consequently, the review cannot address the sustained impact of AI adoption on hiring quality, organisational culture or equity

Collectively, these limitations highlight the necessity for broader, more diverse and methodologically rigorous research before definitive claims can be made regarding the long-term efficacy, fairness and contextual transferability of smart recruitment and HR automation systems.

 

A comparative analysis of research gaps across the 11 papers reviewed here reveals several common, high-priority gaps, each identified in seven or more papers. These include:

 

  • Longitudinal evidence: There is a need to track outcomes over time rather than relying solely on one-time surveys

  • Functional breadth: There is a scarcity of studies examining AI's impact beyond recruitment, such as in performance management, compensation and well-being

  • Empirical validation: There is a lack of empirical validation of proposed tools or governance models in live settings

  • Contextual diversity: There is an over-representation of large firms and Anglophone or Global-North settings, with limited research on SMEs, emerging economies and varied sectors

  • Ethical measurement: There are few standardized metrics or audits for bias, transparency or employee trust

Secondary, recurrent gaps, identified in three to six papers, include:

  • Stakeholder perspectives: There is a need to consider perspectives beyond recruiters, such as candidates, line managers and the HR triad

  • Theoretical integration: There is a need for integration with established adoption or socio-technical models

  • Intervention effectiveness: There is a need to assess the effectiveness of interventions such as bias audits, explainable AI and training

Isolated or emerging gaps, identified in two or fewer papers, include:

  • Emotion-AI: There is a need to assess the ethical and ecological validity of Emotion-AI in hiring. 

  • Blockchain-AI hybrid: There are implementation hurdles associated with Blockchain-AI hybrids

  • Staffing effects: There is a need to examine the effects of automation on staffing in non-corporate settings, such as academic libraries

CONCLUSION

Across the 11 reviewed papers, AI-enabled recruitment and broader HR automation consistently demonstrated measurable efficiency gains, yet they remained constrained by methodological, contextual and ethical limitations. Survey-based studies indicate widespread adoption, with 78% of organizations utilizing at least one AI tool and reporting reductions in time-to-hire ranging from 30% to 63%. Employees and recruiters also perceived significant productivity benefits, with 80.6% strongly agreeing that AI performs HR tasks more quickly. However, most of the evidence is cross-sectional and self-reported, which limits causal inference and external validity. Functional coverage is uneven, with recruitment being predominant, while downstream activities such as performance management, compensation and retention receive less empirical attention. Ethical and governance issues, including algorithmic bias (reported by 41% of respondents) and data privacy concerns (35%), are acknowledged but are seldom evaluated through rigorous field studies. Furthermore, there is a geographic and sectoral concentration in higher-income, technology-intensive contexts, leaving SMEs, emerging economies and non-tech industries under-represented. Addressing these gaps requires longitudinal, mixed-method and intervention-based research that emphasises stakeholder diversity and responsible AI governance.

REFERENCES
  1. Bangura, S. et al. "Unraveling the reshaping of human resource management function: The mediating role of artificial intelligence." International Journal of Management and Data Analytics, vol. 5, no. 1, 2025, pp. 191–202.

  2. Patil, S. et al. "Artificial intelligence and smart recruitment." 12th International HR Conference on Navigating the Human Capital Management in the Digital Era, 19–20 Dec. 2024, Shri Dharmasthala Manjunatheshwara Institute for Management Development, Mysuru, India.

  3. Dima, J. et al. "The effects of artificial intelligence on human resource activities and the roles of the human resource triad: Opportunities and challenges." Frontiers in Psychology, vol. 15, 2024, p. 1360401.

  4. Devi, D.P. et al. "Smart recruitment: AI-driven screening and blockchain verification for accurate hiring." International Journal of Management Research and Business Strategy, vol. 14, no. 1, 2024, pp. 230–248.

  5. Horodyski, P. "Recruiter's perception of artificial intelligence (AI)-based tools in recruitment." Computers in Human Behavior Reports, vol. 10, May 2023, p. 100298. https://doi.org/10.1016/j.chbr.2023.100298.

  6. Nordstrom, S. and M.R. Sanfilippo. "Data governance for equitable automated hiring." 2022.

  7. Srisuchat, M.Y. and S. Teerakapibal. "Artificial intelligence for recruitment." Thammasat University, 2019.

  8. Khosla, R. and C. Lai. "Emotion-based smart recruitment system." Knowledge-Based Intelligent Information and Engineering Systems, Pt 2, Proceedings, vol. 3682, 2005, pp. 243–250.

  9. Kenerson, M.E. The effects of automation on hiring practices and staff allocations in academic libraries in four-year and two-year institutions in Tennessee. 1997.

  10. Брасанан, К. "The role of digital platforms in transforming HRM: A scoping review across sectors." Информатика. Экономика. Управление – Informatics. Economics. Management, vol. 4, no. 2, 2025, pp. 2028–2034.
