PhD Literature Review Sample: Can AI Algorithms Eliminate Bias in Recruitment? Evidence from 5 Medium-Sized UK Retail Companies

1. Search Strategy

1.1. Keywords and Search Strategy

To facilitate the subsequent PRISMA search, the author of this thesis identified a set of primary and secondary keywords for the systematic literature search (Fonseca et al., 2019). The first group focused on the problems of bias in recruitment, with an emphasis on medium-sized firms and the use of artificial intelligence (AI) in recruitment. These broad terms ensured that the systematic search would capture all studies exploring these topics, supporting the subsequent selection of studies focusing on individual problems (Firth et al., 2020; Jegede, 2021). Secondary keywords addressed the wider context of machine learning, employment automation, and bias reduction or elimination in employment. This narrower focus allowed the author to identify studies exploring specific aspects of the studied phenomena (Rowe, 2021). The analysis of the literature covers both the strategic perspective on bias in recruitment and specific types of bias, such as affinity bias and confirmation bias. These phenomena are discussed both as existing problems studied by past literature on human decision-making in human resource management (HRM) and in their manifestations in AI-driven HRM (Haefner et al., 2021; Howard & Borenstein, 2018).

Table 1: List of Keywords

Primary Keywords | Secondary Keywords
AI algorithms | ethical AI
AI in HRM | human resources technology
AI-assisted recruitment | machine learning
bias in recruitment | recruitment automation
AI bias in HRM | recruitment discrimination
AI bias in recruitment | bias mitigation in recruitment
bias in SME recruitment | data-driven recruitment
human errors in recruitment | predictive analytics
affinity bias | employment bias
confirmation bias | algorithmic fairness
unconscious bias | recruitment diversity
algorithmic bias |
medium-sized retail |

Primary and secondary keywords were combined using the Boolean operators AND, OR, and NOT (Phillips & Johnson, 2022). These operators made it possible to combine multiple entries from the list above to narrow the search focus or to link them with specific types of studies, such as “qualitative”, “quantitative”, or “mixed-methods” studies, in order to obtain specific types of data for the literature analysis (Brabazon et al., 2020). Additionally, truncation and wildcards were applied to keywords such as “technolog*” to ensure that all variants of these terms appearing in article titles or texts would be captured (Smith & Felix, 2019).
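The keyword-combination logic described above can be sketched in a few lines of Python. This is an illustrative aid only, not part of the original search protocol: the helper name `build_query` and the sample keyword subsets are assumptions, with keywords taken from Table 1.

```python
# Illustrative sketch: composing Boolean search strings from a subset of the
# primary and secondary keywords listed in Table 1.
primary = ["AI in HRM", "bias in recruitment", "AI-assisted recruitment"]
secondary = ["machine learning", "recruitment automation", "algorithmic fairness"]

def build_query(primary_term, secondary_term, study_type=None):
    # Quote multi-word terms and join them with Boolean AND
    query = f'("{primary_term}" AND "{secondary_term}")'
    if study_type:
        # Optionally restrict results to a study type, e.g. "qualitative"
        query = f'{query} AND "{study_type}"'
    return query

# Truncation: a wildcard such as technolog* covers technology, technologies,
# and technological in most database search interfaces.
queries = [build_query(p, s) for p in primary for s in secondary]
print(queries[0])  # ("AI in HRM" AND "machine learning")
```

Each of the nine generated strings corresponds to one primary/secondary pairing that could be entered into a database search field.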

1.2. Inclusion and Exclusion Criteria

All studies identified via the aforementioned primary and secondary keywords were then compared against the following inclusion and exclusion criteria (Eva, 2018). This step was deemed necessary to ensure that only high-quality sources directly related to the problems of AI algorithms and bias in recruitment were used in the systematic literature review (Cassuto & Weisbuch, 2021). Overall, five inclusion and five exclusion criteria were formulated for this purpose. The inclusion criteria comprised the use of peer-reviewed journals from the Q1 and Q2 categories, the focus on studies providing explicit details about their methodologies, and the selection of studies published after 2013 and directly related to the utilisation of AI in recruitment (Brown, 2021). These provisions ensured that the selected sources did not suffer from data obsolescence and possessed high levels of validity, reliability, generalisability, and reproducibility. As discussed further, multiple studies did not match these criteria and were excluded due to their focus on non-HRM contexts of AI application, lack of methodological details, or other exclusion criteria.

Table 2: Inclusion and Exclusion Criteria

Inclusion Criteria | Exclusion Criteria
Only the articles published in peer-reviewed journals were included in the final selection. | All articles published in low-quality or non-peer-reviewed journals were excluded.
All included studies had to explicitly discuss their methodologies, data collection, and data analysis processes. | If the articles did not explicitly discuss their methodologies, data collection methods, and data analysis instruments, they were excluded.
Academic articles providing practitioner evidence on the topic had to be published after 2013 to ensure that their findings and analysed HRM technologies were relevant to the current study. | Sources published before 2013 were only used if they included original academic theories, seminal works in the field, or provided established definitions of the terms used in this thesis.
The analysis only included the studies focused on the use of AI in recruitment. | The articles exploring the use of AI in other areas of HRM, such as training and development, were excluded from the analysis.
The review included the studies that provided all relevant information about sample size, composition, interview questions, survey questions, and other data allowing readers to appraise their reproducibility and transparency. | The literature review excluded the studies with limited sample sizes or the studies that did not publish their questionnaires, interview questions, and other critical data demonstrating their reproducibility and transparency.
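The screening logic implied by Table 2 can be expressed as a simple filter. This sketch is purely illustrative: the record structure and field names (`peer_reviewed`, `reports_methodology`, and so on) are hypothetical stand-ins for the judgments a reviewer makes manually.

```python
# Hypothetical sketch of the Table 2 screening decisions applied to
# article records; field names are illustrative assumptions.
def passes_screening(article):
    if not article["peer_reviewed"]:
        return False  # excluded: low-quality or non-peer-reviewed outlet
    if not article["reports_methodology"]:
        return False  # excluded: methodology not explicitly discussed
    # Pre-2013 sources are retained only as seminal theory or definitions
    if article["year"] < 2013 and not article["seminal_work"]:
        return False
    if article["topic"] != "AI in recruitment":
        return False  # excluded: other HRM areas, e.g. training
    # Finally, the study must disclose sample and instrument details
    return article["reports_sample_and_instruments"]

records = [
    {"peer_reviewed": True, "reports_methodology": True, "year": 2020,
     "seminal_work": False, "topic": "AI in recruitment",
     "reports_sample_and_instruments": True},
    {"peer_reviewed": True, "reports_methodology": False, "year": 2019,
     "seminal_work": False, "topic": "AI in recruitment",
     "reports_sample_and_instruments": True},
]
included = [r for r in records if passes_screening(r)]
print(len(included))  # 1
```

In practice these checks are made by the reviewer rather than by code; the sketch simply shows that the five criteria act as sequential filters, each capable of excluding a record on its own.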

1.3. PRISMA Chart

The following PRISMA chart presents the Identification, Screening, and Inclusion phases of the literature review (BMJ, 2021; Siddiqui & Gorard, 2022). These elements allow researchers to structure the source selection process and integrate the earlier discussed strategies and instruments in a systematic manner. The initial analysis involved three key databases, namely ScienceDirect (2024), SAGE (2024), and Taylor & Francis (2024). These platforms were searched using the earlier provided primary and secondary keywords. After the initial identification, duplicate records were removed before the Screening phase (Smith & Felix, 2019). After the exclusion of 12 articles in total, the remaining articles were assessed for eligibility against the earlier formulated inclusion and exclusion criteria. The removal of 14 studies at this stage left 35 studies in total, which were analysed in the systematic review of literature.
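The counts reported above imply simple flow arithmetic, which can be checked mechanically. Note that the initial identification and duplicate-removal totals are not stated in the text, so only the reported screening figures are reproduced here.

```python
# Reconstructing the PRISMA flow arithmetic from the figures given in
# the text; identification/duplicate counts are not stated and are omitted.
excluded_at_screening = 12    # articles removed during Screening
excluded_at_eligibility = 14  # studies removed at the eligibility check
final_included = 35           # studies analysed in the review

assessed_for_eligibility = final_included + excluded_at_eligibility  # 49
screened = assessed_for_eligibility + excluded_at_screening          # 61
print(screened, assessed_for_eligibility, final_included)  # 61 49 35
```

That is, 61 de-duplicated records entered Screening, 49 reached the eligibility assessment, and 35 survived to the final synthesis.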

Figure 1: PRISMA Flow Chart


2. Literature Review

Bias in recruitment is a relatively well-studied sphere. Prior research primarily focuses on phenomena such as unconscious bias, which draws recruiters’ attention towards candidates with similar backgrounds and/or interests (Budhwar et al., 2022); affinity bias, which attracts them to candidates with similar experiences or characteristics such as education or demographic background (Garcia-Arroyo & Osca, 2019); confirmation bias, which leads recruiters to focus on candidate characteristics that fit a certain ‘model of success’ rather than specific job requirements; and other discriminatory practices, including gender, ethnic, and racial bias (Haefner et al., 2021; Howard & Borenstein, 2018). Similarly, the horns effect, stereotyping, and the halo effect were all reported in prior studies as widespread problems in HRM (Kshetri, 2021; Malik & Lenka, 2020). While many of these problems are associated with decision-making bias in general, they can also be inherently linked with the personalities of recruiters and their unconscious willingness to look for people sharing the same background, life experiences, job experiences, or education (Pessach & Shmueli, 2021). Automated systems were expected not to share these limitations, since they have no additional reference against which to compare candidates besides basic job requirements.

AI techniques in HRM are mainly used to automate routine operations, such as application sorting or resume screening (Albaroudi et al., 2024; Ozkazanc-Pan, 2021). However, they are becoming increasingly popular as decision support mechanisms, providing recruiters with an additional opinion on the selection of the best-suited candidates for particular positions (Raffoni et al., 2018). Unfortunately, multiple authors report that AI systems are prone to the phenomenon called ‘algorithmic bias’ (Rozado, 2020). These issues occur when such instruments rely on non-representative samples, limited available data, or over-represented population groups (Molinillo & Japutra, 2017). The most recent examples of such measurement biases, linking biases, and omitted variable biases were recognised in the HRM automation experiments of Amazon and several other high-profile companies (Tuffaha, 2023). In their tests, AI-powered systems prioritised white male applicants, since they were trained on samples where this population group represented the majority of entries, and they associated these demographic characteristics with greater eligibility for recruitment (Nicolaescu et al., 2020).
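The mechanism by which a skewed training sample produces biased recommendations can be illustrated with a deliberately naive toy model. This sketch does not represent any specific production system: the frequency-based scorer and the two hypothetical groups "A" and "B" are assumptions made purely to show how historical imbalance propagates into scores.

```python
# Toy illustration of algorithmic bias inherited from training data:
# a naive scorer learns group membership rather than job-relevant skill.
from collections import Counter

# Hypothetical historical hires: 9 of 10 past hires belong to group "A"
training_hires = ["A"] * 9 + ["B"] * 1

group_rates = Counter(training_hires)
total = sum(group_rates.values())

def hire_score(candidate_group):
    # The score equals the group's share of past hires, so the model
    # rewards demographic similarity to previous successful candidates
    return group_rates[candidate_group] / total

print(hire_score("A"), hire_score("B"))  # 0.9 0.1
```

Even though no group label is ever treated as a protected attribute explicitly, the model reproduces the historical imbalance, which mirrors the measurement and omitted-variable biases described above: the data, not the algorithm's intent, carries the prejudice.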

These considerations suggest that some of the problems associated with AI algorithms may stem not from these instruments per se but from existing HRM operations, samples, and approaches that are inherently biased before AI application (Minbaeva, 2021; Ostheimer et al., 2021; Seppala & Malecka, 2024). AI is presently seen as a tool expected to provide a more objective and fairer appraisal of applicant skills and competencies without evaluating their social group or demographic characteristics. However, this perceived superiority over human decision-making may not be realised without the integration of psychometric rigour ensuring objectivity in AI decisions related to personnel selection, as noted by Pereira et al. (2021) and Santhosh and Mohanapriya (2021). Additionally, AI still offers processing capabilities superior to those of any human recruiter. As a result, it can consider hundreds of candidate applications in an instant while identifying implicit data patterns that determine the predictions of their suitability (Serrano-Guerrero et al., 2021). In this scenario, the possibility of human-like bias may be outweighed by the advantages of multifaceted data analysis reducing bounded rationality effects.

That said, one of the potential advantages of AI over human recruiters, mentioned by such authors as Lee (2018) and Lin et al. (2021), is the standardisation of personnel evaluation processes. As machine learning mechanisms advance with every generation, they can be trained to follow the same procedures and strategies. This way, individual bias introduced by specific recruiters can be eliminated, since decision-making will rely on standardised logic driven by a data-centric approach (Garg et al., 2021). However, such procedures may be affected by multiple ethical considerations, such as the use of real candidates’ data for advancing AI systems (Jaiswal et al., 2022). If such applicants are not aware of these procedures and do not provide explicit informed consent, this can be classified as a violation of the GDPR and other applicable regulations (Fernandez-Martine & Fernandez, 2020). These considerations suggest that the potential of AI systems may be appraised as substantial, but many conditions must be met for it to be realised and for such systems to become superior to most human recruiters.

From a medium-sized company’s perspective, the use of AI algorithms in recruitment may be seen as both an opportunity and a source of potential challenges, which informed the focus of this study exploring this problem from their perspective (Escolar-Jimenez et al., 2019; Gonzalez et al., 2022). On the one hand, these instruments may provide unparalleled cost efficiency in terms of benefit-cost ratio. Highly skilled recruiters capable of working with minimal bias may be costly to employ, while AI systems may provide similar levels of resume screening quality and candidate matching efficiency at a fraction of their price, automating many such operations and reducing the time required to complete them (Cabello & Lobillo, 2017). On the other hand, the aforementioned issues with the demonstrated behaviours of such solutions strongly imply that human specialists may be necessary to monitor their selection algorithms and adjust them if necessary (Cooke et al., 2020; Kambur & Akar, 2022). This reduces the potential cost advantage, since bias-free automated recruitment may not yet be attainable without highly competent recruiters controlling such operations and eliminating bias introduced by AI mechanisms.

Additional problems mentioned in modern literature include medium-sized companies’ limited access to the big data needed to train AI systems and adjust them to their unique recruitment needs (Liu et al., 2021; Prikshat et al., 2022). Such solutions also demand substantial expertise to operate, which may inform the decision to outsource hiring operations to third parties. In this scenario, the analysis of potential bias may not be available to the firms in question, which exposes them to risks if regulators recognise problems in this field and hold the firms responsible for unethical recruitment practices as a result (Connelly et al., 2021; Nicolaescu et al., 2020). This problem is linked with the concept of designer bias, which emerges because system designers, not being professionals in this sphere, may not possess full awareness of social issues in employment (Minbaeva, 2021; Ostheimer et al., 2021). As a result, their solutions may not reflect crucial recruitment practices, including bias elimination (Qamar et al., 2021). This leads to the earlier discussed outcomes, where AI-powered systems demonstrate similar prejudice due to the wrong choice of training data or sub-optimal algorithmic models.

References

Albaroudi, E., Mansouri, T., & Alameer, A. (2024). A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring. AI, 5(1), 383-404. https://doi.org/10.3390/ai5010019

BMJ. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Research Methods & Reporting, 372(71), 1-10. https://doi.org/10.1136/bmj.n71

Brabazon, T., Lyndall-Knight, T., & Hills, N. (2020). The creative PhD: Challenges, opportunities, reflection. New York: Emerald Publishing Limited.

Brown, G. (2021). How to get your PhD: a handbook for the journey. Oxford: Oxford University Press.

Budhwar, P., Malik, A., De Silva, M., & Thevisuthan, P. (2022). Artificial intelligence – challenges and opportunities for international HRM: a review and research agenda. The International Journal of Human Resource Management, 33(6), 1065-1097. https://doi.org/10.1080/09585192.2022.2035161

Cabello, J., & Lobillo, F. (2017). Sound branch cash management for less: A low-cost forecasting algorithm under uncertain demand. Omega, 70(1), 118-134. https://doi.org/10.1016/j.omega.2016.09.005

Cassuto, L., & Weisbuch, R. (2021). The new PhD: How to build a better graduate education. London: Johns Hopkins University Press.

Connelly, C., Fieseler, C., Cerne, M., Giessner, S., & Wong, S. (2021). Working in the digitized economy: HRM theory & practice. Human Resource Management Review, 31(1), 1-23. https://doi.org/10.1016/j.hrmr.2020.100762

Cooke, F., Schuler, R., & Varma, A. (2020). Human resource management research and practice in Asia: Past, present and future. Human Resource Management Review, 30(4), 1-19. https://doi.org/10.1016/j.hrmr.2020.100778

Escolar-Jimenez, C., Matsuzaki, K., Okada, K., & Gustilo, R. (2019). Data-driven decisions in employee compensation utilizing a neuro-fuzzy inference system. International Journal of Emerging Trends in Engineering Research, 7(8), 1-19. https://doi.org/10.30534/ijeter/2019/10782019

Eva, O. (2018). The A-Z of the PhD Trajectory: A Practical Guide for a Successful Journey. Berlin: Springer.

Fernandez-Martine, C., & Fernandez, A. (2020). AI and recruiting software: Ethical and legal implications. Paladyn, Journal of Behavioral Robotics, 1(1), 199-216. https://doi.org/10.1515/pjbr-2020-0030

Firth, K., Connell, L., & Freestone, P. (2020). Your PhD survival guide: planning, writing, and succeeding in your final year. London: Routledge.

Fonseca, C., Lopes, M., Mendes, D., Mendes, F., & Garcia-Alonso, J. (2019). Handbook of research on health systems and organizations for an aging society. Hershey: IGI Global.

Garcia-Arroyo, J., & Osca, A. (2019). Big data contributions to human resource management: A systematic review. The International Journal of Human Resource Management, 32(20), 4337-4362. https://doi.org/10.1080/09585192.2019.1674357

Garg, S., Sinha, S., Kar, A., & Mani, M. (2021). A review of machine learning applications in human resource management. International Journal of Productivity and Performance Management, 1(1), 1-16. https://doi.org/10.1108/IJPPM-08-2020-0427

Gonzalez, M., Liu, W., Shirase, L., Tomczak, D., Lobbe, C., Justenhoven, R., & Martin, N. (2022). Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes. Computers in Human Behavior, 130(1), 1-18. https://doi.org/10.1016/j.chb.2022.107179

Haefner, N., Wincent, J., Parida, V., & Gassmann, O. (2021). Artificial intelligence and innovation management: a review, framework, and research agenda. Technological Forecasting and Social Change, 162(1), 1-20. https://doi.org/10.1016/j.techfore.2020.120392

Howard, A., & Borenstein, J. (2018). The ugly truth about ourselves and our robot creations: The problem of bias and social inequity. Science and Engineering Ethics, 24(1), 1521-1536. https://doi.org/10.1007/s11948-017-9975-2

Jaiswal, A., Arun, C., & Varma, A. (2022). Rebooting employees: up skilling for artificial intelligence in multinational corporations. The International Journal of Human Resource Management, 33(1), 1179-1208. https://doi.org/10.1080/09585192.2021.1891114

Jegede, F. (2021). Doing a PhD in the social sciences: A student’s guide to post-graduate research and writing. London: Routledge.

Kambur, E., & Akar, C. (2022). Human resource developments with the touch of artificial intelligence: a scale development study. International Journal of Manpower, 43(1), 168-205. https://doi.org/10.1108/IJM-04-2021-0216

Kshetri, N. (2021). Evolving uses of artificial intelligence in human resource management in emerging economies in the global South: some preliminary evidence. Management Research Review, 44(7), 970-990. https://doi.org/10.1108/MRR-03-2020-0168

Lee, N. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society, 16(3), 252-260. https://doi.org/10.1108/JICES-06-2018-0056

Lin, Y., Hung, T., & Huang, L. (2021). Engineering equity: how AI can help reduce the harm of implicit bias. Philosophy & Technology, 34(1), 65-90. https://doi.org/10.1007/s13347-020-00406-7

Liu, P., Qingqing, W., & Liu, W. (2021). Enterprise human resource management platform based on FPGA and data mining. Microprocessors and Microsystems, 80(1), 1-20. https://doi.org/10.1016/j.micpro.2020.103330

Malik, P., & Lenka, U. (2020). Identifying HRM practices for disabling destructive deviance among public sector employees using content analysis. International Journal of Organizational Analysis, 28(3), 719-744. https://doi.org/10.1108/IJOA-02-2019-1658

Minbaeva, D. (2021). Disrupted HR? Human Resource Management Review, 31(4), 1-21. https://doi.org/10.1016/j.hrmr.2020.100820

Molinillo, S., & Japutra, A. (2017). Organizational adoption of digital information and technology: a theoretical review. The Bottom Line, 30(1), 33-46. https://doi.org/10.1108/BL-01-2017-0002

Nicolaescu, S., Florea, A., Kifor, C. V., Fiore, U., Cocan, N., Receu, I., & Zanetti, P. (2020). Human capital evaluation in knowledge-based organizations based on big data analytics. Future Generation Computer Systems, 111(1), 654-667. https://doi.org/10.1016/j.future.2019.09.048

Ostheimer, J., Chowdhury, S., & Iqbal, S. (2021). An alliance of humans and machines for machine learning: hybrid intelligent systems and their design principles. Technology in Society, 66(1), 1-18. https://doi.org/10.1016/j.techsoc.2021.101647

Ozkazanc-Pan, B. (2021). Diversity and future of work: Inequality abound opportunities for all? Management Decision, 59(11), 2645-2659. https://doi.org/10.1108/MD-02-2019-0244

Pereira, V., Hadjielias, E., Christofi, M., & Vrontis, D. (2021). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. Human Resource Management Review, 9(1), 1-20. https://doi.org/10.1016/j.hrmr.2021.100857

Pessach, D., & Shmueli, E. (2021). Improving fairness of artificial intelligence algorithms in privileged-group selection bias data settings. Expert Systems with Applications, 185(15), 1-17. https://doi.org/10.1016/j.eswa.2021.115667

Phillips, E., & Johnson, C. (2022). How to Get a PhD: A handbook for students and their supervisors 7e. London: McGraw-Hill Education (UK).

Prikshat, V., Patel, P., Varma, A., & Ishizaka, A. (2022). A multi-stakeholder ethical framework for AI-augmented HRM. International Journal of Manpower, 43(1), 226-250. https://doi.org/10.1108/IJM-03-2021-0118

Qamar, Y., Agrawal, R., Samad, T., & Jabbour, C. (2021). When technology meets people: the interplay of artificial intelligence and human resource management. Journal of Enterprise Information Management, 34(5), 1339-1370. https://doi.org/10.1108/JEIM-11-2020-0436

Raffoni, A., Visani, R., Bartolini, M., & Silvi, R. (2018). Business Performance Analytics: exploring the potential for Performance Management Systems. Production Planning & Control, 29(1), 1-18. https://doi.org/10.1080/09537287.2017.1381887

Rowe, N. (2021). The realities of completing a PhD: how to plan for success. London: Routledge.

Rozado, D. (2020). Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types. PLoS ONE, 15(4), 1-19. https://doi.org/10.1371/journal.pone.0231189

SAGE (2024, October 17). Sage Journals. Sage Journals. https://journals.sagepub.com/

Santhosh, R., & Mohanapriya, M. (2021). Generalized fuzzy logic based performance prediction in data mining. Materials Today: Proceedings, 45(2), 1770-1774. https://doi.org/10.1016/j.matpr.2020.08.626

ScienceDirect (2024, October 17). ScienceDirect. ScienceDirect. https://www.sciencedirect.com/

Seppala, P., & Malecka, M. (2024). AI and discriminative decisions in recruitment: Challenging the core assumptions. Big Data & Society, 11(1), 1-20. https://doi.org/10.1177/20539517241235872

Serrano-Guerrero, J., Romero, F., & Olivas, J. (2021). Fuzzy logic applied to opinion mining: A review. Knowledge-Based Systems, 222(1), 1-19. https://doi.org/10.1016/j.knosys.2021.107018

Siddiqui, N., & Gorard, S. (2022). Making your doctoral research project ambitious: Developing large-scale studies with real-world impact. New York: Taylor & Francis.

Smith, I., & Felix, M. (2019). A practical guide to dissertation and thesis writing. Cambridge: Cambridge Scholars Publishing.

Taylor & Francis (2024, October 17). Taylor & Francis. Taylor & Francis. https://www.tandfonline.com/

Tuffaha, M. (2023). The Impact of Artificial Intelligence Bias on Human Resource Management Functions: Systematic Literature Review and Future Research Directions. European Journal of Business and Innovation Research, 11(4), 35-58. https://doi.org/10.37745/ejbir.2013/vol11n3123

Author

  • phd_writer_6

    Linda opted to return to education to pursue a PhD in Business and HRM after a career in HRM and management. She has also received training in Speech and Language Therapy and English as a Foreign Language Teaching.
