How Supply Chain Dependencies Complicate Bias Measurement and Accountability Attribution in AI Hiring Applications

2026-04-24

Computers and Society · Artificial Intelligence
AI summary

The authors explain that AI hiring tools are built by many companies working together, such as data suppliers and software developers, which makes it hard to identify who is responsible for unfair bias. Even if each part seems fair on its own, the parts can combine to produce discrimination, and the organizations deploying these tools often cannot see how they work inside. The authors recommend better audits, clearer rules, and ongoing monitoring so that everyone involved can verify that the whole system is fair and accountable. They emphasize that solving this problem requires coordination among technical experts, organizations, and regulators.

algorithmic bias, AI hiring systems, accountability, supply chains, proprietary algorithms, regulatory compliance, system-level audits, information asymmetry, vendor guidelines, continuous monitoring
Authors
Gauri Sharma, Maryam Molamohammadi
Abstract
The increasing adoption of AI systems in hiring has raised concerns about algorithmic bias and accountability, prompting regulatory responses including the EU AI Act, NYC Local Law 144, and Colorado's AI Act. While existing research examines bias through technical or regulatory lenses, both perspectives overlook a fundamental challenge: modern AI hiring systems operate within complex supply chains where responsibility fragments across data vendors, model developers, platform providers, and deploying organizations. This paper investigates how these dependency chains complicate bias evaluation and accountability attribution. Drawing on a literature review and regulatory analysis, we demonstrate that fragmented responsibilities create two critical problems. First, bias emerges from component interactions rather than isolated elements, yet proprietary configurations prevent integrated evaluation. A resume parser may function without bias in isolation but contribute to discrimination when integrated with specific ranking algorithms and filtering thresholds. Second, information asymmetries mean deploying organizations bear legal responsibility without technical visibility into vendor-supplied algorithms, while vendors control implementations without meaningful disclosure requirements. Each stakeholder may believe it is compliant; nevertheless, the integrated system may produce biased outcomes. Analysis of implementation ambiguities reveals these challenges in practice. We propose multi-layered interventions including system-level audits, vendor guidelines, continuous monitoring mechanisms, and documentation across dependency chains. Our findings reveal that effective governance requires coordinated action across technical, organizational, and regulatory domains to establish meaningful accountability in distributed development environments.
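The interaction effect the abstract describes can be made concrete with a minimal sketch. This toy simulation (not from the paper; all components, feature names, weights, and the applicant pool are hypothetical) shows a parser, a ranker, and a threshold filter that each look unobjectionable in isolation, yet together produce disparate selection rates, which only a system-level check such as an impact-ratio monitor would catch.

```python
# Illustrative sketch: three individually "fair"-looking supply-chain
# components whose interaction yields a biased outcome. Hypothetical data.

def parse_resume(resume):
    # Component 1 (data vendor): extracts features identically for every
    # group. Audited alone, it shows no group-dependent behaviour.
    return {"skill": resume["skill"], "gap_years": resume["gap_years"]}

def rank_score(features):
    # Component 2 (model developer): scores on a facially neutral feature
    # (employment gaps) that happens to correlate with group membership.
    return features["skill"] - 2.0 * features["gap_years"]

def passes_filter(score, threshold=7.0):
    # Component 3 (platform/deployer): a uniform filtering threshold.
    return score >= threshold

# Hypothetical pool: identical skill distributions, but group B carries
# more career gaps, a pattern no single component ever observes in full.
pool = (
    [{"group": "A", "skill": s, "gap_years": 0} for s in (6, 7, 8, 9, 10)]
    + [{"group": "B", "skill": s, "gap_years": 1} for s in (6, 7, 8, 9, 10)]
)

rates = {}
for g in ("A", "B"):
    members = [c for c in pool if c["group"] == g]
    chosen = [c for c in members if passes_filter(rank_score(parse_resume(c)))]
    rates[g] = len(chosen) / len(members)

# A system-level monitor, e.g. the selection-rate impact ratio used in NYC
# Local Law 144 bias audits (cf. the EEOC four-fifths rule of thumb),
# flags what the per-component audits above would miss.
impact_ratio = rates["B"] / rates["A"]
print(rates)         # group A selects 4/5, group B only 2/5
print(impact_ratio)  # 0.5, well below the 0.8 rule-of-thumb threshold
```

The point is not the specific weights but that the disparity is a property of the composed system: auditing the parser, ranker, or threshold separately, as each vendor's proprietary boundary encourages, would surface nothing.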