AI Ethics & Transparency

How Our AI Makes Decisions

We believe AI-powered screening must be transparent, fair, and accountable. This report details how our AI works, how we test for bias, and how we ensure every candidate is treated equitably.

Last updated: April 2026 | Next audit: July 2026

Our AI Principles

Six core principles guide every decision our AI makes.

Explainability

Every risk score comes with a plain-English explanation of exactly which factors contributed and how much weight each received. No black boxes.
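As an illustration only, here is a minimal sketch of what such a breakdown can look like; the factor names, weights, and point model are hypothetical, not our production scoring:

```python
# Minimal sketch of a plain-English score explanation. Factor names,
# weights, and the point model are hypothetical, not our production model.

def explain_score(score: float, contributions: dict[str, float]) -> str:
    """Render a risk score and its per-factor contributions as plain English."""
    lines = [f"Overall risk score: {score:.0f}/100. Contributing factors:"]
    # Report the largest contributions first.
    for factor, points in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  - {factor}: {points:+.1f} points")
    return "\n".join(lines)

print(explain_score(82, {
    "Employment history verified (3 of 3 employers)": +6.0,
    "No relevant court records found": +4.0,
    "Education credential confirmed": +2.0,
}))
```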

Fairness

Our models are tested quarterly for disparate impact across race, gender, age, and disability status. We publish results publicly and retrain when bias is detected.

Human Oversight

AI makes recommendations; humans make decisions. Every adverse action requires human review. Our AI never autonomously rejects a candidate.

Data Minimization

We collect and process only the data strictly necessary for the screening. Personal data is encrypted at rest and in transit, and deleted according to defined retention schedules.
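As a toy illustration of schedule-driven deletion (the data categories and retention periods below are hypothetical, not our actual policy):

```python
from datetime import datetime, timedelta, timezone

# Toy sketch of schedule-driven deletion. Categories and retention
# periods are hypothetical examples, not our actual policy.
RETENTION = {
    "screening_report": timedelta(days=365),
    "raw_source_records": timedelta(days=90),
}

def is_expired(category: str, created_at: datetime) -> bool:
    """True if a record has outlived its retention period and must be deleted."""
    return datetime.now(timezone.utc) - created_at > RETENTION[category]
```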

Candidate Rights

Candidates can request a full explanation of their risk score, dispute any finding, and have their data deleted. We respond within 5 business days.

Continuous Monitoring

We monitor our AI's performance in production for accuracy drift, bias emergence, and edge cases. Anomalies trigger automatic human review.
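As a rough illustration, the sketch below tracks a rolling window of verified outcomes and triggers human review when accuracy drifts below the audited baseline; the window size and tolerance are hypothetical, not our production thresholds.

```python
from collections import deque

# Sketch of accuracy-drift monitoring: keep a rolling window of verified
# prediction outcomes and flag for human review when accuracy drops too far
# below the audited baseline. Window size and tolerance are hypothetical.

BASELINE_ACCURACY = 0.992   # from the most recent audit
TOLERANCE = 0.01            # hypothetical allowed drop before review
WINDOW = 500                # hypothetical rolling-window size

recent = deque(maxlen=WINDOW)

def record_outcome(prediction_was_correct: bool) -> bool:
    """Record one verified outcome; return True if human review is triggered."""
    recent.append(prediction_was_correct)
    if len(recent) < WINDOW:
        return False  # not enough data yet to measure drift
    accuracy = sum(recent) / len(recent)
    return accuracy < BASELINE_ACCURACY - TOLERANCE
```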

How Our AI Generates Risk Scores

1. Data Collection

We aggregate data from court records, employment databases, education registries, and other verified sources. We never scrape social media or use unverified data sources.

2. Record Matching

Our AI uses probabilistic matching on name, date of birth, SSN, and address to ensure we're looking at the right person. Match confidence must exceed 98% before a record is included (see the sketch after step 5).

3. Risk Factor Analysis

Each finding is analyzed for severity, recency, relevance to the position, and jurisdiction-specific rules. The AI considers over 200 factors in its assessment.

4. Score Generation

A composite risk score (0-100) is generated with a detailed breakdown showing exactly which factors contributed and their individual weights.

5. Human Review

Scores below 70 are automatically flagged for human review. Adverse action recommendations always require human approval before proceeding.
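To make steps 2, 4, and 5 concrete, the sketch below combines a match-confidence gate, a weighted composite score, and the human-review flag. The Finding fields, weights, and deduction model are hypothetical illustrations; our production system weighs over 200 factors.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.98   # step 2: records below this confidence are excluded
REVIEW_THRESHOLD = 70.0  # step 5: scores below this are flagged for human review

@dataclass
class Finding:
    description: str
    match_confidence: float  # probabilistic match score from step 2
    weight: float            # hypothetical points deducted from the 0-100 score

def composite_score(findings: list[Finding]) -> tuple[float, list[str], bool]:
    """Return (score, per-factor breakdown, needs_human_review)."""
    score = 100.0
    breakdown: list[str] = []
    for f in findings:
        if f.match_confidence < MATCH_THRESHOLD:
            continue  # step 2: not confident the record belongs to this candidate
        score -= f.weight  # step 4: each included finding contributes its weight
        breakdown.append(
            f"{f.description}: -{f.weight:.1f} (match {f.match_confidence:.0%})"
        )
    score = max(0.0, min(100.0, score))
    return score, breakdown, score < REVIEW_THRESHOLD  # step 5: flag low scores

# Example: one confidently matched finding, one excluded for low confidence.
score, factors, review = composite_score([
    Finding("Misdemeanor, 6 years old, unrelated to role", 0.995, 12.0),
    Finding("Possible record for a similar name", 0.85, 40.0),
])
print(score, review)  # 88.0 False
```

Under this sketch, a candidate scoring, say, 65 would be routed to a human reviewer, and no adverse action could proceed without that reviewer's approval.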

Q1 2026 Audit Results

Independent third-party audit conducted by AI Fairness Lab, LLC.

Overall Accuracy Rate: 99.2% (industry avg: 95.1%)
False Positive Rate: 0.3% (industry avg: 2.8%)
False Negative Rate: 0.5% (industry avg: 2.1%)
Disparate Impact Ratio (Race): 0.92 (EEOC threshold: 0.80)
Disparate Impact Ratio (Gender): 0.95 (EEOC threshold: 0.80)
Disparate Impact Ratio (Age): 0.91 (EEOC threshold: 0.80)
Explainability Score: 94% (target: 90%+)
Average Processing Time: 4.2 hrs (SLA: < 24 hrs)
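For context on the disparate impact figures above, here is a minimal sketch of the EEOC four-fifths (80%) rule: each group's selection rate is divided by the highest group's rate, and a ratio below 0.80 signals potential adverse impact. The groups and counts below are hypothetical, not audit data.

```python
# Sketch of the EEOC four-fifths (80%) rule. Pass counts are hypothetical.
group_results = {
    "group_a": {"passed": 880, "total": 1000},
    "group_b": {"passed": 830, "total": 1000},
}

rates = {g: r["passed"] / r["total"] for g, r in group_results.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "OK" if ratio >= 0.80 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```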

Full audit report available upon request. Contact [email protected] for access.

Transparency Timeline

Q1 2026
Published first AI Transparency Report
Established baseline metrics for accuracy, bias, and explainability.
Q4 2025
Third-party bias audit completed
Independent audit by AI Fairness Lab confirmed no statistically significant disparate impact.
Q3 2025
Explainability engine v2 launched
Upgraded from feature importance to natural language explanations for every risk factor.
Q2 2025
Candidate dispute portal launched
Self-service portal for candidates to view, understand, and dispute their screening results.
Q1 2025
AI Ethics Advisory Board formed
A five-member board including civil rights attorneys, AI researchers, and HR industry experts.

Questions About Our AI?

We're committed to transparency. If you have questions about how our AI works, our bias testing, or our data practices, we're here to answer them.