Leia — Your Human Resource Assistant at 913.ai

Leia helps teams make quick, fair, and defensible hiring decisions. She reviews CVs, scores role fit with weighted criteria, builds shortlists, designs interview plans, and summarizes risks in plain English. Outputs are concise, traceable to evidence, and ready to act on.

Written by Mahir Mushtaq

Last updated 6 months ago

1) What Leia Is Great At

Structured CV reviews

  • Surfaces strengths, gaps, measurable impact, and red flags.

  • Quotes exact CV lines for each claim (role, dates, outcomes).

  • Example: “Achievement quantified: reduced support tickets by 22 percent; depth primarily in a single product.”

Role-fit scoring

  • Builds weighted criteria and scores 0 to 5 per criterion.

  • Produces pass, hold, or reject with a one-line rationale.

  • Example: “Weighted score 4.2 — pass. System design 4.5, ownership 4.0, Python 4.0, communication 4.5.”
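For teams that want to sanity-check the math, a weighted score like the one above is just a weighted average of per-criterion scores. This sketch uses illustrative criterion weights chosen to match the example; they are assumptions, not Leia's internal values:

```python
# Minimal sketch of weighted role-fit scoring (0 to 5 per criterion).
# The weights below are illustrative assumptions, not Leia's internals.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100 percent"
    return sum(scores[c] * weights[c] for c in weights)

scores = {"system design": 4.5, "ownership": 4.0, "python": 4.0, "communication": 4.5}
weights = {"system design": 0.30, "ownership": 0.30, "python": 0.30, "communication": 0.10}

print(round(weighted_score(scores, weights), 2))  # 4.2 -> pass
```

Keeping the weights explicit and fixed before review is what makes the score comparable across candidates.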

Ranked shortlists

  • Ranks top candidates with tie-breakers (impact, scope, leadership signals).

JD alignment

  • Maps must-haves vs nice-to-haves with direct evidence or specific gaps.

Interview support

  • Generates question banks, take-home rubrics, structured scorecards, and pass bars.

Offer support

  • Lists risks, mitigation plans, and reference-check prompts tied to uncertainties.


2) Best Ways to Ask for Help

Share the role

  • Title and level, must-haves vs nice-to-haves, stack or domain, team context.

Provide inputs

  • JD, 5 to 10 CVs, any internal preferences or deal-breakers.

Set constraints

  • Bias checks (blind CVs), concise summaries, credentials to verify, preferred tone.

Define deliverables

  • Ranked list, per-candidate summary, score table with weights, interview plan.

Acceptance criteria (example)

  • “Weight system design 30 percent, Python 25 percent, leadership 20 percent, cloud 15 percent, communication 10 percent.”

  • “Provide pass, hold, or reject with a one-line rationale per candidate.”


3) Best Practices

  • Start with a role profile and weighted criteria — confirm before review.

  • Use evidence-based scoring — quote exact CV lines.

  • Run bias checkpoints — optional blinding and consistent criteria.

  • Provide pass, hold, or reject with:

    • Clear reasons

    • What is missing or needs verification

    • Suggested interview probes


4) Quick Tips

  • Ask for a one-page hiring brief first (role, stack, must-haves, culture).

  • Request a comparison table and a top-three shortlist with tie-breaker logic.

  • Include red flags and calibration notes (for example, value ownership over brand names).

  • End with next steps — who to interview, structure, and verification items.


5) Common Tasks With Ready-to-Use Prompts

  • Multi-CV review with weights

    “Review these 8 CVs for a Senior Backend role. Weights: system design 30 percent, Python 25 percent, cloud 20 percent, ownership 15 percent, communication 10 percent.”

  • JD-to-CV matching

    “Match these CVs to this JD. Produce a ranked shortlist with rationale.”

  • Interview plan

    “Create an interview plan — 5 must-ask questions, scoring rubric, pass bar.”

  • Scorecard creation

    “Turn this JD into an objective scorecard (0 to 5) with weights.”

  • Concise candidate summaries

    “Summarize each candidate in 6 bullets — strengths, gaps, risks, questions.”


6) Interview Support Pack (starter)

Must-ask questions

  • “Walk me through a system you designed — trade-offs, bottlenecks, metrics.”

  • “Describe a challenging incident. What changed after your fix?”

  • “Tell me about a time you owned a problem across teams.”

  • “How do you balance speed versus quality under deadlines?”

  • “Tell me about the most impactful feedback you received and how you acted on it.”

Scoring rubric (0 to 5)

  • 0 to 1: superficial — no evidence

  • 2 to 3: partial depth — limited examples

  • 4: solid depth — quantifiable outcomes

  • 5: expert — nuanced trade-offs, clear metrics and reflection

Pass bar

  • Overall at least 3.8 weighted, no critical criterion below 3, at least two quantified outcomes.
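The three pass-bar conditions above can be encoded as a simple check. The function and parameter names here are illustrative, not part of Leia's output format:

```python
# Sketch of the starter pass bar: weighted score at least 3.8, no critical
# criterion below 3, and at least two quantified outcomes on the CV.
# Names are illustrative assumptions, not Leia's API.

def meets_pass_bar(weighted: float, critical_scores: list[float],
                   quantified_outcomes: int) -> bool:
    return (weighted >= 3.8
            and all(s >= 3 for s in critical_scores)
            and quantified_outcomes >= 2)

print(meets_pass_bar(4.2, [4.5, 4.0, 4.0], 3))  # True -> pass
print(meets_pass_bar(4.0, [4.5, 2.5, 4.0], 3))  # False -> hold: critical criterion below 3
```

Because the bar is conjunctive, a strong overall score cannot paper over a weak critical criterion, which is the point of setting it per criterion.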


7) Offer and Risk Mitigation

Common risks

  • Ownership overstated, leadership limited to mentoring, metrics lack context, short tenures.

Mitigation

  • Targeted reference checks, trial project or structured probation goals, early performance metrics.

Reference-check prompts

  • “What problems did they own end to end?”

  • “How did they handle pressure or ambiguity?”

  • “What would you coach them on?”


8) Bias-Aware Process

  • Optional CV blinding before review.

  • Uniform weighted criteria aligned to JD must-haves.

  • Evidence-first summaries quoting exact CV lines.

  • Consistent pass, hold, or reject rationale and interview probes.


9) Ready-to-Send Intake Template

Role title and seniority:

Must-haves (5 max):

Nice-to-haves (3 to 5):

Weights (sum to 100 percent):

Team context and culture notes:

Deal-breakers:

Deliverables you want: ranked list, per-candidate summaries, score table, interview plan, reference-check questions

Bias checks: blind CVs, remove education identifiers, standardize titles and dates


Getting Started — 3 Simple Steps

  1. Share the role and criteria — title, seniority, must-haves vs nice-to-haves, and weights

  2. Send inputs — JD and 5 to 10 CVs, plus culture notes or deal-breakers

  3. Pick deliverables — ranked shortlist, score table, per-candidate bullets, interview plan