An HR manager at a London professional services firm cut job description writing time from 2–3 hours to 30 minutes by learning how to structure AI prompts for HR-specific outputs. The same approach extended to screening questions, rejection emails, and onboarding checklists — all using the AI Survival Kit for HR Professionals.


How an HR Manager Cut Job Description Writing Time by 70% Using AI

James T. had tried ChatGPT once, got output that was "too generic to use", and assumed AI wasn't for HR. He was wrong about the tool. He was right about the prompt.

70%: reduction in JD writing time
15–20: new roles handled per quarter
30 min: per JD including review

Disclaimer: This is a composite account based on early user experiences. Names and identifying details are fictional. Individual results vary.

The Problem

Writing job descriptions from scratch for 15–20 roles a quarter

James T. is an HR manager at a professional services firm in London, leading a team of three. His firm is growing — 15 to 20 new roles a quarter, ranging from mid-level analysts to senior consultants. Each role is different enough that copy-pasting the previous JD wasn't an option.

Writing a job description properly — clarity on seniority, responsibilities, must-have qualifications, company culture — took two to three hours. Multiply that across 20 roles and it consumed a significant chunk of James's month before he'd done anything else.

The screening question sets were generic. He used a standard bank of 12 questions and rotated them. The questions weren't wrong — they just weren't specific to the role. Interviewers noticed. Occasionally, a candidate did too.

Rejection emails were the most uncomfortable part. His team used a single template, copy-pasted and tweaked by hand. For candidates who'd made it to a final round, it felt impersonal. He knew it. He just didn't have time to do it differently.

He'd tried ChatGPT once, typed "write me a job description for a senior consultant in professional services", and got back something that read like a LinkedIn post from 2018. He closed the tab and didn't try again.

The problem wasn't ChatGPT. It was the prompt.

What He Did

The kit didn't just give him templates — it showed him why the prompts work

James bought the AI Survival Kit for HR Professionals. The first thing he did was read the system prompt module — a section on writing a persistent instruction set that tells the AI it's an HR professional working in a specific context, before any task is given.

This was the key change. His previous ChatGPT attempt had no system prompt — he'd just asked for a JD cold, with no context about the firm, the sector, the seniority level, or the role the output needed to serve. The system prompt fixed all of that.

His second step was the JD prompt. The kit provides a five-bullet brief format: seniority, team structure, top three responsibilities, must-have qualifications, and one sentence on company culture. With those five bullets filled in, the AI produces a first-draft JD that reflects the actual role — not a generic consultant job spec.
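For readers who want to reuse the brief across many roles, the five-bullet format can be captured as a small template function. This is an illustrative sketch, not the kit's exact wording — the field names and format line are assumptions based on the brief described above.

```python
# Sketch: assemble the five-bullet role brief into a JD prompt.
# Field names and format instructions are illustrative.
BRIEF_FIELDS = ("seniority", "team", "responsibilities", "must_haves", "culture")

def build_jd_prompt(brief: dict) -> str:
    """Turn a five-field role brief into a job-description prompt string."""
    missing = [f for f in BRIEF_FIELDS if not brief.get(f)]
    if missing:
        raise ValueError(f"Brief is missing fields: {', '.join(missing)}")
    lines = [
        "Write a job description using this role brief:",
        f"- Seniority: {brief['seniority']}",
        f"- Team: {brief['team']}",
        f"- Top 3 responsibilities: {brief['responsibilities']}",
        f"- Must-haves: {brief['must_haves']}",
        f"- Culture: {brief['culture']}",
        "Format: Role Overview (3 sentences), Key Responsibilities (6-8 bullets),",
        "Requirements (must-have / nice-to-have), What We Offer (3-4 bullets).",
        "Aim for 450-550 words total.",
    ]
    return "\n".join(lines)
```

The validation step matters: the kit's point is that every one of the five bullets carries context the AI needs, so a brief with a blank field should fail loudly rather than produce a generic JD.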

The third step was adapting the same brief to generate screening questions — something that took him a few iterations to get right, but which now runs automatically from the same role input.

3 Prompts He Used

Prompt 1 — HR System Prompt (set this first in every session)

You are a senior HR professional at a professional services firm in London. You specialise in writing clear, compelling job descriptions that attract qualified candidates and filter out poor fits.

When writing job descriptions:
- Lead with the impact of the role, not the duties
- Use active verbs (manage, lead, develop — not "responsible for")
- Be specific about seniority, team structure, and reporting lines
- List requirements as "must-have" and "nice-to-have" separately
- Keep the tone professional but human — not corporate boilerplate

Your audience is a qualified mid-career professional who reads dozens of job ads. Make this one worth reading. Keep these instructions active for the entire session.

Why it works: This grounds the AI in a specific context before any task begins. The "your audience" instruction is particularly important — it shifts the AI from writing a JD for a recruiter to writing one for a candidate who has options.
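James set his system prompt in the ChatGPT interface, but the same persistence mechanism is visible if you model the session as an API-style message list — the system message stays at the head of the conversation, so every later request still carries the HR context. A minimal sketch, assuming the standard "system"/"user" role convention; the function names are illustrative:

```python
# Sketch of how a system prompt persists across a chat session.
# Models the conversation as an OpenAI-style messages list.
SYSTEM_PROMPT = (
    "You are a senior HR professional at a professional services firm in London. "
    "Keep these instructions active for the entire session."
)

def start_session(system_prompt: str) -> list[dict]:
    """Begin a conversation with a persistent system message at index 0."""
    return [{"role": "system", "content": system_prompt}]

def ask(messages: list[dict], user_prompt: str) -> list[dict]:
    """Append a user turn. The system message is never removed, so the
    model sees the HR context on every request in the session."""
    return messages + [{"role": "user", "content": user_prompt}]
```

This is also why Prompt 3 below can say "the job description you just wrote": each request includes the whole message history, JD and all.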

Prompt 2 — Job Description from 5-Bullet Brief

Write a job description using this role brief:
- Seniority: Senior Consultant (IC, not people manager yet — promotion track to Manager within 18 months)
- Team: 6-person consulting team, reports to Associate Director
- Top 3 responsibilities: (1) lead client delivery on 2–3 projects simultaneously, (2) manage junior analyst output and review, (3) contribute to new business proposals in your sector
- Must-haves: 4+ years in management consulting or advisory, strong written communication, experience in financial services sector
- Culture: High performance, collaborative, meritocratic. We promote on output, not time served.

Format: Role Overview (3 sentences), Key Responsibilities (6–8 bullets), Requirements (must-have / nice-to-have), What We Offer (3–4 bullets). Aim for 450–550 words total.

Why it works: The five-bullet brief gives the AI specific material to work from. The word-count and format instructions ensure you get a usable structure — not a wall of text. With the system prompt already set, the tone is consistent throughout.

Prompt 3 — Role-Specific Screening Questions

Based on the job description you just wrote, generate a screening question set for phone interviews. Include:
- 2 questions that test for the must-have technical competencies (financial services experience, client delivery)
- 2 questions that test for soft skills relevant to this role (managing junior staff, writing quality)
- 1 behavioural question that distinguishes candidates who have genuinely led client work from those who supported it
- 1 values question aligned with the culture note (performance-based promotion, meritocracy)

For each question, add a one-line note on what a strong answer typically includes. Format as a numbered list.

Why it works: By referencing the JD you've already built in the same session, you don't have to repeat the context. The "distinguishes candidates who led vs. supported" instruction is the most valuable part — it forces the AI to write a question with discrimination power, not a softball.

What Changed

Across every stage of the recruiting workflow

JDs: 2–3 hours each, written from scratch → 30 minutes including review and edits

Screening: generic 12-question bank, rotated → role-specific question sets generated per JD

Rejections: single template, copy-pasted → staged rejection emails by candidate interview round

Onboarding: checklists built from memory → full onboarding checklist drafted in one session

The rejection email upgrade was the change James mentions most. He now has three templates — early-stage, mid-process, and final-round — each written with different levels of warmth and specificity. Candidates who reached the final interview receive a more considered message. The AI wrote the first drafts of all three in a single session.
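The staged approach can be sketched as a small lookup: pick tone guidance by interview round, then hand that guidance to the AI as part of the drafting prompt. The stage names and tone notes below are assumptions for illustration, not the kit's actual templates:

```python
# Illustrative sketch of staged rejection drafting: tone varies by
# how far the candidate got. Stage names and wording are assumptions.
STAGE_TONE = {
    "early": "Brief and courteous; thank them for applying.",
    "mid": "Warmer; acknowledge the interview and their time.",
    "final": "Most considered; thank them for completing the full process "
             "and acknowledge the effort involved.",
}

def rejection_prompt(stage: str, candidate_name: str, role: str) -> str:
    """Build a drafting prompt with stage-appropriate tone guidance."""
    if stage not in STAGE_TONE:
        raise ValueError(f"Unknown stage: {stage!r}")
    return (
        f"Draft a rejection email to {candidate_name} for the {role} role. "
        f"Candidate stage: {stage}. Tone guidance: {STAGE_TONE[stage]}"
    )
```

The point of the lookup is the one James made: a final-round candidate and a screened-out applicant should never receive the same email.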

The onboarding checklist was a surprise use case. He'd been putting it off for months. He gave the AI a role description and a list of systems the new hire would need access to, and asked for a 30-day onboarding plan. It took 20 minutes to produce something he could review and send.

"The prompt library is the bit I keep coming back to. It's not just copy-paste — it teaches you why the prompt works."

— James T., HR Manager, professional services firm, London

Common Questions

AI for HR professionals

Why does ChatGPT produce generic job descriptions?

Generic output is almost always a prompting problem. If you tell ChatGPT 'write me a job description for a project manager', it has no context — it produces the statistical average of every PM JD it's been trained on. The fix is a structured role brief: seniority, team size, reporting line, top 3 responsibilities, must-haves, and company tone. With that context, the output is specific and usable.

Can AI help with candidate screening as well as job descriptions?

Yes. Once you've written the JD, you can ask the AI to generate a role-specific screening question set based on the same brief. Ask it to produce questions that test for the top 3 required competencies, plus one question that distinguishes candidates who have actually done the work from those who've only supported it. The HR kit includes a full module on this.

Is it appropriate to use AI to write rejection emails to candidates?

Yes — with care. The risk with AI rejection emails is that they feel even more impersonal than a standard template. The solution is to give the AI the candidate stage (phone screen vs. final-round) and instruct it to vary the level of acknowledgement accordingly. A candidate who went through three interviews deserves a warmer tone and more specific thanks. The HR kit includes a staged rejection email module.

Get the Same System

AI Survival Kit for HR Professionals

30 copy-paste prompts for HR workflows. System prompt templates. JD builder, screening question generator, rejection email library, and onboarding toolkit — all included.

See the HR Professionals Kit →

From $47 · Instant PDF · 30-day money-back guarantee