Building a hiring process that prevents expensive mistakes
Hiring is one of the biggest bets a startup makes. A great hire can propel your company forward; a bad one can drain time, morale, and money.
Assessment tasks such as take-home challenges, case studies, and mini-projects are now a standard in hiring.
They promise a clearer view of candidates’ real skills, but many candidates feel these tasks are often just free work.
This post digs into that grey zone: how do you tell a fair test from unpaid labour? We’ll look at where an assessment task crosses the line into free work, and how to design assessments that balance insight with respect.
It helps to think of assessments on a spectrum. On one end are lightweight tests designed solely to assess a skill, and on the other are full-blown assignments that deliver business value.
The problem arises when one becomes the other.
Here are signals a test is slipping into free labour:
A senior engineer summed it up bluntly: many take-homes “drive home the idea that this employer doesn’t care if you are a carbon-based life form, as long as code comes out”.
In a ThriveMap survey, many candidates complained that assessments took too long or were irrelevant to the job.
By contrast, a fair test sits clearly on the low-intensity side. It should be short, focused, hypothetical or anonymised, and respectful of time.
A useful distinction comes from labour law and hiring ethics:
If your assessment starts to look like client work, it flips into the working interview zone, and should be compensated or avoided.
Here’s a more detailed blueprint for designing assessments that respect candidates and still yield meaningful evaluation.
Decide exactly what behaviour or thinking process you want to observe (e.g. how someone structures a brief, prioritises features, solves ambiguity).
Don’t aim to test everything. That clarity helps you define a compact, fair task.
Set a firm upper limit, usually 1–2 hours, rarely more. Beyond that, the burden is too great.
According to GitHub’s Developer Assessment Experience, design challenges are typically done in three to four hours and candidates prefer clearly time-boxed tasks.
If you absolutely need deeper tasks (for senior roles), break them into stages, pay candidates for the heavier parts, or offer to waive for those who can’t commit the time.
Never ask candidates to produce work that could slide directly into your live pipeline. Instead, keep tasks hypothetical or based on anonymised data. This prevents any confusion over whether you’re outsourcing work and removes pressure on candidates.
A brief should explicitly state what you expect candidates to deliver, how long it should take, and how it will be evaluated. A clear outline upfront reduces ambiguity, helps candidates self-select out if the task isn’t for them, and enables fairer scoring across submissions.
Not every candidate has the time or willingness to take on a take-home test, so offer alternatives where you can. This flexibility respects diverse circumstances and prevents bias in favour of people with more spare time.
If your task requires more than two hours, or is for a high-level role, compensate candidates for their time. Even modest compensation signals respect and may reduce backlash.
Silence after an assessment is damaging to your candidate experience.
At minimum, tell candidates whether they passed or not.
Even better, provide brief comments on strengths and improvement areas. That closes the candidate experience loop and protects reputation.
Where possible, remove names, gender, and demographic cues, and use multiple reviewers whose scores are averaged to reduce individual bias. This helps stop bias creeping in around writing style, polish, or presentation.
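For teams tracking reviews in a script rather than a spreadsheet, the idea can be sketched in a few lines. This is a minimal illustration, not a prescribed tool; the submission IDs, score scale, and reviewer counts are hypothetical:

```python
from statistics import mean

# Hypothetical rubric scores (1-5) from three independent reviewers,
# keyed by an anonymised submission ID rather than a candidate name.
scores = {
    "submission-07": [4, 5, 3],
    "submission-12": [2, 3, 3],
}

# Averaging across reviewers dampens any single reviewer's bias.
averaged = {sub: mean(vals) for sub, vals in scores.items()}

# Rank submissions by their averaged score, highest first.
for sub, avg in sorted(averaged.items(), key=lambda kv: -kv[1]):
    print(f"{sub}: {avg:.2f}")
```

The key design choice is that reviewers score independently before averaging, so no single loud voice anchors the outcome.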
Test your assessments internally or with a sample group, collect feedback, and refine. Better to iterate now than damage your employer brand later.
Candidates talk. A bad assessment experience can ripple through employer branding and recruiting metrics.
72% of job seekers who had a negative hiring experience share it publicly, and 55% abandon applications after seeing negative reviews.
How you assess people sends a message about your company’s values. If applicants feel exploited, you don’t just repel that individual; you undermine your reputation in the market.
Vestd helps founders align people around long-term value with employee share schemes that reinforce ownership. Learn more.