Writing challenge objectives users actually finish
Most challenges die at step two or three. Not at the finish line, where you can see them coming. In the middle, where the wording is vague. Compare 'engage with the product' to 'send your first message'. Same intent, completely different completion rate. This guide is about the second version: verbs that act, thresholds that are reachable, and an ordering that keeps users moving.
Key takeaways
- Every objective starts with a verb the user can act on. 'Send your first message' beats 'Use messaging features'.
- Thresholds should match median user behaviour, not power-user behaviour.
- Order objectives easy-medium-hard-easy-payoff. The last easy step before payoff is what closes the deal.
- Each objective must have one clear completion signal. Ambiguous endings produce support tickets.
- If two objectives can be done in either order, let the user choose. Forced ordering loses users at the gate.
Definition
What good objective design actually means
The discipline of writing the steps inside a challenge so each one is unambiguous, achievable, and clearly tied to the reward. It is editorial work disguised as product work. The rest of the campaign cannot save weak objectives.
Plain definition
Objective design is the discipline of writing the steps inside a challenge so each one is unambiguous, achievable, and clearly connected to the reward. It covers the verb, the threshold, the ordering, and the completion signal. Without strong objective design, even a well-funded challenge stalls.
Who runs this
Lifecycle, growth, and product marketing teams. The work is mostly editorial and operational, with some engineering input on completion-signal definitions.
How it differs from adjacent mechanics
- vs feature naming. Feature names describe a tool. Objectives describe a user action. The same feature can power dozens of different objectives depending on the verb and threshold.
- vs scoring rules. Scoring rules decide how much a step is worth. Objective design decides what the step actually is. Get the objective right first; tune the score after.
- vs challenge themes. The theme is the headline ('30 days of fitness'). Objectives are the steps inside. A great theme cannot save weak objectives.
Verbs
Pick the verb that maps to one action
The verb is the most important word in the objective. It decides whether the user knows exactly what to do or has to translate. 'Use' is a translation. 'Send' is an action.
| Weak verb | Strong replacement | Why |
|---|---|---|
| Use | Send / Run / Post / Save | 'Use' is vague; the strong verb names the action. |
| Try | Complete / Finish | 'Try' implies optional. The strong verb implies a definite endpoint. |
| Engage with | Open / Read / Watch | 'Engage' is internal jargon; the strong verb is observable. |
| Set up | Connect / Link / Import | 'Set up' is vague about what is configured. Be specific. |
| Explore | Visit / View / Discover | 'Explore' has no completion signal. Pick a verb with a clear endpoint. |
Thresholds
Set the threshold to median user behaviour
Anchor to actuals
Look at what median users naturally do in the period. Set the threshold at or just above that level. Anchoring to power users locks out 80 percent of participants.
Round to a memorable number
Three, five, ten, twenty. Memorable thresholds get repeated in the user's head; obscure thresholds (seven, eleven) feel arbitrary.
Calibrate by category
A daily-habit product can ask for 5 actions a week. A monthly-purchase category cannot. Match the threshold to the natural cadence of the behaviour.
Lower it for new cohorts
First-week thresholds should be lower than steady-state. Most cohorts need a confidence boost before they push for harder targets.
Cap the upper bound
Never set a threshold so high that only top decile users can finish. The middle 60 percent of users is where the lift comes from.
Tune in production
Watch first-week completion rates. Adjust thresholds at the next campaign, not mid-campaign. Mid-flight changes break trust.
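The anchoring-and-rounding steps above can be sketched in a few lines. This is a minimal illustration, not a production calibration tool; the `MEMORABLE` anchors come straight from the guide, and the function name is hypothetical.

```python
import statistics

# Memorable anchors from the guide: three, five, ten, twenty.
MEMORABLE = [3, 5, 10, 20]

def suggest_threshold(actions_per_user: list[int]) -> int:
    """Anchor to the median user, then round up to the nearest memorable number."""
    median = statistics.median(actions_per_user)
    for anchor in MEMORABLE:
        if anchor >= median:
            return anchor
    return MEMORABLE[-1]  # cap the upper bound so top-decile behaviour cannot set it

# Median weekly actions here is 4, so the threshold rounds up to 5.
print(suggest_threshold([1, 2, 4, 4, 5, 6, 30]))  # -> 5
```

Note that the single power user logging 30 actions has no effect on the result; that is the point of anchoring to the median rather than the mean.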
Ordering
The shape of a path that finishes
The order matters more than the individual steps. A working pattern: easy, medium, hard, easy, payoff.
| Position | Difficulty | Purpose | Example |
|---|---|---|---|
| Step 1 | Trivial | Build commitment in the first session. | Set your goal, pick a category, complete profile. |
| Step 2 | Easy | Demonstrate the core value. | Run your first action, complete one lesson, send one message. |
| Step 3 | Medium | Push depth without losing the user. | Connect an external account, invite a teammate, finish first key milestone. |
| Step 4 | Easy | Reset momentum before the final push. | Share progress, customise a setting, set a reminder. |
| Step 5 | Hard or outcome | The meaningful payoff. | First transaction, first published doc, third week of streak. |
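The shape in the table can be expressed as data and checked before launch. A minimal sketch, with illustrative objective names; the pattern list mirrors the table's difficulty column.

```python
# The recommended difficulty curve from the table above.
PATTERN = ["trivial", "easy", "medium", "easy", "hard"]

objectives = [
    {"verb": "Set your goal",               "difficulty": "trivial"},
    {"verb": "Send one message",            "difficulty": "easy"},
    {"verb": "Invite a teammate",           "difficulty": "medium"},
    {"verb": "Set a reminder",              "difficulty": "easy"},
    {"verb": "Make your first transaction", "difficulty": "hard"},
]

def follows_pattern(objectives: list[dict]) -> bool:
    """Check that the path matches the easy-medium-hard-easy-payoff shape."""
    return [o["difficulty"] for o in objectives] == PATTERN

assert follows_pattern(objectives)
```

A check like this is cheap to run in a campaign linter, so a hard step can never silently land at position 4.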
Completion signals
One clear ending per objective
Single tracked event
Each objective should resolve on one observable event. Multiple-condition objectives confuse users and create disputes.
Server-side validation
If the objective is worth a real reward, the completion signal must be server-validated. Client-side state is gamed within hours.
Idempotent on retry
If the user retries a flow, the objective should not double-credit. Use idempotency keys on the underlying event.
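The idempotency rule can be sketched as follows. This is an in-memory illustration only (class and method names are hypothetical); a real implementation would persist the seen keys server-side, per the validation point above.

```python
# Minimal sketch of idempotent objective crediting, assuming each event
# carries a client-generated idempotency key.
class ObjectiveProgress:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.seen_keys: set[str] = set()
        self.count = 0

    def record(self, idempotency_key: str) -> bool:
        """Credit the event once; retries with the same key are no-ops.
        Returns True when the objective is complete."""
        if idempotency_key in self.seen_keys:
            return self.count >= self.threshold
        self.seen_keys.add(idempotency_key)
        self.count += 1
        return self.count >= self.threshold

progress = ObjectiveProgress(threshold=3)
progress.record("evt-1")
progress.record("evt-1")  # retried request: deduplicated, not double-credited
progress.record("evt-2")
print(progress.count)  # -> 2
```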
Explicit feedback
When the objective completes, show a clear success state. 'Step 2 done' is better than a silent UI change.
Best practices
Seven rules of objectives that finish
1. Lead every objective with a verb
'Send your first message' beats 'Messaging features'. The verb makes the action immediate; without it the user has to translate.
2. One objective, one event
Compound objectives ('send a message and update your profile') confuse the user and the data. Split them into two clear steps.
3. Match the threshold to the median user
Top-decile thresholds turn challenges into power-user contests. Most lift comes from the middle 60 percent. Set thresholds accordingly.
4. Order easy-medium-hard-easy-payoff
The penultimate easy step rescues users who hit the hard step. Skip it and your funnel drops at step 3.
5. Allow flexible ordering when possible
If steps 2 and 3 do not depend on each other, let users do them in either order. Forced ordering loses people who hit a temporary block.
6. Show one progress signal per objective
A check mark, a progress fill, or a ticked box. Multi-state objective UIs leave the user uncertain whether they are done.
7. Test the objective on a real user before launch
Walk a non-team-member through the flow. Wherever they pause marks a weak objective. Rewrite and re-test.
Use cases
Where strong objective design wins
Onboarding
Five sequenced first-week steps with the easy-medium-hard-easy-payoff pattern.
Activation lift comes mostly from objective design rather than reward size. The same reward set with weak objectives underperforms by 25 percent.
Re-engagement
Three-step lapsed-user challenge: tap to return, complete one action, redeem a small reward.
Reactivation rate beats a flat discount email because the user has done a small piece of work that justifies the reward.
Cross-sell campaigns
Try-three-categories challenge with one easy starter category, one core, and one stretch.
Category penetration lifts more reliably than a flat 'try our other products' email.
Habit challenges
21-day count challenge with weekly milestones at days 5, 10, 15.
Mid-funnel completion rate is the most-improved metric. Strong milestone objectives carry the medium-frequency users through.
When to skip
When tighter objective design will not save the campaign
The reward is wrong
Even perfect objectives cannot rescue a challenge with a hollow reward. Fix the reward design first.
The window is wrong
A 60-day window with brilliant objectives still loses users in the middle. Tighten the timeline before tuning the steps.
The category does not have repeat behaviour
If the user only does the action once a year, no objective design produces engagement. Use a different mechanic.
The completion signal is impossible
If the underlying event cannot be tracked reliably, do not write an objective around it. Pick a different observable action.
Common mistakes
The mistakes that quietly kill objectives
Mistake: vague verbs ('use', 'try', 'engage with'). Users do not know what they are supposed to do.
Fix: pick an observable action verb. 'Send', 'run', 'post', 'save', 'connect'. One verb, one event.
Mistake: threshold set by intuition, not data. Power users finish; the median user stalls.
Fix: pull the median behaviour and set the threshold at or just above it. Tune in the next campaign, not mid-flight.
Mistake: compound objectives ('do X and Y'). Users finish one and assume the step is done.
Fix: split into two objectives. The data is cleaner and the user feels more progress.
Mistake: hard step at position 4. Users hit a wall right before payoff and bail.
Fix: reorder. The hard step belongs at position 3. Position 4 is for momentum recovery before the outcome.
Mistake: no success state on completion. The UI quietly updates; the user is uncertain.
Fix: show an explicit success state: a check mark, a success toast, a brief animation. Confidence at every step compounds.
In the wild
Three rewritten objectives
B2B SaaS
Before: 'Get started with the product'. After: 'Create your first project'.
Outcome. Step 1 completion lifts because the verb is observable, the noun is concrete, and the user knows the exact next action.
Banking app
Before: 'Use your card'. After: 'Make your first transaction'.
Outcome. Funded-account rate lifts because the threshold is anchored to a clear event and the wording is unambiguous.
Learning app
Before: 'Engage with the content'. After: 'Finish your first lesson'.
Outcome. Median user completes step 2 inside the first session because the action is a single observable event.
Implementation
Build this with Bricqs
Bricqs ships challenge objectives with built-in evaluators (count, threshold, streak, completion, score), idempotent event handling, and server-side completion. Drop in the verbs; the engine handles the rest.
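To make the evaluator names concrete, here is a sketch of what count-style and streak-style evaluators do. These are illustrative functions only, not the Bricqs API; the signatures and names are assumptions for the sake of the example.

```python
from datetime import date, timedelta

def count_evaluator(events: list, threshold: int) -> bool:
    """Complete once the user has logged `threshold` qualifying events."""
    return len(events) >= threshold

def streak_evaluator(days: list[date], length: int) -> bool:
    """Complete once `length` consecutive active days appear in the log."""
    unique = sorted(set(days))
    run = 1
    for prev, cur in zip(unique, unique[1:]):
        run = run + 1 if cur - prev == timedelta(days=1) else 1
        if run >= length:
            return True
    return length <= 1 and bool(unique)

start = date(2024, 1, 1)
print(streak_evaluator([start, start + timedelta(1), start + timedelta(2)], 3))
```

Both evaluators resolve on a single observable condition, which is exactly the one-clear-ending property the completion-signals section asks for.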
Frequently asked
Common questions
How many objectives should a challenge have?
Three to seven. Fewer than three feels trivial; more than seven feels overwhelming. Five is the sweet spot for most categories.
Should we let users skip an objective?
Only if the challenge is a completion-style format with optional steps. Sequential challenges should not allow skipping; the order is part of the design.
Can two objectives share a completion signal?
No. Each objective should resolve on a distinct event. Sharing a signal makes the data ambiguous and blurs the steps together for the user.
How do we handle users who completed an action before the challenge started?
Pick a policy and document it. Common patterns: backfill from the last 24 to 48 hours, or only count actions taken after enrolment. Be explicit so users know what counts.
What if the threshold turns out to be wrong mid-campaign?
If it is too low, leave it; users finishing early is not a problem. If it is too high, do not lower mid-campaign; ship a follow-up easier challenge for users who stalled.
Ready to ship?
Try Bricqs free, or talk to a strategist
Plan a campaign, configure the engine, and ship in days. No credit card required to start.
