PPS · T&S
Internal Thought Piece · 2026

Hiring a role,
hiring a model.

A simple frame for thinking about AI projects in PPS: treat each one like a hire. Start from the business need, write the “talent portrait,” and only then go looking for the model that fits the job.

Mapping business to role.

When a team has an internal-efficiency problem, you don’t hire a generic person — you look for someone who has built workflows, optimized processes, driven automation. The mapping from need to profile is what makes the hire work. AI projects deserve the same discipline.

The human era

Need → talent portrait → hire

You define the business need first. Then sketch the kind of person who has solved that need before. Then go find them. You evaluate the hire against the original need, not against generic resume traits.

The AI era

Need → capability portrait → project

Define the business need and the pain points. Map them to expectations of a model or workflow. That mapping is the project — and it’s also how you tell whether the project is running well.

— SAME DISCIPLINE, DIFFERENT WORKER —
Area → Scenario → Factor → Ability

Four areas where PPS needs AI.

From Eric’s original whiteboard. Each area breaks down into the scenarios where AI gets work, and each scenario into the abilities a model must demonstrate.

Efficiency Improvement

Routine tasks: Understanding · Summary · Operation on behalf
Ad-hoc tasks: Understanding · Operation on behalf · Summary

Training & Improvement

Quality improvement: Test & entry limitation · Material design · Progress tracking
New-hire onboarding: Progress tracking · Test & entry limitation · Material design
Change awareness: Material design · Test & entry limitation · Progress tracking

Moderation Output

Policy: Enforcement · Understanding
SOP: Understanding · Enforcement

Root-Cause Analysis (RCA)

Improvement · Top-issue policy priority: Data explore · Understanding · Summary & feedback
Improvement · Moderation health: Data explore · Summary & feedback
Change · Human: Summary & feedback · Data explore
Change · Policy: Understanding · Enforcement · Summary & feedback · Data explore
Change · Platform / Tool / Task: Summary & feedback · Data explore

Six rules for setting up an AI project.

If the project doesn’t pass these, the “hire” isn’t real — it’s a model on a leaderboard, not a worker on the team.

01

Lead by business metric

// not by model score

AI success metrics must map to business metrics: accuracy, leakage, overkill, productivity. Model metrics alone are not a deliverable. Training set, test set, and success metric must all live in the same business scenario.
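As an illustration of the mapping, a minimal sketch, assuming a binary moderation task where 1 means violating and 0 means benign: leakage is read as the share of violating content the model missed, and overkill as the share of benign content it flagged. These definitions are an assumption for the example, not an official PPS metric spec.

```python
def business_metrics(y_true, y_pred):
    """Business-facing eval metrics for a binary moderation task.

    Assumes 1 = violating, 0 = benign. "Leakage" here is the miss rate
    on violating content; "overkill" is the false-positive rate on
    benign content. Assumes both classes are present in y_true.
    """
    pairs = list(zip(y_true, y_pred))
    violating = [p for t, p in pairs if t == 1]
    benign = [p for t, p in pairs if t == 0]
    return {
        "accuracy": sum(t == p for t, p in pairs) / len(pairs),
        "leakage": sum(p == 0 for p in violating) / len(violating),
        "overkill": sum(p == 1 for p in benign) / len(benign),
    }
```

The point of the sketch is that every number reported upward is a statement about content outcomes, not about a model score in isolation.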

02

Blind-test first

// keep the answer key locked

The test set is invisible to the modeling team. It’s maintained and audited by an independent project POC. No mixing, no leakage, no “just one peek.”

03

Multi-locale by default

// not English-first

Cover at least the Top X market languages and cultural contexts. A model that works only in one locale isn’t shipping for TikTok — it’s a demo.

04

Freshness

// refresh, don’t overfit

Refresh at least Y% of the test set every X period. A frozen test set rewards memorization, not capability.
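Mechanically, the refresh rule can be sketched as swapping a fixed fraction of the test set for freshly sampled items each cycle. The `refresh_frac` parameter stands in for the Y% in the rule, and the cadence is whatever schedule triggers the function; both are placeholders, not fixed policy values.

```python
import random

def refresh_test_set(test_set, fresh_pool, refresh_frac=0.2, seed=None):
    """Replace a fraction of the test set with freshly sampled items.

    Keeps the test set the same size while rotating in new examples,
    so a model cannot score well by memorizing a frozen set.
    """
    rng = random.Random(seed)
    n_swap = max(1, int(len(test_set) * refresh_frac))
    keep = rng.sample(test_set, len(test_set) - n_swap)
    fresh = rng.sample(fresh_pool, n_swap)
    return keep + fresh
```

Seeding the sampler keeps each refresh itself reproducible, which matters for rule 05 below.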

05

Reproducibility

// version everything

Sampling SQL, annotation SOPs, and evaluation scripts must be archived and version-controlled. If you can’t rerun last quarter’s eval, you don’t actually know what changed.
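One lightweight way to make "rerun last quarter's eval" possible is a content-hash manifest over the eval artifacts. The sketch below assumes the artifacts are plain files; the file names and layout are hypothetical, not an existing PPS convention.

```python
import hashlib
import json
import pathlib
import time

def archive_eval(run_dir, artifacts):
    """Snapshot eval artifacts (sampling SQL, annotation SOP, eval script)
    with SHA-256 content hashes so a past run can be verified later.

    `artifacts` maps a short name to a file path. Writes manifest.json
    into run_dir and returns the manifest dict.
    """
    manifest = {}
    for name, path in artifacts.items():
        data = pathlib.Path(path).read_bytes()
        manifest[name] = hashlib.sha256(data).hexdigest()
    manifest["archived_at"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    out = pathlib.Path(run_dir) / "manifest.json"
    out.write_text(json.dumps(manifest, indent=2, sort_keys=True))
    return manifest
```

If this quarter's hashes don't match last quarter's manifest, something in the pipeline changed, and that discrepancy is itself a finding.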

06

Human-AI alignment

// vs. Golden set, not vs. each other

Test alignment against a Golden set — not just model vs. model, or model vs. generic moderation output. The bar is human ground truth, not the next-best machine.
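A minimal sketch of what "vs. Golden set, not vs. each other" means in practice: score model decisions and human decisions against the same Golden labels, and report model-vs-human agreement only as a secondary diagnostic. Simple agreement rate is used here for brevity; a production eval might prefer a chance-corrected statistic.

```python
def alignment_vs_golden(golden, model, human):
    """Score model and human decisions against the same Golden set.

    The primary numbers are the two *_vs_golden rates; model_vs_human
    is diagnostic only, since two workers can agree and both be wrong.
    """
    assert len(golden) == len(model) == len(human)
    n = len(golden)
    return {
        "model_vs_golden": sum(g == m for g, m in zip(golden, model)) / n,
        "human_vs_golden": sum(g == h for g, h in zip(golden, human)) / n,
        "model_vs_human": sum(m == h for m, h in zip(model, human)) / n,
    }
```

High model-vs-human agreement with low golden alignment on both sides means the pair shares a blind spot, which a model-vs-model comparison would never surface.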

“Find the exact business requirement. Write the talent portrait. Then build the project — and judge it by the same standard you’d judge the hire.”