Test Prompts That Actually Work: Writing Tests With AI the Right Way
AI can write tests fast, but bad test prompts produce tests that pass without actually testing anything. Learn the prompts and patterns that produce meaningful, maintainable test suites.
The Testing Anti-Pattern to Avoid
The fastest way to get meaningless tests: "Write tests for this component." You'll get tests that render the component and check that it renders — no behavior, no edge cases, no value. The problem isn't the AI; it's the prompt.
Good test prompts specify which behaviors to verify, not just which code to point the AI at. That distinction is the difference between a suite that catches regressions and one that merely exists.
The Behavior-Driven Test Prompt
Write tests for the following component focusing on user-observable behavior.
For each test:
- Describe the user action or scenario in plain language
- Test what the user sees or experiences, not implementation details
- Do not test: internal state, CSS class names, component structure
- Do test: visible output, user interactions, accessibility, error states
Component: [PASTE COMPONENT]
Behaviors to cover:
1. [DESCRIBE USER SCENARIO 1]
2. [DESCRIBE USER SCENARIO 2]
3. [DESCRIBE EDGE CASE]
Use React Testing Library + Vitest. Follow the AAA pattern (Arrange, Act, Assert).
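As a sketch of the AAA structure the prompt asks for, here is a plain-assertion version with a hypothetical `formatGreeting` helper (an illustration, not code from any real component — a full RTL test would render and query instead):

```typescript
// Hypothetical helper standing in for a component's output logic
// (an assumption for illustration only).
function formatGreeting(name: string): string {
  return name.trim() === '' ? 'Hello, guest!' : `Hello, ${name.trim()}!`;
}

// Arrange: the input a user might actually type.
const input = '   ';
// Act: run the behavior under test.
const greeting = formatGreeting(input);
// Assert: check what the user would see, not how it was computed.
if (greeting !== 'Hello, guest!') {
  throw new Error('blank name should fall back to guest greeting');
}
```

Note the assertion targets visible output (the greeting string), never internals like state variables or class names.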
Example: Testing a Login Form
Write tests for this login form component.
Behaviors to test:
1. User sees an error message when submitting empty fields
2. User sees a password-specific error when password is under 8 characters
3. User sees a loading state while the form submits
4. User is redirected to /dashboard on successful login
5. User sees a server error message when credentials are wrong
6. Tab order is correct: email -> password -> submit button
7. Submit button is disabled while loading
Mock the login API call — don't make real network requests.
Use userEvent over fireEvent for realistic user interactions.
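Behavior #2 above implies some validation rule behind the form. A minimal sketch of what that rule might look like, so the test has something concrete to assert against (`getPasswordError` is a hypothetical helper, not part of the article's component):

```typescript
// Hypothetical validation helper behind behavior #2 (an assumption):
// returns a user-facing error message, or null when the password is valid.
function getPasswordError(password: string): string | null {
  if (password.length === 0) return 'Password is required';
  if (password.length < 8) return 'Password must be at least 8 characters';
  return null;
}
```

The test for behavior #2 then types a short password, submits, and asserts that exactly this message is visible on screen.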
Prompting for Edge Case Discovery
What edge cases should I test for this component?
For each edge case:
1. Describe the scenario
2. Explain why it might fail
3. Write the test
Prioritize edge cases that:
- Represent realistic user behavior
- Have caused bugs in similar components
- Involve async timing or race conditions
Component:
[PASTE COMPONENT]
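Async timing bugs are the edge cases AI most often surfaces with this prompt. One way to reproduce a race deterministically in a test: two overlapping requests that resolve out of order, where the stale result must be discarded (`latestOnly` is a hypothetical helper for illustration, not from the article):

```typescript
// Hypothetical async edge case (an assumption, not from the article):
// a slow early request must not overwrite a fast later one.
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

function latestOnly<T>(fetcher: (query: string) => Promise<T>) {
  let current = 0;
  return async (query: string): Promise<T | undefined> => {
    const id = ++current; // tag this call
    const result = await fetcher(query);
    // A newer call started while we were awaiting: drop this result.
    return id === current ? result : undefined;
  };
}
```

A test fires both calls with `Promise.all`, gives the first a longer fake delay, and asserts the stale result comes back as `undefined`.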
Generating Test Data Factories
// Prompt: Write a test factory for the User entity
// Result:
function createUser(overrides: Partial<User> = {}): User {
  return {
    id: crypto.randomUUID(),
    email: `user-${Date.now()}@example.com`,
    name: 'Test User',
    role: 'USER',
    createdAt: new Date(),
    updatedAt: new Date(),
    ...overrides,
  };
}
// In tests
const adminUser = createUser({ role: 'ADMIN' });
const recentUser = createUser({ createdAt: new Date() });
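One caveat worth prompting for: `Date.now()` can collide when two users are created in the same millisecond, breaking uniqueness constraints. A variant using a module-level sequence counter avoids that (a sketch; the `User` shape here is inferred from the example above):

```typescript
// Sequence-based factory sketch: the counter guarantees unique values
// for unique-constrained fields, even within a single millisecond.
// The User shape is an assumption inferred from the example above.
interface User {
  id: string;
  email: string;
  name: string;
  role: 'USER' | 'ADMIN';
  createdAt: Date;
  updatedAt: Date;
}

let userSeq = 0;

function createUser(overrides: Partial<User> = {}): User {
  userSeq += 1;
  return {
    id: `user-id-${userSeq}`,
    email: `user-${userSeq}@example.com`,
    name: `Test User ${userSeq}`,
    role: 'USER',
    createdAt: new Date(),
    updatedAt: new Date(),
    ...overrides,
  };
}
```

The override-spread pattern is unchanged: defaults first, `...overrides` last, so callers only specify what the test actually cares about.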
Prompting for Integration Tests
Write an integration test for the article creation flow.
The flow:
1. User fills out the article form
2. Clicks submit
3. API is called with the form data
4. User sees a success toast
5. User is redirected to the new article page
Setup:
- Mock the POST /api/articles endpoint to return { id: '123', slug: 'my-article' }
- Mock Next.js router
- Use MSW (Mock Service Worker) for API mocking
The test should fail if:
- The API is called with wrong data shape
- The success toast doesn't appear
- The redirect doesn't happen
Use @testing-library/user-event for all interactions.
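Stripped of the UI layer, the core of such a test is capturing the outgoing request and asserting on its shape. A minimal sketch with a hand-rolled spy instead of MSW (`submitArticle` and its payload shape are assumptions for illustration; the real prompt above uses MSW):

```typescript
// Hand-rolled spy sketch (illustrative only; MSW is the better tool
// when you need to intercept real fetch calls).
type ArticlePayload = { title: string; body: string };
type ArticleResponse = { id: string; slug: string };

function makePostSpy() {
  const calls: ArticlePayload[] = [];
  const post = async (payload: ArticlePayload): Promise<ArticleResponse> => {
    calls.push(payload); // record what the app actually sent
    return { id: '123', slug: 'my-article' };
  };
  return { post, calls };
}

// Hypothetical submit handler under test: posts the form data and
// returns the redirect target derived from the response.
async function submitArticle(
  form: ArticlePayload,
  post: (p: ArticlePayload) => Promise<ArticleResponse>,
): Promise<string> {
  const res = await post(form);
  return `/articles/${res.slug}`;
}
```

Asserting on `calls[0]` is what makes the test fail on a wrong data shape; asserting on the returned path covers the redirect.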
Reviewing AI-Generated Tests
Before accepting AI tests, verify:
- Does the test fail when the behavior is broken? (Delete the implementation and check)
- Is it testing what it says it's testing? (Read it like documentation)
- Does it mock too much? (Mocking internal modules = testing the mock)
- Would a false positive be possible? (Test could pass even with broken code)
Run the test, then break the code it's supposed to test. If the test still passes, it's not testing anything useful.
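Here is what a false positive looks like in miniature (`applyDiscount` is a made-up example): the implementation is deliberately broken, yet a weak assertion still passes.

```typescript
// Deliberately broken implementation (hypothetical example):
// the discount is never applied.
function applyDiscount(price: number, percent: number): number {
  void percent; // BUG: percent is ignored
  return price;
}

const result = applyDiscount(100, 20);

// Weak assertion: checks only the type, so the bug slips through.
const weakTestPasses = typeof result === 'number';

// Behavioral assertion: checks the observable outcome and exposes the bug.
const strongTestPasses = result === 80;
```

AI-generated tests skew toward the weak kind unless the prompt demands assertions on concrete expected values.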