# Testing Guidelines
Testing strategy for TimeTiles: unit tests for logic, integration tests for workflows, E2E tests for user journeys.
## Core Principle

Use real implementations. Mock only external paid APIs, rate-limited services, or dependencies whose failures you need to simulate.
## Test Types

### Unit Tests (`tests/unit/`)
Test isolated functions and business logic.
- Framework: Vitest
- Speed: < 100ms per test
- Setup: Mock Payload objects, use test data factories
- Examples: Coordinate parsing, date formatting, schema validation (see the sketch below)
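A minimal sketch of what one of these looks like; `parseCoordinates` and its import path are hypothetical stand-ins, not actual project APIs:

```typescript
import { describe, expect, it } from "vitest";
// Hypothetical utility, used only to illustrate the shape of a fast, isolated unit test.
import { parseCoordinates } from "@/lib/parse-coordinates";

describe("parseCoordinates", () => {
  it("parses a comma-separated lat/lng pair", () => {
    expect(parseCoordinates("52.52, 13.405")).toEqual({ lat: 52.52, lng: 13.405 });
  });

  it("rejects out-of-range latitudes", () => {
    expect(parseCoordinates("123.0, 13.405")).toBeNull();
  });
});
```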
### Integration Tests (`tests/integration/`)
Test components working together with a real database.
- Framework: Vitest with PostgreSQL
- Setup: `createIntegrationTestEnvironment()`
- Database: Isolated per worker, auto cleanup
- Examples: File uploads, job processing, API endpoints
### E2E Tests (`tests/e2e/`)
Test complete user workflows in the browser.
- Framework: Playwright
- Environment: Full application stack
- Examples: Import workflow, map exploration, data filtering (see the sketch below)
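A minimal Playwright sketch of one such journey; the route, placeholder text, and test ID below are illustrative assumptions, not the app's actual markup:

```typescript
import { expect, test } from "@playwright/test";

test("user can explore events on the map", async ({ page }) => {
  // Route and selectors are hypothetical placeholders.
  await page.goto("/explore");
  await page.getByPlaceholder("Search events").fill("festival");
  await expect(page.getByTestId("event-marker").first()).toBeVisible();
});
```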
## Mocking Rules

### Never Mock
- Database operations → use test database
- Payload CMS operations → use real Payload
- Internal services → use actual implementations
- File system → use temp directories (see the sketch after this list)
- Job queues → use real handlers
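For instance, instead of mocking `fs`, a file-handling test can work against a real temporary directory, as sketched below (the file contents are arbitrary; pass the path to whatever code is under test):

```typescript
import { mkdtemp, readFile, rm, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { afterEach, expect, it } from "vitest";

let dir: string;

afterEach(async () => {
  // Remove the real directory created for the test.
  await rm(dir, { recursive: true, force: true });
});

it("round-trips a file through a real temp directory", async () => {
  dir = await mkdtemp(join(tmpdir(), "timetiles-test-"));
  const file = join(dir, "events.csv");
  await writeFile(file, "title\nTest Event\n");
  // Hand `file` to the code under test instead of mocking fs calls.
  expect(await readFile(file, "utf8")).toContain("Test Event");
});
```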
### Can Mock (Document Why)
- External paid APIs (Google Maps) → costs and rate limits
- Rate-limited services → avoid CI/CD quotas
- Network failures → test error handling
- Time → `vi.setSystemTime()` for date tests (see the sketch below)
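A minimal sketch of the time-mocking pattern with Vitest's fake timers:

```typescript
import { afterEach, beforeEach, expect, it, vi } from "vitest";

beforeEach(() => {
  // Freeze the clock so date assertions are deterministic.
  vi.useFakeTimers();
  vi.setSystemTime(new Date("2024-06-01T12:00:00Z"));
});

afterEach(() => {
  vi.useRealTimers();
});

it("uses the frozen system time", () => {
  expect(new Date().toISOString()).toBe("2024-06-01T12:00:00.000Z");
});
```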
### Example: Mocking External API
```typescript
import { vi } from "vitest";

/**
 * Mocking acceptable because:
 * - Testing OUR caching/fallback logic
 * - Real API has costs and rate limits
 * - Need deterministic test scenarios
 */
// Hoist the stub so the mock factory below can reference it.
const mockGeocoder = vi.hoisted(() => ({ geocode: vi.fn() }));

vi.mock("node-geocoder", () => ({ default: () => mockGeocoder }));
```

## Running Tests
From the project root (monorepo orchestration):

```bash
make test    # Run all tests across all packages
make test-ai # AI-friendly output (silent, JSON results)

# Filter tests for faster iteration (24-120x faster)
make test-ai FILTER=date.test        # Run specific test file
make test-ai FILTER=tests/unit       # Run unit tests directory
make test-ai FILTER=store.test       # Run store tests
make test-ai FILTER=tests/unit/lib   # Run specific directory
make test-ai FILTER="date|store|geo" # Multiple patterns (pipe-separated)
```

From `apps/web` (package-specific commands):
```bash
# Unit and Integration
pnpm test             # All tests
pnpm test:debug       # Verbose output with logs
pnpm test:unit        # Unit tests only
pnpm test:integration # Integration tests only
pnpm test:coverage    # With coverage report

# E2E Tests
pnpm test:e2e       # All E2E tests
pnpm test:e2e:debug # Debug mode (headed, visible)

# Specific tests
pnpm test tests/unit/services/schema-builder.test.ts
pnpm test --grep "geocoding"

# With debugging logs
LOG_LEVEL=debug pnpm test tests/integration/services/seed-config.test.ts
```

AI Output: Results are saved to `apps/web/.test-results.json`, `apps/web/.lint-results.json`, and `apps/web/.typecheck-results.json`.
## Analyzing Test Results
When tests fail, use `jq` to extract detailed information from the JSON results:
```bash
# See specific failed test names
cat apps/web/.test-results.json | jq '.testResults[] | select(.status=="failed") | .name'

# See assertion details for failures
cat apps/web/.test-results.json | jq '.testResults[] | select(.status=="failed") | .assertionResults[] | select(.status=="failed")'

# Count total failures
cat apps/web/.test-results.json | jq '[.testResults[] | select(.status=="failed")] | length'
```

## Analyzing Lint Results
Extract lint errors and top violations:
```bash
# Files with errors
cat apps/web/.lint-results.json | jq '.[] | select(.errorCount > 0) | .filePath'

# Specific error messages
cat apps/web/.lint-results.json | jq '.[] | select(.errorCount > 0) | .messages[] | select(.severity == 2)'

# Full error details with location
cat apps/web/.lint-results.json | jq '[.[] | select(.errorCount > 0) | {file: .filePath, errors: [.messages[] | select(.severity == 2) | {line, column, message, ruleId}]}]'
```

## Analyzing TypeScript Errors
Extract type errors from typecheck results:
```bash
# All type errors
cat apps/web/.typecheck-results.json | jq '.errors'

# Errors by file
cat apps/web/.typecheck-results.json | jq '.summary.byFile'

# Errors by error code
cat apps/web/.typecheck-results.json | jq '.summary.byCode'

# Specific error details
cat apps/web/.typecheck-results.json | jq '.errors[] | {file, line, code, message}'
```

## Test Setup Patterns
### Integration Tests (with Database)

Use `createIntegrationTestEnvironment()` for tests that need real PostgreSQL and Payload CMS:
```typescript
import { afterAll, beforeAll, beforeEach, it } from "vitest";
import { createIntegrationTestEnvironment, withCatalog, withDataset } from "@/tests/setup/integration-test-environment";

let testEnv: Awaited<ReturnType<typeof createIntegrationTestEnvironment>>;

beforeAll(async () => {
  testEnv = await createIntegrationTestEnvironment();
});

afterAll(async () => {
  await testEnv.cleanup();
});

beforeEach(async () => {
  await testEnv.seedManager.truncate();
});

it("processes import file", async () => {
  const { catalog } = await withCatalog(testEnv);
  const { dataset } = await withDataset(testEnv, catalog.id);
  // Test with real database and Payload...
});
```

Use for: File uploads, job processing, API endpoints, access control, geospatial queries
### Unit Tests (with Mocks)
Use factories and mocks for tests that verify business logic:
```typescript
import { expect, it, vi } from "vitest";
import { createEvent } from "@/tests/setup/factories";

// Stub Payload client for logic that expects one.
const mockPayload = { findByID: vi.fn(), create: vi.fn() };

it("validates event data", () => {
  const event = createEvent({ data: { title: "Test" } });
  // validateEvent is the function under test.
  expect(validateEvent(event).valid).toBe(true);
});
```

Use for: Validation logic, coordinate parsing, schema building, date formatting, utility functions
### Component Tests (with Factories)

```tsx
import { expect, it } from "vitest";
import { renderWithProviders } from "@/tests/setup/test-utils";
import { createEvent } from "@/tests/setup/factories";
import { EventCard } from "@/components/event-card"; // component import path is illustrative

it("renders the event title", () => {
  const event = createEvent({ data: { title: "My Event" } });
  const { getByText } = renderWithProviders(<EventCard event={event} />);
  expect(getByText("My Event")).toBeInTheDocument();
});
```