# Troubleshooting

Diagnosis and recovery for the TimeTiles ingest pipeline.
## Error Model

Task handlers communicate results in three ways:
| Pattern | Meaning | Pipeline behavior |
|---|---|---|
| Throw | Transient failure (network timeout, DB deadlock, rate limit) | Payload retries the task automatically; completed tasks return cached output on retry |
| Return `{ needsReview: true }` | Human decision needed (schema drift, high duplicates, geocoding failure rate) | Pipeline pauses for that sheet only; other sheets continue |
| Return data | Success | Pipeline advances to the next task |
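A handler following this contract might look like the sketch below. The function name, input shape, and thresholds are illustrative assumptions, not the actual TimeTiles task code:

```typescript
// Illustrative task handler showing the three result patterns.
// Input/output shapes and the 80% threshold are assumptions for this sketch.
type TaskResult =
  | { needsReview: true; reason: string } // pause this sheet for review
  | { rowCount: number }; // success: pipeline advances

function detectSchemaTask(
  rows: Record<string, unknown>[],
  duplicateRate: number,
): TaskResult {
  if (rows.length === 0) {
    // Transient/environmental failure: throw so the queue retries the task.
    throw new Error("no rows fetched: source may be temporarily unavailable");
  }
  if (duplicateRate > 0.8) {
    // Human decision needed: pause this sheet only.
    return { needsReview: true, reason: "high-duplicates" };
  }
  // Success: return data so the pipeline advances to the next task.
  return { rowCount: rows.length };
}
```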
Multi-sheet workflows (`manual-ingest`, `scheduled-ingest`, `scraper-ingest`) wrap each sheet in a try/catch inside `Promise.allSettled`. A failure in one sheet does not block the others. The `markSheetFailed` function marks the individual ingest job as `FAILED` with error details. The parent import file status reflects the aggregate state across all sheets.
The single-job `ingest-process` workflow (queued after NEEDS_REVIEW approval) does not use per-sheet isolation. Errors propagate normally and Payload's `onFail` callback fires as expected.
Note: Payload's `onFail` callback does not fire when errors are caught by `Promise.allSettled`. Multi-sheet workflows use explicit `markSheetFailed` instead.
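The per-sheet isolation pattern can be sketched as follows. The `SheetOutcome` shape and the inline stand-in for `markSheetFailed` are assumptions for illustration, not the real implementation:

```typescript
// Sketch of per-sheet isolation: each sheet gets its own try/catch inside
// Promise.allSettled, so one failing sheet never blocks the others.
type SheetOutcome = { sheet: string; status: "COMPLETED" | "FAILED"; error?: string };

async function processSheets(
  sheets: string[],
  processSheet: (sheet: string) => Promise<void>,
): Promise<SheetOutcome[]> {
  const settled = await Promise.allSettled(
    sheets.map(async (sheet): Promise<SheetOutcome> => {
      try {
        await processSheet(sheet);
        return { sheet, status: "COMPLETED" };
      } catch (err) {
        // Stand-in for markSheetFailed: record FAILED with error details.
        return { sheet, status: "FAILED", error: String(err) };
      }
    }),
  );
  // Every promise fulfills because errors are caught inside the callback,
  // which is also why Payload's onFail callback never fires here.
  return settled.map((s) => (s as PromiseFulfilledResult<SheetOutcome>).value);
}
```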
## Common Issues

### NEEDS_REVIEW
Symptoms: Ingest job shows NEEDS_REVIEW status. Admin interface shows pending reviews.
Review reasons (defined in `review-checks.ts`):
| Reason | Trigger | Resume point |
|---|---|---|
| `schema-drift` | Breaking schema changes in automated import | `create-schema-version` |
| `quota-exceeded` | Import would exceed the user's event quota | `detect-schema` |
| `high-duplicates` | Duplicate rate exceeds 80% | `detect-schema` |
| `geocoding-partial` | Geocoding failure rate exceeds 50% | `create-events` |
Breaking vs non-breaking schema changes:
- Breaking (require approval): field type changes, required fields removed, constraint narrowing, date format changes, enum value restrictions
- Non-breaking (can auto-approve if `autoApproveNonBreaking` is set): new optional fields, constraint expansion, enum additions, type generalization
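The auto-approval decision reduces to "every detected change is non-breaking and the flag is set". A minimal sketch, assuming hypothetical change-kind names (the real identifiers in `review-checks.ts` may differ):

```typescript
// Change kinds mirror the breaking list above; the names are illustrative.
const BREAKING_CHANGES = new Set([
  "field-type-changed",
  "required-field-removed",
  "constraint-narrowed",
  "date-format-changed",
  "enum-value-removed",
]);

// Any breaking change forces NEEDS_REVIEW regardless of the flag.
function canAutoApprove(changeKinds: string[], autoApproveNonBreaking: boolean): boolean {
  return autoApproveNonBreaking && changeKinds.every((k) => !BREAKING_CHANGES.has(k));
}
```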
Resolution options:
- Review and approve changes in the admin interface (sets `schemaValidation.approved = true`)
- Enable `autoGrow` on the dataset for future auto-approval of non-breaking changes
- Configure type transformations to handle known mismatches
- Reject and fix source data
### Duplicate Events
Symptoms: Same event appears multiple times, or expected duplicates are not detected.
Check these settings on the dataset:
- `idStrategy.type` — is it `external` (field-based) or `computed` (hash)?
- For external IDs: verify the field path is correct (case-sensitive) and the field exists in all rows
- For computed hashes: verify the selected fields create unique combinations and avoid unstable fields (timestamps, counters)
- `deduplicationConfig.enabled` — is deduplication on?
- The `duplicates` summary on the ingest job shows what was detected
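The computed strategy boils down to hashing only the selected, stable fields. A sketch under that assumption (field names are examples, not the real dataset configuration):

```typescript
import { createHash } from "node:crypto";

// Computed (hash-based) ID sketch: hash only stable, selected fields so
// re-imports of the same logical row always produce the same ID.
function computedId(row: Record<string, unknown>, fields: string[]): string {
  // Join field name and value so ("a","bc") and ("ab","c") cannot collide.
  const parts = fields.map((f) => `${f}=${String(row[f] ?? "")}`);
  return createHash("sha256").update(parts.join("|")).digest("hex");
}
```

Including an unstable field such as a fetch timestamp would change the hash on every import and silently defeat deduplication, which is why the checklist above warns against it.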
Resolution: Fix the ID strategy configuration, add more fields to the computed hash, or re-import after deleting duplicates.
### Geocoding Failures
Symptoms: Events created without coordinates. Geocoding stage shows errors.
Field detection issues:
- Address field name does not match common patterns — add a manual field mapping override via `geocodingCandidates` on the ingest job
- Latitude/longitude field names are non-standard, values are outside valid ranges (-90 to 90, -180 to 180), or fields contain non-numeric data
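The range and type checks above can be expressed as a small validator. This is a sketch of the rule, not the pipeline's actual detection code:

```typescript
// Reject non-numeric input; guard the empty string, since Number("") is 0.
function toFiniteNumber(v: unknown): number | null {
  if (typeof v === "string" && v.trim() === "") return null;
  const n = Number(v);
  return Number.isFinite(n) ? n : null;
}

// Valid ranges: latitude -90..90, longitude -180..180.
function isValidCoordinate(lat: unknown, lng: unknown): boolean {
  const la = toFiniteNumber(lat);
  const ln = toFiniteNumber(lng);
  return la !== null && ln !== null && la >= -90 && la <= 90 && ln >= -180 && ln <= 180;
}
```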
Provider issues:
- Invalid or expired API key (check geocoding provider configuration in Settings global)
- Rate limit exceeded (the `geocode-batch` task throws on 429 responses, so Payload retries automatically)
- Provider service outage
Resolution: Fix API configuration, add manual field mappings, or switch geocoding providers.
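The throw-on-429 behavior maps rate limits onto the error model's first row: a transient failure becomes a thrown error, so the queue's retry machinery handles backoff. A sketch with an injected, hypothetical `geocodeOne` helper (the real task and provider client differ):

```typescript
type GeoResult = { status: number; lat?: number; lng?: number };

// Sketch of the throw-on-429 pattern described above.
async function geocodeBatch(
  addresses: string[],
  geocodeOne: (addr: string) => Promise<GeoResult>,
): Promise<{ addr: string; lat?: number; lng?: number }[]> {
  const results = [];
  for (const addr of addresses) {
    const res = await geocodeOne(addr);
    if (res.status === 429) {
      // Transient by definition: throwing lets the task be retried automatically.
      throw new Error(`rate limited while geocoding "${addr}"`);
    }
    results.push({ addr, lat: res.lat, lng: res.lng });
  }
  return results;
}
```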
### Schema Conflicts
Symptoms: Schema validation errors, type mismatch errors during import.
Common causes:
- Source data types changed between imports (e.g., a numeric field now contains strings)
- New fields in source data but `autoGrow` is disabled on the dataset schema
- Values exceed existing constraints (min/max, maxLength, enum set)
- Schema is locked (`schemaConfig.locked = true`)
Resolution: Add type transformations for known mismatches, enable `autoGrow`, relax constraints, or approve schema changes via the NEEDS_REVIEW flow.
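For the common "numeric field now arrives as strings" case, a type transformation is essentially a coercion that falls back to the original value. A sketch of the idea, assuming a hypothetical helper rather than the real transformation config format:

```typescript
// Coerce numeric strings back to numbers; leave everything else untouched
// so genuinely non-numeric values still surface as validation errors.
function coerceNumeric(value: unknown): unknown {
  if (typeof value === "string") {
    const trimmed = value.trim();
    const n = Number(trimmed);
    if (trimmed !== "" && Number.isFinite(n)) return n;
  }
  return value;
}
```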
## Debugging Tools

### Version History
Open the ingest job in the admin interface and navigate to the “Versions” tab. Each stage transition is recorded with timestamps. Look for:
- Long gaps between transitions (bottlenecks)
- Stage transitions that failed and were retried
- Changes in progress, errors, or validation results
### Error Logs
- `ingestJob.errorLog` — the last failure context (message, context, timestamp), set by `markSheetFailed`
- `ingestJob.reviewReason` and `ingestJob.reviewDetails` — why NEEDS_REVIEW was triggered
- Application logs filtered by ingest job ID show the full processing trace
### Database Queries
Direct inspection via the admin interface or database:
- Stuck imports: query `ingest-jobs` where `stage` is not `COMPLETED` or `FAILED` and `updatedAt` is stale
- Schema history: check `dataset-schemas` for version progression by dataset
- Event counts: count events by the `ingestJob` relation to verify expected totals
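The stuck-import query can be built as a where clause. The operator names follow Payload's query syntax (`not_in`, `less_than`); treat the exact field names and the staleness window as assumptions about the `ingest-jobs` collection:

```typescript
// Builds a Payload-style where clause for jobs that appear stuck:
// not terminal, and not updated within the last `staleMinutes`.
function stuckJobsWhere(staleMinutes: number, now: Date = new Date()) {
  const cutoff = new Date(now.getTime() - staleMinutes * 60_000);
  return {
    and: [
      { stage: { not_in: ["COMPLETED", "FAILED"] } },
      { updatedAt: { less_than: cutoff.toISOString() } },
    ],
  };
}
```

Usage might look like `payload.find({ collection: "ingest-jobs", where: stuckJobsWhere(30) })`.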
## Recovery

### Retry a Failed Job
Endpoint: `POST /api/ingest-jobs/{id}/retry`
Re-queues the `ingest-process` workflow starting from `detect-schema`. Only works when the job is in the `FAILED` state. Previously completed tasks return cached output, so no work is repeated.
When to use: After fixing the underlying issue (API key, permissions, network) for a job that exhausted automatic retries.
### Reset to a Specific Stage
Endpoint: `POST /api/ingest-jobs/{id}/reset`
Auth: Admin only.
Body: `{ "targetStage": "detect-schema" | "analyze-duplicates" | "validate-schema" | "geocode-batch" | "create-events", "clearRetries": true }`
Resets the job stage and queues `ingest-process` from the corresponding resume point. The `clearRetries` flag (default `true`) clears the error log.
Cautions: Resetting clears progress from later stages. Does not delete already-created events. Ensure idempotency before resetting to a stage that writes data.
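A caller-side sketch of the reset request described above. The endpoint and body come from this document; the helper itself is illustrative:

```typescript
// Valid resume points for the reset endpoint, per the body schema above.
type ResetStage =
  | "detect-schema"
  | "analyze-duplicates"
  | "validate-schema"
  | "geocode-batch"
  | "create-events";

// Builds the request; pass the result to fetch() or your HTTP client.
function resetRequest(jobId: string, targetStage: ResetStage, clearRetries = true) {
  return {
    url: `/api/ingest-jobs/${jobId}/reset`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ targetStage, clearRetries }),
  };
}
```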
### NEEDS_REVIEW Resolution
1. User reviews the issue in the admin interface (schema changes, duplicate rates, quota)
2. User approves (sets `schemaValidation.approved = true`)
3. The `ingest-jobs` afterChange hook queues the `ingest-process` workflow automatically
4. The workflow resumes from the appropriate point based on `reviewReason`
Quota-exceeded approvals require admin role.
### Scheduled Import Failures
Each scheduled run creates an independent workflow instance. Failed workflows do not block future scheduled runs. The next scheduled trigger creates a fresh workflow from scratch (URL fetch, schema detection, full pipeline).
If a scheduled import consistently fails, check the source URL, API keys, and schema configuration rather than retrying the failed workflow.