
Troubleshooting

Diagnosis and recovery for the TimeTiles ingest pipeline.

Error Model

Task handlers communicate results in three ways:

| Pattern | Meaning | Pipeline behavior |
| --- | --- | --- |
| Throw | Transient failure (network timeout, DB deadlock, rate limit) | Payload retries the task automatically; completed tasks return cached output on retry |
| Return { needsReview: true } | Human decision needed (schema drift, high duplicates, geocoding failure rate) | Pipeline pauses for that sheet only; other sheets continue |
| Return data | Success | Pipeline advances to the next task |

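To make the table concrete, here is a minimal sketch of a task handler expressing all three outcomes. The handler name, input shape, and output fields are illustrative assumptions, not the actual TimeTiles code; only the 80% duplicate threshold comes from this page.

```typescript
// Minimal sketch only — handler name, input shape, and output fields are
// illustrative; the 80% threshold matches the high-duplicates rule below.
interface SheetInput {
  rows: Record<string, unknown>[];
  duplicateRate: number;
}

export const analyzeSheetSketch = async ({ input }: { input: SheetInput }) => {
  // 1. Throw — transient failures are deliberately not caught here, so
  //    Payload retries the task; already-completed tasks reuse cached output.
  if (input.rows.length === 0) {
    throw new Error('Source returned no rows — treat as transient and retry');
  }

  // 2. Return { needsReview: true } — pause this sheet for a human decision.
  if (input.duplicateRate > 0.8) {
    return { output: { needsReview: true, reviewReason: 'high-duplicates' } };
  }

  // 3. Return data — success; the pipeline advances to the next task.
  return { output: { fieldNames: Object.keys(input.rows[0]) } };
};
```
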
Multi-sheet workflows (manual-ingest, scheduled-ingest, scraper-ingest) wrap each sheet in a try/catch inside Promise.allSettled. A failure in one sheet does not block others. The markSheetFailed function marks the individual ingest job as FAILED with error details. The parent import file status reflects the aggregate state across all sheets.

The single-job ingest-process workflow (queued after NEEDS_REVIEW approval) does not use per-sheet isolation. Errors propagate normally and Payload’s onFail callback fires as expected.

Note: Payload’s onFail callback does not fire when errors are caught by Promise.allSettled. Multi-sheet workflows use explicit markSheetFailed instead.
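The per-sheet isolation looks roughly like the sketch below. The processSheet helper and this markSheetFailed signature are stand-ins; only the overall shape (a try/catch per sheet inside Promise.allSettled, with explicit failure marking) is what this page describes.

```typescript
// Rough shape of per-sheet isolation; processSheet and this markSheetFailed
// signature are stand-ins, not the actual TimeTiles implementation.
async function processSheet(sheetId: string): Promise<void> {
  // run the per-sheet pipeline tasks for this sheet
}

async function markSheetFailed(sheetId: string, error: unknown): Promise<void> {
  // set this sheet's ingest job to FAILED and store the error details
}

export async function runAllSheets(sheetIds: string[]): Promise<void> {
  await Promise.allSettled(
    sheetIds.map(async (sheetId) => {
      try {
        await processSheet(sheetId);
      } catch (error) {
        // Payload's onFail never fires for errors swallowed here,
        // so the failure must be recorded explicitly.
        await markSheetFailed(sheetId, error);
      }
    }),
  );
  // The parent import file status is derived from the per-sheet job states.
}
```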

Common Issues

NEEDS_REVIEW

Symptoms: Ingest job shows NEEDS_REVIEW status. Admin interface shows pending reviews.

Review reasons (defined in review-checks.ts):

| Reason | Trigger | Resume point |
| --- | --- | --- |
| schema-drift | Breaking schema changes in automated import | create-schema-version |
| quota-exceeded | Import would exceed user’s event quota | detect-schema |
| high-duplicates | Duplicate rate exceeds 80% | detect-schema |
| geocoding-partial | Geocoding failure rate exceeds 50% | create-events |

Breaking vs non-breaking schema changes:

  • Breaking (require approval): field type changes, required fields removed, constraint narrowing, date format changes, enum value restrictions
  • Non-breaking (can auto-approve if autoApproveNonBreaking is set): new optional fields, constraint expansion, enum additions, type generalization
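A concrete example with invented fields: changing a field’s type or date format is breaking, while adding an optional field only grows the schema.

```typescript
// Invented fields, purely to illustrate the distinction above.
const currentSchema = {
  attendees: { type: 'number', required: true },
  date: { type: 'string', format: 'YYYY-MM-DD' },
};

// Breaking: attendees switches to string and the date format changes —
// this requires approval via NEEDS_REVIEW.
const breakingVersion = {
  attendees: { type: 'string', required: true },
  date: { type: 'string', format: 'DD.MM.YYYY' },
};

// Non-breaking: existing fields are untouched and one optional field is
// added — eligible for auto-approval when autoApproveNonBreaking is set.
const nonBreakingVersion = {
  ...currentSchema,
  organizer: { type: 'string', required: false },
};
```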

Resolution options:

  1. Review and approve changes in the admin interface (sets schemaValidation.approved = true)
  2. Enable autoGrow on the dataset for future auto-approval of non-breaking changes
  3. Configure type transformations to handle known mismatches
  4. Reject and fix source data

Duplicate Events

Symptoms: Same event appears multiple times, or expected duplicates are not detected.

Check these settings on the dataset:

  • idStrategy.type — is it external (field-based) or computed (hash)?
  • For external IDs: verify the field path is correct (case-sensitive) and the field exists in all rows
  • For computed hashes: verify the selected fields create unique combinations and avoid unstable fields (timestamps, counters)
  • deduplicationConfig.enabled — is deduplication on?
  • The duplicates summary on the ingest job shows what was detected

Resolution: Fix the ID strategy configuration, add more fields to the computed hash, or re-import after deleting duplicates.
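As an illustration of the checklist above, two hypothetical dataset configurations. The field names follow this page, but the exact shapes in the datasets collection may differ.

```typescript
// Hypothetical dataset configurations — field names follow the checklist
// above; the exact shapes in the datasets collection may differ.
const externalIdDataset = {
  idStrategy: {
    type: 'external',
    // Case-sensitive path into each row; the field must exist in every row.
    externalIdPath: 'properties.case_id',
  },
  deduplicationConfig: { enabled: true },
};

const computedIdDataset = {
  idStrategy: {
    type: 'computed',
    // Choose fields whose combination is unique and stable; avoid
    // timestamps or counters that change between imports.
    hashFields: ['title', 'venue', 'date'],
  },
  deduplicationConfig: { enabled: true },
};
```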

Geocoding Failures

Symptoms: Events created without coordinates. Geocoding stage shows errors.

Field detection issues:

  • Address field name does not match common patterns — add manual field mapping override via geocodingCandidates on the ingest job
  • Latitude/longitude field names are non-standard, values are outside valid ranges (-90 to 90, -180 to 180), or fields contain non-numeric data

Provider issues:

  • Invalid or expired API key (check geocoding provider configuration in Settings global)
  • Rate limit exceeded (the geocode-batch task throws on 429 responses, so Payload retries automatically)
  • Provider service outage

Resolution: Fix API configuration, add manual field mappings, or switch geocoding providers.
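The latitude/longitude range check mentioned under field detection amounts to something like this sketch; the helper and field handling are illustrative, not the actual detection code.

```typescript
// Sketch of the latitude/longitude range check; helper name and field
// handling are illustrative, not the actual field-detection code.
function isValidCoordinate(lat: unknown, lng: unknown): boolean {
  const latNum = typeof lat === 'string' ? Number(lat) : lat;
  const lngNum = typeof lng === 'string' ? Number(lng) : lng;
  if (typeof latNum !== 'number' || typeof lngNum !== 'number') return false;
  if (Number.isNaN(latNum) || Number.isNaN(lngNum)) return false;
  // Valid ranges: latitude -90 to 90, longitude -180 to 180.
  return latNum >= -90 && latNum <= 90 && lngNum >= -180 && lngNum <= 180;
}

console.log(isValidCoordinate('52.52', '13.405')); // true
console.log(isValidCoordinate(123.4, 56.7));       // false — latitude out of range
```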

Schema Conflicts

Symptoms: Schema validation errors, type mismatch errors during import.

Common causes:

  • Source data types changed between imports (e.g., a numeric field now contains strings)
  • New fields in source data but autoGrow is disabled on the dataset schema
  • Values exceed existing constraints (min/max, maxLength, enum set)
  • Schema is locked (schemaConfig.locked = true)

Resolution: Add type transformations for known mismatches, enable autoGrow, relax constraints, or approve schema changes via NEEDS_REVIEW flow.

Debugging Tools

Version History

Open the ingest job in the admin interface and navigate to the “Versions” tab. Each stage transition is recorded with timestamps. Look for:

  • Long gaps between transitions (bottlenecks)
  • Stage transitions that failed and were retried
  • Changes in progress, errors, or validation results

Error Logs

  • ingestJob.errorLog — the last failure context (message, context, timestamp), set by markSheetFailed
  • ingestJob.reviewReason and ingestJob.reviewDetails — why NEEDS_REVIEW was triggered
  • Application logs filtered by ingest job ID show the full processing trace

Database Queries

Direct inspection via the admin interface or database:

  • Stuck imports: Query ingest-jobs where stage is not COMPLETED or FAILED and updatedAt is stale
  • Schema history: Check dataset-schemas for version progression by dataset
  • Event counts: Count events by ingestJob relation to verify expected totals
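The “stuck imports” query can be run from a script via Payload’s local API. The collection slug and field names follow this page; the stage values and staleness threshold are assumptions.

```typescript
// Sketch of the "stuck imports" query via Payload's local API; collection slug
// and field names follow this page, stage values and threshold are assumptions.
import { getPayload } from 'payload';
import config from '@payload-config';

const payload = await getPayload({ config });

const oneHourAgo = new Date(Date.now() - 60 * 60 * 1000).toISOString();

const stuck = await payload.find({
  collection: 'ingest-jobs',
  where: {
    and: [
      { stage: { not_in: ['COMPLETED', 'FAILED'] } },
      { updatedAt: { less_than: oneHourAgo } }, // "stale" threshold is arbitrary here
    ],
  },
  limit: 50,
});

console.log(`${stuck.totalDocs} ingest jobs look stuck`);
```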

Recovery

Retry a Failed Job

Endpoint: POST /api/ingest-jobs/{id}/retry

Re-queues the ingest-process workflow starting from detect-schema. Only works when the job is in FAILED state. Previously completed tasks return cached output, so no work is repeated.

When to use: After fixing the underlying issue (API key, permissions, network) for a job that exhausted automatic retries.
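A minimal call from an authenticated admin session might look like this; the auth handling is an assumption about your setup.

```typescript
// Minimal sketch — assumes an existing admin session cookie in the browser.
const jobId = 'REPLACE_WITH_INGEST_JOB_ID';
const res = await fetch(`/api/ingest-jobs/${jobId}/retry`, {
  method: 'POST',
  credentials: 'include',
});
if (!res.ok) {
  throw new Error(`Retry failed (${res.status}) — is the job in FAILED state?`);
}
```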

Reset to a Specific Stage

Endpoint: POST /api/ingest-jobs/{id}/reset

Auth: Admin only.

Body: { "targetStage": "detect-schema" | "analyze-duplicates" | "validate-schema" | "geocode-batch" | "create-events", "clearRetries": true }

Resets the job stage and queues ingest-process from the corresponding resume point. The clearRetries flag (default true) clears the error log.
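For example, re-running geocoding from the geocode-batch stage (admin session assumed; body fields follow the spec above):

```typescript
// Sketch — admin session assumed; body fields follow the spec above.
const jobId = 'REPLACE_WITH_INGEST_JOB_ID';
const res = await fetch(`/api/ingest-jobs/${jobId}/reset`, {
  method: 'POST',
  credentials: 'include',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ targetStage: 'geocode-batch', clearRetries: true }),
});
if (!res.ok) throw new Error(`Reset failed (${res.status})`);
```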

Cautions: Resetting clears progress from later stages. Does not delete already-created events. Ensure idempotency before resetting to a stage that writes data.

NEEDS_REVIEW Resolution

  1. User reviews the issue in the admin interface (schema changes, duplicate rates, quota)
  2. User approves (sets schemaValidation.approved = true)
  3. The ingest-jobs afterChange hook queues the ingest-process workflow automatically
  4. The workflow resumes from the appropriate point based on reviewReason

Quota-exceeded approvals require admin role.
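Approval normally happens in the admin interface, but the equivalent update via the local API would look roughly like this. The field path follows step 2 above; whether the rest of the schemaValidation group needs to be included depends on how that group is defined, so treat this as a sketch.

```typescript
// Rough equivalent of approving in the admin UI via Payload's local API.
// Field path follows step 2 above; other schemaValidation subfields may
// need to be included depending on the collection definition.
import { getPayload } from 'payload';
import config from '@payload-config';

const payload = await getPayload({ config });

await payload.update({
  collection: 'ingest-jobs',
  id: 'REPLACE_WITH_INGEST_JOB_ID',
  data: { schemaValidation: { approved: true } },
});
// The afterChange hook then queues ingest-process, which resumes from the
// resume point associated with reviewReason.
```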

Scheduled Import Failures

Each scheduled run creates an independent workflow instance. Failed workflows do not block future scheduled runs. The next scheduled trigger creates a fresh workflow from scratch (URL fetch, schema detection, full pipeline).

If a scheduled import consistently fails, check the source URL, API keys, and schema configuration rather than retrying the failed workflow.
