Scheduled Imports
Fetch data from a URL automatically on a recurring schedule — hourly, daily, weekly, or custom cron expressions. The fetched data goes through the same pipeline as a manual file upload.
Setup
- In the Import wizard, choose URL as your data source
- Paste the URL and configure schema and field mappings as usual
- In the Schedule step, choose Repeat on schedule
- Set the frequency (e.g., every 6 hours, daily at midnight)
- Optionally add authentication headers for protected APIs
- Save
How It Works
A background job checks for due schedules every minute. When a schedule fires, it fetches the URL and runs the full import pipeline — schema detection, deduplication, geocoding, event creation.
If the source schema has changed between runs, you may be asked to approve the new mapping before processing continues.
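The due-check described above can be sketched roughly as follows. The function, field names, and interval-based scheduling are illustrative simplifications, not TimeTiles' actual internals (real schedules may use cron expressions rather than fixed intervals):

```python
from datetime import datetime, timedelta, timezone

def find_due_schedules(schedules, now):
    """Return schedules whose next run time has arrived.

    Each schedule is a dict with 'last_run' (datetime or None) and
    'interval' (timedelta) -- a simplified stand-in for the real
    frequency/cron configuration.
    """
    due = []
    for s in schedules:
        # Never-run schedules fire immediately; otherwise fire once
        # the configured interval has elapsed since the last run.
        if s["last_run"] is None or now - s["last_run"] >= s["interval"]:
            due.append(s)
    return due

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
schedules = [
    {"name": "hourly-feed", "last_run": now - timedelta(hours=2),
     "interval": timedelta(hours=1)},   # overdue, so it fires
    {"name": "daily-feed", "last_run": now - timedelta(hours=3),
     "interval": timedelta(days=1)},    # not yet due
]
print([s["name"] for s in find_due_schedules(schedules, now)])  # ['hourly-feed']
```

Each schedule that fires is then handed to the same import pipeline a manual upload would go through.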
Authentication
Scheduled imports support several methods for protected URLs:
| Method | Description |
|---|---|
| None | Public URLs, no authentication |
| API Key | Sent as a header or query parameter |
| Bearer Token | `Authorization: Bearer <token>` header |
| Basic Auth | HTTP Basic authentication |
Managing Schedules
From the Scheduled Imports section in your account:
- Pause/resume a schedule
- Trigger an immediate run
- View history of past runs
- Delete a schedule
Admins can view all schedules at /dashboard/collections/scheduled-ingests.
Webhook Triggers
Each scheduled import can optionally expose a webhook URL. POST to it to trigger an immediate run — useful for CI/CD pipelines or external automation:
curl -X POST https://your-instance.com/api/webhooks/trigger/{token}

No authentication header needed — the token in the URL is the credential. Tokens are rotated when the webhook is disabled and re-enabled.
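From a CI job or script, the same trigger can be issued programmatically. This sketch assumes only the URL pattern shown above; the helper names are hypothetical:

```python
import urllib.request

def webhook_url(base_url, token):
    """Build the trigger URL; the token is the only credential."""
    return f"{base_url}/api/webhooks/trigger/{token}"

def trigger_import(base_url, token):
    """POST to the webhook to start an immediate run.

    No Authorization header is required. Returns the HTTP status code.
    """
    req = urllib.request.Request(webhook_url(base_url, token), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (not executed here):
# trigger_import("https://your-instance.com", "my-webhook-token")
```

Because the token is the credential, treat the webhook URL as a secret: store it in your CI system's secret store rather than in the pipeline definition.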
Caching
Scheduled imports use HTTP caching (RFC 7234) to avoid re-downloading unchanged data. If the source server sends appropriate Cache-Control headers, TimeTiles respects them. See Usage Limits for cache configuration.
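A minimal sketch of the revalidation side of this, assuming the scheduler stores the validators (ETag, Last-Modified) returned by the previous fetch; the field names are illustrative. If the conditional request comes back 304 Not Modified, the source is unchanged and the import run can be skipped:

```python
def revalidation_headers(cached):
    """Build conditional-request headers from a cached response's
    validators (per RFC 7232 conditional requests).

    'cached' holds whatever ETag / Last-Modified values the previous
    fetch returned; missing validators are simply omitted.
    """
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

print(revalidation_headers({"etag": '"abc"', "last_modified": None}))
# {'If-None-Match': '"abc"'}
```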
Next Steps
- File Upload — One-time manual imports
- Scrapers — Custom scripts for non-tabular data sources
- Exploring Data — Map, timeline, filters, and sharing