Coniglio
Repository: gitlab.com/publicala/coniglio
Coniglio is Publica.la's event-tracking and session-analytics backend. It ingests millions of client-side tracking events daily, validates and stores them, then aggregates them into reading and listening sessions that other services (for example Farfalla) can query for analytics purposes.
Overview
Coniglio provides:
- HTTP ingestion endpoints for high-volume tracking events
- Validation, storage, and aggregation of events into user sessions
- Analytics data APIs for other services to consume
- Tenant-aware data isolation and aggregation logic
- Environment toggles to pause heavy aggregation during maintenance windows
Technology Stack
| Layer | Technology |
|---|---|
| Runtime | PHP 8.3 |
| Framework | Laravel 10 |
| Database | SingleStoreDB (columnstore, sharded) |
| Queues | Redis/SQS (Horizon locally, SQS via Vapor in staging and production) |
| Testing | Pest (run via Composer) |
| Static Analysis | PHPStan / Larastan, Pint |
| Observability | Sentry |
| Deployment | Laravel Vapor (serverless on AWS) |
High-Level Data Flow
- Ingestion: `api/v1/track/*` routes hit `TrackEventHandler` for individual events; `api/v1/track/batch` accepts arrays of events for offline sync scenarios (see Offline Analytics).
- Validation: the controller dispatches a `ValidateTrackRequest` job. Batch events are validated individually with UUID-based deduplication.
- Event Dispatch: the job fires Laravel events (`SessionStart`, `Heartbeat`, ...).
- Storage: the `TrackEventStorer` listener inserts raw events into `track_events`.
- Aggregation Scheduler: `TrackEventsAggregator` runs every minute (configurable).
  - Calculates a safe unprocessed time window (now - 5 s).
  - Locks the range with `RaceConditionMiddleware` to avoid overlapping runs.
  - Creates a batch of `TenantTrackEventsAggregator` jobs (one per tenant).
- Tenant Aggregation: each job pages through events and feeds them to `SessionProjector`, creating or updating rows in `sessions`.
- Data Access: analytics endpoints expose session data for other services to query.
Volpe no longer sends events directly to Coniglio. Events flow through Delfino RPC to the host application (Farfalla for web, Fenice for mobile and desktop). Farfalla forwards events directly; Fenice adds connectivity detection and offline queuing. See Offline Analytics for the full architecture.
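The UUID-based deduplication that lets offline clients safely re-send a queued batch can be sketched as follows; a hedged Python illustration rather than the actual PHP implementation, with `dedupe_events` as a made-up name:

```python
def dedupe_events(batch: list[dict], seen_uuids: set[str]) -> list[dict]:
    """Keep only events whose client-generated UUID has not been seen yet,
    so a client can re-send a whole batch after a failed sync without
    creating duplicate rows."""
    fresh = []
    for event in batch:
        uid = event["uuid"]
        if uid in seen_uuids:
            continue  # duplicate from a retried batch; drop it silently
        seen_uuids.add(uid)
        fresh.append(event)
    return fresh
```

In practice the set of seen UUIDs would be backed by storage rather than held in memory.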
Environment Toggles
| Variable | Purpose |
|---|---|
| `TRACK_EVENTS_PROCESSING_AGGREGATION_ENABLED` | Enable/disable aggregation jobs |
| `TRACK_EVENTS_PROCESSING_SCHEDULED_AGGREGATION` | Run the aggregator via the scheduler |
| `TRACK_EVENTS_PROCESSING_SCHEDULED_SELF_HEALING` | Run stuck-window recovery hourly |
A time-based check in `Kernel.php` automatically pauses aggregation every Tuesday 10:00-12:00 UTC, providing a maintenance window.
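How the env toggle and the Tuesday maintenance window might combine can be sketched in Python (the real check lives in Laravel's `Kernel.php`; `aggregation_enabled` is an illustrative name, and the time is assumed to already be in UTC):

```python
import os
from datetime import datetime

def aggregation_enabled(now_utc: datetime) -> bool:
    """True when the env toggle is on and we are outside the
    Tuesday 10:00-12:00 UTC maintenance window."""
    toggle = os.getenv("TRACK_EVENTS_PROCESSING_AGGREGATION_ENABLED", "true")
    if toggle != "true":
        return False  # aggregation explicitly disabled
    # weekday() == 1 is Tuesday; block the 10:00-11:59 UTC hours.
    in_maintenance = now_utc.weekday() == 1 and 10 <= now_utc.hour < 12
    return not in_maintenance
```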
Local Development Quick-Start
```bash
cp .env.example .env && composer install
php artisan horizon        # or set QUEUE_CONNECTION=sync
php artisan schedule:work  # run the scheduler
./vendor/bin/pest          # execute tests
# Optionally seed fixtures
```
Deployment
CI/CD is handled by GitLab:
- `.gitlab-ci.yml` runs lint, static analysis, and tests.
- Artifacts are deployed via Laravel Vapor using `vapor.production.yml` and `vapor.staging.yml`.
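A hedged sketch of the pipeline's general shape; the stage and job names below are illustrative, not copied from the actual `.gitlab-ci.yml`:

```yaml
# Illustrative shape only; real job names and commands may differ.
stages: [lint, analyse, test, deploy]

pint:
  stage: lint
  script: [./vendor/bin/pint --test]

phpstan:
  stage: analyse
  script: [./vendor/bin/phpstan analyse]

pest:
  stage: test
  script: [./vendor/bin/pest]

deploy:
  stage: deploy
  script: [vapor deploy production]
  only: [main]
```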
How It Scales
- Append-Only Writes: the `track_events` table receives inserts only.
- Windowed Aggregation: lock-based time windows eliminate per-row flags.
- Tenant Partitioning: workload is split per tenant for linear scalability.
- Paged Processing: aggregators paginate to keep jobs short-lived.
- Autoscaling: Horizon and Vapor balance worker counts to traffic; SingleStore columnstore optimizes large analytic reads.
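The paged-processing idea can be illustrated with a small Python sketch (hypothetical names; the real aggregator is a Laravel job paginating database queries):

```python
def aggregate_tenant(events: list[dict], page_size: int = 500) -> dict[str, int]:
    """Fold one tenant's events into per-session event counts, one page
    at a time, so each unit of work stays small and short-lived."""
    sessions: dict[str, int] = {}
    for offset in range(0, len(events), page_size):
        for event in events[offset:offset + page_size]:  # one page
            sid = event["session_id"]
            sessions[sid] = sessions.get(sid, 0) + 1
    return sessions
```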
With this architecture Coniglio delivers reliable, near-real-time session analytics while maintaining linear throughput as event volume grows.
Related Systems
Coniglio provides analytics data to:
- Farfalla: provides dashboards, reports, and exports by accessing Coniglio's database directly or through its API endpoints
- Medusa: triggers and insights for content automations
- Other services: ecosystem-wide analytics