Payroll Engine Performance
This page covers the configuration options and tools for optimizing payrun execution performance.
Contents
| Section | Description |
|---|---|
| Parallel Employee Processing | MaxParallelEmployees settings and thread safety |
| Asynchronous Payrun Jobs | Background queue, HTTP 202, client polling |
| Employee Timing Logs | Per-employee duration logging |
| Bulk Data Import | SqlBulkCopy bulk endpoints |
| Rate Limiting | Global and payrun job policies |
| Load Testing | Generate, setup, execute — CSV timing report |
| Retro Period Limit | MaxRetroPayrunPeriods safety guard |
| Performance Checklist | Key settings summary |
Parallel Employee Processing
By default, the payrun processes employees sequentially. The MaxParallelEmployees setting enables parallel processing to reduce total execution time for large payrolls.
| Value | Behavior |
|---|---|
| 0 or off | Sequential processing (default) |
| half | Half of available CPU cores |
| max | All available CPU cores |
| -1 | Automatic (runtime decides) |
| 1–N | Explicit thread count |
Each employee is processed within an isolated PayrunEmployeeScope that provides mutable state separation. The payroll calculator cache uses Lazy<T> with a composite key (calendar + culture) for thread-safe reuse across employees. Progress reporting is thread-safe with batched database persistence (every 10 employees).
Configuration in appsettings.json:
```json
{
  "MaxParallelEmployees": "half"
}
```
Sequential processing remains the default for deterministic behavior. Enable parallel processing only after verifying that your regulation scripts do not share mutable state across employees.
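The composite-key calculator cache described above can be sketched as follows. This is a simplified Python stand-in for the engine's Lazy&lt;T&gt; cache, assuming a factory callable in place of real calculator construction; the class and names are illustrative, not the engine's API.

```python
import threading

class LazyCalculatorCache:
    """Thread-safe cache of payroll calculators keyed by (calendar, culture).

    Mirrors the engine's composite-key reuse: each key's factory runs at
    most once, and concurrent employee scopes share the same instance.
    Unlike .NET's Lazy<T>, this sketch holds the lock during construction.
    """

    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._cache = {}

    def get(self, calendar, culture):
        key = (calendar, culture)
        with self._lock:
            if key not in self._cache:
                # first request for this calendar/culture builds the calculator
                self._cache[key] = self._factory(calendar, culture)
            return self._cache[key]

# usage: employees with the same calendar and culture share one calculator
cache = LazyCalculatorCache(lambda cal, cul: object())
a = cache.get("Monthly", "de-CH")
b = cache.get("Monthly", "de-CH")
assert a is b
```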
Asynchronous Payrun Jobs
For large payrolls (500+ employees), the payrun job endpoint uses asynchronous processing to prevent HTTP timeout errors. The endpoint returns HTTP 202 Accepted immediately and processes the job in the background.
The processing pipeline:
1. The payrun job is pre-created and persisted with status Process.
2. The job is enqueued into a bounded channel (capacity: 100) for backpressure control.
3. A background worker dequeues and processes jobs.
4. On completion or abort, a webhook notification is sent.
See Payrun Model for details on the processing pipeline and client polling pattern.
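The client side of this pattern reduces to a poll loop. The sketch below simulates it without HTTP; in a real client, `get_status` would issue GET requests against the job resource referenced by the HTTP 202 response. Function name, interval, and status strings other than `Process` are assumptions for illustration.

```python
import time

def poll_job(get_status, interval=0.01, timeout=1.0):
    """Poll an async payrun job until it leaves the 'Process' state.

    get_status is any callable returning the current job status string.
    Raises TimeoutError if the job does not settle within the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status != "Process":
            return status          # job completed or aborted
        time.sleep(interval)       # back off before the next poll
    raise TimeoutError("payrun job did not finish in time")

# usage with a simulated job that completes on the third poll
states = iter(["Process", "Process", "Complete"])
assert poll_job(lambda: next(states)) == "Complete"
```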
Employee Timing Logs
When LogEmployeeTiming is enabled, the engine logs per-employee processing duration at the Information log level. The summary includes:
- Processing mode (sequential or parallel)
- Total processing time
- Average time per employee
This helps identify slow employees caused by complex regulation scripts, large case value histories, or expensive lookup queries.
Configuration in appsettings.json:
```json
{
  "LogEmployeeTiming": true
}
```
Bulk Data Import
Bulk endpoints use SqlBulkCopy for high-throughput data import, processing items in chunks of 5,000. This is significantly faster than individual REST calls for large data volumes.
Available bulk endpoints:
- `POST .../employees/bulk` — bulk employee creation
- `POST .../lookups/sets` — bulk lookup value import via `LookupSet`
- `POST .../cases/bulk` — bulk case change import (via Client SDK)
See API Usage for endpoint details.
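The chunking itself is straightforward; a minimal sketch of splitting an import payload into the 5,000-item batches mentioned above (the function is illustrative, not part of the Client SDK):

```python
def chunk(items, size=5000):
    """Split a list into fixed-size chunks, mirroring how the bulk
    endpoints feed SqlBulkCopy in batches of 5,000 rows."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 12,000 employees become two full batches and one remainder batch
batches = chunk(list(range(12000)))
assert [len(b) for b in batches] == [5000, 5000, 2000]
```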
Rate Limiting
Rate limiting protects the backend from overload. Two policies are available:
- Global policy — applies to all endpoints
- Payrun job policy — dedicated limit for the payrun job start endpoint
Each policy is configured with PermitLimit (maximum requests) and WindowSeconds (time window).
Configuration in appsettings.json:
```json
{
  "RateLimiting": {
    "Global": {
      "PermitLimit": 100,
      "WindowSeconds": 60
    },
    "PayrunJob": {
      "PermitLimit": 10,
      "WindowSeconds": 60
    }
  }
}
```
Rate limiting is inactive by default. See Security for additional context.
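The PermitLimit/WindowSeconds semantics correspond to a fixed-window limiter. The sketch below illustrates the policy behavior; it is not the engine's actual rate-limiting middleware, and the class name is invented.

```python
import time

class FixedWindowLimiter:
    """Minimal fixed-window rate limiter with PermitLimit / WindowSeconds
    semantics: up to permit_limit requests per window, then rejection."""

    def __init__(self, permit_limit, window_seconds, clock=time.monotonic):
        self.permit_limit = permit_limit
        self.window_seconds = window_seconds
        self._clock = clock
        self._window_start = clock()
        self._count = 0

    def try_acquire(self):
        now = self._clock()
        if now - self._window_start >= self.window_seconds:
            self._window_start = now   # start a fresh window
            self._count = 0
        if self._count < self.permit_limit:
            self._count += 1
            return True
        return False                   # reject: window limit reached

# payrun job policy: 10 requests per 60-second window
limiter = FixedWindowLimiter(permit_limit=10, window_seconds=60)
results = [limiter.try_acquire() for _ in range(11)]
assert results == [True] * 10 + [False]
```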
Load Testing
The Payroll Console includes built-in commands for payrun performance benchmarking. A load test follows four steps:
| Step | Command | Description |
|---|---|---|
| 1. Generate | `LoadTestGenerate` | Create a scaled exchange file from a regulation template |
| 2. Setup | `LoadTestSetup` | Bulk-import employees via the bulk creation API |
| 3. Setup Cases | `LoadTestSetupCases` | Bulk-import case changes |
| 4. Execute | `PayrunLoadTest` | Run payrun with warmup and measured repetitions |

```
LoadTestGenerate Template.json /employees:1000 /output:LoadTest.json
LoadTestSetup LoadTest.json
LoadTestSetupCases LoadTest.json
PayrunLoadTest LoadTest.json /warmup:2 /repetitions:5
```
The PayrunLoadTest command produces a CSV report with per-iteration timings for both client round-trip and server-side processing. This data can be used to:
- Identify performance regressions between releases
- Compare sequential vs. parallel processing throughput
- Determine the optimal `MaxParallelEmployees` value for a given hardware configuration
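A report like this can be analyzed with a few lines of standard-library code. Note that the column names below (`ClientMs`, `ServerMs`) are assumptions about the CSV layout, not the console's documented schema.

```python
import csv
import io
import statistics

def load_timings(csv_text):
    """Parse a per-iteration timing report and compute mean durations
    for client round-trip and server-side processing."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {
        "client_mean_ms": statistics.mean(float(r["ClientMs"]) for r in rows),
        "server_mean_ms": statistics.mean(float(r["ServerMs"]) for r in rows),
    }

# usage with a hypothetical three-iteration report
report = "Iteration,ClientMs,ServerMs\n1,1200,1100\n2,1000,900\n3,1100,1000\n"
means = load_timings(report)
assert means == {"client_mean_ms": 1100.0, "server_mean_ms": 1000.0}
```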
Retro Period Limit
The MaxRetroPayrunPeriods setting (default: 0/unlimited) provides a safety guard against runaway retroactive calculations with RetroTimeType.Anytime. When set to a positive value, the engine limits the number of retroactive periods processed per payrun job.
Configuration in appsettings.json:
```json
{
  "MaxRetroPayrunPeriods": 12
}
```
See Payrun Model for details on retroactive calculation.
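The guard's semantics reduce to a simple clamp. The helper below is hypothetical and only illustrates how the setting bounds retro processing:

```python
def clamp_retro_periods(requested, max_retro_periods):
    """Apply the MaxRetroPayrunPeriods guard: 0 (or negative) means
    unlimited, otherwise cap the retro periods processed per job."""
    if max_retro_periods <= 0:
        return requested              # default: no limit
    return min(requested, max_retro_periods)

assert clamp_retro_periods(24, 0) == 24    # unlimited (default)
assert clamp_retro_periods(24, 12) == 12   # capped at 12 periods
```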
Performance Checklist
The following checklist summarizes the key settings for optimizing payrun performance:
| Setting | Default | Recommendation |
|---|---|---|
| `MaxParallelEmployees` | off | Set to `half` or `max` for large payrolls after script validation |
| `LogEmployeeTiming` | false | Enable during performance analysis |
| `MaxRetroPayrunPeriods` | 0 (unlimited) | Set a reasonable limit for production environments |
| `AuditTrail` | disabled | Disable unused categories to reduce write overhead |
| `ScriptSafetyAnalysis` | false | Enable in production, accept compilation overhead |
| Bulk endpoints | — | Use for data import with more than 100 employees |
See also
- Payrun Model — payrun processing, async jobs, parallel execution
- Testing — load test commands
- API Usage — bulk endpoints
- Security — rate limiting configuration
- Design Scalable Payroll Software — architectural considerations