
Payroll Engine Performance

Automator|Client Services · .NET 10

This page covers the configuration options and tools for optimizing payrun execution performance.

Contents

  • Scaling Reference: Measured throughput and hardware requirements
  • Parallel Employee Processing: MaxParallelEmployees settings and thread safety
  • Persist Parallelism: MaxParallelPersist settings and load-test results
  • Asynchronous Payrun Jobs: Background queue, HTTP 202, client polling
  • Employee Timing Logs: Per-employee duration logging
  • Bulk Data Import: SqlBulkCopy bulk endpoints
  • Rate Limiting: Global and payrun job policies
  • Load Testing: Generate, setup, execute — CSV timing report
  • Retro Period Limit: MaxRetroPayrunPeriods safety guard
  • Performance Checklist: Key settings summary

Scaling Reference

The following measurements use a reference regulation — the most I/O-intensive regulation in the PE ecosystem due to large tax lookup tables. These values represent the worst case for PE performance. Simpler regulations without large lookup tables will achieve significantly lower ms/Employee values.

Measured Throughput (reference regulation, SQL Server, 1 period)

Employees ms/Employee Throughput Hardware
100 ~72 ms ~50'000 emp/h Laptop (i7-10875H, 2020)
1'000 ~84 ms ~43'000 emp/h Laptop (i7-10875H, 2020)
10'000 ~232 ms ~15'500 emp/h Laptop (i7-10875H, 2020, RAM limited)
10'000 ~100 ms ~36'000 emp/h Modern laptop / dedicated server
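The ms/Employee and throughput columns are two views of the same measurement. A quick sanity check of the conversion (plain arithmetic, not engine code):

```python
def throughput_per_hour(ms_per_employee: float) -> int:
    """Convert a per-employee processing time into employees per hour."""
    return round(3_600_000 / ms_per_employee)

# 72 ms/Employee corresponds to the table's ~50'000 emp/h:
print(throughput_per_hour(72))   # 50000
print(throughput_per_hour(232))  # 15517, reported as ~15'500
```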

Scaling Behaviour

The engine scales sub-linearly: a 100× increase in employee count raises the per-employee processing time only ~3.2× (72 ms → 232 ms) instead of degrading proportionally. This is a key characteristic of the parallel architecture.

Scale factor Employees Expected (linear) Actual (measured)
1× 100 baseline baseline
10× 1'000 10× 1.2×
100× 10'000 100× 3.2×

Hardware Requirements

Performance at scale is primarily determined by SQL Server Buffer Pool capacity. When result data exceeds available RAM, SQL Server falls back to disk I/O.

Employees (reference) Result data Recommended free RAM
1'000 ~450 MB 8 GB
5'000 ~2.2 GB 16 GB
10'000 ~4.5 GB 32 GB

The reference values above use MaxParallelEmployees=16 and MaxParallelPersist=2 on SQL Server 2019 Developer Edition, which is functionally identical to Enterprise Edition.
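The table implies roughly 450 KB of result data per employee for the reference regulation. A small estimator for sizing other employee counts; the per-employee figure is derived from the table above, not a guaranteed constant:

```python
def estimated_result_mb(employees: int, kb_per_employee: float = 450.0) -> float:
    """Estimate result-data volume from the ~450 KB/employee observed
    for the reference regulation (450 MB per 1'000 employees)."""
    return employees * kb_per_employee / 1000

print(estimated_result_mb(10_000))  # 4500.0 MB, matching the ~4.5 GB row
```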


Parallel Employee Processing

The MaxParallelEmployees setting controls how many employees the payrun processes in parallel. Parallel processing is enabled by default (all available CPU cores) and reduces total execution time for large payrolls.

Value Behavior
0 or empty Auto — uses all available CPU cores (ProcessorCount) — default
off or -1 Sequential processing (no parallelism)
half Half of available CPU cores
max All available CPU cores
1..N Explicit thread count

Each employee is processed within an isolated PayrunEmployeeScope that provides mutable state separation. The payroll calculator cache uses Lazy<T> with a composite key (calendar + culture) for thread-safe reuse across employees. Progress reporting is thread-safe with batched database persistence (every 10 employees).
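The composite-key calculator cache can be sketched as follows. The Python below models the same pattern (build each calculator at most once per calendar/culture pair, share it across threads); LazyCalculatorCache and the factory signature are illustrative names, not engine APIs:

```python
import threading

class LazyCalculatorCache:
    """Sketch of a thread-safe calculator cache keyed by (calendar, culture),
    modelling the Lazy<T> composite-key pattern described above."""

    def __init__(self, factory):
        self._factory = factory          # builds a calculator for a key
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, calendar: str, culture: str):
        key = (calendar, culture)
        # Fast path: calculator already materialized.
        calc = self._cache.get(key)
        if calc is not None:
            return calc
        # Slow path: create under the lock so each key is built only once.
        with self._lock:
            return self._cache.setdefault(key, self._factory(calendar, culture))

calls = []
cache = LazyCalculatorCache(
    lambda cal, cul: calls.append((cal, cul)) or f"calc:{cal}:{cul}")
a = cache.get("Gregorian", "de-CH")
b = cache.get("Gregorian", "de-CH")
print(a is b, len(calls))  # True 1
```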

Configuration in appsettings.json:

{
  "MaxParallelEmployees": "half"
}

Parallel processing is enabled by default. Set MaxParallelEmployees to off or -1 for sequential processing when diagnosing issues or when regulation scripts share mutable state across employees.

Persist Parallelism

The MaxParallelPersist setting controls how many employees can write their results to the database concurrently.

Value Behavior
1 Fully serialized — no concurrent writes, lowest DB load
2 Default — best balance of throughput and stability (load-tested)
4+ No measurable gain over 2; not recommended

Load test results (reference regulation, 1'000 employees):

MaxParallelPersist ms/Employee vs. P1
1 121 ms baseline
2 84 ms +30% faster
4 85 ms +30% faster

Failed persist operations are retried up to 3 times with exponential backoff. A job abort only occurs if all retries are exhausted. No data loss is possible — each employee result is wrapped in a transaction that rolls back on failure.
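The retry behaviour can be sketched like this; the retry count of 3 matches the description above, while the delay values and function names are illustrative:

```python
import time

def persist_with_retry(persist, max_retries=3, base_delay=0.0):
    """Retry a persist operation up to max_retries times with exponential
    backoff (delay = base_delay * 2^attempt); re-raise once all retries
    are exhausted, which corresponds to the job abort described above."""
    for attempt in range(max_retries):
        try:
            return persist()
        except Exception:
            if attempt == max_retries - 1:
                raise                      # all retries exhausted
            time.sleep(base_delay * (2 ** attempt))

attempts = {"n": 0}
def flaky():
    # Fails twice with a transient error, then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("transient deadlock")
    return "persisted"

print(persist_with_retry(flaky))  # persisted (on the 3rd attempt)
```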

Configuration in appsettings.json:

{
  "MaxParallelPersist": 2
}


Asynchronous Payrun Jobs

For large payrolls (500+ employees), the payrun job endpoint uses asynchronous processing to prevent HTTP timeout errors. The endpoint returns HTTP 202 Accepted immediately and processes the job in the background.

The processing pipeline:

1. The payrun job is pre-created and persisted with status Process.
2. The job is enqueued into a bounded channel (capacity: 100) for backpressure control.
3. A background worker dequeues and processes jobs.
4. On completion or abort, a webhook notification is sent.

See Payrun Model for details on the processing pipeline and client polling pattern.
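A client-side polling sketch for the 202 pattern. Here get_status stands in for an HTTP GET of the job resource; the status names (Process, Complete, Abort) follow the pipeline described above, and the polling interval is illustrative:

```python
import time

def poll_job(get_status, interval=0.0, max_polls=100):
    """Poll a payrun job until it reaches a terminal status.
    get_status is any callable returning the current status string."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("Complete", "Abort"):
            return status
        time.sleep(interval)
    raise TimeoutError("payrun job did not finish within the poll budget")

statuses = iter(["Process", "Process", "Complete"])
print(poll_job(lambda: next(statuses)))  # Complete
```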

Employee Timing Logs

When LogEmployeeTiming is enabled, the engine logs per-employee processing duration at the Information log level. The summary includes:

  • Processing mode (sequential or parallel)
  • Total processing time
  • Average time per employee

This helps identify slow employees caused by complex regulation scripts, large case value histories, or expensive lookup queries.
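The summary values are straightforward to reproduce from per-employee durations; the field names below are illustrative, not the engine's log schema:

```python
def timing_summary(durations_ms, parallel=True):
    """Build the kind of summary LogEmployeeTiming emits: processing mode,
    total processing time, and average time per employee."""
    return {
        "mode": "parallel" if parallel else "sequential",
        "total_ms": sum(durations_ms),
        "avg_ms": sum(durations_ms) / len(durations_ms),
    }

print(timing_summary([70, 85, 90, 95]))
# {'mode': 'parallel', 'total_ms': 340, 'avg_ms': 85.0}
```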

Configuration in appsettings.json:

{
  "LogEmployeeTiming": true
}

Bulk Data Import

Bulk endpoints use SqlBulkCopy for high-throughput data import, processing items in chunks of 5'000. This is significantly faster than individual REST calls for large data volumes.
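The chunking itself is simple; a client preparing its own batches can mirror the engine's 5'000-item chunk size like this (a sketch, not the Client SDK API):

```python
def chunked(items, size=5_000):
    """Yield successive chunks of at most `size` items, mirroring the
    5'000-item batches the bulk endpoints use for SqlBulkCopy."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

employees = [f"emp-{i}" for i in range(12_000)]
print([len(c) for c in chunked(employees)])  # [5000, 5000, 2000]
```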

Available bulk endpoints:

  • POST .../employees/bulk — bulk employee creation
  • POST .../lookups/sets — bulk lookup value import via LookupSet
  • POST .../cases/bulk — bulk case change import (via Client SDK)

See API Usage for endpoint details.

Rate Limiting

Rate limiting protects the backend from overload. Two policies are available:

  • Global policy — applies to all endpoints
  • Payrun job policy — dedicated limit for the payrun job start endpoint

Each policy is configured with PermitLimit (maximum requests) and WindowSeconds (time window).

Configuration in appsettings.json:

{
  "RateLimiting": {
    "Global": {
      "PermitLimit": 100,
      "WindowSeconds": 60
    },
    "PayrunJob": {
      "PermitLimit": 10,
      "WindowSeconds": 60
    }
  }
}
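The PermitLimit/WindowSeconds pair describes a fixed-window policy: up to PermitLimit requests are admitted per window, and further requests are rejected until the next window starts. A minimal model of those semantics (a sketch of the policy, not the server's implementation):

```python
import time

class FixedWindowLimiter:
    """Model of the PermitLimit/WindowSeconds policy: admit at most
    permit_limit requests per window_seconds-long window."""

    def __init__(self, permit_limit, window_seconds, clock=time.monotonic):
        self.permit_limit = permit_limit
        self.window_seconds = window_seconds
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def try_acquire(self) -> bool:
        now = self.clock()
        if now - self.window_start >= self.window_seconds:
            self.window_start = now      # a fresh window begins
            self.count = 0
        if self.count < self.permit_limit:
            self.count += 1
            return True
        return False                     # reject (HTTP 429)

t = [0.0]  # injectable clock for the demo
limiter = FixedWindowLimiter(permit_limit=10, window_seconds=60,
                             clock=lambda: t[0])
print(sum(limiter.try_acquire() for _ in range(12)))  # 10 permitted, 2 rejected
t[0] = 60.0
print(limiter.try_acquire())  # True (new window)
```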

Rate limiting is inactive by default. See Security for additional context.

Load Testing

The Payroll Console includes built-in commands for payrun performance benchmarking. A load test follows four steps:

Step Command Description
1. Generate LoadTestGenerate Create a scaled exchange file from a regulation template
2. Setup LoadTestSetupEmployees Bulk-import employees via the bulk creation API
3. Setup Cases LoadTestSetupCases Bulk-import case changes
4. Execute PayrunLoadTest Run payrun with warmup and measured repetitions

Example session:

LoadTestGenerate Template.et.json 1000 LoadTest1000
LoadTestSetupEmployees LoadTest1000\Setup-Employees.json
LoadTestSetupCases LoadTest1000
PayrunLoadTest LoadTest1000\Payrun-Invocation.json 1000 3 Results\LT1000.csv

An optional Excel report can be generated alongside the CSV:

PayrunLoadTest LoadTest1000\Payrun-Invocation.json 1000 3 Results\LT1000.csv /ExcelReport
PayrunLoadTest LoadTest1000\Payrun-Invocation.json 1000 3 Results\LT1000.csv /ExcelFile=Reports\LT1000.xlsx /ParallelSetting=half

The Excel report includes a Setup sheet (machine, OS, ProcessorCount, MaxParallelEmployees), a Results sheet (identical to CSV with formatting), and an Avg ms/Employee pivot sheet with outlier highlighting.

The PayrunLoadTest command produces a CSV report with per-iteration timings for both client round-trip and server-side processing. This data can be used to:

  • Identify performance regressions between releases
  • Compare sequential vs. parallel processing throughput
  • Determine the optimal MaxParallelEmployees value for a given hardware configuration
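For example, averaging the server-side timings across iterations takes only a few lines; the column names here are assumed for illustration and may differ from the actual report layout:

```python
import csv
import io
import statistics

# Hypothetical timing CSV in the shape of a per-iteration report.
report = """iteration,server_ms_per_employee
1,86
2,84
3,82
"""

rows = list(csv.DictReader(io.StringIO(report)))
timings = [float(r["server_ms_per_employee"]) for r in rows]
print(statistics.mean(timings))  # 84.0
```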

Retro Period Limit

The MaxRetroPayrunPeriods setting (default: 0/unlimited) provides a safety guard against runaway retroactive calculations with RetroTimeType.Anytime. When set to a positive value, the engine limits the number of retroactive periods processed per payrun job.
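The guard's effect on a requested retro depth can be modelled as a simple clamp (a sketch of the described behaviour, not engine code):

```python
def effective_retro_periods(requested: int, max_retro: int) -> int:
    """Apply the MaxRetroPayrunPeriods guard: 0 means unlimited,
    a positive value caps the retro periods per payrun job."""
    if max_retro <= 0:
        return requested               # 0/unlimited: no cap applied
    return min(requested, max_retro)

print(effective_retro_periods(36, 12))  # 12
print(effective_retro_periods(36, 0))   # 36 (unlimited)
```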

Configuration in appsettings.json:

{
  "MaxRetroPayrunPeriods": 12
}

See Payrun Model for details on retroactive calculation.

Performance Checklist

The following checklist summarizes the key settings for optimizing payrun performance:

Setting Default Recommendation
MaxParallelEmployees 0 (auto, ProcessorCount) Set to half to reduce DB load; use off for debugging
MaxParallelPersist 2 Default is optimal; set to 1 for debugging only
LogEmployeeTiming false Enable during performance analysis
MaxRetroPayrunPeriods 0 (unlimited) Set a reasonable limit for production environments
AuditTrail disabled Disable unused categories to reduce write overhead
ScriptSafetyAnalysis false Enable in production, accept compilation overhead
Bulk endpoints — Use for data import with more than 100 employees
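Putting the checklist together, a combined appsettings.json might look like this (values illustrative; pick them per the recommendations above):

{
  "MaxParallelEmployees": "half",
  "MaxParallelPersist": 2,
  "LogEmployeeTiming": false,
  "MaxRetroPayrunPeriods": 12
}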

See also