42. CI pipelines, diagnostics, and troubleshooting

Goal

Why this matters

Prerequisites

1. Pick a CI host and bootstrap prerequisites

Avalonia’s own integration pipeline (see external/Avalonia/azure-pipelines-integrationtests.yml:1) demonstrates the moving parts for Appium + headless test runs:

For GitHub Actions, mirror that setup with runner-specific steps:

jobs:
  ui-tests:
    strategy:
      matrix:
        os: [windows-latest, macos-13]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v3
        with:
          global-json-file: global.json
      - name: Start WinAppDriver
        if: runner.os == 'Windows'
        run: Start-Process -FilePath 'C:\Program Files (x86)\Windows Application Driver\WinAppDriver.exe'
      - name: Restore
        run: dotnet restore tests/Avalonia.Headless.UnitTests
      - name: Test headless suite
        run: dotnet test tests/Avalonia.Headless.UnitTests --logger "trx;LogFileName=headless.trx" --blame-hang-timeout 5m
      - name: Publish results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: headless-results
          path: '**/*.trx'

Adjust the matrix for Linux when you only need headless tests (no Appium). Use the same dotnet test command locally to validate pipeline scripts.

2. Configure deterministic test execution

Headless suites should run with parallelism disabled unless every fixture is isolation-safe. xUnit supports assembly-level configuration:

// AssemblyInfo.cs
[assembly: CollectionBehavior(DisableTestParallelization = true)]
[assembly: AvaloniaTestFramework]

Pair the attribute with AvaloniaTestApplication so a single HeadlessUnitTestSession drives the whole assembly. For NUnit, launch the test runner with --workers=1 or mark fixtures [NonParallelizable]. This avoids fighting over the singleton dispatcher and ensures actions happen in the same order on developer machines and CI bots.
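
If you prefer configuration files over attributes, xUnit reads the same switches from an xunit.runner.json copied next to the test assembly:

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeAssembly": false,
  "parallelizeTestCollections": false
}
```

Set the file's "Copy to Output Directory" property so the runner actually finds it at test time.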

Within tests, drain work deterministically. HeadlessWindowExtensions already wraps each gesture with Dispatcher.UIThread.RunJobs() and AvaloniaHeadlessPlatform.ForceRenderTimerTick(); call those directly from helpers when you schedule background tasks outside the provided wrappers.
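
The same idea can be wrapped in a small helper for use after scheduling your own background work; the class and method names here are illustrative, but the two calls are the ones the built-in wrappers use:

```csharp
using Avalonia.Headless;
using Avalonia.Threading;

internal static class HeadlessSync
{
    // Flush queued dispatcher work, advance the headless render timer,
    // then flush again to pick up any jobs the tick itself scheduled.
    public static void Drain()
    {
        Dispatcher.UIThread.RunJobs();
        AvaloniaHeadlessPlatform.ForceRenderTimerTick();
        Dispatcher.UIThread.RunJobs();
    }
}
```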

3. Capture logs, screenshots, and videos

Collect evidence automatically so failing builds are actionable:
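
Avalonia's headless platform can render to a real bitmap when it runs with the Skia backend (UseHeadlessDrawing = false). A minimal failure-evidence helper, assuming a TestResults output directory, might look like:

```csharp
using System.IO;
using Avalonia.Controls;
using Avalonia.Headless;

internal static class Evidence
{
    // Sketch: capture the window's rendered frame and save it as a PNG.
    // CaptureRenderedFrame returns null when headless drawing is enabled
    // (no Skia), so guard the call; the directory name is an assumption.
    public static void SaveScreenshot(Window window, string name)
    {
        var frame = window.CaptureRenderedFrame();
        if (frame is null)
            return; // nothing rendered without the Skia backend

        Directory.CreateDirectory("TestResults");
        frame.Save(Path.Combine("TestResults", $"{name}.png"));
    }
}
```

Call the helper from a test's catch/finally block (or a shared fixture teardown) so screenshots land in the same directory your pipeline already uploads.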

4. Diagnose hangs and deadlocks

UI tests occasionally hang because outstanding work blocks the dispatcher. Harden your pipeline with diagnosis options:

Analyze captured dumps with dotnet-dump analyze to inspect managed thread stacks and spot blocked tasks.
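
A typical capture-and-inspect session looks like the transcript below; the dump path is a placeholder, since dotnet test writes dumps into a run-specific folder under TestResults:

```shell
dotnet test tests/Avalonia.Headless.UnitTests \
  --blame-hang --blame-hang-dump-type full --blame-hang-timeout 5m

# Then open the dump and inspect managed stacks:
dotnet-dump analyze TestResults/<run-id>/dotnet_testhost.dmp
> threads        # list managed threads
> clrstack -all  # stacks for every thread; look for blocked waits
```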

5. Environment hygiene on shared agents

CI agents often reuse workspaces. Add cleanup steps before running UI automation:
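
On GitHub Actions, a pre-test cleanup step might look like this (the process names and paths are assumptions; adjust them for your agent pool):

```yaml
- name: Clean up stale state
  if: runner.os == 'Windows'
  shell: pwsh
  run: |
    # Stop leftover driver/test-host processes from a previous run.
    Get-Process WinAppDriver, testhost -ErrorAction SilentlyContinue | Stop-Process -Force
    # Remove old results so stale TRX files are not re-uploaded.
    Remove-Item TestResults -Recurse -Force -ErrorAction SilentlyContinue
```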

For cross-platform Appium tests, encapsulate capability setup in fixtures. DefaultAppFixture (external/Avalonia/tests/Avalonia.IntegrationTests.Appium/DefaultAppFixture.cs:9) configures Windows and macOS sessions differently while exposing a consistent driver to tests.

6. Build health dashboards and alerts

Publish TRX or NUnit XML outputs to your CI system so failures appear in dashboards. Azure Pipelines uses PublishTestResults@2 to ingest xUnit results even when the job succeeds with warnings (external/Avalonia/azure-pipelines-integrationtests.yml:67). GitHub Actions can read TRX via dorny/test-reporter or similar actions.

Send critical logs to observability tools if your team maintains telemetry infrastructure. A simple approach is to push structured log lines to stdout in JSON—CI services preserve the console by default.
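
A minimal sketch of that approach in C# (the helper name and payload fields are invented for illustration):

```csharp
using System;
using System.Text.Json;

// Emit one JSON object per line; CI services preserve stdout, so these
// lines survive in the build log without extra telemetry infrastructure.
static void LogEvent(string level, string message, object? data = null) =>
    Console.WriteLine(JsonSerializer.Serialize(new
    {
        ts = DateTimeOffset.UtcNow,
        level,
        message,
        data
    }));

LogEvent("warn", "headless render tick forced", new { attempts = 3 });
```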

7. Troubleshooting checklist

Practice lab

  1. Pipeline parity – Create a local script that mirrors your CI job (dotnet restore, dotnet test, artifact copy). Run it before pushing so pipeline failures never surprise you.
  2. Hang detector – Wire dotnet test --blame into your CI job and practice analyzing the generated dumps for a deliberately hung test.
  3. Artifact triage – Extend your test harness to save headless screenshots and logs into an output directory, then configure your pipeline to upload them on failure.
  4. Parallelism audit – Temporarily enable test parallelization to identify fixtures that rely on global state. Fix the offenders or permanently disable parallel runs via assembly attributes.
  5. Cross-platform dry run – Use a GitHub Actions matrix or Azure multi-job pipeline to run headless tests on Windows and Linux simultaneously, comparing logs for environment-specific quirks.

What's next