Testing Guide for OPNsense Config Faker

This document provides comprehensive guidance on running the various types of tests in this project and understanding the testing infrastructure.

Quick Start

# Run all tests
cargo test --all-features

# Run all tests with environment normalization
TERM=dumb cargo test --all-features

# Run tests with coverage
just coverage

# Run QA pipeline (format check, lint, test)
just qa

CI Environment Considerations

TERM=dumb Support

When running in CI environments, tools honor the TERM=dumb environment variable by disabling color output and interactive features:

  • Rust Crates: Libraries like console, indicatif, and termcolor respect the NO_COLOR, CARGO_TERM_COLOR, and TERM=dumb environment variables
  • Cargo: Detects terminal capabilities and adjusts its output accordingly
  • Test Output: All test runners adapt to non-interactive terminal environments

This ensures consistent, parseable output in CI pipelines without requiring special configuration.
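A crate's decision to disable color typically reduces to a few environment checks. The sketch below makes that logic concrete; it is illustrative only, and the exact precedence shown is an assumption rather than the actual source of console or termcolor:

```rust
/// Decide whether ANSI color output should be disabled, given the relevant
/// environment variables. Mirrors the kind of checks crates like `console`
/// and `termcolor` perform; the exact precedence is an illustrative
/// assumption, not their actual source.
fn colors_disabled(
    term: Option<&str>,
    no_color: Option<&str>,
    cargo_term_color: Option<&str>,
) -> bool {
    no_color.is_some()                        // NO_COLOR set at all wins
        || term == Some("dumb")               // dumb terminals get plain text
        || cargo_term_color == Some("never")  // Cargo's own knob
}

fn main() {
    use std::env;
    // Read the real environment and report the decision.
    let disabled = colors_disabled(
        env::var("TERM").ok().as_deref(),
        env::var("NO_COLOR").ok().as_deref(),
        env::var("CARGO_TERM_COLOR").ok().as_deref(),
    );
    println!("colors disabled: {disabled}");
}
```

Keeping the decision a pure function of its inputs, as here, is also what makes this behavior easy to unit test.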

CI-Friendly Tasks

Use the following justfile tasks for CI environments:

# Standard QA pipeline (respects TERM=dumb)
just ci-qa

# Full CI validation with coverage
just ci-check

# Fast CI validation without coverage
just ci-check-fast

Test Categories

Unit Tests

Run core library functionality tests:

# Run all unit tests
cargo test --lib --all-features

# Run specific unit test module
cargo test --lib module_name

# Run unit tests with output
cargo test --lib --all-features -- --nocapture

Integration Tests

Test CLI functionality with real command execution:

# Run all integration tests
cargo test --tests --all-features

# Run specific integration test file
cargo test --test integration_cli

# Run integration tests with environment normalization
TERM=dumb cargo test --test integration_cli --all-features

Property-Based Tests (PropTest)

Run property-based testing for data generation:

# Run all property tests
cargo test proptest --all-features

# Run VLAN generation property tests
cargo test --test proptest_vlan

# Run with more test cases (slow tests)
cargo test proptest --all-features --features slow-tests

Snapshot Tests

Validate CLI output consistency using insta snapshots:

# Run all snapshot tests
cargo test --test snapshot_tests

# Run CSV snapshot tests
cargo test --test snapshot_csv

# Run XML snapshot tests
cargo test --test snapshot_xml

# Run with environment normalization (recommended)
TERM=dumb cargo test --test snapshot_tests

Updating Snapshots

When CLI output legitimately changes, update snapshots:

# Review and approve snapshot changes
cargo insta review

# Accept all snapshot changes (use with caution)
INSTA_UPDATE=auto cargo test --test snapshot_tests

# Force update specific snapshots
cargo insta test --accept --test snapshot_tests

Best Practices for Snapshots:

  • Always review snapshot changes before accepting
  • Use TERM=dumb to ensure deterministic output
  • Run tests multiple times to ensure stability
  • Keep snapshots focused and readable
  • Update documentation when snapshot behavior changes

Quality Assurance Workflow

Local Development

# Format, lint, and test
just qa

# Include coverage report
just qa-cov

# Development workflow with coverage
just dev

CI Pipeline

# Standard CI QA check
just ci-qa

# Full CI validation
just ci-check

Coverage and Quality Assurance

Running Coverage

Generate test coverage reports:

# Coverage report (enforces the 80% threshold)
just coverage

# HTML coverage report (opens in browser)
just coverage-html

# CI-friendly coverage (ignores test failures)
just coverage-ci

# Terminal coverage report
just coverage-report

The project enforces an 80% coverage threshold locally via just coverage. CI runs (just coverage-ci) generate reports without threshold enforcement. Coverage reports are generated using cargo-llvm-cov.

Linting and Formatting

The project follows strict linting policies:

# Run clippy with warnings as errors (project policy)
cargo clippy -- -D warnings

# Or use the just command
just lint

# Format code
cargo fmt
just format

# Check formatting without modifying files
cargo fmt --check
just format-check

Clippy Policy: All warnings are treated as errors (-D warnings). This ensures high code quality and consistency across the project.

Complete QA Pipeline

# Full quality assurance check
just qa

# QA with coverage
just qa-cov

# CI-friendly QA check
just ci-qa

Benchmarks

Run performance benchmarks:

# Run all benchmarks
cargo bench --all-features

# Or use just command
just bench

# Run specific benchmark
cargo bench vlan_generation

# Generate HTML reports
cargo bench --all-features
# Results in target/criterion/reports/index.html

Benchmarks are excluded from coverage reports and use the Criterion framework.

Environment Variables and Deterministic Testing

TERM=dumb

The TERM=dumb environment variable is crucial for deterministic testing:

# Disable terminal formatting for consistent output
TERM=dumb cargo test

# Why this matters:
# - Removes ANSI color codes from output
# - Ensures consistent formatting across different terminals
# - Required for reliable snapshot testing
# - Prevents Rust crate color formatting in CLI output

Rust crates and Cargo automatically respect TERM=dumb to disable color output in non-interactive terminals.

Deterministic Seeds

Tests use fixed seeds for reproducible results:

# Some tests use deterministic random seeds
# This is handled automatically in test utilities
# See tests/common/mod.rs for implementation details

# Property tests use configurable seeds:
PROPTEST_CASES=1000 cargo test proptest
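The idea behind fixed seeds can be shown with a toy generator. The Lcg struct below is purely illustrative (the project's actual seeding lives in tests/common/mod.rs); it demonstrates why two runs with the same seed produce identical data and therefore stable assertions:

```rust
/// Tiny deterministic PRNG (a 64-bit LCG) used only to illustrate
/// fixed-seed testing; this struct and its constants are illustrative,
/// not the project's test utilities.
struct Lcg(u64);

impl Lcg {
    fn new(seed: u64) -> Self {
        Lcg(seed)
    }
    fn next_u32(&mut self) -> u32 {
        // Knuth's MMIX LCG constants; wrapping arithmetic avoids overflow panics.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 32) as u32
    }
}

fn main() {
    // Same seed, same sequence: assertions on "random" data stay stable.
    let mut a = Lcg::new(42);
    let mut b = Lcg::new(42);
    assert_eq!(a.next_u32(), b.next_u32());
    println!("deterministic: {}", a.next_u32() == b.next_u32()); // deterministic: true
}
```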

Additional Environment Variables

# Disable colored output completely
NO_COLOR=1 cargo test

# Disable Cargo colored output
CARGO_TERM_COLOR=never cargo test

# Comprehensive environment normalization (recommended)
TERM=dumb CARGO_TERM_COLOR=never NO_COLOR=1 cargo test

Test Environment Setup

Prerequisites

# Install coverage tooling
just install-cov

# Full development setup
just setup

Running Specific Test Types

# Unit tests only
just test-unit

# Integration tests only
just test-integration

# Documentation tests
just test-doc

# All tests excluding benchmarks
just test-no-bench

Continuous Integration

The CI pipeline automatically:

  1. Validates Formatting: just rust-fmt-check
  2. Runs Linting: just rust-clippy with strict warnings
  3. Executes Tests: just rust-test with all features
  4. Generates Coverage: just coverage-ci generates lcov report (no threshold enforcement)
  5. Respects Environment: Adapts output based on TERM variable

Test Data and Fixtures

  • Property-Based Testing: Uses proptest for generating test data
  • Snapshot Testing: Uses insta for CLI output validation
  • Fixtures: Test data located in tests/fixtures/
  • Snapshots: Expected outputs stored in tests/snapshots/

Test Utilities

The project includes shared test utilities in tests/common/mod.rs that provide consistent testing patterns:

Standardized CLI Testing

The cli_command() helper automatically sets up a consistent test environment:

  • TERM=dumb - Disables terminal colors and formatting
  • CARGO_TERM_COLOR=never - Disables Cargo colored output
  • NO_COLOR=1 - Disables all color output

use common::{cli_command, TestOutputExt};

let output = cli_command()
    .arg("generate")
    .arg("--format")
    .arg("csv")
    .arg("--count")
    .arg("5")
    .run_success();

output.assert_stdout_contains("Generated 5 VLAN configurations");
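
Under the hood, a helper like this can be sketched with std::process alone. The cli_command function below is an illustrative stand-in (the bin parameter and the whole implementation are assumptions, not the project's code); it shows how the three normalization variables get pinned on every spawned command:

```rust
use std::process::Command;

/// Illustrative stand-in for the cli_command() helper; the real one lives in
/// tests/common/mod.rs (and likely resolves the project binary itself rather
/// than taking a `bin` argument, which is assumed here).
fn cli_command(bin: &str) -> Command {
    let mut cmd = Command::new(bin);
    // Pin the environment so every spawned command produces deterministic,
    // color-free output regardless of the host terminal.
    cmd.env("TERM", "dumb")
        .env("CARGO_TERM_COLOR", "never")
        .env("NO_COLOR", "1");
    cmd
}

fn main() {
    let cmd = cli_command("echo");
    // The normalization variables travel with the Command.
    for (key, val) in cmd.get_envs() {
        println!("{:?}={:?}", key, val);
    }
}
```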

Output Normalization

The normalize_output() function removes ANSI escape sequences and normalizes whitespace for stable test assertions:

use common::normalize_output;

let raw_output = "\u{1b}[32m✅ Success\u{1b}[0m\n  Multiple   spaces\t\n";
let clean = normalize_output(raw_output);
assert_eq!(clean, "✅ Success Multiple spaces");
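
For intuition, stripping ANSI CSI sequences and collapsing whitespace can be sketched as follows; this normalize function is an illustrative stand-in, not the project's normalize_output:

```rust
/// Illustrative sketch of a normalize_output-style helper: strip ANSI CSI
/// escape sequences, then collapse all whitespace runs to single spaces.
/// The project's real implementation (tests/common/mod.rs) may differ.
fn normalize(raw: &str) -> String {
    let mut stripped = String::new();
    let mut chars = raw.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\u{1b}' && chars.peek() == Some(&'[') {
            // Skip the CSI sequence: ESC '[' parameter bytes ... final letter.
            chars.next();
            while let Some(&n) = chars.peek() {
                chars.next();
                if n.is_ascii_alphabetic() {
                    break;
                }
            }
        } else {
            stripped.push(c);
        }
    }
    // split_whitespace also swallows tabs and newlines.
    stripped.split_whitespace().collect::<Vec<_>>().join(" ")
}

fn main() {
    let raw = "\u{1b}[32m✅ Success\u{1b}[0m\n  Multiple   spaces\t\n";
    println!("{}", normalize(raw)); // ✅ Success Multiple spaces
}
```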

Temporary File Creation

Multiple helpers for creating temporary test resources:

use common::{create_temp_dir, create_temp_csv, create_temp_xml};

// Basic temporary directory
let temp_dir = create_temp_dir("test_prefix");
let file_path = temp_dir.path().join("test_file.csv");

// CSV with test data
let (temp_file, csv_path) = create_temp_csv("test_", &[
    &["VLAN", "IP Range", "Description"],
    &["100", "192.168.1.0/24", "Test Network"],
]).unwrap();

Extended Test Output Assertions

The TestOutputExt trait provides additional assertion methods:

output
    .assert_stdout_contains("success message")
    .assert_stderr_contains("warning message")
    .assert_stdout_matches(r"Generated \d+ configurations");

// Access normalized output
let clean_stdout = output.normalized_stdout();
let clean_stderr = output.normalized_stderr();
let combined = output.normalized_combined();
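
The chained style works because each assertion returns &Self. A minimal sketch of such an extension trait, using a stand-in CmdOutput struct instead of std::process::Output and a simplified normalization, looks like:

```rust
/// Stand-in for std::process::Output so this sketch is self-contained; the
/// real trait in tests/common/mod.rs extends the actual process output type
/// and offers more methods than shown here.
struct CmdOutput {
    stdout: Vec<u8>,
}

trait TestOutputExt {
    fn normalized_stdout(&self) -> String;
    fn assert_stdout_contains(&self, needle: &str) -> &Self;
}

impl TestOutputExt for CmdOutput {
    fn normalized_stdout(&self) -> String {
        // Simplified normalization: collapse whitespace runs only.
        String::from_utf8_lossy(&self.stdout)
            .split_whitespace()
            .collect::<Vec<_>>()
            .join(" ")
    }
    fn assert_stdout_contains(&self, needle: &str) -> &Self {
        assert!(
            self.normalized_stdout().contains(needle),
            "stdout did not contain {needle:?}"
        );
        self // returning &Self enables the chained assertions shown earlier
    }
}

fn main() {
    let out = CmdOutput {
        stdout: b"Generated  5  VLAN configurations\n".to_vec(),
    };
    out.assert_stdout_contains("Generated 5 VLAN");
    println!("{}", out.normalized_stdout()); // Generated 5 VLAN configurations
}
```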

Troubleshooting

Coverage Issues

If coverage falls below 80%:

# View detailed coverage report
just coverage-html

# Clean coverage artifacts and retry
just coverage-clean
just coverage

Test Failures in CI

  1. Check that TERM=dumb is set in CI environment
  2. Verify all dependencies are properly installed
  3. Use just ci-check-fast for quicker feedback
  4. Review snapshot differences with cargo insta review

Best Practices

  1. Write Tests First: Follow TDD principles for new features
  2. Use Property-Based Testing: Leverage proptest for edge cases
  3. Snapshot Critical Outputs: Use insta for CLI behavior verification
  4. Maintain Coverage: Keep above 80% line coverage
  5. CI-Friendly Output: Ensure all tools respect TERM=dumb