# Testing Documentation

## Quick Reference
```bash
# Most common commands
bun test                      # Run all tests
bun test --watch              # Run tests in watch mode
bun test --coverage           # Run tests with coverage
bun test path/to/test.ts      # Run a specific test file

# Additional options
DEBUG=true bun test           # Run with debug output
bun test -t "auth"            # Run tests whose names match a pattern
bun test --timeout 60000      # Run with a custom per-test timeout
```
## Overview

This document describes the testing setup and practices used in the Home Assistant MCP project. The project uses Bun's test runner for unit and integration testing, with a comprehensive test suite covering security, SSE (Server-Sent Events), middleware, and other core functionality.
## Test Structure

Tests are organized in two main locations:

1. **Root-level integration tests** (`__tests__/`):

   ```
   __tests__/
   ├── ai/                # AI/ML component tests
   ├── api/               # API integration tests
   ├── context/           # Context management tests
   ├── hass/              # Home Assistant integration tests
   ├── schemas/           # Schema validation tests
   ├── security/          # Security integration tests
   ├── tools/             # Tools and utilities tests
   ├── websocket/         # WebSocket integration tests
   ├── helpers.test.ts    # Helper function tests
   ├── index.test.ts      # Main application tests
   └── server.test.ts     # Server integration tests
   ```

2. **Component-level unit tests** (`src/**/`):

   ```
   src/
   ├── __tests__/         # Global test setup and utilities
   │   └── setup.ts       # Global test configuration
   ├── component/
   │   ├── __tests__/     # Component-specific unit tests
   │   └── component.ts
   ```

The root-level `__tests__` directory contains integration and end-to-end tests that verify the interaction between different components of the system, while the component-level tests focus on unit testing individual modules.
## Test Configuration

### Bun Test Configuration (`bunfig.toml`)

```toml
[test]
preload = ["./src/__tests__/setup.ts"]     # Global test setup
coverage = true                            # Enable coverage by default
timeout = 30000                            # Per-test timeout in milliseconds
testMatch = ["**/__tests__/**/*.test.ts"]  # Test file patterns
```
### Bun Scripts

Available test commands in `package.json`:

```bash
# Run all tests
bun test

# Watch mode for development
bun test --watch

# Generate coverage report
bun test --coverage

# Run linting
bun run lint

# Format code
bun run format
```
## Test Setup

### Global Configuration

The project uses a global test setup file (`src/__tests__/setup.ts`) that provides:

- Environment configuration
- Mock utilities
- Test helper functions
- Global test lifecycle hooks
### Test Environment

Tests run with the following configuration:

- Environment variables are loaded from `.env.test`
- Console output is suppressed during tests (unless `DEBUG=true`)
- JWT secrets and tokens are automatically configured for testing
- Rate limiting and other security features are properly initialized
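Console suppression of this kind is usually implemented by swapping out the global console methods in the setup file. A minimal sketch of that idea (the function name and shape are assumptions, not the project's actual `setup.ts`):

```typescript
// Hypothetical sketch of the console-suppression portion of a global
// test setup file. Not the project's actual code.
type Restore = () => void;

function silenceConsole(debug: boolean): Restore {
  if (debug) return () => {}; // DEBUG=true: leave console untouched
  const original = { log: console.log, warn: console.warn, error: console.error };
  const noop = (..._args: unknown[]) => {};
  console.log = noop;
  console.warn = noop;
  console.error = noop;
  // Return a restore function suitable for an afterAll() hook
  return () => {
    console.log = original.log;
    console.warn = original.warn;
    console.error = original.error;
  };
}

// Typical use at the top of a setup file:
// const restore = silenceConsole(process.env.DEBUG === "true");
```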
## Running Tests

To run the test suite:

```bash
# Basic test run
bun test

# Run tests with coverage
bun test --coverage

# Run a specific test file
bun test path/to/test.test.ts

# Run tests in watch mode
bun test --watch

# Run tests with debug output
DEBUG=true bun test

# Run tests with an increased timeout
bun test --timeout 60000

# Run tests whose names match a pattern
bun test -t "auth"
```
## Test Environment Setup

1. **Prerequisites:**
   - Bun >= 1.0.0
   - Node.js dependencies (see `package.json`)

2. **Environment files:**
   - `.env.test` - Test environment variables
   - `.env.development` - Development environment variables

3. **Test data:**
   - Mock responses in `__tests__/mock-responses/`
   - Test fixtures in `__tests__/fixtures/`
## Continuous Integration

The project uses GitHub Actions for CI/CD. Tests are automatically run on:

- Pull requests
- Pushes to the main branch
- Release tags
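Those triggers map to a workflow along these lines (the file name, action versions, and steps here are assumptions; check `.github/workflows/` for the project's actual configuration):

```yaml
# .github/workflows/test.yml (hypothetical sketch)
name: Tests
on:
  pull_request:
  push:
    branches: [main]
    tags: ["v*"]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v1
      - run: bun install
      - run: bun test --coverage
```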
## Writing Tests

### Test File Naming

- Test files should be placed in a `__tests__` directory adjacent to the code being tested
- Test files should be named `*.test.ts`
- Test files should mirror the structure of the source code
### Test Structure

```typescript
import { describe, expect, it, beforeEach } from "bun:test";

describe("Module Name", () => {
  beforeEach(() => {
    // Setup for each test
  });

  describe("Feature/Function Name", () => {
    it("should do something specific", () => {
      // Test implementation
    });
  });
});
```
### Test Utilities

The project provides several test utilities, imported from the global setup:

```typescript
import { testUtils } from "../__tests__/setup";
```

Available utilities:

- `mockWebSocket()` - Mock WebSocket for SSE tests
- `mockResponse()` - Mock HTTP response for API tests
- `mockRequest()` - Mock HTTP request for API tests
- `createTestClient()` - Create a test SSE client
- `createTestEvent()` - Create a test event
- `createTestEntity()` - Create a test Home Assistant entity
- `wait()` - Helper to wait for async operations
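As an illustration, `wait()` is typically just a promise-wrapped `setTimeout`; this is an assumed shape, and the project's actual helper may differ:

```typescript
// Hypothetical shape of the wait() helper from setup.ts
const wait = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// Usage in a test: give the SSE manager time to flush events
// await wait(100);
```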
## Testing Patterns

### Security Testing

Security tests cover:

- Token validation and encryption
- Rate limiting
- Request validation
- Input sanitization
- Error handling

Example:

```typescript
describe("Security Features", () => {
  it("should validate tokens correctly", () => {
    const payload = { userId: "123", role: "user" };
    const token = jwt.sign(payload, validSecret, { expiresIn: "1h" });
    const result = TokenManager.validateToken(token, testIp);
    expect(result.valid).toBe(true);
  });
});
```
### SSE Testing

SSE tests cover:

- Client authentication
- Message broadcasting
- Rate limiting
- Subscription management
- Client cleanup

Example:

```typescript
describe("SSE Features", () => {
  it("should authenticate valid clients", () => {
    const client = createTestClient("test-client");
    const result = sseManager.addClient(client, validToken);
    expect(result?.authenticated).toBe(true);
  });
});
```
### Middleware Testing

Middleware tests cover:

- Request validation
- Input sanitization
- Error handling
- Response formatting

Example:

```typescript
describe("Middleware", () => {
  it("should sanitize HTML in request body", () => {
    const req = mockRequest({
      body: { text: '<script>alert("xss")</script>' }
    });
    sanitizeInput(req, res, next);
    expect(req.body.text).toBe("");
  });
});
```
### Integration Testing

Integration tests in the root `__tests__` directory cover:

- **AI/ML Components**: Testing machine learning model integrations and predictions
- **API Integration**: End-to-end API route testing
- **Context Management**: Testing context persistence and state management
- **Home Assistant Integration**: Testing communication with Home Assistant
- **Schema Validation**: Testing data validation across the application
- **Security Integration**: Testing security features in a full system context
- **WebSocket Communication**: Testing real-time communication
- **Server Integration**: Testing the complete server setup and configuration

Example integration test:

```typescript
describe("API Integration", () => {
  it("should handle a complete authentication flow", async () => {
    // Set up the test client
    const client = await createTestClient();

    // Test registration
    const regResponse = await client.register(testUser);
    expect(regResponse.status).toBe(201);

    // Test authentication
    const authResponse = await client.authenticate(testCredentials);
    expect(authResponse.status).toBe(200);
    expect(authResponse.body.token).toBeDefined();

    // Test protected endpoint access
    const protectedResponse = await client.get("/api/protected", {
      headers: { Authorization: `Bearer ${authResponse.body.token}` }
    });
    expect(protectedResponse.status).toBe(200);
  });
});
```
## Security Middleware Testing

### Utility Function Testing

The security middleware uses a utility-first approach, which allows for more granular and comprehensive testing: each security function is independently testable, improving code reliability and maintainability.

### Key Utility Functions

1. **Rate Limiting (`checkRateLimit`)** - tests multiple scenarios:
   - Requests under the threshold
   - Requests exceeding the threshold
   - Rate limit reset after the window expires

   ```typescript
   // Example test
   it('should throw when requests exceed threshold', () => {
     const ip = '127.0.0.2';
     for (let i = 0; i < 11; i++) {
       if (i < 10) {
         expect(() => checkRateLimit(ip, 10)).not.toThrow();
       } else {
         expect(() => checkRateLimit(ip, 10)).toThrow('Too many requests from this IP');
       }
     }
   });
   ```

2. **Request Validation (`validateRequestHeaders`)**
   - Tests content type validation
   - Checks request size limits
   - Validates authorization headers

   ```typescript
   it('should reject invalid content type', () => {
     const mockRequest = new Request('http://localhost', {
       method: 'POST',
       headers: { 'content-type': 'text/plain' }
     });
     expect(() => validateRequestHeaders(mockRequest)).toThrow('Content-Type must be application/json');
   });
   ```

3. **Input Sanitization (`sanitizeValue`)**
   - Sanitizes HTML tags
   - Handles nested objects
   - Preserves non-string values

   ```typescript
   it('should sanitize HTML tags', () => {
     const input = '<script>alert("xss")</script>Hello';
     const sanitized = sanitizeValue(input);
     expect(sanitized).toBe('Hello');
   });
   ```

4. **Security Headers (`applySecurityHeaders`)**
   - Verifies correct security header application
   - Checks CSP, frame options, and other security headers

   ```typescript
   it('should apply security headers', () => {
     const mockRequest = new Request('http://localhost');
     const headers = applySecurityHeaders(mockRequest);
     expect(headers['content-security-policy']).toBeDefined();
     expect(headers['x-frame-options']).toBeDefined();
   });
   ```

5. **Error Handling (`handleError`)**
   - Tests error responses in production and development modes
   - Verifies error message and stack trace inclusion

   ```typescript
   it('should include error details in development mode', () => {
     const error = new Error('Test error');
     const result = handleError(error, 'development');
     expect(result).toEqual({
       error: true,
       message: 'Test error',
       stack: expect.any(String)
     });
   });
   ```
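To make the expectations above concrete, here is a self-contained sketch of two of these utilities, `checkRateLimit` and `sanitizeValue`. These are illustrative reimplementations, not the project's code; the window length and the sanitization regexes are assumptions:

```typescript
// Illustrative reimplementations (assumptions: 15-minute window,
// tag-stripping sanitizer). The project's real utilities live in the
// security middleware and may behave differently.
const WINDOW_MS = 15 * 60 * 1000;
const hits = new Map<string, { count: number; windowStart: number }>();

function checkRateLimit(ip: string, limit: number, now = Date.now()): void {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // start a new window
    return;
  }
  entry.count += 1;
  if (entry.count > limit) {
    throw new Error("Too many requests from this IP");
  }
}

function sanitizeValue(value: unknown): unknown {
  if (typeof value === "string") {
    // Drop <script> elements (including their content), then any remaining tags
    return value
      .replace(/<script\b[^>]*>[\s\S]*?<\/script\s*>/gi, "")
      .replace(/<[^>]*>/g, "");
  }
  if (Array.isArray(value)) return value.map(sanitizeValue);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        sanitizeValue(v),
      ])
    );
  }
  return value; // numbers, booleans, null pass through untouched
}
```

Keeping the utilities pure (state in one map, strings in, strings out) is what makes the granular tests above possible without spinning up a server.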
### Testing Philosophy

- **Isolation**: Each utility function is tested independently
- **Comprehensive coverage**: Multiple scenarios for each function
- **Predictable behavior**: Clear expectations for input and output
- **Error handling**: Robust testing of error conditions
### Best Practices

- Use minimal, focused test cases
- Test both successful and failure scenarios
- Verify input sanitization and security measures
- Mock external dependencies when necessary
### Running Security Tests

```bash
# Run all tests
bun test

# Run only the security tests
bun test __tests__/security/
```
### Continuous Improvement

- Regularly update test cases
- Add new test scenarios as security requirements evolve
- Perform periodic security audits
## Best Practices

- **Isolation**: Each test should be independent and not rely on the state of other tests.
- **Mocking**: Use the provided mock utilities for external dependencies.
- **Cleanup**: Clean up any resources or state modifications in `afterEach` or `afterAll` hooks.
- **Descriptive names**: Use clear, descriptive test names that explain the expected behavior.
- **Assertions**: Make specific, meaningful assertions rather than general ones.
- **Setup**: Use `beforeEach` for common test setup to avoid repetition.
- **Error cases**: Test both success and error cases for complete coverage.
## Coverage

The project aims for high test coverage, particularly focusing on:

- Security-critical code paths
- API endpoints
- Data validation
- Error handling
- Event broadcasting

Run coverage reports using:

```bash
bun test --coverage
```
## Debugging Tests

To debug tests:

- Set `DEBUG=true` to enable console output during tests
- Use the `--watch` flag for development
- Add `console.log()` statements (they are only shown when `DEBUG` is true)
- Use the test utilities' debugging helpers
### Advanced Debugging

1. **Using the inspector:**

   ```bash
   # Start tests with the inspector
   bun test --inspect

   # Start tests with the inspector, breaking before execution
   bun test --inspect-brk
   ```

2. **Using VS Code:**

   ```jsonc
   // .vscode/launch.json
   {
     "version": "0.2.0",
     "configurations": [
       {
         "type": "bun",
         "request": "launch",
         "name": "Debug Tests",
         "program": "${workspaceFolder}/node_modules/bun/bin/bun",
         "args": ["test", "${file}"],
         "cwd": "${workspaceFolder}",
         "env": { "DEBUG": "true" }
       }
     ]
   }
   ```

3. **Test isolation** - to run a single test in isolation:

   ```typescript
   describe.only("specific test suite", () => {
     it.only("specific test case", () => {
       // Only this test will run
     });
   });
   ```
## Contributing

When contributing new code:

- Add tests for new features
- Ensure existing tests pass
- Maintain or improve coverage
- Follow the existing test patterns and naming conventions
- Document any new test utilities or patterns
## Coverage Requirements

The project maintains strict coverage requirements:

- Minimum overall coverage: 80%
- Critical paths (security, API, data validation): 90%
- New features must include tests with >= 85% coverage
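Bun's runner can enforce a coverage floor directly; the 80% minimum could be wired up in `bunfig.toml` like this (an assumption — the repository may enforce thresholds in CI instead):

```toml
[test]
coverage = true
coverageThreshold = 0.8   # fail the run if coverage drops below 80%
```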
Coverage reports are generated in multiple formats:

- Console summary
- HTML report (`./coverage/index.html`)
- LCOV report (`./coverage/lcov.info`)

To view detailed coverage:

```bash
# Generate and open the coverage report
bun test --coverage && open coverage/index.html
```