Testing Philosophy with File System Interfaces
File system code presents unique testing challenges that many developers struggle with. The traditional approach of writing tests that directly interact with the real file system creates brittle, slow tests that depend on external state. A better approach treats the file system as a dependency that can be abstracted, controlled, and tested in isolation.
Dependency Injection Benefits
The key to testable file system code lies in dependency injection. Instead of hardcoding calls to `os.Open()` or `os.ReadFile()`, inject the file system interface as a dependency. This seemingly simple change transforms your code from untestable to highly testable.
```go
// Problematic: direct file system dependency
func ProcessConfigFile(path string) (*Config, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    return parseConfig(data)
}

// Better: file system as an injected dependency
func ProcessConfigFile(fsys fs.FS, path string) (*Config, error) {
    data, err := fs.ReadFile(fsys, path)
    if err != nil {
        return nil, err
    }
    return parseConfig(data)
}
```
This pattern lets you pass different file system implementations during testing while keeping the same interface in production code. Production code receives `os.DirFS(".")` while tests receive mock implementations.
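For example, the production wiring might look like this minimal sketch (reusing `ProcessConfigFile` from above; the config path is illustrative):

```go
// Production wiring: the real file system, rooted at the working directory.
config, err := ProcessConfigFile(os.DirFS("."), "config.json")
if err != nil {
    log.Fatalf("loading config: %v", err)
}
```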
Testable File System Operations
When designing file system operations for testability, focus on three core principles: interface segregation, explicit dependencies, and predictable behavior.
Interface segregation means depending only on the file system capabilities you actually need. If your function only reads files, depend on `fs.FS` rather than a broader interface that includes write operations. This makes your intentions clear and simplifies testing.
```go
// Only needs read access
func LoadTemplate(fsys fs.FS, name string) (*Template, error) {
    content, err := fs.ReadFile(fsys, name)
    if err != nil {
        return nil, fmt.Errorf("loading template %s: %w", name, err)
    }
    return parseTemplate(content)
}

// Needs write access - use a different interface
func SaveReport(fsys WriteFS, path string, report *Report) error {
    data, err := report.Marshal()
    if err != nil {
        return err
    }
    return fsys.WriteFile(path, data, 0644)
}
```
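Note that `io/fs` defines only read-side interfaces, so the `WriteFS` used by `SaveReport` has to be your own abstraction. A minimal definition matching that usage might look like this:

```go
// WriteFS is a hypothetical write-capable file system interface;
// io/fs does not define one, so projects declare their own.
type WriteFS interface {
    fs.FS
    WriteFile(name string, data []byte, perm fs.FileMode) error
}
```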
Explicit dependencies eliminate hidden file system interactions. Every file system operation should flow through the injected interface, making the code's dependencies obvious to both humans and tests.
Isolation from Real File System
Test isolation prevents tests from interfering with each other and with the host system. When tests manipulate real files, they can leave behind state that affects subsequent test runs or, worse, modify important system files.
Proper isolation means your tests never touch the real file system unless specifically testing integration points. Unit tests work entirely with mock file systems, while integration tests use temporary directories that are cleaned up automatically.
```go
func TestProcessConfigFile(t *testing.T) {
    // Isolated test using a mock file system
    testFS := fstest.MapFS{
        "config.json": &fstest.MapFile{
            Data: []byte(`{"setting": "value"}`),
        },
    }

    config, err := ProcessConfigFile(testFS, "config.json")
    assert.NoError(t, err)
    assert.Equal(t, "value", config.Setting)
}
```
This approach gives you complete control over the file system state during tests. You define exactly what files exist, their contents, permissions, and modification times. Tests become deterministic and fast because they don't perform actual disk I/O.
The isolation also extends to error conditions. Mock file systems let you simulate permission errors, disk full conditions, and network failures that would be difficult or impossible to reproduce consistently with real file systems.
Testing file system code well requires treating the file system as just another dependency to be managed, not as an unchangeable part of the environment. This mindset shift enables the patterns and techniques in the sections that follow.
Using testing/fstest.TestFS
Go's `testing/fstest` package provides powerful tools for validating file system implementations and ensuring your code works correctly with any `fs.FS` implementation. The `TestFS` function acts as a comprehensive test suite that verifies file system behavior against the expected interface contracts.
Validating Custom FS Implementations
When you create custom file system implementations, `fstest.TestFS` becomes your verification tool. It runs a battery of tests that check edge cases, error conditions, and specification compliance that you might miss in manual testing.
```go
func TestMyCustomFS(t *testing.T) {
    // Create your custom file system
    customFS := &MyCustomFS{
        files: map[string][]byte{
            "file1.txt":     []byte("content1"),
            "dir/file2.txt": []byte("content2"),
            "empty.txt":     []byte(""),
        },
    }

    // Validate it conforms to fs.FS interface expectations
    if err := fstest.TestFS(customFS, "file1.txt", "dir/file2.txt", "empty.txt"); err != nil {
        t.Fatal(err)
    }
}
```
The function tests various scenarios: opening files that exist and don't exist, reading directories, handling path separators correctly, and ensuring error messages follow Go's conventions. It catches subtle bugs that surface only in specific conditions.
For read-write file systems, create additional validation that goes beyond the basic `fs.FS` interface:
```go
func TestWritableFS(t *testing.T) {
    fsys := NewMemoryFS()
    // (assumes NewMemoryFS seeds test1.txt and dir/test2.txt,
    // since fstest.TestFS requires the listed files to exist)

    // First validate basic fs.FS compliance
    err := fstest.TestFS(fsys, "test1.txt", "dir/test2.txt")
    require.NoError(t, err)

    // Then test write operations
    testWriteOperations(t, fsys)
    testConcurrentAccess(t, fsys)
    testErrorConditions(t, fsys)
}
```
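The three helpers are left to the implementation. As a sketch, `testWriteOperations` might exercise a write/read round trip through the assumed `WriteFS` interface from earlier:

```go
// testWriteOperations is a hypothetical helper: write a file through the
// assumed WriteFS interface, then read it back through fs.ReadFile.
func testWriteOperations(t *testing.T, fsys WriteFS) {
    t.Helper()

    want := []byte("hello")
    if err := fsys.WriteFile("written.txt", want, 0644); err != nil {
        t.Fatalf("WriteFile: %v", err)
    }

    got, err := fs.ReadFile(fsys, "written.txt")
    if err != nil {
        t.Fatalf("ReadFile: %v", err)
    }
    if !bytes.Equal(got, want) {
        t.Errorf("round trip mismatch: got %q, want %q", got, want)
    }
}
```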
Test Coverage for Edge Cases
`fstest.TestFS` excels at uncovering edge cases that developers typically overlook. It tests paths with different separators, attempts to open directories as files, checks behavior with empty files, and validates that file info structures contain correct data.
The tool systematically tests path handling, which is notoriously error-prone:
```go
// These are all tested automatically by fstest.TestFS
testCases := []string{
    "normal-file.txt",
    "path/to/nested/file.txt",
    "file-with-dots..txt",
    "file.with.multiple.dots",
    "UPPERCASE.TXT",
    "file with spaces.txt",
}
```
It also validates that your file system handles the distinction between files and directories correctly, ensures that `fs.FileInfo` returns accurate information, and checks that error types match Go's standard patterns.
The edge case testing extends to boundary conditions. What happens when you try to read beyond a file's end? How does your implementation handle zero-length files? Does it correctly report file modification times? `TestFS` checks all of these scenarios.
Performance Testing Patterns
While `fstest.TestFS` focuses on correctness, performance testing requires additional patterns. Create benchmarks that measure your file system implementation under realistic workloads:
```go
func BenchmarkFileSystemOperations(b *testing.B) {
    fsys := createLargeTestFS(1000) // 1000 files
    filenames := getRandomFilenames(fsys, 100)

    b.ResetTimer()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            name := filenames[rand.Intn(len(filenames))]
            if file, err := fsys.Open(name); err == nil {
                io.Copy(io.Discard, file)
                file.Close()
            }
        }
    })
}
```
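Both `createLargeTestFS` and `getRandomFilenames` are assumed helpers; a simple version built on `fstest.MapFS` could look like this:

```go
// createLargeTestFS builds an in-memory file system containing n small files.
func createLargeTestFS(n int) fstest.MapFS {
    fsys := make(fstest.MapFS, n)
    for i := 0; i < n; i++ {
        name := fmt.Sprintf("file%04d.txt", i)
        fsys[name] = &fstest.MapFile{Data: []byte("content of " + name)}
    }
    return fsys
}

// getRandomFilenames samples count names (with repetition) from the map's keys.
func getRandomFilenames(fsys fstest.MapFS, count int) []string {
    all := make([]string, 0, len(fsys))
    for name := range fsys {
        all = append(all, name)
    }
    names := make([]string, count)
    for i := range names {
        names[i] = all[rand.Intn(len(all))]
    }
    return names
}
```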
Performance testing should cover different access patterns: sequential reads, random access, directory traversal, and concurrent operations. Each pattern stresses different aspects of your implementation.
Memory usage patterns matter as much as speed. Profile your file system implementation to ensure it doesn't leak memory or consume excessive resources:
```go
func TestMemoryUsage(t *testing.T) {
    fsys := NewMemoryFS()

    // Load test data
    for i := 0; i < 1000; i++ {
        filename := fmt.Sprintf("file%d.txt", i)
        fsys.WriteFile(filename, make([]byte, 10240), 0644)
    }

    // Measure baseline memory
    var m1 runtime.MemStats
    runtime.GC()
    runtime.ReadMemStats(&m1)

    // Perform operations
    performRandomAccess(fsys, 10000)

    // Check for memory leaks
    var m2 runtime.MemStats
    runtime.GC()
    runtime.ReadMemStats(&m2)

    // Alloc is a uint64; guard against wrap-around when usage shrank
    if m2.Alloc > m1.Alloc {
        memGrowth := m2.Alloc - m1.Alloc
        if memGrowth > 1024*1024 { // 1MB threshold
            t.Errorf("Excessive memory growth: %d bytes", memGrowth)
        }
    }
}
```
Combining `fstest.TestFS` for correctness validation with custom benchmarks for performance creates a comprehensive testing strategy: systematic verification gives you confidence in your implementation, while the benchmarks confirm it holds up under realistic workloads.
Mock File System Implementations
Mock file systems give you complete control over the testing environment, allowing you to simulate any condition your code might encounter in production. Unlike stubs that return fixed responses, mocks actively track interactions and can change behavior based on the sequence of operations.
In-Memory Test File Systems
In-memory file systems provide the fastest and most controllable testing environment. They eliminate disk I/O completely while maintaining the same interface as real file systems. The `fstest.MapFS` type serves as an excellent starting point:
```go
func TestFileProcessor(t *testing.T) {
    testFS := fstest.MapFS{
        "input/data.csv": &fstest.MapFile{
            Data:    []byte("name,age\nAlice,30\nBob,25"),
            Mode:    0644,
            ModTime: time.Date(2023, 1, 1, 12, 0, 0, 0, time.UTC),
        },
        "config/settings.json": &fstest.MapFile{
            Data: []byte(`{"delimiter": ",", "headers": true}`),
        },
    }

    processor := NewFileProcessor(testFS)
    result, err := processor.ProcessData("input/data.csv", "config/settings.json")

    assert.NoError(t, err)
    assert.Len(t, result.Records, 2)
    assert.Equal(t, "Alice", result.Records[0]["name"])
}
```
For more complex scenarios, create custom in-memory implementations that support write operations and state tracking:
```go
type MemoryFS struct {
    files map[string]*MemoryFile
    dirs  map[string]bool
    mutex sync.RWMutex
    calls []string // Track method calls for verification
}

func (m *MemoryFS) Open(name string) (fs.File, error) {
    m.mutex.Lock()
    m.calls = append(m.calls, fmt.Sprintf("Open(%s)", name))
    m.mutex.Unlock()

    m.mutex.RLock()
    defer m.mutex.RUnlock()
    if file, exists := m.files[name]; exists {
        return &MemoryFile{data: file.data, info: file.info}, nil
    }
    return nil, &fs.PathError{Op: "open", Path: name, Err: fs.ErrNotExist}
}

func (m *MemoryFS) WriteFile(name string, data []byte, perm fs.FileMode) error {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    m.calls = append(m.calls, fmt.Sprintf("WriteFile(%s, %d bytes)", name, len(data)))

    m.files[name] = &MemoryFile{
        data: append([]byte(nil), data...), // Copy to prevent external modification
        info: &MemoryFileInfo{name: name, size: int64(len(data)), mode: perm},
    }
    return nil
}
```
This approach lets you verify not just the final state but also the sequence of operations your code performs.
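For example, a test in the same package can assert on the recorded call log directly (a sketch, assuming `MemoryFile` implements `fs.File`):

```go
func TestCallSequence(t *testing.T) {
    fsys := &MemoryFS{files: map[string]*MemoryFile{}}

    fsys.WriteFile("a.txt", []byte("first"), 0644)
    if _, err := fsys.Open("a.txt"); err != nil {
        t.Fatal(err)
    }

    // Verify the exact order of operations; the unexported calls
    // slice is accessible because the test lives in the same package.
    want := []string{"WriteFile(a.txt, 5 bytes)", "Open(a.txt)"}
    if !reflect.DeepEqual(fsys.calls, want) {
        t.Errorf("call sequence = %v, want %v", fsys.calls, want)
    }
}
```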
Controlled Error Injection
Mock file systems excel at simulating error conditions that are difficult to reproduce with real file systems. You can trigger specific errors at precise moments to test error handling paths:
```go
type ErrorInjectingFS struct {
    base      fs.FS
    failAfter int
    callCount int
    errorType error
}

func (e *ErrorInjectingFS) Open(name string) (fs.File, error) {
    e.callCount++
    if e.callCount > e.failAfter {
        return nil, e.errorType
    }
    return e.base.Open(name)
}

func TestErrorHandling(t *testing.T) {
    baseFS := fstest.MapFS{
        "file1.txt": &fstest.MapFile{Data: []byte("content1")},
        "file2.txt": &fstest.MapFile{Data: []byte("content2")},
        "file3.txt": &fstest.MapFile{Data: []byte("content3")},
    }

    // Simulate disk full error after 2 successful operations
    errorFS := &ErrorInjectingFS{
        base:      baseFS,
        failAfter: 2,
        errorType: errors.New("no space left on device"),
    }

    processor := NewBatchProcessor(errorFS)
    files := []string{"file1.txt", "file2.txt", "file3.txt"}

    results, err := processor.ProcessFiles(files)

    // Should process first 2 files successfully, then fail
    assert.Error(t, err)
    assert.Len(t, results, 2)
    assert.Contains(t, err.Error(), "no space left on device")
}
```
Error injection patterns help you test partial failure scenarios, retry logic, and cleanup operations. You can simulate network timeouts, permission errors, corrupted files, and resource exhaustion.
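The inverse pattern, failing first and then succeeding, is handy for exercising retry logic. A minimal sketch:

```go
// TransientErrorFS fails the first failCount Open calls, then delegates
// to the base file system. Useful for testing retry and backoff paths.
type TransientErrorFS struct {
    base      fs.FS
    failCount int
}

func (f *TransientErrorFS) Open(name string) (fs.File, error) {
    if f.failCount > 0 {
        f.failCount--
        return nil, &fs.PathError{Op: "open", Path: name, Err: errors.New("transient failure")}
    }
    return f.base.Open(name)
}
```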
State Verification Techniques
Mock file systems enable detailed verification of how your code interacts with the file system. Track not just what files were accessed, but when, in what order, and with what parameters:
```go
type TrackedFS struct {
    base       fs.FS
    operations []Operation
    mutex      sync.Mutex
}

type Operation struct {
    Type      string
    Path      string
    Timestamp time.Time
    Thread    int
}

func (t *TrackedFS) Open(name string) (fs.File, error) {
    t.mutex.Lock()
    t.operations = append(t.operations, Operation{
        Type:      "Open",
        Path:      name,
        Timestamp: time.Now(),
        // getCurrentThreadID is an assumed helper: Go does not expose
        // goroutine IDs, so implementations typically parse runtime.Stack.
        Thread: getCurrentThreadID(),
    })
    t.mutex.Unlock()

    return t.base.Open(name)
}

func TestFileAccessPattern(t *testing.T) {
    tracked := &TrackedFS{base: createTestFS()}
    processor := NewConcurrentProcessor(tracked)

    processor.ProcessDirectory("test-dir")

    // Verify operations occurred in expected order
    ops := tracked.GetOperations()
    assert.Equal(t, "Open", ops[0].Type)
    assert.Equal(t, "test-dir", ops[0].Path)

    // Verify no concurrent access to same file
    fileAccess := groupOperationsByFile(ops)
    for file, accesses := range fileAccess {
        ensureNoOverlap(t, file, accesses)
    }
}
```
State verification extends beyond simple call tracking. You can verify that files are closed properly, that locks are acquired and released correctly, and that temporary files are cleaned up. Mock file systems give you visibility into aspects of your code's behavior that would be invisible with real file systems.
The power of mock file systems lies in their predictability and observability. Every aspect of the file system's behavior is under your control, and every interaction is visible to your tests. This combination enables thorough testing of complex file system operations that would be brittle or impossible to test otherwise.
Test Data Management
Effective test data management determines whether your file system tests remain maintainable as your codebase grows. Poor test data practices lead to flaky tests, unclear failure messages, and maintenance nightmares. Well-organized test data makes tests self-documenting and reliable.
Embedding Test Fixtures
Go's embed directive provides an elegant solution for including test files directly in your binary. This approach ensures test data travels with your code and eliminates path-related issues across different environments:
```go
//go:embed testdata/*
var testDataFS embed.FS

func TestConfigParser(t *testing.T) {
    tests := []struct {
        name     string
        filename string
        expected Config
        wantErr  bool
    }{
        {
            name:     "valid config",
            filename: "testdata/valid-config.json",
            expected: Config{Host: "localhost", Port: 8080},
        },
        {
            name:     "invalid json",
            filename: "testdata/invalid.json",
            wantErr:  true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            config, err := ParseConfig(testDataFS, tt.filename)
            if tt.wantErr {
                assert.Error(t, err)
                return
            }
            assert.NoError(t, err)
            assert.Equal(t, tt.expected, *config)
        })
    }
}
```
Organize embedded test data in a clear directory structure that mirrors your test cases:
```
testdata/
├── configs/
│   ├── valid-config.json
│   ├── invalid-syntax.json
│   └── missing-required-field.json
├── templates/
│   ├── simple.tmpl
│   └── complex-with-partials.tmpl
└── datasets/
    ├── small-sample.csv
    └── large-sample.csv
```
This structure makes it obvious which test data belongs to which test scenario. File names should be descriptive enough that you understand their purpose without opening them.
For binary test data or files that change frequently, consider using a hybrid approach:
```go
func loadTestData(t *testing.T, filename string) []byte {
    // Try embedded data first
    if data, err := testDataFS.ReadFile("testdata/" + filename); err == nil {
        return data
    }

    // Fall back to external file for development
    data, err := os.ReadFile(filepath.Join("testdata", filename))
    require.NoError(t, err, "failed to load test data: %s", filename)
    return data
}
```
Dynamic Test File Generation
Some tests require generated data rather than static fixtures. Dynamic generation is particularly useful for testing edge cases, large datasets, or scenarios with specific characteristics:
```go
func generateTestCSV(rows int, corruptLine int) []byte {
    var buf bytes.Buffer
    buf.WriteString("id,name,email,age\n")

    for i := 1; i <= rows; i++ {
        if i == corruptLine {
            // Intentionally corrupt this line for error testing
            buf.WriteString(fmt.Sprintf("%d,\"Unclosed Quote,user%d@example.com,25\n", i, i))
        } else {
            buf.WriteString(fmt.Sprintf("%d,User%d,user%d@example.com,%d\n", i, i, i, 20+i%50))
        }
    }
    return buf.Bytes()
}

func TestCSVParserWithLargeFile(t *testing.T) {
    testFS := fstest.MapFS{
        "large.csv": &fstest.MapFile{
            Data: generateTestCSV(10000, 5000), // Corrupt line 5000
        },
    }

    parser := NewCSVParser(testFS)
    records, err := parser.ParseFile("large.csv")

    assert.Error(t, err)
    assert.Contains(t, err.Error(), "line 5000")
    assert.Len(t, records, 4999) // Should have parsed up to the error
}
```
Dynamic generation shines when testing boundary conditions. Generate files that are exactly at size limits, create directory structures with maximum depth, or produce data with specific statistical distributions:
```go
func createDeepDirectoryStructure(depth int) fstest.MapFS {
    fsys := make(fstest.MapFS)

    var b strings.Builder
    for i := 0; i < depth; i++ {
        if i > 0 {
            b.WriteString("/")
        }
        b.WriteString(fmt.Sprintf("dir%d", i))
    }

    // Add a file at the deepest level
    filePath := b.String() + "/deep-file.txt"
    fsys[filePath] = &fstest.MapFile{
        Data: []byte(fmt.Sprintf("File at depth %d", depth)),
    }
    return fsys
}

func TestDeepDirectoryHandling(t *testing.T) {
    maxDepth := 100
    testFS := createDeepDirectoryStructure(maxDepth)

    walker := NewDirectoryWalker(testFS)
    files, err := walker.FindAllFiles(".")

    assert.NoError(t, err)
    assert.Len(t, files, 1)
    assert.Contains(t, files[0], "deep-file.txt")
}
```
Cleanup Strategies
Test cleanup becomes critical when tests create temporary files or modify shared state. Implement cleanup strategies that work reliably even when tests fail or panic:
```go
func TestFileProcessor(t *testing.T) {
    // Create temporary directory
    tempDir := t.TempDir() // Automatically cleaned up by the testing framework

    // For more control, use custom cleanup
    customTempDir, err := os.MkdirTemp("", "test-processor-*")
    require.NoError(t, err)
    t.Cleanup(func() {
        os.RemoveAll(customTempDir)
    })

    // Test operations...
    processor := NewFileProcessor(os.DirFS(tempDir))
    result, err := processor.ProcessFiles([]string{"input.txt"})
    assert.NoError(t, err)
    assert.NotNil(t, result)
}
```
For tests that modify global state or shared resources, use cleanup functions that restore the original state:
```go
func TestWithModifiedEnvironment(t *testing.T) {
    // Save original environment
    originalHome := os.Getenv("HOME")
    originalConfig := os.Getenv("CONFIG_PATH")

    t.Cleanup(func() {
        os.Setenv("HOME", originalHome)
        os.Setenv("CONFIG_PATH", originalConfig)
    })

    // Modify environment for test
    // (On Go 1.17+, t.Setenv does this save-and-restore automatically.)
    testHome := t.TempDir()
    os.Setenv("HOME", testHome)
    os.Setenv("CONFIG_PATH", filepath.Join(testHome, "config"))

    // Run test with modified environment
    config, err := LoadUserConfig()
    assert.NoError(t, err)
    assert.Equal(t, testHome, config.HomeDirectory)
}
```
Well-managed test data eliminates a major source of test brittleness. When tests fail, you want to debug your application logic, not struggle with missing files or corrupted test data. The patterns above ensure your test data remains an asset rather than a liability as your test suite grows.
Integration Testing Approaches
Integration tests verify that your file system code works correctly with real file systems and external dependencies. While unit tests with mocks validate logic in isolation, integration tests catch issues that emerge from the interaction between your code and the actual environment it will run in.
Temporary Directory Usage
Integration tests need isolated environments that don't interfere with the host system or other tests. Go's testing package provides `t.TempDir()`, which creates temporary directories that are cleaned up automatically:
```go
func TestFileBackupIntegration(t *testing.T) {
    // Create isolated test environment
    sourceDir := t.TempDir()
    backupDir := t.TempDir()

    // Set up test files
    testFiles := map[string]string{
        "document.txt":     "Important document content",
        "config/app.json":  `{"version": "1.0", "debug": true}`,
        "data/records.csv": "id,name,value\n1,test,100\n2,prod,200",
    }

    for path, content := range testFiles {
        fullPath := filepath.Join(sourceDir, path)
        err := os.MkdirAll(filepath.Dir(fullPath), 0755)
        require.NoError(t, err)
        err = os.WriteFile(fullPath, []byte(content), 0644)
        require.NoError(t, err)
    }

    // Test the backup operation
    backup := NewFileBackup()
    err := backup.CreateBackup(sourceDir, backupDir)
    require.NoError(t, err)

    // Verify backup contents
    for path, expectedContent := range testFiles {
        backupPath := filepath.Join(backupDir, path)
        actualContent, err := os.ReadFile(backupPath)
        require.NoError(t, err)
        assert.Equal(t, expectedContent, string(actualContent))
    }

    // Verify file permissions and metadata
    sourceInfo, err := os.Stat(filepath.Join(sourceDir, "document.txt"))
    require.NoError(t, err)
    backupInfo, err := os.Stat(filepath.Join(backupDir, "document.txt"))
    require.NoError(t, err)
    assert.Equal(t, sourceInfo.Mode(), backupInfo.Mode())
    assert.Equal(t, sourceInfo.Size(), backupInfo.Size())
}
```
Temporary directories provide true isolation while testing real file system behavior. This catches issues like path separator handling, permission problems, and filesystem-specific limitations that mocks cannot simulate.
Real File System Testing
Some functionality can only be properly tested against real file systems. File locking, atomic operations, and filesystem-specific features require integration testing:
```go
func TestConcurrentFileAccess(t *testing.T) {
    if testing.Short() {
        t.Skip("Skipping integration test in short mode")
    }

    tempDir := t.TempDir()
    testFile := filepath.Join(tempDir, "shared.txt")

    // Create initial file
    err := os.WriteFile(testFile, []byte("initial content\n"), 0644)
    require.NoError(t, err)

    const numWriters = 5
    const writesPerWriter = 10

    var wg sync.WaitGroup
    errChan := make(chan error, numWriters)

    // Launch concurrent writers
    for i := 0; i < numWriters; i++ {
        wg.Add(1)
        go func(writerID int) {
            defer wg.Done()
            for j := 0; j < writesPerWriter; j++ {
                line := fmt.Sprintf("Writer %d, line %d\n", writerID, j)
                if err := appendToFile(testFile, line); err != nil {
                    errChan <- err
                    return
                }
                time.Sleep(time.Millisecond) // Small delay to increase contention
            }
        }(i)
    }

    wg.Wait()
    close(errChan)

    // Check for errors
    for err := range errChan {
        t.Errorf("Concurrent write error: %v", err)
    }

    // Verify file integrity; trim the trailing newline so the final
    // empty element doesn't inflate the line count
    content, err := os.ReadFile(testFile)
    require.NoError(t, err)

    lines := strings.Split(strings.TrimSuffix(string(content), "\n"), "\n")
    expectedLines := 1 + (numWriters * writesPerWriter) // Initial + all writes
    assert.Equal(t, expectedLines, len(lines))
}

func appendToFile(filename, content string) error {
    file, err := os.OpenFile(filename, os.O_APPEND|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer file.Close()

    _, err = file.WriteString(content)
    return err
}
```
Real file system testing reveals race conditions, locking issues, and platform-specific behaviors that are impossible to catch with mocked implementations.
Cross-Platform Considerations
File system behavior varies significantly across operating systems. Integration tests should account for these differences or explicitly test platform-specific behavior:
```go
func TestPathHandling(t *testing.T) {
    tempDir := t.TempDir()

    tests := []struct {
        name        string
        inputPath   string
        expectError bool
        windowsOnly bool // run only on Windows, where these paths are rejected
    }{
        {
            name:      "normal path",
            inputPath: "normal-file.txt",
        },
        {
            name:      "path with spaces",
            inputPath: "file with spaces.txt",
        },
        {
            name:      "long filename",
            inputPath: strings.Repeat("a", 255), // Max filename length on most filesystems
        },
        {
            name:        "invalid characters",
            inputPath:   "file<>|.txt",
            expectError: true,
            windowsOnly: true, // Unix-like filesystems generally allow these characters
        },
        {
            name:      "case sensitivity test",
            inputPath: "CaseSensitive.TXT",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            if tt.windowsOnly && runtime.GOOS != "windows" {
                t.Skipf("Skipping Windows-only case on %s", runtime.GOOS)
            }

            filePath := filepath.Join(tempDir, tt.inputPath)
            err := os.WriteFile(filePath, []byte("test content"), 0644)

            if tt.expectError {
                assert.Error(t, err)
            } else {
                assert.NoError(t, err)

                // Verify file can be read back
                content, readErr := os.ReadFile(filePath)
                assert.NoError(t, readErr)
                assert.Equal(t, "test content", string(content))
            }
        })
    }
}

func TestSymlinkHandling(t *testing.T) {
    if runtime.GOOS == "windows" {
        t.Skip("Symlink test requires Unix-like system")
    }

    tempDir := t.TempDir()

    // Create target file
    targetFile := filepath.Join(tempDir, "target.txt")
    err := os.WriteFile(targetFile, []byte("target content"), 0644)
    require.NoError(t, err)

    // Create symlink
    linkFile := filepath.Join(tempDir, "link.txt")
    err = os.Symlink(targetFile, linkFile)
    require.NoError(t, err)

    // Test reading through symlink
    walker := NewDirectoryWalker()
    files, err := walker.ListFiles(tempDir)
    require.NoError(t, err)

    // Should find both target and link
    assert.Len(t, files, 2)

    // Verify link behavior
    linkInfo, err := os.Lstat(linkFile)
    require.NoError(t, err)
    assert.True(t, linkInfo.Mode()&os.ModeSymlink != 0)
}
```
Platform-specific tests help ensure your code works correctly across different environments. Use build tags or runtime checks to handle platform differences gracefully.
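For entire files of platform-specific tests, a build constraint keeps them from compiling anywhere else. A minimal sketch (the package name is illustrative; the `unix` tag requires Go 1.19 or later):

```go
//go:build unix

// This file's tests compile and run only on Unix-like systems,
// so symlink and permission cases never even build on Windows.
package fsintegration_test
```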
Integration tests bridge the gap between unit tests and production deployment. They catch environment-specific issues while maintaining enough control to be reliable and repeatable. The key is finding the right balance between real-world conditions and test stability.
Advanced Testing Patterns
Advanced testing patterns push beyond traditional unit and integration tests to discover edge cases and performance characteristics that conventional testing might miss. These techniques are particularly valuable for file system code, which often handles unpredictable inputs and operates under varying system conditions.
Property-Based Testing
Property-based testing generates random inputs to verify that your code maintains certain invariants regardless of the specific data it processes. For file system operations, this approach excels at finding edge cases in path handling, data processing, and error recovery:
```go
func TestFilePathNormalization(t *testing.T) {
    property := func(pathComponents []string) bool {
        if len(pathComponents) == 0 {
            return true // Skip empty input
        }

        // Filter out empty components and those with null bytes
        validComponents := make([]string, 0, len(pathComponents))
        for _, component := range pathComponents {
            if component != "" && !strings.Contains(component, "\x00") {
                validComponents = append(validComponents, component)
            }
        }
        if len(validComponents) == 0 {
            return true
        }

        // Build path and normalize it
        rawPath := strings.Join(validComponents, "/")
        normalized := NormalizePath(rawPath)

        // Properties that should always hold:
        // 1. Normalized path should not contain double slashes
        if strings.Contains(normalized, "//") {
            t.Logf("Double slash found in normalized path: %s", normalized)
            return false
        }

        // 2. Should not end with slash unless it's root
        if len(normalized) > 1 && strings.HasSuffix(normalized, "/") {
            t.Logf("Unexpected trailing slash: %s", normalized)
            return false
        }

        // 3. Should be idempotent - normalizing twice gives same result
        if NormalizePath(normalized) != normalized {
            t.Logf("Not idempotent: %s != %s", normalized, NormalizePath(normalized))
            return false
        }

        return true
    }

    // Run property test with random inputs
    for i := 0; i < 1000; i++ {
        components := generateRandomPathComponents(rand.Intn(10) + 1)
        if !property(components) {
            t.Fatalf("Property violation with input: %v", components)
        }
    }
}

func generateRandomPathComponents(count int) []string {
    components := make([]string, count)
    chars := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_."

    for i := 0; i < count; i++ {
        length := rand.Intn(20) + 1
        component := make([]byte, length)
        for j := range component {
            component[j] = chars[rand.Intn(len(chars))]
        }
        components[i] = string(component)
    }
    return components
}
```
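`NormalizePath` here is the function under test. A minimal implementation that satisfies all three properties could simply wrap the standard library's `path.Clean`, which collapses double slashes, strips trailing slashes except for the root, and is idempotent:

```go
// NormalizePath is a sketch of the function under test,
// delegating to path.Clean for slash-separated paths.
func NormalizePath(p string) string {
    return path.Clean(p)
}
```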
Property-based testing for file operations can verify data integrity across various transformations:
```go
func TestFileCompressionRoundTrip(t *testing.T) {
    compressor := NewFileCompressor()

    for i := 0; i < 100; i++ {
        // Generate random data with various characteristics
        originalData := generateRandomData(rand.Intn(10000) + 1)

        // Compress and decompress
        compressed, err := compressor.Compress(originalData)
        require.NoError(t, err)

        decompressed, err := compressor.Decompress(compressed)
        require.NoError(t, err)

        // Property: round-trip should preserve data exactly
        assert.Equal(t, originalData, decompressed,
            "Round-trip failed for data of length %d", len(originalData))

        // Property: compressed size should be reasonable
        ratio := float64(len(compressed)) / float64(len(originalData))
        assert.Less(t, ratio, 2.0,
            "Compression ratio too high: %f for data length %d", ratio, len(originalData))
    }
}

func generateRandomData(size int) []byte {
    data := make([]byte, size)

    // Mix different patterns to test various compression scenarios
    switch rand.Intn(4) {
    case 0: // Random data
        rand.Read(data)
    case 1: // Repeated patterns
        pattern := []byte("ABCDEFGH")
        for i := range data {
            data[i] = pattern[i%len(pattern)]
        }
    case 2: // Mostly zeros with some random
        for i := 0; i < size/10; i++ {
            data[rand.Intn(size)] = byte(rand.Intn(256))
        }
    case 3: // Text-like data
        chars := "abcdefghijklmnopqrstuvwxyz \n\n"
        for i := range data {
            data[i] = chars[rand.Intn(len(chars))]
        }
    }
    return data
}
```
Fuzzing File System Operations
Fuzzing automatically generates test inputs to find crashes, panics, and unexpected behaviors. Go's built-in fuzzing support makes it easy to fuzz file system operations:
```go
func FuzzFilePathParser(f *testing.F) {
    // Seed corpus with known interesting cases
    f.Add("/path/to/file.txt")
    f.Add("../../../etc/passwd")
    f.Add("file with spaces.txt")
    f.Add("file\x00with\x00nulls")
    f.Add(strings.Repeat("a", 1000))

    f.Fuzz(func(t *testing.T, input string) {
        // The function should never panic, regardless of input
        defer func() {
            if r := recover(); r != nil {
                t.Errorf("ParseFilePath panicked with input %q: %v", input, r)
            }
        }()

        result, err := ParseFilePath(input)
        if err == nil {
            // If parsing succeeded, result should be valid
            if result.Dir == "" && result.Base == "" {
                t.Errorf("ParseFilePath returned empty result for valid input: %q", input)
            }

            // Reconstructed path should be equivalent to original
            reconstructed := filepath.Join(result.Dir, result.Base)
            if !pathsEquivalent(input, reconstructed) {
                t.Errorf("Path reconstruction mismatch: %q -> %q", input, reconstructed)
            }
        }
    })
}

func FuzzFileReader(f *testing.F) {
    f.Add([]byte("normal file content"))
    f.Add([]byte(""))
    f.Add(make([]byte, 10000))        // Large file
    f.Add([]byte("\x00\x01\x02\xff")) // Binary data

    f.Fuzz(func(t *testing.T, data []byte) {
        // Create temporary file with fuzz data
        tempFile := filepath.Join(t.TempDir(), "fuzz-test.dat")
        err := os.WriteFile(tempFile, data, 0644)
        require.NoError(t, err)

        // Test file reader with arbitrary data
        reader := NewFileReader()

        // Should handle any data without crashing
        content, err := reader.ReadFile(tempFile)
        if err != nil {
            // Error is acceptable, but should not panic
            return
        }

        // If read succeeded, content should match original
        assert.Equal(t, data, content)
    })
}
```
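Run a target with `go test -fuzz=FuzzFilePathParser` (Go 1.18 or later). Without the `-fuzz` flag, `go test` still executes the seed corpus entries as ordinary regression tests, so fuzz functions keep earning their place in CI.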
Performance Benchmarking
Performance benchmarks measure how your file system code behaves under different load conditions. Effective benchmarks test realistic scenarios and measure relevant metrics:
```go
func BenchmarkFileOperations(b *testing.B) {
    tempDir := b.TempDir()

    // Test different file sizes
    fileSizes := []int{1024, 10240, 102400, 1024000} // 1KB to 1MB

    for _, size := range fileSizes {
        b.Run(fmt.Sprintf("size_%d", size), func(b *testing.B) {
            data := make([]byte, size)
            rand.Read(data)

            b.ResetTimer()
            b.ReportAllocs()
            b.SetBytes(int64(size * 2)) // Report throughput: read + write per op

            for i := 0; i < b.N; i++ {
                filename := filepath.Join(tempDir, fmt.Sprintf("bench_%d.dat", i))

                // Measure write performance
                err := os.WriteFile(filename, data, 0644)
                if err != nil {
                    b.Fatal(err)
                }

                // Measure read performance
                _, err = os.ReadFile(filename)
                if err != nil {
                    b.Fatal(err)
                }

                // Clean up
                os.Remove(filename)
            }
        })
    }
}

func BenchmarkConcurrentFileAccess(b *testing.B) {
    tempDir := b.TempDir()

    // RunParallel sizes the worker pool from GOMAXPROCS automatically
    b.ResetTimer()
    b.RunParallel(func(pb *testing.PB) {
        workerID := atomic.AddInt64(&workerCounter, 1)
        data := make([]byte, 1024)
        rand.Read(data)

        for pb.Next() {
            filename := filepath.Join(tempDir,
                fmt.Sprintf("worker_%d_%d.dat", workerID, time.Now().UnixNano()))

            // Write file
            err := os.WriteFile(filename, data, 0644)
            if err != nil {
                b.Fatal(err)
            }

            // Read it back
            _, err = os.ReadFile(filename)
            if err != nil {
                b.Fatal(err)
            }

            // Clean up
            os.Remove(filename)
        }
    })
}

var workerCounter int64

func BenchmarkDirectoryTraversal(b *testing.B) {
    // Create directory structure for testing
    tempDir := b.TempDir()
    createTestDirectoryStructure(tempDir, 5, 10) // 5 levels deep, 10 files per level

    walker := NewDirectoryWalker()

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        files, err := walker.WalkDirectory(tempDir)
        if err != nil {
            b.Fatal(err)
        }

        // Ensure we found expected number of files
        if len(files) < 50 {
            b.Fatalf("Expected at least 50 files, found %d", len(files))
        }
    }
}
```
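`createTestDirectoryStructure` is an assumed helper; one possible shape that yields the 50 files the traversal benchmark expects:

```go
// createTestDirectoryStructure builds a nested tree on disk: depth levels
// of directories with filesPerLevel small files at each level.
func createTestDirectoryStructure(root string, depth, filesPerLevel int) {
    dir := root
    for level := 0; level < depth; level++ {
        dir = filepath.Join(dir, fmt.Sprintf("level%d", level))
        if err := os.MkdirAll(dir, 0755); err != nil {
            panic(err)
        }
        for i := 0; i < filesPerLevel; i++ {
            name := filepath.Join(dir, fmt.Sprintf("file%d.txt", i))
            if err := os.WriteFile(name, []byte("benchmark data"), 0644); err != nil {
                panic(err)
            }
        }
    }
}
```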
Advanced testing patterns reveal problems that traditional testing approaches miss. Property-based testing finds edge cases in your logic, fuzzing discovers inputs that cause crashes, and performance benchmarks ensure your code scales appropriately. These techniques require more investment than basic unit tests, but they provide confidence that your file system code handles the full range of conditions it will encounter in production.