A comprehensive Python package for cognitive computing, implementing various brain-inspired computing paradigms for robust, efficient, and adaptive information processing.
The `cognitive-computing` package provides implementations of several cognitive computing paradigms:
- Sparse Distributed Memory (SDM) ✅ - Fully implemented
- Holographic Reduced Representations (HRR) ✅ - Fully implemented
- Vector Symbolic Architectures (VSA) ✅ - Fully implemented
- Hyperdimensional Computing (HDC) ✅ - Fully implemented
These technologies enable:
- Brain-inspired computing - Models based on human memory and cognition
- Content-addressable storage - Retrieve data by content, not location
- Noise tolerance - Graceful degradation with noisy inputs
- Fast approximate computing - Trade precision for speed and robustness
- Symbolic reasoning - Combine neural and symbolic approaches
```bash
# Install from source
git clone https://github.com/transparentai-tech/cognitive-computing.git
cd cognitive-computing
pip install -e .

# Optional extras
pip install -e ".[dev]"   # development dependencies
pip install -e ".[viz]"   # visualization dependencies
pip install -e ".[gpu]"   # GPU support
```
```python
import numpy as np
from cognitive_computing.sdm import create_sdm

# Create a 1000-dimensional SDM
sdm = create_sdm(dimension=1000)

# Generate random binary patterns
address = np.random.randint(0, 2, 1000)
data = np.random.randint(0, 2, 1000)

# Store the pattern
sdm.store(address, data)

# Recall with perfect address
recalled_data = sdm.recall(address)
print(f"Perfect recall accuracy: {np.mean(recalled_data == data):.2%}")

# Recall with noisy address (10% noise)
from cognitive_computing.sdm.utils import add_noise
noisy_address = add_noise(address, noise_level=0.1)
recalled_noisy = sdm.recall(noisy_address)
print(f"Noisy recall accuracy: {np.mean(recalled_noisy == data):.2%}")
```
```python
from cognitive_computing.sdm import SDM, SDMConfig

# Custom configuration
config = SDMConfig(
    dimension=2000,
    num_hard_locations=5000,
    activation_radius=900,
    storage_method="counters",  # or "binary"
    parallel=True,
    num_workers=4,
)

# Create SDM with custom parameters
sdm = SDM(config)

# Use different address decoders
from cognitive_computing.sdm.address_decoder import create_decoder

# Options: 'hamming', 'jaccard', 'random', 'adaptive', 'hierarchical', 'lsh'
decoder = create_decoder('adaptive', config, sdm.hard_locations)
```
```python
import numpy as np
from cognitive_computing.hrr import create_hrr
from cognitive_computing.hrr.encoding import RoleFillerEncoder

# Create HRR system
hrr = create_hrr(dimension=1024)

# Basic binding and unbinding
role = hrr.generate_vector()
filler = hrr.generate_vector()
binding = hrr.bind(role, filler)
retrieved = hrr.unbind(binding, role)
print(f"Similarity: {hrr.similarity(retrieved, filler):.3f}")

# Encode structured information
encoder = RoleFillerEncoder(hrr)
person = encoder.encode_structure({
    "name": hrr.generate_vector(),  # Vector for "John"
    "age": hrr.generate_vector(),   # Vector for "25"
    "city": hrr.generate_vector(),  # Vector for "Boston"
})

# Cleanup memory for robust retrieval
from cognitive_computing.hrr.cleanup import CleanupMemory, CleanupMemoryConfig

cleanup = CleanupMemory(CleanupMemoryConfig(threshold=0.3), dimension=1024)
cleanup.add_item("john", hrr.generate_vector())
cleanup.add_item("mary", hrr.generate_vector())

# Clean up noisy vectors
noisy_vector = retrieved + np.random.randn(1024) * 0.2
name, clean_vector, similarity = cleanup.cleanup(noisy_vector, return_similarity=True)
```
```python
from cognitive_computing.vsa import create_vsa, VSAConfig, VectorType

# Create VSA system
vsa = create_vsa(dimension=10000, vector_type=VectorType.BIPOLAR)

# Basic binding operations
a = vsa.generate_vector()
b = vsa.generate_vector()
bound = vsa.bind(a, b)
recovered = vsa.unbind(bound, a)
print(f"Similarity: {vsa.similarity(recovered, b):.3f}")

# Bundle multiple vectors
vectors = [vsa.generate_vector() for _ in range(5)]
bundled = vsa.bundle(vectors)

# Use different architectures
from cognitive_computing.vsa.architectures import BSC, MAP, FHRR

# Binary Spatter Codes
bsc = BSC(dimension=8192)
x = bsc.generate_vector()
y = bsc.generate_vector()
z = bsc.bind(x, y)  # XOR binding

# Multiply-Add-Permute
map_vsa = MAP(dimension=10000)
bound = map_vsa.bind(a, b)  # Uses multiplication and permutation

# Fourier HRR (complex-valued binding; vectors must match its dimension)
fhrr = FHRR(dimension=1024)
c = fhrr.generate_vector()
d = fhrr.generate_vector()
complex_bound = fhrr.bind(c, d)
```
- **Multiple Storage Methods**
  - Counter-based (default) - Better noise tolerance
  - Binary - Lower memory usage
- **Six Address Decoders**
  - Hamming - Classic distance-based
  - Jaccard - For sparse data
  - Random - O(1) hashing
  - Adaptive - Self-adjusting
  - Hierarchical - Multi-level
  - LSH - Locality-sensitive hashing
- **Comprehensive Analysis Tools**
  - Memory capacity estimation (see the rule-of-thumb sketch after this list)
  - Activation pattern analysis
  - Performance benchmarking
  - Crosstalk measurement
- **Data Encoding Utilities**
  - Integer encoding
  - Float encoding
  - String encoding
  - Vector encoding
- **Visualization Support**
  - Memory distribution plots
  - Activation patterns
  - Recall accuracy curves
  - Interactive 3D visualizations
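How large should an SDM be? For a quick sizing check, independent of the package's own capacity estimator, Kanerva's classic analysis puts reliable capacity at roughly 15% of the number of hard locations:

```python
# Rule-of-thumb capacity (Kanerva, 1988): an SDM with M hard locations
# reliably stores on the order of 0.15 * M random patterns.
num_hard_locations = 5000
approx_capacity = int(0.15 * num_hard_locations)
print(f"Expect roughly {approx_capacity} patterns before recall degrades")
```

Beyond that point, crosstalk between stored patterns starts to dominate recall.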
- **Core Operations**
  - Circular convolution binding (demonstrated in the sketch after this list)
  - Circular correlation unbinding
  - Vector bundling (superposition)
  - Real and complex storage modes
- **Encoding Strategies**
  - Role-filler binding
  - Sequence encoding (positional/chaining)
  - Hierarchical structures
  - Tree encoding
- **Cleanup Memory**
  - Item storage and retrieval
  - Similarity-based cleanup
  - Multiple similarity metrics
  - Persistence support
- **Analysis Tools**
  - Binding capacity analysis
  - Crosstalk measurement
  - Performance benchmarking
  - Vector generation utilities
- **Visualization Support**
  - Similarity matrices
  - Convolution spectra
  - Cleanup space visualization
  - Performance dashboards
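To make the core operations concrete, here is a package-independent NumPy sketch of circular convolution binding and circular correlation unbinding. It mirrors the math the `hrr` module implements, not its actual code:

```python
import numpy as np

def bind(a, b):
    """Circular convolution via FFT (HRR binding)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Circular correlation via FFT (approximate unbinding)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

rng = np.random.default_rng(0)
d = 1024
# Plate's HRR uses i.i.d. N(0, 1/d) components so vectors have ~unit norm
role = rng.normal(0.0, 1.0 / np.sqrt(d), d)
filler = rng.normal(0.0, 1.0 / np.sqrt(d), d)

trace = bind(role, filler)
retrieved = unbind(trace, role)

cos = retrieved @ filler / (np.linalg.norm(retrieved) * np.linalg.norm(filler))
print(f"cosine(retrieved, filler) = {cos:.3f}")  # well above chance (~0)
```

The retrieved vector is only a noisy approximation of the filler, which is exactly why the cleanup memory above exists.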
- **Vector Types**
  - Binary vectors {0, 1}
  - Bipolar vectors {-1, +1}
  - Ternary vectors {-1, 0, +1}
  - Complex unit vectors
  - Integer vectors
- **Binding Operations**
  - XOR (self-inverse for binary; see the sketch after this list)
  - Element-wise multiplication
  - Circular convolution
  - MAP (Multiply-Add-Permute)
  - Permutation-based binding
- **VSA Architectures**
  - Binary Spatter Codes (BSC)
  - Multiply-Add-Permute (MAP)
  - Fourier HRR (FHRR)
  - Sparse VSA
  - HRR-compatible mode
- **Encoding Strategies**
  - Random indexing for text
  - Spatial encoding for coordinates
  - Temporal encoding for sequences
  - Level encoding for continuous values
  - Graph encoding for networks
- **Analysis and Utilities**
  - Capacity analysis
  - Vector generation utilities
  - Architecture comparison tools
  - Performance benchmarking
  - Cross-architecture conversion
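Of the binding operations above, XOR stands out because it is exactly self-inverse on binary vectors, which is what makes Binary Spatter Codes so cheap to invert. A package-independent NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8192
x = rng.integers(0, 2, d)
y = rng.integers(0, 2, d)

bound = np.bitwise_xor(x, y)          # binding: elementwise XOR
recovered = np.bitwise_xor(bound, y)  # unbinding: XOR again with the same key
assert np.array_equal(recovered, x)   # exact recovery - XOR is self-inverse

# The bound vector resembles neither input: normalized Hamming distance ~0.5
print(np.mean(bound != x), np.mean(bound != y))
```

Element-wise multiplication plays the same role for bipolar vectors, since ±1 entries square to +1.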
- **Hypervector Types**
  - Binary hypervectors {0, 1}
  - Bipolar hypervectors {-1, +1}
  - Ternary hypervectors {-1, 0, +1}
  - Level hypervectors (multi-level quantized)
- **Core Operations**
  - Binding (XOR for binary, multiplication for others)
  - Bundling (majority vote, averaging, weighted; see the sketch after this list)
  - Permutation (cyclic shift, random, block)
  - Similarity (Hamming, cosine, Euclidean, Jaccard)
- **Classifiers**
  - One-shot learning classifier
  - Adaptive online classifier
  - Ensemble voting classifier
  - Hierarchical multi-level classifier
- **Item Memory**
  - Associative storage and retrieval
  - Content-based cleanup
  - Similarity queries
  - Merge and update operations
- **Encoding Strategies**
  - Scalar encoding (thermometer, level)
  - Categorical encoding
  - Sequence encoding (n-gram, positional)
  - Spatial encoding (multi-dimensional)
  - Record encoding (structured data)
  - N-gram text encoding
- **Analysis and Utilities**
  - Capacity measurement
  - Noise robustness testing
  - Performance benchmarking
  - Binding property analysis
  - Similarity distribution analysis
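Bundling is what lets a single hypervector represent a whole set. A minimal, package-independent NumPy illustration of majority-vote bundling with bipolar hypervectors (the `hdc` module's own API may differ in names):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10000
items = rng.choice([-1, 1], size=(5, d))  # five random bipolar hypervectors

# Majority-vote bundling: sign of the elementwise sum (odd count, so no ties)
bundle = np.sign(items.sum(axis=0))

# Every stored item stays recognizable inside the bundle...
for i, v in enumerate(items):
    print(f"item {i}: cosine to bundle = {v @ bundle / d:.3f}")

# ...while an unrelated vector is near-orthogonal
probe = rng.choice([-1, 1], size=d)
print(f"random probe: cosine to bundle = {probe @ bundle / d:.3f}")
```

Each stored item keeps a cosine of roughly 0.3-0.4 to the bundle while unrelated vectors sit near zero; that margin is what the HDC classifiers above exploit.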
```python
import numpy as np
from cognitive_computing.sdm import create_sdm
from cognitive_computing.sdm.utils import generate_random_patterns, add_noise

# Create SDM
sdm = create_sdm(dimension=1000)

# Generate a base pattern and noisy variations of it
base_pattern = np.random.randint(0, 2, 1000)
variations = [add_noise(base_pattern, 0.05) for _ in range(10)]

# Store all variations with the same label
label = np.array([1, 0, 0, 0] + [0] * 996)  # One-hot encoded label
for variant in variations:
    sdm.store(variant, label)

# Recognize a noisy input
test_input = add_noise(base_pattern, 0.15)
recalled_label = sdm.recall(test_input)
print(f"Pattern recognized: {np.argmax(recalled_label)}")
```
```python
# Store sequential patterns
sequence = generate_random_patterns(10, 1000)[0]  # 10 patterns
for i in range(len(sequence) - 1):
    sdm.store(sequence[i], sequence[i + 1])

# Recall sequence
current = sequence[0]
recalled_sequence = [current]
for _ in range(len(sequence) - 1):
    current = sdm.recall(current)
    recalled_sequence.append(current)
```
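Because each recall output feeds the next lookup, bit errors compound along the chain. A quick fidelity check, reusing the variables from the block above:

```python
# Compare each recalled pattern to the original sequence; accuracy
# typically decays along the chain as recall noise accumulates.
for i, pattern in enumerate(recalled_sequence):
    accuracy = np.mean(pattern == sequence[i])
    print(f"Step {i}: {accuracy:.2%} of bits correct")
```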
```python
import numpy as np
from cognitive_computing.sdm import create_sdm
from cognitive_computing.sdm.utils import PatternEncoder

encoder = PatternEncoder(dimension=1000)
sdm = create_sdm(dimension=1000)

# Encode and store different data types
# Integer
age = 25
age_encoded = encoder.encode_integer(age)
sdm.store(age_encoded, age_encoded)

# String
name = "Alice"
name_encoded = encoder.encode_string(name)
sdm.store(name_encoded, name_encoded)

# Float array
features = np.array([0.1, 0.5, 0.9, 0.2])
features_encoded = encoder.encode_vector(features)
sdm.store(features_encoded, features_encoded)
```
SDM reads and writes stay fast even at high dimensions; benchmark them with the built-in utilities:
```python
from cognitive_computing.sdm.utils import evaluate_sdm_performance

# Run performance benchmark
results = evaluate_sdm_performance(sdm, test_patterns=100)
print(f"Write time: {results.write_time_mean*1000:.2f} ms")
print(f"Read time: {results.read_time_mean*1000:.2f} ms")
print(f"Recall accuracy: {results.recall_accuracy_mean:.2%}")
```
For large-scale applications, enable parallel processing:
```python
from cognitive_computing.sdm import SDM, SDMConfig

config = SDMConfig(
    dimension=5000,
    num_hard_locations=10000,
    parallel=True,
    num_workers=8,
)
sdm = SDM(config)
```
```python
from cognitive_computing.sdm.utils import evaluate_sdm_performance
from cognitive_computing.sdm.visualizations import (
    plot_memory_distribution,
    plot_recall_accuracy,
    visualize_memory_contents,
)

# Analyze memory distribution
fig = plot_memory_distribution(sdm)

# Test and plot recall accuracy
test_results = evaluate_sdm_performance(sdm)
fig = plot_recall_accuracy(test_results)

# Interactive 3D visualization
fig = visualize_memory_contents(sdm, interactive=True)
```
Run the test suite:
```bash
# Run all tests
pytest

# Run specific module tests
pytest tests/test_sdm/

# Run with coverage
pytest --cov=cognitive_computing

# Run only fast tests
pytest -m "not slow"
```
We welcome contributions! Please see our Contributing Guide for details.
```bash
# Install in development mode with all dependencies
pip install -e ".[dev,viz]"

# Run code formatting
black cognitive_computing tests

# Run linting
flake8 cognitive_computing tests

# Run type checking
mypy cognitive_computing
```
- ✅ **Sparse Distributed Memory (SDM)** - Complete
  - Core implementation with counter/binary storage
  - Six address decoder strategies
  - Comprehensive utilities and visualizations
  - Full test coverage (226/226 tests passing)
- ✅ **Holographic Reduced Representations (HRR)** - Complete
  - Circular convolution/correlation operations
  - Role-filler and sequence encoding
  - Cleanup memory implementation
  - Full test coverage (184/184 tests passing)
- ✅ **Vector Symbolic Architectures (VSA)** - Complete
  - Five vector types (binary, bipolar, ternary, complex, integer)
  - Five binding operations (XOR, multiplication, convolution, MAP, permutation)
  - Five complete architectures (BSC, MAP, FHRR, Sparse VSA, HRR-compatible)
  - Comprehensive encoding strategies and utilities
  - Near-complete test coverage (294/295 tests passing - 99.7%)
- ✅ **Hyperdimensional Computing (HDC)** - Complete
  - Four hypervector types (binary, bipolar, ternary, level)
  - Core operations (bind, bundle, permute, similarity)
  - Item memory with associative retrieval
  - Advanced classifiers (one-shot, adaptive, ensemble, hierarchical)
  - Multiple encoding strategies (scalar, categorical, sequence, spatial, n-gram)
  - Full test coverage (193/193 tests passing - 100%)
- Total Tests: 898 (897 passing, 1 skipped - 99.89% success rate)
- Total Modules: 36 core implementation files
- Example Scripts: 20 (all tested and working)
- Documentation: Complete API references, theory guides, and examples
- 🚧 **Advanced Integration Features**
  - Cross-paradigm operations
  - Neural network interfaces (PyTorch, TensorFlow)
  - GPU acceleration
  - Distributed computing support
- 🚧 **Future Enhancements** (see `planned_development/`)
  - Advanced decoders and storage mechanisms for SDM
  - Enhanced convolution operations for HRR
  - Learning and adaptation mechanisms for VSA
  - Extreme-scale operations and quantum integration for HDC
  - Unified cognitive architecture
  - Complementary technologies
If you use this package in your research, please cite:
```bibtex
@software{cognitive_computing,
  title = {Cognitive Computing: A Python Package for Brain-Inspired Computing},
  author = {Ian Hamilton},
  year = {2025},
  url = {https://github.com/transparentai-tech/cognitive-computing}
}
```
- Kanerva, P. (1988). Sparse Distributed Memory. MIT Press.
- Kanerva, P. (1993). "Sparse Distributed Memory and Related Models." Associative Neural Memories.
- Plate, T. A. (1995). "Holographic Reduced Representations." IEEE Transactions on Neural Networks.
- Gayler, R. W. (2003). "Vector Symbolic Architectures Answer Jackendoff's Challenges for Cognitive Neuroscience."
- Kanerva, P. (2009). "Hyperdimensional Computing: An Introduction to Computing in Distributed Representation."
- Plate, T. A. (2003). Holographic Reduced Representation: Distributed Representation for Cognitive Structures. CSLI Publications.
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by Pentti Kanerva's groundbreaking work on Sparse Distributed Memory
- Thanks to all contributors and the cognitive computing research community
Note: This package is ready for production use! All four core paradigms (SDM, HRR, VSA, HDC) are fully implemented with comprehensive test coverage (99.89%). The package is actively maintained and we welcome contributions for advanced integration features.