PS C:\Users\matt2\Desktop\Software\universal-meta-language (6)> activate
(.venv) PS C:\Users\matt2\Desktop\Software\universal-meta-language (6)> python main.py
Loading Wikitext‑2 dataset…
Sampled 10000 sentences.
Computing embeddings with Sentence‑BERT (all‑MiniLM‑L6‑v2)…
Batches: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 79/79 [00:37<00:00, 2.12it/s]
Testing codebook sizes: [50, 100, 200, 500, 1000, 2000]
→ Clustering with K=50…
C:\Users\matt2\Desktop\Software\universal-meta-language (6)\.venv\Lib\site-packages\joblib\externals\loky\backend\context.py:131: UserWarning: Could not find the number of physical cores for the following reason:
[WinError 2] The system cannot find the file specified
Returning the number of logical cores instead. You can silence this warning by setting LOKY_MAX_CPU_COUNT to the number of cores you want to use.
warnings.warn(
File "C:\Users\matt2\Desktop\Software\universal-meta-language (6).venv\Lib\site-packages\joblib\externals\loky\backend\context.py", line 247, in _count_physical_cores
cpu_count_physical = _count_physical_cores_win32()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\matt2\Desktop\Software\universal-meta-language (6).venv\Lib\site-packages\joblib\externals\loky\backend\context.py", line 299, in _count_physical_cores_win32
cpu_info = subprocess.run(
^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2800.0_x64__qbz5n2kfra8p0\Lib\subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2800.0_x64_qbz5n2kfra8p0\Lib\subprocess.py", line 1026, in __init_
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2800.0_x64__qbz5n2kfra8p0\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OpenBLAS warning: precompiled NUM_THREADS exceeded, adding auxiliary array for thread metadata.
To avoid this warning, please rebuild your copy of OpenBLAS with a larger NUM_THREADS setting
or set the environment variable OPENBLAS_NUM_THREADS to 24 or lower
→ Clustering with K=100…
→ Clustering with K=200…
→ Clustering with K=500…
→ Clustering with K=1000…
→ Clustering with K=2000…
[DEBUG] Expected rows = 6, Collected rows = 6
[DEBUG] First few result entries:
[{'K': 50,
'ambiguity_rate': 0.995,
'bits_per_code': np.float64(5.643856189774724),
'compression_ratio': np.float64(251.71905783489265),
'silhouette_score': 0.03647768124938011},
{'K': 100,
'ambiguity_rate': 0.99,
'bits_per_code': np.float64(6.643856189774724),
'compression_ratio': np.float64(213.83156439060306),
'silhouette_score': 0.04587062820792198},
{'K': 200,
'ambiguity_rate': 0.98,
'bits_per_code': np.float64(7.643856189774724),
'compression_ratio': np.float64(185.85725939561272),
'silhouette_score': 0.05711694434285164}]
Final DataFrame:
K ambiguity_rate bits_per_code compression_ratio silhouette_score
0 50 0.995 5.643856 251.719058 0.036478
1 100 0.990 6.643856 213.831564 0.045871
2 200 0.980 7.643856 185.857259 0.057117
3 500 0.950 8.965784 158.454198 0.075950
4 1000 0.900 9.965784 142.554376 0.081304
5 2000 0.800 10.965784 129.554451 0.094293
Saved results to semantic_compression_results.csv
Plot saved to semantic_compression_metrics.png
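Before the script itself, a note on the warnings in the log above: the joblib physical-core warning and the OpenBLAS NUM_THREADS warning are harmless, and each message names the environment variable that silences it. A minimal sketch, assuming the caps are set before numpy/scikit-learn are imported; the value 8 is an arbitrary placeholder for your actual core count:

import os

# Set before importing numpy / scikit-learn / joblib so the limits take effect.
os.environ.setdefault("LOKY_MAX_CPU_COUNT", "8")      # silences the joblib physical-core warning
os.environ.setdefault("OPENBLAS_NUM_THREADS", "8")    # keeps OpenBLAS within its precompiled NUM_THREADS

The full main.py that produced the run above follows.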
#!/usr/bin/env python3
"""
main.py
AI‑Driven Semantic Compression Experiment on Wikitext‑2
Includes debug prints to ensure the results list is built correctly.
"""

import random

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from pprint import pprint


def main():
    # 1. Load Wikitext-2 and sample 10k sentences
    print("Loading Wikitext‑2 dataset…")
    ds = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
    sentences = [s for s in ds['text'] if len(s.split()) > 5]
    if len(sentences) < 10000:
        raise ValueError(f"Not enough sentences (found {len(sentences)}).")
    random.seed(42)
    corpus = random.sample(sentences, k=10000)
    N = len(corpus)
    print(f"Sampled {N} sentences.")

    # 2. Compute Sentence-BERT embeddings
    print("Computing embeddings with Sentence‑BERT (all‑MiniLM‑L6‑v2)…")
    model = SentenceTransformer('all-MiniLM-L6-v2')
    embeddings = model.encode(corpus, batch_size=128, show_progress_bar=True)

    # 3. Define codebook sizes to test
    K_values = [50, 100, 200, 500, 1000, 2000]
    K_values = [K for K in K_values if K <= N]
    print(f"Testing codebook sizes: {K_values}")

    # 4. Clustering & metrics computation
    results = []
    avg_tokens = np.mean([len(s.split()) for s in corpus])
    orig_bits_per_sentence = avg_tokens * np.log2(30000)  # rough estimate

    for K in K_values:
        print(f"  → Clustering with K={K}…")
        kmeans = KMeans(n_clusters=K, random_state=42)
        labels = kmeans.fit_predict(embeddings)

        # Ambiguity: collisions
        unique_clusters = len(set(labels))
        collisions = N - unique_clusters
        ambiguity_rate = collisions / N

        # Compression ratio
        bits_per_code = np.log2(K)
        compression_ratio = orig_bits_per_sentence / bits_per_code

        # Silhouette score
        sil = silhouette_score(embeddings, labels) if K > 1 else float('nan')

        results.append({
            'K': K,
            'ambiguity_rate': ambiguity_rate,
            'bits_per_code': bits_per_code,
            'compression_ratio': compression_ratio,
            'silhouette_score': sil
        })

    # --- DEBUG CHECKS ---
    print(f"\n[DEBUG] Expected rows = {len(K_values)}, Collected rows = {len(results)}")
    print("[DEBUG] First few result entries:")
    pprint(results[:3])

    # 5. Build DataFrame & save
    df = pd.DataFrame(results)
    print("\nFinal DataFrame:")
    print(df)
    df.to_csv("semantic_compression_results.csv", index=False)
    print("\nSaved results to semantic_compression_results.csv")

    # 6. Plot metrics
    plt.figure(figsize=(8, 5))
    plt.plot(df['K'], df['ambiguity_rate'], marker='o', label='Ambiguity Rate')
    plt.plot(df['K'], df['compression_ratio'], marker='x', label='Compression Ratio')
    plt.plot(df['K'], df['silhouette_score'], marker='s', label='Silhouette Score')
    plt.xscale('log')
    plt.xlabel('Codebook Size (K)')
    plt.title('Semantic Compression Metrics vs. K')
    plt.legend()
    plt.tight_layout()
    plt.savefig("semantic_compression_metrics.png")
    print("Plot saved to semantic_compression_metrics.png")
    plt.show()


if __name__ == "__main__":
    main()
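As a quick sanity check on the compression-ratio formula, the K=50 row of the output can be reproduced by hand (the 30 000-word vocabulary is the script's own rough estimate):

import numpy as np

bits_per_code = np.log2(50)                 # ≈ 5.6439, matches the bits_per_code column
orig_bits = 251.719058 * bits_per_code      # ≈ 1420.7 bits per sampled line
avg_tokens = orig_bits / np.log2(30000)     # ≈ 95.5 whitespace tokens per line
print(bits_per_code, orig_bits, avg_tokens)

The ~95-token average suggests the sampled "sentences" are really paragraph-sized Wikitext‑2 lines, which inflates the apparent compression ratio.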
Comprehensive Expansion of AI-Driven Semantic Compression Experiment
This expanded report extends the original Wikitext‑2 clustering-based semantic compression experiment to a broader scope, investigating additional datasets, advanced quantization techniques, hybrid models, and downstream task evaluations. It is structured as follows:
Objectives and Scope
Expand dataset diversity (text, code, multilingual).
Integrate advanced quantization methods (PQ, OPQ, AQ).
Evaluate end-to-end semantic reconstruction on downstream tasks (QA, summarization, translation).
Analyze trade-offs across codebook strategies and hybrid pipelines.
Datasets
Wikitext-2 (English Wikipedia) — baseline.
OpenWebText (45 GB scraped web data) — large-scale text.
CodeSearchNet (Python code) — code semantics.
Multilingual TED Talks — multilingual text.
ImageNet Captions — vision-language pairs for multimodal.
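A loading sketch for the text corpora listed above, using the Hugging Face datasets library. Only the wikitext ID is confirmed by the baseline script; the other Hub IDs (openwebtext, code_search_net, ted_multi) are assumptions that may need updating, and the image–caption pairs for the multimodal arm would come from a separate vision-language loader not shown here:

from datasets import load_dataset

# Hub IDs other than 'wikitext' are assumed and should be verified before running.
CORPORA = {
    "wikitext2":     ("wikitext", "wikitext-2-raw-v1"),  # baseline, confirmed by main.py
    "openwebtext":   ("openwebtext", None),              # large-scale scraped web text
    "codesearchnet": ("code_search_net", "python"),      # Python functions with docstrings
    "ted_multi":     ("ted_multi", None),                # multilingual TED transcripts
}

def load_corpus(name, split="train"):
    path, config = CORPORA[name]
    return load_dataset(path, config, split=split) if config else load_dataset(path, split=split)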
Embedding Models
Sentence-BERT variants:
all-MiniLM-L6-v2 (384‑dim) — fast baseline.
all-mpnet-base-v2 (768‑dim) — richer embeddings.
xlm-r-100langs (512‑dim) — multilingual.
CLIP-ViT-B/32 — image-text embeddings.
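An embedding-extraction sketch covering the variants above. all-MiniLM-L6-v2 is confirmed by the baseline run; the mpnet and CLIP names are standard sentence-transformers checkpoints, while the multilingual checkpoint name is an assumption standing in for the xlm-r model listed above:

from sentence_transformers import SentenceTransformer

MODELS = {
    "minilm": "all-MiniLM-L6-v2",                  # 384-dim, fast baseline (used in main.py)
    "mpnet":  "all-mpnet-base-v2",                 # 768-dim, richer embeddings
    "xlm-r":  "paraphrase-xlm-r-multilingual-v1",  # multilingual (assumed checkpoint name)
    "clip":   "clip-ViT-B-32",                     # image-text embeddings via CLIP
}

def embed(texts, key="minilm", batch_size=128):
    model = SentenceTransformer(MODELS[key])
    # normalize_embeddings keeps vectors unit-length, matching the normalization step in the pipeline below.
    return model.encode(texts, batch_size=batch_size, show_progress_bar=True,
                        normalize_embeddings=True)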
Quantization & Clustering Methods
KMeans & MiniBatchKMeans: baseline discrete codes.
Product Quantization (PQ): FAISS PQ with 8–16 subquantizers.
Optimized PQ (OPQ): learned rotation before PQ.
Additive Quantization (AQ): multi-codebook additive representation.
Vector-Quantized Autoencoders: VQ-VAE style latent quantization.
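A sketch of the PQ and OPQ arms using FAISS, assuming 384-dim MiniLM embeddings; the subquantizer count is an illustrative default, not a tuned value:

import faiss
import numpy as np

def train_pq(embeddings, m=8, use_opq=False):
    # Product quantization: m subquantizers x 8 bits each = 8*m bits per vector.
    x = np.ascontiguousarray(embeddings, dtype=np.float32)
    d = x.shape[1]                                       # must be divisible by m (384 / 8 = 48)
    factory = f"OPQ{m},PQ{m}" if use_opq else f"PQ{m}"   # OPQ = learned rotation before PQ
    index = faiss.index_factory(d, factory)
    index.train(x)    # learns the (rotation and) subquantizer codebooks
    index.add(x)      # stores each vector as an 8*m-bit code
    return index

At m=8, each sentence embedding costs 64 bits, versus 384 * 32 bits for a raw float32 MiniLM vector.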
Experimental Pipeline
Preprocessing: sentence segmentation and SentencePiece tokenization; embedding vector normalization.
Embedding Extraction: batched inference with GPU acceleration.
Quantization & Indexing: training quantizers, measuring code size and storage footprint.
Reconstruction:
Indirect: nearest-centroid decoding, feeding reconstructed embeddings into a decoder LLM.
Direct: reconstruct text via retrieval-augmented generation (RAG) against the original corpus.
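A sketch of the retrieval-style reconstruction step for the KMeans baseline: each code is decoded to its centroid, and the nearest sentence in the original corpus is returned as a proxy reconstruction to feed downstream tasks. This is one plausible reading of the pipeline above, assuming the kmeans object, embeddings, and corpus from main.py:

import numpy as np

def decode_codes(codes, centroids, corpus_embeddings, corpus_texts):
    # Map each cluster code back to its centroid, then to the closest original sentence.
    reconstructions = []
    for c in codes:
        dists = np.linalg.norm(corpus_embeddings - centroids[c], axis=1)
        reconstructions.append(corpus_texts[int(np.argmin(dists))])
    return reconstructions

# Usage with the baseline: decode_codes(kmeans.labels_, kmeans.cluster_centers_, embeddings, corpus)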
Metrics
Intrinsic:
Ambiguity Rate, Bits/code, Compression Ratio, Silhouette Score, Quantization MSE.
Extrinsic:
QA accuracy (e.g., SQuAD, TyDi QA), ROUGE and BERTScore for summarization, BLEU for translation, code generation accuracy (CodeBLEU).
Latency & Throughput: encode/decode speed, memory footprint.
Results & Analysis
Comparative tables and plots of intrinsic metrics across methods and datasets.
Downstream task performance vs. compression ratio curves.
Analysis of semantic drift and failure cases.
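An aggregation sketch for the comparative tables, assuming each (dataset, method) run writes a CSV with the same columns as semantic_compression_results.csv; the results_<dataset>_<method>.csv naming pattern is illustrative, not prescribed:

import glob
import pandas as pd

frames = []
for path in glob.glob("results_*_*.csv"):
    _, dataset, method = path[:-len(".csv")].split("_", 2)   # e.g. results_wikitext2_pq.csv
    df = pd.read_csv(path)
    df["dataset"], df["method"] = dataset, method
    frames.append(df)

if frames:
    summary = pd.concat(frames, ignore_index=True)
    # One row per (dataset, method), one column per codebook size K.
    print(summary.pivot_table(index=["dataset", "method"], columns="K", values="compression_ratio"))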
Technical Challenges & Solutions
High-dimensional quantizer training on large-scale datasets.
Hybrid model integration complexities.
Balancing offline quantizer training vs. online adaptation.
Conclusions & Future Work
Summary of best-performing pipelines by use-case.
Recommendations for real-world deployment (e.g., IoT, edge devices).
Directions: adaptive codebooks, reinforcement-learned quantization, cross-modal semantic compression.
This document serves as a blueprint for implementing and reporting a thorough, end-to-end AI-driven semantic compression study across modalities and tasks.