I’m an AI/ML engineer and founder focused on building the next layer of intelligence infrastructure: systems where speed, safety, and cognition meet.

My work bridges Rust-native performance with large-scale model architecture. I design inference pipelines, latency-adaptive schedulers, and LLM optimization frameworks that push models closer to real-time intelligence.

Before Rust, I worked primarily in Python, building ML pipelines for finance and signal modeling. That foundation in data and systems design evolved into Tensorust, where I now focus on production-grade Rust tools for high-performance inference.

Featured Projects

Every Other Token – Rust-based LLM optimizer reducing inference latency by 48% (a minimal sketch of the idea follows this list).

LLM Affector – runtime controller for token generation and contextual flow.

Adaptive Task Scheduler – distributed orchestration layer for multi-model pipelines.

Mycelium-AI Platform – biological sensing + AI integration for environmental feedback loops.
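
The actual interfaces live in the repos below; as a rough, dependency-free illustration of the every-other-token idea, here is a minimal Rust sketch (names like `intercept_stream`, `Tap`, and `AlternatingTap` are hypothetical, not the crate's real API). It forwards every token downstream while sampling alternate tokens for side-channel analysis.

```rust
/// Hypothetical tap invoked on each token as it streams through (not the real crate API).
trait Tap {
    fn on_token(&mut self, index: usize, token: &str);
}

/// Collects every other token for offline inspection.
struct AlternatingTap {
    sampled: Vec<String>,
}

impl Tap for AlternatingTap {
    fn on_token(&mut self, index: usize, token: &str) {
        // Sample only even-indexed tokens; the rest pass through untouched.
        if index % 2 == 0 {
            self.sampled.push(token.to_owned());
        }
    }
}

/// Forward every token to `emit` while letting the tap observe the stream.
fn intercept_stream<'a, I, F>(tokens: I, tap: &mut dyn Tap, mut emit: F)
where
    I: IntoIterator<Item = &'a str>,
    F: FnMut(&str),
{
    for (i, tok) in tokens.into_iter().enumerate() {
        tap.on_token(i, tok);
        emit(tok); // the downstream consumer still sees the full stream
    }
}

fn main() {
    let stream = ["The", " quick", " brown", " fox", " jumps"];
    let mut tap = AlternatingTap { sampled: Vec::new() };

    let mut rendered = String::new();
    intercept_stream(stream, &mut tap, |tok| rendered.push_str(tok));

    println!("full output: {rendered}");
    println!("sampled (every other token): {:?}", tap.sampled);
}
```

The real interceptor works on a live completion stream rather than a fixed slice, but the forwarding-plus-tap shape is the same.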

Core Stack: Rust, Python, PyTorch, Hugging Face Transformers, Flask, FastAPI, Docker, Linux, PostgreSQL

Focus Areas: LLM infrastructure, inference optimization, cognitive systems, Rust AI tooling

I like systems that scale under pressure, and the people who help build them. If you’re experimenting at the edge of AI infrastructure, I’d love to connect.

mattbusel@gmail.com linkedin.com/in/matthewbusel medium.com/@mattbusel

Pinned Repositories

1. Every-Other-Token – A real-time LLM stream interceptor for token-level interaction research. (Rust)

2. LLM-Hallucination-Detection-Script – A comprehensive toolkit for detecting potential hallucinations in LLM responses. Compatible with any LLM API (OpenAI, Anthropic, local models, etc.). (Rust)

3. llm_affector – An async Rust library for LLM-based content analysis, providing hallucination detection and code critique functionality. Built with Tokio for high-performance concurrent operations; see the sketch after this list. (Rust)

4. tokio-prompt-orchestrator – Multi-core, Tokio-native orchestration for LLM pipelines. (Rust)

5. FinRL_DeepSeek_Crypto_Trading – Our solution to Task 1 of the FinRL Contest 2025: a high-performance crypto trading agent using reinforcement learning and LLM-derived market signals. (Python)

6. Quantum-Neural-Network-Model-The-Future-of-AI-Cognition – Quantum computing is revolutionizing the way we think about processing power. Now, imagine applying the principles of quantum mechanics to a neural network—one that mimics and accelerates human cog… (Python)
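
llm_affector and tokio-prompt-orchestrator both lean on Tokio for concurrency. As a hedged sketch of that pattern (the function and type names below are illustrative, not the libraries' actual APIs; assumes `tokio = { version = "1", features = ["full"] }` in Cargo.toml), here is how two analysis passes over a model response can run concurrently on the Tokio runtime.

```rust
use std::time::Duration;

/// Illustrative verdict type; the real crates expose richer report structs.
#[derive(Debug)]
struct Finding {
    check: &'static str,
    flagged: bool,
}

/// Stand-in for a hallucination check (e.g. comparing claims against provided context).
async fn hallucination_check(response: &str) -> Finding {
    tokio::time::sleep(Duration::from_millis(50)).await; // simulate an API round trip
    Finding { check: "hallucination", flagged: response.contains("definitely") }
}

/// Stand-in for a code-critique pass over the same response.
async fn code_critique(response: &str) -> Finding {
    tokio::time::sleep(Duration::from_millis(50)).await; // simulate an API round trip
    Finding { check: "code_critique", flagged: response.contains("unwrap()") }
}

#[tokio::main]
async fn main() {
    let response = "This definitely compiles; just call unwrap() everywhere.";

    // Run both passes concurrently: total latency is about one round trip,
    // not the sum of both.
    let (halluc, critique) = tokio::join!(
        hallucination_check(response),
        code_critique(response),
    );

    for finding in [halluc, critique] {
        println!("{:?}", finding);
    }
}
```

`tokio::join!` is the simplest shape; the orchestrator repos fan work out across worker tasks and channels instead, but the non-blocking, concurrent-by-default structure is the same idea.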