- Alignment Lab AI
- Global Team
- https://www.alignmentlab.ai/
- @alignment_lab
- in/john-cook-95a58a275
Stars
🏡 Open source home automation that puts local control and privacy first.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization.
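As a rough illustration of that pattern (not this library's actual API), here is a minimal sketch: an LLM callable returns JSON, and each extracted value is "grounded" by locating its span in the source text. The `llm` callable and the prompt format are assumptions for illustration.

```python
import json

def extract(text: str, schema: dict, llm) -> list[dict]:
    """Hypothetical sketch, not this library's API: `llm` is any
    callable that takes a prompt string and returns JSON text."""
    prompt = (
        "Extract entities matching this schema as a JSON list of "
        '{"field": ..., "value": ...} objects.\n'
        "Schema: " + json.dumps(schema) + "\nText:\n" + text
    )
    items = json.loads(llm(prompt))
    for item in items:
        # Naive source grounding: locate each extracted value in the input.
        start = text.find(item["value"])
        item["span"] = (start, start + len(item["value"])) if start >= 0 else None
    return items
```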
A framework to enable multimodal models to operate a computer.
Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation.
FreeAskInternet is a completely free, private, and locally running search aggregator and answer generator using multiple LLMs, with no GPU needed. The user can ask a question and the system will run a multi-engine search and generate an answer from the combined results.
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
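For orientation, the core Tree of Thoughts loop is a breadth-limited search over intermediate "thoughts". A minimal sketch, with toy stand-ins for the LLM propose/score calls (not the paper's actual prompts or API):

```python
# Minimal beam-style Tree of Thoughts sketch.
# Both helpers are toy stand-ins for LLM calls.

def propose_thoughts(state: str, k: int) -> list[str]:
    # Stand-in: a real implementation would prompt an LLM for k next steps.
    return [f"{state} -> step{i}" for i in range(k)]

def score_thought(state: str) -> float:
    # Stand-in: a real implementation would ask an LLM to rate the state.
    return float(len(state))

def tree_of_thoughts(problem: str, depth: int = 3, breadth: int = 5, keep: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        # Expand each kept state into `breadth` candidate thoughts,
        # then prune back to the `keep` most promising (beam search).
        candidates = [t for s in frontier for t in propose_thoughts(s, breadth)]
        candidates.sort(key=score_thought, reverse=True)
        frontier = candidates[:keep]
    return frontier[0]
```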
OpenChat: Advancing Open-source Language Models with Imperfect Data
The LLM's practical guide: From the fundamentals to deploying advanced LLM and RAG apps to AWS using LLMOps best practices
A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolved 37.3% of tasks (pass@1) on SWE-bench lite and 46.2% of tasks (pass@1) on SWE-bench verified with…
Large-scale, Informative, and Diverse Multi-round Chat Data (and Models)
OpenDAN is an open source Personal AI OS, which consolidates various AI modules in one place for your personal use.
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM
Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input"
Automate the analysis of GitHub repositories for LLMs with RepoToTextForLLMs. Fetch READMEs, structure, and non-binary files efficiently. Outputs include analysis prompts to aid in a comprehensive review.
Implementation of the Recurrent Memory Transformer (NeurIPS 2022 paper) in PyTorch
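The core RMT idea is easy to sketch: a few learned memory tokens are prepended to each segment and carried across segments. A minimal PyTorch sketch (dimensions and layer counts are arbitrary placeholders, not the repo's configuration):

```python
import torch
import torch.nn as nn

class RMTSketch(nn.Module):
    """Minimal Recurrent Memory Transformer idea: learned memory tokens
    are prepended to each segment and recurrently carried forward."""
    def __init__(self, dim: int = 64, n_mem: int = 4, seg_len: int = 32):
        super().__init__()
        self.mem = nn.Parameter(torch.randn(n_mem, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.n_mem, self.seg_len = n_mem, seg_len

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, long_seq, dim); process it segment by segment.
        mem = self.mem.expand(x.size(0), -1, -1)
        outs = []
        for seg in x.split(self.seg_len, dim=1):
            h = self.encoder(torch.cat([mem, seg], dim=1))
            # Updated memory tokens flow into the next segment.
            mem, out = h[:, :self.n_mem], h[:, self.n_mem:]
            outs.append(out)
        return torch.cat(outs, dim=1)
```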
Domain Adapted Language Modeling Toolkit - E2E RAG
Simple next-token-prediction for RLHF
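For reference, next-token prediction here is the standard causal language-modeling loss; a minimal PyTorch version:

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Standard causal LM loss: predict token t+1 from positions <= t.

    logits: (batch, seq_len, vocab) model outputs
    tokens: (batch, seq_len) input token ids
    """
    # Shift so each position's logits are scored against the *next* token.
    shifted_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shifted_targets = tokens[:, 1:].reshape(-1)
    return F.cross_entropy(shifted_logits, shifted_targets)
```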
An implementation of Everything of Thoughts (XoT).
Flash Attention Triton kernel with support for second-order derivatives
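Second-order support matters because some losses differentiate the gradient itself. A reference-math sketch of where double backward enters (the repo's Triton kernel is a fused version of this computation):

```python
import torch

def attention(q, k, v):
    # Plain reference attention; a fused kernel replaces exactly this math.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(1, 2, 16, 8, requires_grad=True) for _ in range(3))
loss = attention(q, k, v).square().mean()

# Anything that differentiates the gradient (e.g. a gradient penalty)
# needs second-order derivatives through attention:
(g,) = torch.autograd.grad(loss, q, create_graph=True)
g.square().mean().backward()  # backprop through the gradient itself
```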
Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same-tokenizer and cross-tokenizer LLM distillation.
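For context, the standard same-tokenizer white-box KD objective looks like the sketch below; DSKD's dual-space projection across tokenizers is omitted here:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            T: float = 2.0) -> torch.Tensor:
    """Generic white-box KD: KL(teacher || student) over softened logits.
    Only the standard same-tokenizer case; DSKD's cross-tokenizer
    dual-space projection is not shown."""
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    # T^2 rescales gradients to the usual magnitude (Hinton et al.).
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```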
kijai / ComfyUI-VoiceCraft
Forked from jasonppy/VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild
PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models"
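As a refresher on the base technique, a plain (non-quantized) LoRA layer is sketched below; QA-LoRA's group-wise quantization-aware parts are omitted:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Plain LoRA: y = W x + (alpha / r) * B(A(x)).

    QA-LoRA additionally quantizes W and constrains the adapters
    group-wise; those parts are omitted for brevity.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weight
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # start as a no-op update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```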
Streamlines the creation of datasets for training a Large Language Model with instruction-input-output triplets. The default configuration fits github.com/tloen/alpaca-lora requirements.
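The alpaca-lora format mentioned above is a JSON array of instruction/input/output records; a minimal sketch with hypothetical example values:

```python
import json

# One training record in the alpaca-lora triplet format.
# Field names follow the alpaca-lora convention; the values are
# hypothetical examples, not data from this repo.
record = {
    "instruction": "Summarize the following text in one sentence.",
    "input": "LoRA adapts large models by training small low-rank matrices.",
    "output": "LoRA fine-tunes big models cheaply via low-rank updates.",
}

with open("dataset.json", "w") as f:
    json.dump([record], f, indent=2)  # alpaca-lora expects a JSON array
```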