Experiments in Joint Embedding Predictive Architectures (JEPAs).
👆PyTorch Implementation of JEDi Metric described in "Beyond FVD: Enhanced Evaluation Metrics for Video Generation Quality"
A Video Joint Embedding Predictive Architecture (JEPA) that runs on a personal computer.
Train a JEPA world model on a set of pre-collected trajectories from an environment involving an agent in two rooms.
A simple and efficient implementation of Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (I-JEPA)
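For orientation, a minimal single-step sketch of the I-JEPA objective is shown below: a context encoder embeds only the visible patches, an EMA target encoder embeds all patches, and a small predictor regresses the target embeddings of a masked block from the pooled context plus the masked positions. All module names, shapes, and the momentum value are illustrative assumptions, not this repository's API; the toy linear "encoder" stands in for the ViT used in the paper, and the real predictor is a transformer over mask tokens rather than the pooled query used here.

```python
# Hedged, toy sketch of one I-JEPA training step (assumed names and shapes).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

B, num_patches, patch_dim, embed_dim = 8, 196, 16 * 16 * 3, 128

encoder = nn.Linear(patch_dim, embed_dim)         # toy stand-in for a ViT encoder
target_encoder = copy.deepcopy(encoder)           # EMA copy; receives no gradients
pos_embed = nn.Parameter(torch.randn(num_patches, embed_dim) * 0.02)
predictor = nn.Sequential(nn.Linear(2 * embed_dim, 256), nn.GELU(),
                          nn.Linear(256, embed_dim))

patches = torch.randn(B, num_patches, patch_dim)  # fake patchified image batch
mask = torch.zeros(num_patches, dtype=torch.bool)
mask[60:100] = True                               # contiguous block to predict
m = int(mask.sum())

with torch.no_grad():                             # targets come from the EMA branch
    targets = target_encoder(patches)[:, mask]    # (B, m, embed_dim)

context = encoder(patches[:, ~mask]) + pos_embed[~mask]        # visible patches only
pooled = context.mean(dim=1, keepdim=True).expand(-1, m, -1)   # crude context summary
query = torch.cat([pooled, pos_embed[mask].expand(B, -1, -1)], dim=-1)
loss = F.smooth_l1_loss(predictor(query), targets)  # predict masked representations
loss.backward()

with torch.no_grad():                             # momentum update (0.996 is assumed)
    for p_t, p_c in zip(target_encoder.parameters(), encoder.parameters()):
        p_t.mul_(0.996).add_(p_c, alpha=0.004)
```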
Project for Yann LeCun's Deep Learning class: we train a JEPA world model on a set of pre-collected trajectories from a toy environment involving an agent in two rooms.
Demo implementations of JEPA World Models to support research
Joint Embedding Predictive Architecture (JEPA) world model trained on agent trajectories to predict future latent states from pixel inputs and actions. Uses VICReg loss with RNN dynamics to evaluate how well learned embeddings reflect spatial behavior in toy environments.
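A rough sketch of how such a pipeline can be wired in PyTorch, assuming a small CNN pixel encoder, a GRUCell over (latent, action) pairs as the recurrent dynamics, and a VICReg-style loss between predicted and target latents. Every name, shape, and coefficient below is an assumption for illustration, not this repository's code; for brevity the variance and covariance terms are applied to the predicted branch only, and the target latents use a stop-gradient, which is one common choice.

```python
# Hedged sketch of a JEPA world-model training step on a trajectory batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelEncoder(nn.Module):
    """Toy CNN mapping a single-channel frame to a latent vector."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):            # x: (B, 1, H, W)
        return self.net(x)           # (B, latent_dim)

def vicreg(pred, target, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """Invariance + variance + covariance terms (coefficients as in the VICReg paper)."""
    sim = F.mse_loss(pred, target)
    std = torch.sqrt(pred.var(dim=0) + 1e-4)
    var = torch.mean(F.relu(1.0 - std))
    z = pred - pred.mean(dim=0)
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov.pow(2).sum() - cov.diagonal().pow(2).sum()
    return sim_w * sim + var_w * var + cov_w * off_diag / pred.shape[1]

encoder = PixelEncoder()
dynamics = nn.GRUCell(input_size=64 + 2, hidden_size=64)   # latent + 2-D action

obs = torch.randn(16, 5, 1, 64, 64)   # (B, T, C, H, W) pre-collected trajectory batch
actions = torch.randn(16, 4, 2)       # (B, T-1, action_dim)

h = encoder(obs[:, 0])                # initial latent state
loss = 0.0
for t in range(actions.shape[1]):
    h = dynamics(torch.cat([h, actions[:, t]], dim=-1), h)   # predicted next latent
    with torch.no_grad():
        target = encoder(obs[:, t + 1])                       # target latent (stop-grad)
    loss = loss + vicreg(h, target)
loss.backward()
```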
A PyTorch implementation of Latent Embedding JEPA for learning world models in continuous environments without latent collapse.
PointJEPA-based, label-efficient 3D grasp joint-angle prediction (IROS 2025 FMRD Workshop).
Small JEPA + voting for video understanding
An AI architecture where one model sees geometry and another speaks it — together building a world model of understanding.
Curiosity-driven Actor–Critic using JEPA latent dynamics.
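One common way to obtain such a curiosity signal is to use the JEPA dynamics model's latent prediction error as an intrinsic reward that is added to the environment reward before the actor-critic update. The sketch below uses assumed toy modules and names, not this repository's interfaces.

```python
# Illustrative curiosity bonus from JEPA latent dynamics (assumed names/shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, action_dim = 64, 4
encoder = nn.Linear(3 * 64 * 64, latent_dim)               # toy pixel encoder
dynamics = nn.Linear(latent_dim + action_dim, latent_dim)  # toy JEPA latent predictor

def intrinsic_reward(obs, action, next_obs, beta: float = 0.1) -> torch.Tensor:
    """Curiosity bonus = scaled error between predicted and actual next latent."""
    with torch.no_grad():
        z = encoder(obs.flatten(1))
        z_next = encoder(next_obs.flatten(1))
        z_pred = dynamics(torch.cat([z, action], dim=-1))
        return beta * F.mse_loss(z_pred, z_next, reduction="none").mean(dim=-1)

obs = torch.randn(32, 3, 64, 64)
next_obs = torch.randn(32, 3, 64, 64)
action = torch.randn(32, action_dim)
env_reward = torch.randn(32)

total_reward = env_reward + intrinsic_reward(obs, action, next_obs)
# total_reward then feeds the usual actor-critic critic targets and advantages.
```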
Training backend for Cell Observatory models