This repository contains https://huggingface.co/nielsr/vitpose-base-simple with ONNX weights, making it compatible with Transformers.js.

Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @huggingface/transformers
```
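
Alternatively, if you are running directly in the browser without a bundler, the library can typically be loaded as an ES module from a CDN (a minimal sketch, not part of the original card; the jsDelivr URL pattern is documented for Transformers.js, but pinning a specific version is recommended):

```js
// Load Transformers.js from a CDN inside a <script type="module"> block
// (sketch only; pin an explicit version in practice).
import { AutoModel, AutoImageProcessor, RawImage } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';
```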

Example: Pose estimation w/ onnx-community/vitpose-base-simple.

```js
import { AutoModel, AutoImageProcessor, RawImage } from '@huggingface/transformers';

// Load model and processor
const model_id = 'onnx-community/vitpose-base-simple';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoImageProcessor.from_pretrained(model_id);

// Load image and prepare inputs
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/ryan-gosling.jpg';
const image = await RawImage.read(url);
const inputs = await processor(image);

// Predict heatmaps
const { heatmaps } = await model(inputs);

// Post-process heatmaps to get keypoints and scores
const boxes = [[[0, 0, image.width, image.height]]];
const results = processor.post_process_pose_estimation(heatmaps, boxes)[0][0];
console.log(results);
```
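
To inspect the result, you can loop over the detected keypoints (a minimal sketch, not part of the original example; it assumes `results` exposes parallel `keypoints` and `scores` arrays, which may vary slightly between library versions):

```js
// Print each keypoint with its confidence score (assumes parallel `keypoints`/`scores`).
results.keypoints.forEach(([x, y], i) => {
  const score = Number(results.scores[i]);
  console.log(`keypoint ${i}: x=${x.toFixed(1)}, y=${y.toFixed(1)}, score=${score.toFixed(3)}`);
});
```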

Optionally, visualize the outputs (Node.js usage shown here, using the canvas library):

```js
import { createCanvas, createImageData } from 'canvas';
import fs from 'fs';

// Create canvas and draw image
const canvas = createCanvas(image.width, image.height);
const ctx = canvas.getContext('2d');
const imageData = createImageData(image.rgba().data, image.width, image.height);
ctx.putImageData(imageData, 0, 0);

// Draw edges between keypoints
const points = results.keypoints;
ctx.lineWidth = 4;
ctx.strokeStyle = 'blue';
for (const [i, j] of model.config.edges) {
  const [x1, y1] = points[i];
  const [x2, y2] = points[j];
  ctx.beginPath();
  ctx.moveTo(x1, y1);
  ctx.lineTo(x2, y2);
  ctx.stroke();
}

// Draw circle at each keypoint
ctx.fillStyle = 'red';
for (const [x, y] of points) {
  ctx.beginPath();
  ctx.arc(x, y, 8, 0, 2 * Math.PI);
  ctx.fill();
}

// Save image to file
const out = fs.createWriteStream('pose.png');
const stream = canvas.createPNGStream();
stream.pipe(out);
out.on('finish', () => console.log('The PNG file was created.'));
```
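
If you are running in the browser rather than Node.js, the same drawing calls work against a regular HTML `<canvas>` element (a rough sketch, not part of the original card; copying the pixel data into a `Uint8ClampedArray` for the `ImageData` constructor is an assumption):

```js
// Browser-only sketch: draw onto an HTML <canvas> instead of the `canvas` NPM package.
const canvasEl = document.createElement('canvas');
canvasEl.width = image.width;
canvasEl.height = image.height;
const ctx = canvasEl.getContext('2d');

// ImageData requires a Uint8ClampedArray, so copy the RGBA pixel data into one (assumption).
const rgba = image.rgba();
ctx.putImageData(new ImageData(new Uint8ClampedArray(rgba.data), image.width, image.height), 0, 0);

// ...reuse the same edge/keypoint drawing code shown above with this `ctx`...

document.body.appendChild(canvasEl);
```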
Input image (image/jpeg) and output image with the rendered pose (image/png).

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named onnx).
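
As a rough illustration of such a conversion (not from the original card; the model id and output directory are placeholders, and the exact arguments depend on your model and task, so check the Optimum documentation):

```bash
# Export a Transformers checkpoint to ONNX with 🤗 Optimum (illustrative only;
# replace the placeholder model id and output directory with your own).
optimum-cli export onnx --model your-username/your-model ./onnx-output
```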
