TorchServe is a flexible and easy-to-use tool for serving PyTorch models.
- Serving Quick Start - Basic server usage tutorial
- Model Archive Quick Start - Tutorial that shows you how to package a model archive file.
- Installation - Installation procedures
- Serving Models - Explains how to use TorchServe
- REST API - Specification of the API endpoints for TorchServe
- Packaging Model Archive - Explains how to package a model archive file using model-archiver
- Inference API - How to check the health of a deployed model and get inferences
- Management API - How to manage and scale models
- Logging - How to configure logging
- Metrics - How to configure metrics
- Metrics API - How to use the Metrics API
- Batch inference with TorchServe - How to create and serve a model with batch inference in TorchServe
- Model Zoo - List of pre-trained model archives ready to be served for inference with TorchServe.
- Examples - Many examples of how to package and deploy models and workflows with TorchServe
- Advanced configuration - Describes advanced TorchServe configurations.
- Custom Service - Describes how to develop custom inference services.
- Unit Tests - Housekeeping unit tests for TorchServe.
- Benchmark - Use JMeter to put TorchServe through its paces and collect benchmark data
- TorchServe on Kubernetes - Demonstrates a TorchServe deployment on Kubernetes using a Helm chart
- Image Classifier - This handler takes an image and returns the name of the object in that image
- Text Classifier - This handler takes a text string as input and returns the classification based on the model's vocabulary
- Object Detector - This handler takes an image and returns the list of detected classes with their bounding boxes
- Image Segmenter - This handler takes an image and returns an output of shape [CL, H, W], where CL is the number of classes, H the height, and W the width
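
As a minimal sketch of the workflow the links above cover (package an archive, serve it, query the Inference and Management APIs), assuming TorchServe and torch-model-archiver are installed and that the weights file `densenet161.pth` and input image `kitten.jpg` are illustrative placeholders:

```shell
# Package a model into a .mar archive using the bundled image_classifier handler
torch-model-archiver --model-name densenet161 \
  --version 1.0 \
  --serialized-file densenet161.pth \
  --handler image_classifier \
  --export-path model_store

# Start TorchServe and load the archive from the model store
torchserve --start --model-store model_store --models densenet161=densenet161.mar

# Inference API (default port 8080): check health, then request a prediction
curl http://localhost:8080/ping
curl http://localhost:8080/predictions/densenet161 -T kitten.jpg

# Management API (default port 8081): list registered models
curl http://localhost:8081/models
```

The Serving Quick Start, Inference API, and Management API pages above document these commands and endpoints in full, including scaling workers and registering models at runtime.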