FaridSoroush/Machine_Learning_Inference_Serivce

ML Service (FastAPI + Docker + Helm)

An end-to-end machine learning inference service built with FastAPI, Docker, and Kubernetes, deployed with Helm.

Read the full article here: Inference for Machine Learning Services (FastAPI + Docker + Kubernetes + Helm)

Quickstart (Local)

pip install -r requirements.txt
uvicorn app.main:app --host 0.0.0.0 --port 8080

Health Check

curl -s http://localhost:8080/healthz

Predict

curl -s -X POST http://localhost:8080/v1/predict \
  -H 'Content-Type: application/json' \
  -d '{"features":[0.1,0.2,0.3]}'
