
cortexlabs/cortex


Machine learning model serving infrastructure


install · docs · examples · we're hiring · chat with us

Demo


Key features

  • Multi framework: deploy TensorFlow, PyTorch, scikit-learn, and other models.
  • Autoscaling: automatically scale APIs to handle production workloads.
  • ML instances: run inference on G4, P2, M5, C5 and other AWS instance types.
  • Spot instances: save money with spot instances.
  • Rolling updates: update deployed APIs with no downtime.
  • Log streaming: stream logs from deployed models to your CLI.
  • Prediction monitoring: monitor API performance and prediction results.

Deploying a model

Install the CLI

$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.16/get-cli.sh)"

Implement your predictor

# predictor.py

class PythonPredictor:
    def __init__(self, config):
        self.model = download_model()

    def predict(self, payload):
        return self.model.predict(payload["text"])
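
Here, download_model() stands in for your own model-loading logic. A minimal sketch of a concrete predictor, assuming a scikit-learn pipeline serialized with joblib (the bucket name and object key below are hypothetical, not part of Cortex):

# predictor.py (illustrative sketch, not the only way to do this)
import boto3
import joblib

class PythonPredictor:
    def __init__(self, config):
        # runs once per container at startup: fetch and deserialize the model
        s3 = boto3.client("s3")
        s3.download_file("my-bucket", "sentiment/model.joblib", "/tmp/model.joblib")  # hypothetical location
        self.model = joblib.load("/tmp/model.joblib")

    def predict(self, payload):
        # payload is the parsed JSON request body
        return self.model.predict([payload["text"]])[0]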

Configure your deployment

# cortex.yaml

- name: sentiment-classifier
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
    mem: 4G

Deploy your model

$ cortex deploy

creating sentiment-classifier

Serve predictions

$ curl http://localhost:8888 \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "serving models locally is cool!"}'

positive
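
The same request can be made from any HTTP client; a minimal sketch in Python using the requests library, with the endpoint and payload matching the curl command above:

import requests

response = requests.post(
    "http://localhost:8888",
    json={"text": "serving models locally is cool!"},
)
print(response.text)  # e.g. "positive"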

Deploying models at scale

Spin up a cluster

Cortex clusters are designed to be self-hosted on any AWS account:

$ cortex cluster up

aws region: us-east-1
aws instance type: g4dn.xlarge
spot instances: yes
min instances: 0
max instances: 5

your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability

○ spinning up your cluster ...

your cluster is ready!

Deploy to your cluster with the same code and configuration

$ cortex deploy --env aws

creating sentiment-classifier

Serve predictions at scale

$ curl http://***.amazonaws.com/sentiment-classifier \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "serving models at scale is really cool!"}'

positive

Monitor your deployment

$ cortex get sentiment-classifier

status   up-to-date   requested   last update   avg request   2XX
live     1            1           8s            24ms          12

class      count
positive   8
negative   4

How it works

The CLI sends configuration and code to the cluster every time you run cortex deploy. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using a Network Load Balancer (NLB) and FastAPI / TensorFlow Serving / ONNX Runtime (depending on the model type). The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
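
As a rough illustration of that request path (a sketch, not Cortex's actual source): the web service instantiates the predictor once at container startup and routes each JSON request body to its predict method.

# illustrative sketch, assuming the PythonPredictor class from predictor.py above
from fastapi import FastAPI, Request
from predictor import PythonPredictor

app = FastAPI()
predictor = PythonPredictor(config={})  # loaded once per container

@app.post("/")
async def predict(request: Request):
    payload = await request.json()  # parsed JSON request body
    return predictor.predict(payload)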

Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.


What is Cortex similar to?

Cortex is an open source alternative to serving models with SageMaker, or to building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Lambda, or Fargate, and open source projects like Docker, Kubernetes, TensorFlow Serving, and TorchServe.


Examples
