Conversation

@RobertLucian RobertLucian commented May 21, 2020

@vishalbollu When the model's path is not an S3 path, the model cacher and validator still treat it as an S3 path instead of a local path. This prevents deploying APIs with locally-provided models (ONNX and TensorFlow).

How to reproduce:

```yaml
# cortex.yaml
- name: api-tf-cd7
  predictor:
    type: tensorflow
    path: predictor.py
    model: ./inferentia-models/tensorflow/resnet50
```

With the above config, the resulting error is:

```text
error: /home/robert/sandbox/feature/multi-model-endpoint/cortex_tf_sample.yaml: api-tf: predictor: model: "./inferentia-models/tensorflow/resnet50" is not a valid s3 path (e.g. s3://cortex-examples/iris-classifier/tensorflow is a valid s3 path)
```
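The gist of the fix can be sketched as follows. This is a hypothetical illustration, not the actual Cortex implementation (which is written in Go): the validator should branch on whether the path uses the `s3://` scheme before applying S3-specific validation, rather than unconditionally treating every model path as an S3 path. The function names `is_s3_path` and `classify_model_path` are invented for this sketch.

```python
# Hypothetical sketch: classify a model path as S3 or local before
# running S3-specific validation on it.

def is_s3_path(path: str) -> bool:
    """A path is treated as S3 only if it uses the s3:// scheme."""
    return path.startswith("s3://")

def classify_model_path(path: str) -> str:
    """Return 's3' for S3 paths and 'local' for everything else,
    instead of rejecting non-S3 paths outright."""
    if is_s3_path(path):
        # S3 bucket/key validation would happen here.
        return "s3"
    # Local filesystem paths (e.g. ./inferentia-models/...) are valid too.
    return "local"

print(classify_model_path("s3://cortex-examples/iris-classifier/tensorflow"))  # s3
print(classify_model_path("./inferentia-models/tensorflow/resnet50"))          # local
```

With this kind of check in place, a locally-provided model path falls through to the local branch instead of failing the "is not a valid s3 path" validation shown above.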


Checklist:

  • run make test and make lint
  • test manually (i.e. build/push all images, restart operator, and re-deploy APIs)
@vishalbollu vishalbollu merged commit 4ad6d53 into cortexlabs:master May 21, 2020
@RobertLucian RobertLucian deleted the fix/locally-saved-models branch May 21, 2020 14:52
