Secondly, define helper functions to calculate segmentation performance and to read in the segmentation mask for each training image.

**Note**: it's tempting to define one-line Python `lambda` functions to pass to fastai; however, this will introduce serialization issues when we want to export a FastAI model. Therefore we avoid using anonymous Python functions during the FastAI modeling steps.

```python
def acc_camvid(inp, targ, void_code=0):
    # Pixel-wise accuracy that ignores the "void" class in the target mask.
    targ = targ.squeeze(1)
    mask = targ != void_code
    return (inp.argmax(dim=1)[mask] == targ[mask]).float().mean()
```
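
The second kind of helper maps each training image to its segmentation mask. Below is a minimal sketch of such a named helper, assuming a CamVid-style layout in which masks live in a `labels` folder and carry a `_P` file-name suffix; the function and path names (`get_y`, `path_lbl`) are illustrative, not taken from the repository. Using a named function instead of a `lambda` here is exactly what the note above recommends.

```python
from pathlib import Path

def get_y(fn: Path, path_lbl: Path = Path("camvid/labels")) -> Path:
    # Map an image file to its mask file, e.g. 0016E5_00390.png -> 0016E5_00390_P.png.
    # The folder and "_P" naming convention are assumptions (CamVid-style layout).
    return path_lbl / f"{fn.stem}_P{fn.suffix}"
```
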
#### `inference`

Now convert the image into a PyTorch Tensor, load it onto the GPU if available, and pass it through the model.

```python
def inference(self, img):
    # Convert the image to a tensor if needed, move it to the GPU when one is
    # available (self.device), and run a forward pass with gradients disabled.
    # Sketch only: the exact conversion may already happen during preprocessing.
    if not torch.is_tensor(img):
        img = torch.from_numpy(img)
    with torch.no_grad():
        return self.model(img.to(self.device))
```
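
For orientation, the sketch below shows how an `inference` method like this typically sits between `preprocess` and `postprocess` in a TorchServe custom handler; this reflects the usual handler flow, and the surrounding method bodies are assumptions rather than a verbatim excerpt of the repository's handler.

```python
def handle(self, data, context):
    # Typical TorchServe handler entry point (sketch, assumed structure):
    # raw request -> model input tensor -> raw output -> serializable response.
    img = self.preprocess(data)
    output = self.inference(img)
    return self.postprocess(output)
```
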
### Clean up

Make sure that you delete the following resources to prevent any additional charges; the first three can also be removed programmatically, as sketched after the list:

1. Amazon SageMaker endpoint.
2. Amazon SageMaker endpoint configuration.
3. Amazon SageMaker model.
4. Amazon Elastic Container Registry (ECR) repository.
5. Amazon Simple Storage Service (S3) buckets.
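
A minimal `boto3` sketch for the first three items is shown below; the resource names are placeholders and must be replaced with the names used in your deployment. The ECR repository and S3 buckets can be removed from the AWS console or with the AWS CLI.

```python
import boto3

# Placeholder names -- replace with the names used in your deployment.
endpoint_name = "fastai-torchserve-endpoint"
endpoint_config_name = "fastai-torchserve-endpoint-config"
model_name = "fastai-torchserve-model"

sm_client = boto3.client("sagemaker")
sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)
```
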
## Conclusion

This repository presented an end-to-end demonstration of deploying FastAI-trained PyTorch models on TorchServe in eager mode and hosting them in an Amazon SageMaker Endpoint. You can use this repository as a template to deploy your own FastAI models. This approach eliminates the effort of building and maintaining a customized inference server, which helps you speed up the process from training a cutting-edge deep learning model to its online application in a real-world environment at scale.

If you have questions, please create an issue or submit a Pull Request on the GitHub repository.