|
419 | 419 | "metadata": {}, |
420 | 420 | "source": [ |
421 | 421 | "## Hosting your model\n", |
422 | | - "One can use trained model to get real time predictions using HTTP endpoint. Following steps walk you through the process." |
| 422 | + "You can use a trained model to get real-time predictions from an HTTP endpoint. The following steps walk you through the process."
423 | 423 | ] |
424 | 424 | }, |
425 | 425 | { |
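Once an endpoint is up, real-time predictions come from an `InvokeEndpoint` call with a body in the content type the container expects. The sketch below is illustrative, not from this notebook: the endpoint name is a placeholder, and the boto3 call is shown commented out because it needs AWS credentials and a live endpoint.

```python
# Hypothetical sketch: serializing feature rows into the text/csv body
# for a real-time InvokeEndpoint request. Endpoint name is a placeholder.
import csv
import io

def rows_to_csv_payload(rows):
    """Serialize numeric feature rows to a text/csv request body."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

payload = rows_to_csv_payload([[5.1, 3.5, 1.4, 0.2]])

# With AWS credentials configured and the endpoint deployed, you would send:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="my-endpoint",   # placeholder endpoint name
#     ContentType="text/csv",
#     Body=payload,
# )
```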
|
505 | 505 | "metadata": {}, |
506 | 506 | "source": [ |
507 | 507 | "## Run Batch Transform Job\n", |
508 | | - "One can use trained model to get inference on large data sets by using [Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html). A Batch Transform Job takes your input data S3 location and publishes prediction result to specified S3 output folder. Similar to hosting, lets extract inferences for training data to demo Batch Transform mechanism." |
| 508 | + "You can use a trained model to get inferences on large data sets using [Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html). A batch transform job takes the S3 location of your input data and publishes the predictions to a specified S3 output folder. As with hosting, you can extract inferences for the training data to test batch transform."
509 | 509 | ] |
510 | 510 | }, |
511 | 511 | { |
512 | 512 | "cell_type": "markdown", |
513 | 513 | "metadata": {}, |
514 | 514 | "source": [ |
515 | 515 | "### Create a Transform Job\n", |
516 | | - "We'll create an `Transformer` that defines how to use the container to get inference results on a data set. This includes the configuration we need to invoke SageMaker Batch transform:\n", |
| 516 | + "We'll create a `Transformer` that defines how to use the container to get inference results on a data set. This includes the configuration we need to invoke SageMaker batch transform:\n",
517 | 517 | "\n", |
518 | 518 | "* The __instance count__ which is the number of machines to use to extract inferences\n", |
519 | 519 | "* The __instance type__ which is the type of machine to use to extract inferences\n", |
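The two settings above map onto constructor arguments of `Transformer` in the SageMaker Python SDK. A minimal sketch, assuming placeholder names for the model and the S3 output folder (the instance settings are illustrative, not this notebook's values):

```python
# Hedged sketch: Transformer configuration. Model name, instance settings,
# and S3 path are placeholders, not values from this notebook.
transformer_config = {
    "model_name": "my-model",             # placeholder model name
    "instance_count": 1,                  # number of machines used for inference
    "instance_type": "ml.m4.xlarge",      # type of machine used for inference
    "output_path": "s3://my-bucket/batch-output",  # placeholder S3 output folder
}

# With the SDK installed and AWS credentials configured:
# from sagemaker.transformer import Transformer
# transformer = Transformer(**transformer_config)
```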
|
538 | 538 | "cell_type": "markdown", |
539 | 539 | "metadata": {}, |
540 | 540 | "source": [ |
541 | | - "We use tranform() on the transfomer to get inference results against the data that we uploaded above. We provide below options when invoking transformer. \n", |
| 541 | + "We use transform() on the transformer to get inference results against the data that we uploaded. We pass the following options when invoking the transformer:\n",
| 542 | + "\n", |
542 | 543 | "* The __data_location__ which is the location of input data\n", |
543 | 544 | "* The __content_type__ which is the content type set when making HTTP request to container to get prediction\n", |
544 | 545 | "* The __split_type__ which is the delimiter used for splitting input data " |
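The options above correspond to parameters of `Transformer.transform()` in the SageMaker Python SDK (`data`, `content_type`, `split_type`). A minimal sketch with placeholder S3 values:

```python
# Hedged sketch: arguments for Transformer.transform(). The S3 URI is a
# placeholder, not this notebook's actual upload location.
transform_args = {
    "data": "s3://my-bucket/train.csv",  # location of input data
    "content_type": "text/csv",          # content type sent to the container
    "split_type": "Line",                # delimiter used for splitting input records
}

# With a Transformer already constructed:
# transformer.transform(**transform_args)
# transformer.wait()
```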
|
558 | 559 | "cell_type": "markdown", |
559 | 560 | "metadata": {}, |
560 | 561 | "source": [ |
561 | | - "For more configuration options and details, please visit [CreateTransformJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html) page" |
| 562 | + "For more information on the configuration options, see the [CreateTransformJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html) reference."
562 | 563 | ] |
563 | 564 | }, |
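For readers using the low-level API rather than the SDK, the same configuration can be expressed as a `CreateTransformJob` request body. The sketch below uses placeholder job, model, and bucket names; field names follow the boto3 request shape:

```python
# Hedged sketch: low-level CreateTransformJob request body (boto3 field
# names). Job name, model name, and S3 URIs are placeholders.
request = {
    "TransformJobName": "my-transform-job",
    "ModelName": "my-model",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/input",
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/output"},
    "TransformResources": {"InstanceType": "ml.m4.xlarge", "InstanceCount": 1},
}

# With AWS credentials configured:
# import boto3
# boto3.client("sagemaker").create_transform_job(**request)
```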
564 | 565 | { |
|