
Commit 8316bef

Proof reading and making small edits (aws#1253)
1 parent 7d145fb commit 8316bef

File tree: 2 files changed (+27, −34 lines)

r_examples/r_batch_transform/r_xgboost_batch_transform.ipynb

Lines changed: 11 additions & 21 deletions
@@ -8,13 +8,13 @@
 "\n",
 "**Note:** You will need to use the R kernel in SageMaker for this notebook.\n",
 "\n",
- "This sample Notebook describes how to do batch transform to make predictions for abalone age as measured by the number of rings in the shell. The notebook will use the public [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) hosted by [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).\n",
+ "This sample Notebook describes how to do batch transform to make predictions for an abalone's age, which is measured by the number of rings in the shell. The notebook will use the public [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) hosted by the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).\n",
 "\n",
 "You can find more details about SageMaker's Batch Transform here: \n",
 "- [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) using a Transformer\n",
 "\n",
 "We will use the `reticulate` library to interact with SageMaker:\n",
- "- [`Reticulate` library](https://rstudio.github.io/reticulate/): provides an R interface to make API calls [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/latest/index.html) to make API calls to Amazon SageMaker. The `reticulate` package translates between R and Python objects, and Amazon SageMaker provides a serverless data science environment to train and deploy ML models at scale.\n",
+ "- [`Reticulate` library](https://rstudio.github.io/reticulate/): provides an R interface to use the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/latest/index.html) to make API calls to Amazon SageMaker. The `reticulate` package translates between R and Python objects, and Amazon SageMaker provides a serverless data science environment to train and deploy ML models at scale.\n",
 "\n",
 "Table of Contents:\n",
 "- [Reticulating the Amazon SageMaker Python SDK](#Reticulating-the-Amazon-SageMaker-Python-SDK)\n",
@@ -26,7 +26,7 @@
 "- [Download the Batch Transform Output](#Download-the-Batch-Transform-Output)\n",
 "\n",
 "\n",
- "**Note:** The first portion of this notebook focused on data ingestion and preparing the data for model training is inspired by the data preparation part outlined in [\"Using R with Amazon SageMaker\"](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/r_kernel/using_r_with_amazon_sagemaker.ipynb) notebook on AWS SageMaker Examples Github repository with some modifications."
+ "**Note:** The first portion of this notebook, focused on data ingestion and preparing the data for model training, is inspired by the data preparation section outlined in the [\"Using R with Amazon SageMaker\"](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/r_kernel/using_r_with_amazon_sagemaker.ipynb) notebook in the AWS SageMaker Examples GitHub repository, with some modifications."
 ]
 },
 {
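For context on this hunk, here is a minimal sketch of the reticulate setup both notebooks rely on; the variable names are illustrative, and it assumes the `sagemaker` Python package is installed in the Python environment reticulate discovers:

```r
# Import the SageMaker Python SDK into an R session via reticulate.
library(reticulate)
sagemaker <- import("sagemaker")

# reticulate translates Python objects to R: '$' accesses attributes and
# methods, and the returned bucket name is an ordinary R character vector.
session <- sagemaker$Session()
bucket  <- session$default_bucket()
```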
@@ -110,7 +110,7 @@
 "source": [
 "<h3>Downloading and Processing the Dataset</h3>\n",
 "\n",
- "The model uses the [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). First, download the data and start the [exploratory data analysis](https://en.wikipedia.org/wiki/Exploratory_data_analysis). Use tidyverse packages to read the data, plot the data, and transform the data into ML format for Amazon SageMaker:"
+ "The model uses the [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). First, download the data and start the [exploratory data analysis](https://en.wikipedia.org/wiki/Exploratory_data_analysis). Use tidyverse packages to read, plot, and transform the data into ML format for Amazon SageMaker:"
 ]
 },
 {
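A sketch of the ingestion step this hunk describes; the UCI download URL and the column names are assumptions based on the standard abalone dataset, not taken from this diff:

```r
# Read the raw abalone data with readr (tidyverse); the file ships without
# a header row, so column names are supplied explicitly.
library(readr)

data_url <- "https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data"
abalone <- read_csv(data_url,
                    col_names = c("sex", "length", "diameter", "height",
                                  "whole_weight", "shucked_weight",
                                  "viscera_weight", "shell_weight", "rings"))
head(abalone)
```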
@@ -187,7 +187,7 @@
 "source": [
 "<h3>Preparing the Dataset for Model Training</h3>\n",
 "\n",
- "The model needs three datasets: one each for training, testing, and validation. First, convert `sex` into a [dummy variable](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)) and move the target, `rings`, to the first column. Amazon SageMaker algorithm require the target to be in the first column of the dataset."
+ "The model needs three datasets: one each for training, testing, and validation. First, convert `sex` into a [dummy variable](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)) and move the target, `rings`, to the first column. Amazon SageMaker algorithms require the target to be in the first column of the dataset."
 ]
 },
 {
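A sketch of that preparation step; the exact dummy coding in the notebook may differ, and `abalone` is the tibble from the ingestion sketch above:

```r
# Dummy-code 'sex' and move the target 'rings' to the first column, as the
# SageMaker built-in algorithms expect. Requires dplyr >= 1.0.0 for relocate().
library(dplyr)

abalone_ml <- abalone %>%
  mutate(female = as.integer(sex == "F"),
         infant = as.integer(sex == "I")) %>%  # "M" is the baseline level
  select(-sex) %>%
  relocate(rings)                              # target first
```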
@@ -231,24 +231,21 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Later in the notebook, we are going to use Batch Transform and Endpoint to make inference in two different ways and we will compare the results. The maximum number of rows that we can send to an endpoint for inference in one batch is 500 rows. We are going to reduce the number of rows for the test dataset to 500 and use this for batch and online inference for comparison. "
+ "Upload the training and validation data to Amazon S3 so that you can train the model. First, write the training and validation datasets to the local filesystem in .csv format:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
- "source": [
- "num_predict_rows <- 500\n",
- "abalone_test <- abalone_test[1:num_predict_rows, ]"
- ]
+ "source": []
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Upload the training and validation data to Amazon S3 so that you can train the model. First, write the training and validation datasets to the local filesystem in .csv format:"
+ "Second, upload the two datasets to the Amazon S3 bucket under the `data` key:"
 ]
 },
 {
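A sketch of the upload step the new cell ordering describes, using `Session.upload_data` from the SageMaker Python SDK; the .csv file names are assumed from the surrounding cells, and `session` and `bucket` come from the reticulate sketch earlier:

```r
# Upload the locally written CSVs to S3 under the 'data' key prefix;
# upload_data() returns the resulting s3:// URI.
s3_train <- session$upload_data(path = "abalone_train.csv",
                                bucket = bucket, key_prefix = "data")
s3_valid <- session$upload_data(path = "abalone_valid.csv",
                                bucket = bucket, key_prefix = "data")
```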
@@ -264,13 +261,6 @@
 "write_csv(abalone_test[-1], 'abalone_test.csv', col_names = FALSE)"
 ]
 },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Second, upload the two datasets to the Amazon S3 bucket into the `data` key:"
- ]
- },
 {
 "cell_type": "code",
 "execution_count": null,
@@ -436,9 +426,9 @@
 "\n",
 "In many situations, using a deployed model to make inferences is not the best option, especially when the goal is not to make online real-time inferences but to generate predictions from a trained model on a large dataset. In these situations, using Batch Transform may be more efficient and appropriate.\n",
 "\n",
- "This section of the notebook explain how to set up the Batch Transform Job, and generate predictions.\n",
+ "This section of the notebook explains how to set up the Batch Transform Job and generate predictions.\n",
 "\n",
- "To do this, we need to define the batch input data path on S3, and also where to save the generated predictions on S3."
+ "To do this, we need to identify the batch input data path in S3 and specify where generated predictions will be stored in S3."
 ]
 },
 {
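A sketch of the Batch Transform setup this section covers; `estimator` is assumed to be a fitted SageMaker Estimator, and the instance type and S3 paths are illustrative:

```r
# Define where the batch input lives and where predictions should land.
s3_batch_input  <- paste0("s3://", bucket, "/data/abalone_test.csv")
s3_batch_output <- paste0("s3://", bucket, "/batch-output")

# Create a Transformer from the trained estimator and run the job.
transformer <- estimator$transformer(instance_count = 1L,
                                     instance_type = "ml.m5.large",
                                     output_path = s3_batch_output)
transformer$transform(data = s3_batch_input,
                      content_type = "text/csv",
                      split_type = "Line")
transformer$wait()  # block until the job finishes; output lands in S3
```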
@@ -595,4 +585,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

r_examples/r_xgboost_hpo_batch_transform/r_xgboost_hpo_batch_transform.ipynb

Lines changed: 16 additions & 13 deletions
@@ -6,13 +6,16 @@
 "source": [
 "<h1>Hyperparameter Optimization Using R with Amazon SageMaker</h1>\n",
 "\n",
- "This sample Notebook describes how to conduct Hyperparamter tuning and batch transform to make predictions for abalone age as measured by the number of rings in the shell. The notebook will use the public [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) hosted by [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).\n",
+ "This sample Notebook demonstrates how to conduct Hyperparameter tuning and how to generate predictions for abalone age using two methods:\n",
 "\n",
- "We will use two methods to generate predictionsm after performin Hyperparameter Optimization (HPO). The goal is to demonstrate how each method works in R. These methods are:\n",
- "- [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) using a Transformer\n",
- "- [Deploying the model](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html) as an endpoint and making inference using the endpoint \n",
+ "- [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) using a Transformer.\n",
+ "- [Deploying the model](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html) as an endpoint and making online inferences.\n",
 "\n",
- "We will also use two different libraries to interact with SageMaker:\n",
+ "The goal is to demonstrate how these methods work in R.\n",
+ "\n",
+ "Abalone age is measured by the number of rings in the shell. The notebook will use the public [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) hosted by the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).\n",
+ "\n",
+ "We will use two different libraries to interact with SageMaker:\n",
 "- [`Reticulate` library](https://rstudio.github.io/reticulate/): provides an R interface to use the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/latest/index.html) to make API calls to Amazon SageMaker. The `reticulate` package translates between R and Python objects, and Amazon SageMaker provides a serverless data science environment to train and deploy ML models at scale.\n",
 "- [`paws` library](https://cran.r-project.org/web/packages/paws/index.html): provides an interface to make API calls to AWS services, similar to how [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) works. `boto3` is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto provides an easy-to-use, object-oriented API, as well as low-level access to AWS services. `paws` provides the same capabilities in R.\n",
 "\n",
@@ -33,8 +36,8 @@
 " - [Deleting the Endpoint](#Deleting-the-Endpoint)\n",
 " \n",
 " \n",
- "**Note:** The first portion of this notebook focused on data ingestion and preparing the data for model training is similar to the data preparation outlined in [\"Using R with Amazon SageMaker\"](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/r_kernel/using_r_with_amazon_sagemaker.ipynb) notebook on AWS SageMaker Examples Github repository with some modifications.\n",
- "Also the last portion of this notebook focused on making inference using an endpoint is inspired by the method outlined in the notebook referenced here."
+ "**Note:** The first portion of this notebook, focused on data ingestion and preparing the data for model training, is similar to the data preparation outlined in the [\"Using R with Amazon SageMaker\"](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/r_kernel/using_r_with_amazon_sagemaker.ipynb) notebook in the AWS SageMaker Examples GitHub repository, with some modifications.\n",
+ "Also, the last portion of this notebook, focused on making inferences using an endpoint, is inspired by the method outlined in the notebook referenced [here](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/r_examples/r_end_2_end/r_sagemaker_abalone.ipynb)."
 ]
 },
 {
@@ -118,7 +121,7 @@
 "source": [
 "<h3>Downloading and Processing the Dataset</h3>\n",
 "\n",
- "The model uses the [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). First, download the data and start the [exploratory data analysis](https://en.wikipedia.org/wiki/Exploratory_data_analysis). Use tidyverse packages to read the data, plot the data, and transform the data into ML format for Amazon SageMaker:"
+ "The model uses the [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone) from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). First, download the data and start the [exploratory data analysis](https://en.wikipedia.org/wiki/Exploratory_data_analysis). Use tidyverse packages to read, plot, and transform the data into ML format for Amazon SageMaker:"
 ]
 },
 {
@@ -195,7 +198,7 @@
 "source": [
 "<h3>Preparing the Dataset for Model Training</h3>\n",
 "\n",
- "The model needs three datasets: one each for training, testing, and validation. First, convert `sex` into a [dummy variable](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)) and move the target, `rings`, to the first column. Amazon SageMaker algorithm require the target to be in the first column of the dataset."
+ "The model needs three datasets: one each for training, testing, and validation. First, convert `sex` into a [dummy variable](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)) and move the target, `rings`, to the first column. Amazon SageMaker algorithms require the target to be in the first column of the dataset."
 ]
 },
 {
@@ -322,7 +325,7 @@
 "source": [
 "<h3>Hyperparameter Tuning for the XGBoost Model</h3>\n",
 "\n",
- "Amazon SageMaker algorithm are available via a [Docker](https://www.docker.com/) container. To train an [XGBoost](https://en.wikipedia.org/wiki/Xgboost) model, specify the training containers in [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) (Amazon ECR) for the AWS Region."
+ "Amazon SageMaker algorithms are available via a [Docker](https://www.docker.com/) container. To train an [XGBoost](https://en.wikipedia.org/wiki/Xgboost) model, specify the training containers in [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) (Amazon ECR) for the AWS Region."
 ]
 },
 {
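A sketch of that container lookup via reticulate; `image_uris.retrieve` is the SageMaker Python SDK v2 API and may differ from the SDK version this notebook was written against, and the XGBoost version string is an assumption:

```r
# Look up the XGBoost training image URI for the session's region.
# 'sagemaker' and 'session' come from the reticulate setup sketched earlier.
container <- sagemaker$image_uris$retrieve(framework = "xgboost",
                                           region = session$boto_region_name,
                                           version = "1.5-1")
```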
@@ -402,7 +405,7 @@
 "For tuning the hyperparameters, you also need to specify the type and range of the hyperparameters to be tuned. You can specify either a `ContinuousParameter` or an `IntegerParameter`, as outlined in the documentation. In addition, the algorithm documentation provides suggestions for the hyperparameter ranges.\n",
 "\n",
 "\n",
- "One the Estimator and its hyperparamters and tunable hyperparamter ranges are specified, you can create a `HyperparameterTuner` and then train (or fit) that tuner which will conduct the tuning and will select the most optimzied model that you can then use to do either Batch Transform, or deply as an endpoint and use for online inference."
+ "Once the Estimator, its hyperparameters, and the tunable hyperparameter ranges are specified, you can create a `HyperparameterTuner` (tuner). Fitting that tuner conducts the tuning and selects the best model. You can then generate predictions using this model with Batch Transform, or by deploying the model as an endpoint and using it for online inference."
 ]
 },
 {
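A sketch of the flow described in this hunk; the metric name, parameter ranges, and job counts are illustrative assumptions, and `estimator`, `s3_train`, and `s3_valid` come from the earlier sketches:

```r
# Declare which hyperparameters to tune and over what ranges.
ranges <- list(
  eta       = sagemaker$tuner$ContinuousParameter(0, 1),
  max_depth = sagemaker$tuner$IntegerParameter(1L, 10L)
)

# Create the tuner and fit it; SageMaker runs the tuning jobs and tracks
# the best model by the objective metric.
tuner <- sagemaker$tuner$HyperparameterTuner(
  estimator = estimator,
  objective_metric_name = "validation:rmse",
  objective_type = "Minimize",
  hyperparameter_ranges = ranges,
  max_jobs = 10L,
  max_parallel_jobs = 2L
)
tuner$fit(inputs = list(train = s3_train, validation = s3_valid))
```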
@@ -593,7 +596,7 @@
 "\n",
 "We can extract the **ModelDataUrl** by describing the best training job, using the `paws` library and the `describe_training_job()` method. [More details can be found here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.describe_training_job).\n",
 " \n",
- "Then we will create a model using this model container. We will use `paws` library and `create_model` method. [Documentaiton of this method can be found here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model). "
+ "Then we will create a model from this model container, using the `paws` library and the `create_model` method. [Documentation of this method can be found here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model). "
 ]
 },
 {
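A sketch of those two paws calls; `best_job_name`, `role_arn`, and the model name are assumed inputs, `container` is the image URI from the lookup sketch, and `sm` is the paws client from the earlier sketch:

```r
# Describe the best training job to get the trained model artifact's S3 URL.
job_desc  <- sm$describe_training_job(TrainingJobName = best_job_name)
model_url <- job_desc$ModelArtifacts$S3ModelArtifacts

# Register a SageMaker Model from the training container and artifact.
resp <- sm$create_model(
  ModelName        = "abalone-xgb-best",
  ExecutionRoleArn = role_arn,
  PrimaryContainer = list(Image = container, ModelDataUrl = model_url)
)
```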
@@ -959,4 +962,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}
