
Commit 214236e (1 parent: a8fecc3)

Upgrade to use newest pytorch 1.0 version in pytorch examples. (aws#529)
3 files changed: +7 additions, −13 deletions

sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb

Lines changed: 2 additions & 4 deletions
@@ -183,9 +183,7 @@
     "\n",
     "The `PyTorch` class allows us to run our training function on SageMaker. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. For local training with GPU, we could set this to \"local_gpu\". In this case, `instance_type` was set above based on your whether you're running a GPU instance.\n",
     "\n",
-    "After we've constructed our `PyTorch` object, we fit it using the data we uploaded to S3. Even though we're in local mode, using S3 as our data source makes sense because it maintains consistency with how SageMaker's distributed, managed training ingests data.\n",
-    "\n",
-    "You can try the \"Preview\" version of PyTorch by specifying ``'1.0.0.dev'`` for ``framework_version`` when creating your PyTorch estimator."
+    "After we've constructed our `PyTorch` object, we fit it using the data we uploaded to S3. Even though we're in local mode, using S3 as our data source makes sense because it maintains consistency with how SageMaker's distributed, managed training ingests data.\n"
    ]
   },
   {
@@ -198,7 +196,7 @@
     "\n",
     "cifar10_estimator = PyTorch(entry_point='source/cifar10.py',\n",
     "                            role=role,\n",
-    "                            framework_version='0.4.0',\n",
+    "                            framework_version='1.0.0',\n",
     "                            train_instance_count=1,\n",
     "                            train_instance_type=instance_type)\n",
     "\n",

sagemaker-python-sdk/pytorch_lstm_word_language_model/pytorch_rnn.ipynb

Lines changed: 3 additions & 5 deletions
@@ -171,9 +171,7 @@
    "metadata": {},
    "source": [
     "### Run training in SageMaker\n",
-    "The PyTorch class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script and source directory, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on ```ml.p2.xlarge``` instance. As you can see in this example you can also specify hyperparameters. \n",
-    "\n",
-    "You can try the \"Preview\" version of PyTorch by specifying ``'1.0.0.dev'`` for ``framework_version`` when creating your PyTorch estimator."
+    "The PyTorch class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script and source directory, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on ```ml.p2.xlarge``` instance. As you can see in this example you can also specify hyperparameters. \n"
    ]
   },
   {
@@ -186,7 +184,7 @@
     "\n",
     "estimator = PyTorch(entry_point='train.py',\n",
     "                    role=role,\n",
-    "                    framework_version='0.4.0',\n",
+    "                    framework_version='1.0.0',\n",
     "                    train_instance_count=1,\n",
     "                    train_instance_type='ml.p2.xlarge',\n",
     "                    source_dir='source',\n",
@@ -280,7 +278,7 @@
     "trained_model_location = desc['ModelArtifacts']['S3ModelArtifacts']\n",
     "model = PyTorchModel(model_data=trained_model_location,\n",
     "                     role=role,\n",
-    "                     framework_version='0.4.0',\n",
+    "                     framework_version='1.0.0',\n",
     "                     entry_point='generate.py',\n",
     "                     source_dir='source',\n",
     "                     predictor_cls=JSONPredictor)"

sagemaker-python-sdk/pytorch_mnist/pytorch_mnist.ipynb

Lines changed: 2 additions & 4 deletions
@@ -143,9 +143,7 @@
    "source": [
     "### Run training in SageMaker\n",
     "\n",
-    "The `PyTorch` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, the training instance type, and hyperparameters. In this case we are going to run our training job on 2 ```ml.c4.xlarge``` instances. But this example can be ran on one or multiple, cpu or gpu instances ([full list of available instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)). The hyperparameters parameter is a dict of values that will be passed to your training script -- you can see how to access these values in the `mnist.py` script above.\n",
-    "\n",
-    "You can try the \"Preview\" version of PyTorch by specifying ``'1.0.0.dev'`` for ``framework_version`` when creating your PyTorch estimator."
+    "The `PyTorch` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, the training instance type, and hyperparameters. In this case we are going to run our training job on 2 ```ml.c4.xlarge``` instances. But this example can be ran on one or multiple, cpu or gpu instances ([full list of available instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)). The hyperparameters parameter is a dict of values that will be passed to your training script -- you can see how to access these values in the `mnist.py` script above.\n"
    ]
   },
   {
@@ -158,7 +156,7 @@
     "\n",
     "estimator = PyTorch(entry_point='mnist.py',\n",
     "                    role=role,\n",
-    "                    framework_version='0.4.0',\n",
+    "                    framework_version='1.0.0',\n",
     "                    train_instance_count=2,\n",
     "                    train_instance_type='ml.c4.xlarge',\n",
     "                    hyperparameters={\n",
