Commit 20225d5

j-t-1 authored and facebook-github-bot committed
Small changes to the getting started tutorial (meta-pytorch#1536)
Summary:
Pull Request resolved: meta-pytorch#1536
The word "embarcation" is changed to "embarkation" to match data use.
Pull Request resolved: meta-pytorch#1512
Test Plan: Imported from GitHub, without a `Test Plan:` line.
Note: Only tutorial wording changes
Reviewed By: craymichael
Differential Revision: D72183213
Pulled By: Ayush-Warikoo
fbshipit-source-id: cd0711ec1f9d59137e1b6d5381f3abdf396cd0bb
1 parent 17e1ad3 commit 20225d5

File tree

1 file changed

+8
-8
lines changed


tutorials/Titanic_Basic_Interpret.ipynb

Lines changed: 8 additions & 8 deletions
@@ -71,7 +71,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "With the data loaded, we now preprocess the data by converting some categorical features such as gender, location of embarcation, and passenger class into one-hot encodings (separate feature columns for each class with 0 / 1). We also remove some features that are more difficult to analyze, such as name, and fill missing values in age and fare with the average values."
+   "With the data loaded, we now preprocess the data by converting some categorical features such as gender, location of embarkation, and passenger class into one-hot encodings (separate feature columns for each class with 0 / 1). We also remove some features that are more difficult to analyze, such as name, and fill missing values in age and fare with the average values."
   ]
  },
  {
@@ -273,21 +273,21 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Beyond just considering the accuracy of the classifier, there are many important questions to understand how the model is working and it's decision, which is the purpose of Captum, to help make neural networks in PyTorch more interpretable."
+   "Beyond just considering the accuracy of the classifier, there are many important questions to understand how the model is working and its decision, which is the purpose of Captum, to help make neural networks in PyTorch more interpretable."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "The first question we can ask is which of the features were actually important to the model to reach this decision? This is the first main component of Captum, the ability to obtain **Feature Attributions**. For this example, we will apply Integrated Gradients, which is one of the Feature Attribution methods included in Captum. More information regarding Integrated Gradients can be found in the original paper here: https://arxiv.org/pdf/1703.01365.pdf"
+   "The first question we can ask is which of the features were actually important to the model to reach this decision? This is the first main component of Captum, the ability to obtain **Feature Attributions**. For this example, we will apply Integrated Gradients, which is one of the Feature Attribution methods included in Captum. More information regarding Integrated Gradients can be found in the original paper here: https://arxiv.org/pdf/1703.01365.pdf."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "To apply integrated gradients, we first create an IntegratedGradients object, providing the model object."
+   "To apply Integrated Gradients, we first create an IntegratedGradients object, providing the model object."
   ]
  },
  {
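The hunks above only adjust wording, but the method they describe is concrete. As a loose, from-scratch illustration of what Integrated Gradients computes (a Riemann-sum approximation of the path integral of gradients), here is a numpy sketch; the toy function `f`, its analytic gradient, and the step count are assumptions for illustration, not part of the tutorial or of Captum's implementation:

```python
import numpy as np

def f(x):
    # Toy differentiable "model": f(x) = x0**2 + 3*x1 (assumed for illustration).
    return x[0] ** 2 + 3 * x[1]

def grad_f(x):
    # Analytic gradient of the toy model: [2*x0, 3].
    return np.array([2 * x[0], 3.0])

def integrated_gradients(x, baseline, steps=200):
    # Average the gradient along the straight-line path from the baseline
    # to x (midpoint rule), then scale by (x - baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

x = np.array([1.0, 2.0])
baseline = np.zeros(2)  # the zero baseline, as in the tutorial text
attr = integrated_gradients(x, baseline)

# Completeness axiom: the attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The final line checks the completeness property that motivates the method: total attribution equals the change in the model's output between baseline and input.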
@@ -303,7 +303,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "To compute the integrated gradients, we use the attribute method of the IntegratedGradients object. The method takes tensor(s) of input examples (matching the forward function of the model), and returns the input attributions for the given examples. For a network with multiple outputs, a target index must also be provided, defining the index of the output for which gradients are computed. For this example, we provide target = 1, corresponding to survival. \n",
+   "To compute the Integrated Gradients, we use the attribute method of the IntegratedGradients object. The method takes tensor(s) of input examples (matching the forward function of the model), and returns the input attributions for the given examples. For a network with multiple outputs, a target index must also be provided, defining the index of the output for which gradients are computed. For this example, we provide target=1, corresponding to survival. \n",
    "\n",
    "The input tensor provided should require grad, so we call requires\\_grad\\_ on the tensor. The attribute method also takes a baseline, which is the starting point from which gradients are integrated. The default value is just the 0 tensor, which is a reasonable baseline / default for this task. \n",
    "\n",
@@ -481,7 +481,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "This leads us to the second type of attributions available in Captum, **Layer Attributions**. Layer attributions allow us to understand the importance of all the neurons in the output of a particular layer. For this example, we will be using Layer Conductance, one of the Layer Attribution methods in Captum, which is an extension of Integrated Gradients applied to hidden neurons. More information regarding conductance can be found in the original paper here: https://arxiv.org/abs/1805.12233. "
+   "This leads us to the second type of attributions available in Captum, **Layer Attributions**. Layer attributions allow us to understand the importance of all the neurons in the output of a particular layer. For this example, we will be using Layer Conductance, one of the Layer Attribution methods in Captum, which is an extension of Integrated Gradients applied to hidden neurons. More information regarding conductance can be found in the original paper here: https://arxiv.org/abs/1805.12233."
   ]
  },
  {
@@ -504,7 +504,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "We can now obtain the conductance values for all the test examples by calling attribute on the LayerConductance object. LayerConductance also requires a target index for networks with mutliple outputs, defining the index of the output for which gradients are computed. Similar to feature attributions, we provide target = 1, corresponding to survival. LayerConductance also utilizes a baseline, but we simply use the default zero baseline as in integrated gradients."
+   "We can now obtain the conductance values for all the test examples by calling attribute on the LayerConductance object. LayerConductance also requires a target index for networks with mutliple outputs, defining the index of the output for which gradients are computed. Similar to feature attributions, we provide target=1, corresponding to survival. LayerConductance also utilizes a baseline, but we simply use the default zero baseline as in Integrated Gradients."
   ]
  },
  {
@@ -689,7 +689,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "We can now obtain the neuron conductance values for all the test examples by calling attribute on the NeuronConductance object. Neuron Conductance requires the neuron index in the target layer for which attributions are requested as well as the target index for networks with mutliple outputs, similar to layer conductance. As before, we provide target = 1, corresponding to survival, and compute neuron conductance for neurons 0 and 10, the significant neurons identified above. The neuron index can be provided either as a tuple or as just an integer if the layer output is 1-dimensional."
+   "We can now obtain the neuron conductance values for all the test examples by calling attribute on the NeuronConductance object. Neuron Conductance requires the neuron index in the target layer for which attributions are requested as well as the target index for networks with mutliple outputs, similar to layer conductance. As before, we provide target=1, corresponding to survival, and compute neuron conductance for neurons 0 and 10, the significant neurons identified above. The neuron index can be provided either as a tuple or as just an integer if the layer output is 1-dimensional."
   ]
  },
  {
