
Commit e8a2fe1: Modify Readme
Parent: c69fbde

File tree

1 file changed: +9, -9 lines


README.md

Lines changed: 9 additions & 9 deletions
@@ -1,16 +1,16 @@
-## PyTorch Code for 'Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions'
+## PyTorch Code for 'Deep *k*-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions'
 
 ## Introduction
 
-PyTorch implementation of our ICML 2018 paper ["Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions"](https://arxiv.org/abs/1806.09228).
+PyTorch implementation of our ICML 2018 paper ["Deep *k*-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions"](https://arxiv.org/abs/1806.09228).
 
-In our paper, we proposed a simple yet effective scheme for compressing convolutions by applying k-means clustering to the weights; compression is achieved through weight sharing, recording only the K cluster centers and the per-weight assignment indexes.
+In our paper, we proposed a simple yet effective scheme for compressing convolutions by applying *k*-means clustering to the weights; compression is achieved through weight sharing, recording only the K cluster centers and the per-weight assignment indexes.
 
-We then introduced a novel spectrally relaxed k-means regularization, which tends to make hard assignments of convolutional layer weights to K learned cluster centers during re-training.
+We then introduced a novel spectrally relaxed *k*-means regularization, which tends to make hard assignments of convolutional layer weights to K learned cluster centers during re-training.
 
 We additionally propose an improved set of metrics to estimate the energy consumption of CNN hardware implementations, whose estimation results are verified to be consistent with a previously proposed energy estimation tool extrapolated from actual hardware measurements.
 
-We finally evaluated Deep k-Means across several CNN models in terms of both compression ratio and energy consumption reduction, observing promising results without incurring accuracy loss.
+We finally evaluated Deep *k*-Means across several CNN models in terms of both compression ratio and energy consumption reduction, observing promising results without incurring accuracy loss.
 
 ### PyTorch Model
 
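The weight-sharing scheme described in the diff above (record only K cluster centers plus an index per weight) can be sketched in a few lines. This is a minimal NumPy illustration, not the repository's actual code; the layer shape, K = 16, and the helper `kmeans_1d` are all invented for the example:

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain Lloyd's k-means on scalar weights (illustration only)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest center, then recompute centers.
        assign = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = values[assign == j].mean()
    return centers, assign

# Hypothetical conv layer: 64 filters of shape 32x3x3, K = 16 shared values.
w = np.random.default_rng(1).standard_normal((64, 32, 3, 3)).astype(np.float32)
centers, assign = kmeans_1d(w.ravel(), k=16)

# Decompressed weights: every entry is one of the 16 shared centers.
w_compressed = centers[assign].reshape(w.shape)

# Storage cost: K float32 centers + one log2(K) = 4-bit index per weight,
# versus 32 bits per weight originally.
bits_orig = w.size * 32
bits_comp = 16 * 32 + w.size * 4
print(f"compression ratio: {bits_orig / bits_comp:.1f}x")
```

For K = 16 the index table dominates the cost, so the ratio approaches 32/4 = 8x regardless of layer size.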

@@ -37,11 +37,11 @@ Python 3.5
 
 Sample Visualization of Wide ResNet (Conv2)
 
-Pre-Trained Model | Re-Trained Model (Before Comp.)
+Pre-Trained Model (Before Comp.) | Pre-Trained Model (After Comp.)
 :-------------------------:|:-------------------------:
-![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Pre-Trained%20Model.png) | ![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Deep%20k-Means%20Re-Trained%20Model%20(Before%20Comp.).png)
-**Pre-Trained Model (After Comp.)** |
-![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Deep%20k-Means%20Re-Trained%20Model%20(After%20Comp.).png) |
+![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Pre-Trained%20Model.png) | ![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Pre-Trained%20Model%20(After%20Comp.).png)
+**Deep *k*-Means Re-Trained Model (Before Comp.)** | **Deep *k*-Means Re-Trained Model (After Comp.)**
+![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Deep%20k-Means%20Re-Trained%20Model%20(Before%20Comp.).png) | ![](https://raw.githubusercontent.com/Sandbox3aster/Deep-K-Means-pytorch/master/visuals/Conv2%20Deep%20k-Means%20Re-Trained%20Model%20(After%20Comp.).png)
 
 ## Citation
 
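The re-training regularization described in the README can be made concrete with a toy penalty. The snippet below uses the plain k-means distortion, sum_i min_j (w_i - c_j)^2, only a stand-in for the paper's spectrally relaxed form, to show how pulling weights toward their nearest of K centers shrinks the penalty and pushes assignments toward hard ones; all names and shapes here are invented for the illustration:

```python
import numpy as np

def kmeans_regularizer(w_flat, centers):
    """Distortion sum_i min_j (w_i - c_j)^2: a simple stand-in for the
    paper's spectrally relaxed k-means regularizer (not its exact form)."""
    d2 = (w_flat[:, None] - centers[None, :]) ** 2  # (n, K) squared distances
    return d2.min(axis=1).sum()

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)          # toy "layer weights"
centers = np.linspace(-2.0, 2.0, 8)    # K = 8 fixed cluster centers

reg0 = kmeans_regularizer(w, centers)

# Re-training with this penalty nudges each weight toward its nearest
# center; here we simulate one such step by moving halfway there.
nearest = centers[np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)]
w_retrained = 0.5 * w + 0.5 * nearest
reg1 = kmeans_regularizer(w_retrained, centers)
print(reg0, reg1)
```

Halving each weight's distance to its nearest center cuts every term of the penalty by at least a factor of four, which is the mechanism that makes the subsequent hard k-means assignment nearly lossless.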
