
Commit 8ece166

Merge pull request #281 from luotao1/readme: fix incorrectly displayed url

2 parents 46e398f + 067009d

1 file changed (+16, -16)

README.md

The word embedding expresses words with a real vector. Each dimension of the vec…

In the example of word vectors, we show how to use Hierarchical-Sigmoid and Noise Contrastive Estimation (NCE) to accelerate word-vector learning.

- 1.1 [Hsigmoid Accelerated Word Vector Training](https://github.com/PaddlePaddle/models/tree/develop/hsigmoid)
- 1.2 [Noise Contrast Estimation Accelerated Word Vector Training](https://github.com/PaddlePaddle/models/tree/develop/nce_cost)
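To make the acceleration concrete, here is a minimal numpy sketch of the NCE objective for a single target word against k noise samples: only k+1 scores are needed instead of a softmax over the whole vocabulary. The scores and noise probabilities below are made-up placeholders, not code from this repo; see 1.2 above for the real PaddlePaddle implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(target_score, noise_scores, target_noise_prob, noise_probs, k):
    """NCE loss for one token: binary classification between the true next
    word and k samples drawn from a known noise distribution."""
    # logit for "this word came from the data, not from the noise"
    true_logit = target_score - np.log(k * target_noise_prob)
    noise_logits = noise_scores - np.log(k * noise_probs)
    loss = -np.log(sigmoid(true_logit))
    loss -= np.sum(np.log(1.0 - sigmoid(noise_logits)))
    return loss

# toy usage: one target word, k = 3 noise words from a unigram distribution
print(nce_loss(target_score=2.1,
               noise_scores=np.array([0.3, -0.5, 0.1]),
               target_noise_prob=0.01,
               noise_probs=np.array([0.05, 0.002, 0.01]),
               k=3))
```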
## 2. Generate text using the recurrent neural network language model
The language model is important in the field of natural language processing. In addition to producing word vectors (a by-product of language-model training), it can also help us generate text: given a number of words, the language model can predict the next most likely word. In the example of using the language model to generate text, we focus on the recurrent neural network language model. Following the instructions in the document, you can quickly adapt the model to your own training corpus and build interesting applications such as automatic poetry or prose writing.

- 2.1 [Generate text using the annotated neural network language model](https://github.com/PaddlePaddle/models/tree/develop/generate_sequence_by_rnn_lm)
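As a rough illustration of how a trained language model generates text, the following toy numpy loop samples one word at a time from a next-word distribution. The `next_word_logits` stub stands in for the trained RNN and is an assumption for illustration, not the model from 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "the", "cat", "sat", "mat", "</s>"]

def next_word_logits(prefix):
    # Placeholder for the trained RNN LM: one logit per vocabulary word.
    # A real model would condition on the prefix via its hidden state.
    return rng.normal(size=len(vocab))

def generate(max_len=10, temperature=1.0):
    words = ["<s>"]
    while len(words) < max_len:
        logits = next_word_logits(words) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        word = vocab[rng.choice(len(vocab), p=probs)]
        if word == "</s>":
            break
        words.append(word)
    return " ".join(words[1:])

print(generate())
```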
## 3. Click-Through Rate prediction
The click-through rate (CTR) model predicts the probability that a user will click on an ad, and is widely used in advertising technology. In the early stages of CTR prediction, logistic regression performed well on large-scale sparse features. In recent years, DNN models have gradually taken over the task thanks to their strong learning ability.

In the example of click-through rate prediction, we present Google's Wide & Deep model, which combines the strengths of a DNN with those of a logistic regression model suited to large-scale sparse features.

- 3.1 [Click-Through Rate Model](https://github.com/PaddlePaddle/models/tree/develop/ctr)
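The following numpy sketch shows the core Wide & Deep idea under made-up feature sizes: a linear "wide" part over sparse crossed-feature indices for memorization, and a small "deep" MLP over dense features for generalization, joined in a single sigmoid. It is a conceptual sketch, not the model code from 3.1.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# made-up sizes: 1000 sparse (crossed) features, 16-dim dense input
n_wide, n_deep, hidden = 1000, 16, 32
w_wide = rng.normal(size=n_wide) * 0.01                  # wide: linear model
w1 = rng.normal(size=(n_deep, hidden)) * 0.1
w2 = rng.normal(size=hidden) * 0.1

def predict_ctr(sparse_idx, dense_x):
    wide = w_wide[sparse_idx].sum()    # memorization of feature co-occurrences
    deep = relu(dense_x @ w1) @ w2     # generalization via the DNN
    return sigmoid(wide + deep)        # joint logit, as in Wide & Deep

print(predict_ctr(sparse_idx=[3, 42, 977], dense_x=rng.normal(size=n_deep)))
```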
## 4. Text classification
Text classification is one of the most basic tasks in natural language processing. Deep learning methods can eliminate complex feature engineering and use the raw text as input to optimize classification accuracy.

For text classification, we provide a non-sequential text classification model based on DNN and CNN. (For an LSTM-based model, please refer to the PaddleBook chapter [Sentiment Analysis](https://github.com/PaddlePaddle/book/blob/develop/06.understand_sentiment/README.cn.md).)

- 4.1 [Sentiment analysis based on DNN / CNN](https://github.com/PaddlePaddle/models/tree/develop/text_classification)
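As a minimal picture of the CNN approach, this numpy sketch slides a convolution window over word embeddings, max-pools over time, and applies a softmax classifier. All sizes and weights are toy placeholders, not the configuration used in 4.1.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
emb_dim, n_filters, window, n_classes = 8, 4, 3, 2

E = rng.normal(size=(100, emb_dim))               # toy vocabulary of 100 words
F = rng.normal(size=(n_filters, window * emb_dim))  # convolution filters
W = rng.normal(size=(n_filters, n_classes))         # softmax weights

def classify(word_ids):
    x = E[word_ids]                               # (seq_len, emb_dim)
    # convolution: slide a window over consecutive word embeddings
    convs = np.stack([F @ x[i:i + window].ravel()
                      for i in range(len(word_ids) - window + 1)])
    pooled = convs.max(axis=0)                    # max-over-time pooling
    return softmax(pooled @ W)

print(classify([5, 12, 7, 33, 2]))
```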
## 5. Learning to rank
Learning to rank (LTR) is one of the core problems in information retrieval and search-engine research. A learning algorithm uses training data to produce a ranking model that computes the relevance of documents to actual queries. Deep neural networks can be used to model the scoring function, yielding a variety of deep-learning-based LTR models.

The algorithms for learning to rank are usually categorized into three groups by their input representation and loss function: pointwise, pairwise, and listwise approaches. Here we demonstrate the RankLoss loss function (a pairwise approach) and the LambdaRank loss function (a listwise approach). (For pointwise approaches, please refer to [Recommender System](https://github.com/PaddlePaddle/book/blob/develop/05.recommender_system/README.cn.md).)

- 5.1 [Learning to rank based on Pairwise and Listwise approaches](https://github.com/PaddlePaddle/models/tree/develop/ltr)
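For intuition, here is a small numpy sketch of the pairwise RankLoss idea (a RankNet-style cost): the loss is low when the document that should rank higher actually gets the higher score. This is a conceptual illustration, not the loss implementation from 5.1.

```python
import numpy as np

def rank_loss(score_pos, score_neg):
    """Pairwise cost: penalize the model when the document that should rank
    higher (score_pos) does not outscore the other one. Equivalent to
    -log(sigmoid(score_pos - score_neg))."""
    return np.log1p(np.exp(-(score_pos - score_neg)))

# correctly ordered pair -> small loss; inverted pair -> large loss
print(rank_loss(2.0, 0.5))   # ~0.20
print(rank_loss(0.5, 2.0))   # ~1.70
```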
## 6. Semantic model
The deep structured semantic model (DSSM) uses a DNN to learn low-dimensional vector representations in a continuous semantic space, and then models the semantic similarity between two sentences.

In this example, we demonstrate how to use PaddlePaddle to implement a generic deep structured semantic model for the semantic similarity between two strings. The model supports different network structures such as CNN (Convolutional Network), FC (Fully Connected Network), and RNN (Recurrent Neural Network), and different loss functions for classification, regression, and ranking.

- 6.1 [Deep structured semantic model](https://github.com/PaddlePaddle/models/tree/develop/dssm)
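A minimal numpy sketch of the DSSM scoring step, assuming a single shared fully connected encoder as a stand-in for the CNN/FC/RNN towers: both strings are projected into the semantic space and compared by cosine similarity. The weights and inputs are random placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
in_dim, sem_dim = 50, 8
W = rng.normal(size=(in_dim, sem_dim)) * 0.1      # shared encoder weights

def encode(bow_vector):
    # stand-in for the CNN/FC/RNN encoder tower: one FC layer here
    return relu(bow_vector @ W)

def semantic_similarity(a, b):
    va, vb = encode(a), encode(b)
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-8)

x, y = rng.random(in_dim), rng.random(in_dim)
print(semantic_similarity(x, y))
```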
## 7. Sequence tagging
Sequence tagging, which assigns a category label to each element of an input sequence, is one of the most basic tasks in natural language processing. Recurrent neural network models with a Conditional Random Field (CRF) layer are commonly used for sequence tagging tasks.

In the sequence tagging example, we describe how to train an end-to-end sequence tagging model, taking the Named Entity Recognition (NER) task as an example.

- 7.1 [Named Entity Recognition](https://github.com/PaddlePaddle/models/tree/develop/sequence_tagging_for_ner)
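To illustrate the CRF decoding step such models rely on, here is a self-contained numpy implementation of Viterbi decoding over per-token emission scores and tag-transition scores. The scores are random placeholders, not outputs of the NER model in 7.1.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Find the highest-scoring tag sequence under a linear-chain CRF.
    emissions: (seq_len, n_tags) per-token tag scores from the network.
    transitions: (n_tags, n_tags) score of moving from tag i to tag j."""
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((seq_len, n_tags), dtype=int)
    for t in range(1, seq_len):
        # cand[i, j]: best score ending at tag j coming from tag i
        cand = score[:, None] + transitions + emissions[t]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):   # follow back-pointers
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]

rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(5, 3)), rng.normal(size=(3, 3))))
```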
## 8. Sequence to sequence learning
Sequence-to-sequence models have a wide range of applications, including machine translation, dialogue systems, and parse-tree generation.

As an example of sequence-to-sequence learning, we take the machine translation task. We demonstrate the sequence-to-sequence mapping model without an attention mechanism, which is the basis for all sequence-to-sequence learning models. We also show how to use scheduled sampling to mitigate error accumulation in RNN models, and how to perform machine translation with an external memory mechanism.

- 8.1 [Basic Sequence-to-sequence model](https://github.com/PaddlePaddle/models/tree/develop/nmt_without_attention)
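A small sketch of the scheduled-sampling idea mentioned above, with a stubbed-out decoder: during training, the decoder input at each step is either the ground-truth token or the model's own previous prediction, chosen at random, so training-time inputs gradually match inference-time inputs. The `decode_step` stub is a placeholder assumption, not the NMT model from 8.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_step(prev_token, state):
    # Placeholder for one RNN decoder step: returns (predicted_token, state).
    return int(rng.integers(0, 100)), state

def train_decode(target_tokens, sample_prob):
    """Scheduled sampling: with probability sample_prob, feed the model's own
    previous prediction instead of the ground-truth token."""
    state, prev, outputs = None, target_tokens[0], []
    for t in range(1, len(target_tokens)):
        pred, state = decode_step(prev, state)
        outputs.append(pred)
        use_model = rng.random() < sample_prob
        prev = pred if use_model else target_tokens[t]
    return outputs

print(train_decode([1, 5, 9, 4, 2], sample_prob=0.25))
```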
## 9. Image classification
In the image classification example, we show how to train AlexNet, VGG, GoogLeNet, and ResNet models in PaddlePaddle. We also provide a model conversion tool that converts Caffe-trained model files into PaddlePaddle model files.

- 9.1 [convert Caffe model file to PaddlePaddle model file](https://github.com/PaddlePaddle/models/tree/develop/image_classification/caffe2paddle)
- 9.2 [AlexNet](https://github.com/PaddlePaddle/models/tree/develop/image_classification)
- 9.3 [VGG](https://github.com/PaddlePaddle/models/tree/develop/image_classification)
- 9.4 [Residual Network](https://github.com/PaddlePaddle/models/tree/develop/image_classification)
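As a one-block illustration of the idea behind the Residual Network listed above, this numpy sketch computes y = relu(F(x) + x) with a toy two-layer transform F. The weights are random placeholders, not a trained ResNet.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
dim = 16
W1 = rng.normal(size=(dim, dim)) * 0.1
W2 = rng.normal(size=(dim, dim)) * 0.1

def residual_block(x):
    """y = F(x) + x: the block only has to learn the residual F,
    which is what makes very deep networks (ResNet) trainable."""
    f = relu(x @ W1) @ W2          # two-layer transform F(x)
    return relu(f + x)             # identity shortcut, then activation

x = rng.normal(size=dim)
print(residual_block(x).shape)
```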
## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
