Under this title, I will keep the previous updates so that I and the visitors can keep track of the project's progress.

## February 2019

### 04-02-2019

- Code development for NER evaluation (F1 scores for tag-based and BIO+tag-based matching) still continues. I slowed it down a little bit due to personal life issues. A rough sketch of the two evaluation levels is given after this list.
- Decided to simplify README.md and use the [Wiki](https://github.com/hbahadirsahin/nlp-experiments-in-pytorch/wiki) of this repository.
- All entries related to the January 2019 updates have been moved to the [related Wiki page](https://github.com/hbahadirsahin/nlp-experiments-in-pytorch/wiki/Previous-Updates-(January-2019)).
- I will save my experiment results in the Wiki, too.
- While figuring out how to use the README and the Wiki more efficiently, I will hopefully come up with a better structure =)
- As you may have noticed, the TextCNN experiments are finished. I will continue with LSTM/GRU experiments (first, I have to figure out a good parameter set =)).
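For clarity, this is what I mean by the two evaluation levels: in tag-based scoring every token position is judged on its own, while in BIO+tag-based scoring a prediction only counts when both the entity type and the span boundaries match. The snippet below is only my own rough sketch of that distinction on flat lists of BIO tag strings; it is not the actual code in this repository.

```python
from collections import namedtuple

Span = namedtuple("Span", ["label", "start", "end"])


def tag_based_f1(gold, pred):
    """Token-level ("tag based") F1: every non-"O" position is scored on its own."""
    tp = sum(g == p != "O" for g, p in zip(gold, pred))
    fp = sum(p != "O" and p != g for g, p in zip(gold, pred))
    fn = sum(g != "O" and g != p for g, p in zip(gold, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0


def bio_spans(tags):
    """Collapse a BIO tag sequence into a set of (label, start, end) spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(list(tags) + ["O"]):  # "O" sentinel flushes the last span
        boundary = tag == "O" or tag.startswith("B-") or (label is not None and tag[2:] != label)
        if boundary and label is not None:
            spans.append(Span(label, start, i))
            start, label = None, None
        if tag != "O" and label is None:  # a "B-" (or a stray "I-") opens a new span
            start, label = i, tag[2:]
    return set(spans)


def span_based_f1(gold, pred):
    """Span-level ("bio+tag based") F1: label and boundaries must both match exactly."""
    gold_spans, pred_spans = bio_spans(gold), bio_spans(pred)
    tp = len(gold_spans & pred_spans)
    prec = tp / len(pred_spans) if pred_spans else 0.0
    rec = tp / len(gold_spans) if gold_spans else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0


gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "O",     "O", "B-LOC"]
print(tag_based_f1(gold, pred))   # 0.8 -> the partially found person still earns credit
print(span_based_f1(gold, pred))  # 0.5 -> the truncated PER span no longer counts
```

Exact span matching is the stricter of the two criteria, which is why the two numbers can differ noticeably on the same predictions.
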
### Update 12-02-2019

- Since I separated the classification and NER trainers/evaluators, I decided to create a "scorer" folder to prevent bloating "utils.py" with metric calculation functions.
- "scorer/" will contain the current and future metric calculation methods (I won't go into too much detail, since you can always check the code =)).
- I encountered some bugs in the NER training and evaluation processes due to the save/load functionality. Hopefully, I have fully fixed them, but if anyone out there is reading this and using this repository and finds any bugs, just let me know.
- Made some minor changes in naming and indexing (nothing crucial; details can be found in the git commit message).
- Personal life issues still continue, hence the slow-development, slow-experiment mode continues as well.

### Update 21-02-2019

- Precision, recall and F1 metrics are added to "ner_scorer.py".
- Since these metrics must be calculated over the full set (not batch by batch), I changed the evaluator flow a little bit.
- The evaluator reports the mean precision, recall and F1 scores over all tags/named entities (a rough sketch of this macro-averaging is given after this list).
- Detailed, tag-based scores can also be reported by activating the boolean detailed_ner_log (default value is true).
- In the LSTM, I encountered a minor bug while using "bidirectional=true". Hopefully, it is fixed (at least training/evaluation worked on a small set).
- I tried a larger set to check whether my code works, but I got a "cuda illegal memory access" error. I think it is caused by OOM issues, but I am not sure for now.
- An "allowed_transition" mechanism will be added in the near future (similar to allennlp/conditional_random_field.py).
- Also, I updated my libraries, hence requirements.txt has changed, too =)
- Personal life issues still continue, hence the slow-development, slow-experiment mode continues as well.
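For the curious, the sketch below shows the kind of full-set, tag-wise scoring described in this list: counts are accumulated over the whole evaluation set, per-tag precision/recall/F1 are derived from them once, and the summary is a macro average over tags, with an optional detailed per-tag printout. This is only my own illustration under the assumption that gold labels and predictions arrive as flat lists of tag strings; check "ner_scorer.py" in the repository for the real implementation.

```python
from collections import Counter


def ner_scores(gold_tags, pred_tags, detailed_ner_log=True):
    """Full-set precision/recall/F1 per tag, macro-averaged over all tags.

    gold_tags / pred_tags are flat lists of tag strings collected over the
    entire evaluation set (not a single batch), e.g. ["B-PER", "I-PER", "O", ...].
    """
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold_tags, pred_tags):
        if g == p:
            if g != "O":
                tp[g] += 1
        else:
            if p != "O":
                fp[p] += 1
            if g != "O":
                fn[g] += 1

    per_tag = {}
    for tag in sorted(set(tp) | set(fp) | set(fn)):
        prec = tp[tag] / (tp[tag] + fp[tag]) if tp[tag] + fp[tag] else 0.0
        rec = tp[tag] / (tp[tag] + fn[tag]) if tp[tag] + fn[tag] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_tag[tag] = (prec, rec, f1)
        if detailed_ner_log:  # tag-by-tag breakdown, on by default
            print(f"{tag:>10}  P={prec:.3f}  R={rec:.3f}  F1={f1:.3f}")

    # Macro average: every tag weighs the same, so rare entity types are not drowned out.
    n = len(per_tag) or 1
    mean_p, mean_r, mean_f1 = (sum(s[i] for s in per_tag.values()) / n for i in range(3))
    return (mean_p, mean_r, mean_f1), per_tag
```

Accumulating the raw counts first and dividing only once at the end is why the full-set requirement matters: averaging per-batch F1 scores would generally give a different number than the F1 computed on the pooled counts.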