author     Yigit Sever    2019-09-27 21:06:39 +0300
committer  Yigit Sever    2019-09-27 21:06:39 +0300
commit     97848102702d1914fa208f06916ea597faa5ce24 (patch)
tree       54c2c3a9bb6cc0f15e44ca5c88a2f51c8b70e50e
parent     4b1b0fc50b48d32d200f968bb607080498f6f452 (diff)
parent     b83eb074894d13d02ee6173c74129982d59b8976 (diff)
Merge branch 'master' of github.com:yigitsever/Evaluating-Dictionary-Alignment
-rw-r--r--  README.md  40
1 file changed, 40 insertions, 0 deletions
diff --git a/README.md b/README.md
index da7fe5d..e80d946 100644
--- a/README.md
+++ b/README.md
@@ -195,3 +195,43 @@ python sentence_embedding.py it ro bilingual_embeddings/it_to_ro.vec bilingual_e
 
 Will run on Italian and Romanian definitions, using sentence embedding representation for matching.
 
+
+### learn_and_predict.py - Supervised Alignment
+
+```
+usage: learn_and_predict.py [-h] -sl SOURCE_LANG -tl TARGET_LANG -df DATA_FILE
+                            -es SOURCE_EMB_FILE -et TARGET_EMB_FILE
+                            [-l MAX_LEN] [-z HIDDEN_SIZE] [-b] [-n NUM_ITERS]
+                            [-lr LEARNING_RATE]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -sl SOURCE_LANG, --source_lang SOURCE_LANG
+                        Source language.
+  -tl TARGET_LANG, --target_lang TARGET_LANG
+                        Target language.
+  -df DATA_FILE, --data_file DATA_FILE
+                        Path to dataset.
+  -es SOURCE_EMB_FILE, --source_emb_file SOURCE_EMB_FILE
+                        Path to source embedding file.
+  -et TARGET_EMB_FILE, --target_emb_file TARGET_EMB_FILE
+                        Path to target embedding file.
+  -l MAX_LEN, --max_len MAX_LEN
+                        Maximum number of words in a sentence.
+  -z HIDDEN_SIZE, --hidden_size HIDDEN_SIZE
+                        Number of units in LSTM layer.
+  -b, --batch           run in batch mode (store results to a csv) instead of
+                        on a single instance (print the results)
+  -n NUM_ITERS, --num_iters NUM_ITERS
+                        Number of iterations/epochs.
+  -lr LEARNING_RATE, --learning_rate LEARNING_RATE
+                        Learning rate for optimizer.
+```
+
+Example:
+
+```
+python learn_and_predict.py -sl en -tl ro -df ./wordnets/tsv_files/en_to_ro.tsv -es bilingual_embeddings/en_to_ro.vec -et bilingual_embeddings/ro_to_en.vec
+```
+
+Will run on English and Romanian definitions.
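
For reference, below is a minimal sketch of an argparse setup that would produce a usage message like the one documented in the added README section. It is an illustrative assumption, not the actual contents of learn_and_predict.py; the real script also loads the embeddings and performs the supervised alignment itself.

```
# Hypothetical sketch only: an argparse interface matching the flags documented
# in the README addition above. Not the repository's actual learn_and_predict.py.
import argparse


def parse_args():
    parser = argparse.ArgumentParser()
    # Required language and data arguments, as listed in the usage block.
    parser.add_argument("-sl", "--source_lang", required=True, help="Source language.")
    parser.add_argument("-tl", "--target_lang", required=True, help="Target language.")
    parser.add_argument("-df", "--data_file", required=True, help="Path to dataset.")
    parser.add_argument("-es", "--source_emb_file", required=True, help="Path to source embedding file.")
    parser.add_argument("-et", "--target_emb_file", required=True, help="Path to target embedding file.")
    # Optional model and training settings.
    parser.add_argument("-l", "--max_len", type=int, help="Maximum number of words in a sentence.")
    parser.add_argument("-z", "--hidden_size", type=int, help="Number of units in LSTM layer.")
    parser.add_argument("-b", "--batch", action="store_true",
                        help="run in batch mode (store results to csv) instead of on a single instance")
    parser.add_argument("-n", "--num_iters", type=int, help="Number of iterations/epochs.")
    parser.add_argument("-lr", "--learning_rate", type=float, help="Learning rate for optimizer.")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    # The real script would build and train the supervised model from these arguments.
    print(args)
```

Per the --batch description, adding -b to the example invocation above would store the results to a csv instead of printing them for a single run.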