  // doc/kaldi_for_dummies.dox
  
  // Copyright 2016 Wit Zieliński  Pavel Denisov
  
  // See ../../COPYING for clarification regarding multiple authors
  //
  // Licensed under the Apache License, Version 2.0 (the "License");
  // you may not use this file except in compliance with the License.
  // You may obtain a copy of the License at
  
  //  http://www.apache.org/licenses/LICENSE-2.0
  
  // THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  // KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
  // WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
// MERCHANTABILITY OR NON-INFRINGEMENT.
  // See the Apache 2 License for the specific language governing permissions and
  // limitations under the License.
  
  /**
   \page kaldi_for_dummies Kaldi for Dummies tutorial
  
  \tableofcontents
  
  \section kaldi_for_dummies_introduction Introduction
  
This is a step-by-step tutorial for absolute beginners on how to create a
simple ASR (Automatic Speech Recognition) system with the Kaldi toolkit using
your own set of data. I really would have liked to read something like this when
I was starting to work with Kaldi. It is all based on my experience as an
amateur in both speech recognition and script programming.
If you have ever worked through the Kaldi tutorial on the official project site
and felt a little bit lost, well, this tutorial might be the right choice for
you. You will learn how to install Kaldi, how to make it work and how to run an
ASR system using your own audio data. As a result you will get your first speech
decoding results. It was created by Wit Zielinski.
  
First of all - get to know what Kaldi actually is and why you should use it
instead of something else. In my opinion Kaldi requires solid knowledge about
speech recognition and ASR systems in general. It is also good to know the
basics of scripting languages (bash, perl, python). C++ might be useful
in the future (you will probably want to make some modifications to the source
code).
  
  To read: <br>
  \ref about <br>
  \ref tutorial_prereqs
  
  \section kaldi_for_dummies_environment Environment
  
Rule number 1 - use Linux. Although it is possible to use Kaldi on Windows,
most people I find trustworthy convinced me that Linux will do the job with the
fewest problems. I have chosen Ubuntu 14.10. This was (in 2014/15) a
feature-rich and stable Linux distribution which I honestly recommend. When you
finally have your Linux running properly, please open a terminal and install
some necessary tools (if you do not already have them; an example install
command is shown after the lists):
  
  (has to be installed)
 - \c atlas – automatically tuned linear algebra library,
   - \c autoconf – automatic software compilation on different operating systems,
   - \c automake – creating portable Makefile files,
   - \c git – distributed revision control system,
   - \c libtool – creating static and dynamic libraries,
 - \c svn – revision control system (Subversion), needed by some of the Kaldi
installation scripts to download external tools,
   - \c wget – data transfer using HTTP, HTTPS and FTP protocols,
   - \c zlib – data compression,
  
  (probably has to be installed)
   - \c awk – programming language, used for searching and processing patterns
  in files and data streams,
   - \c bash – Unix shell and script programming language,
   - \c grep – command-line utility for searching plain-text datasets for lines
  matching a regular expression,
   - \c make – automatically builds executable programs and libraries from
  source code,
   - \c perl – dynamic programming language, perfect for text files processing.
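
For example, on a Debian/Ubuntu system most of the items above can be installed
from the standard repositories. This is only a sketch - exact package names may
differ between distributions and releases:

\code{.sh}
# Sketch only - package names are the usual Ubuntu ones and may need adjusting.
sudo apt-get install autoconf automake git libtool subversion wget \
                     zlib1g-dev libatlas-base-dev
\endcode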
  
  Done. Operating system and all the necessary Linux tools are ready to go.
  
  \section kaldi_for_dummies_download Download Kaldi
  
Just follow the instructions carefully:
\ref install.
If you are not sure how to use Git, please read about it:
\ref tutorial_git.
  
  I installed Kaldi in this directory (called 'Kaldi root path'):
  \c /home/{user}/kaldi
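
For reference, one possible way to get Kaldi into that location (assuming the
official GitHub repository; the \ref install page remains the authoritative
instruction):

\code{.sh}
# Sketch only - clone Kaldi into the 'Kaldi root path' used throughout this tutorial.
git clone https://github.com/kaldi-asr/kaldi.git /home/{user}/kaldi
\endcode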
  
  \section kaldi_for_dummies_directories Kaldi directories structure
  
Take some time to learn where particular Kaldi components are placed. It would
also be good to read any \c README files you find.
  
  \c kaldi - main Kaldi directory which contains:
   - \c egs – example scripts allowing you to quickly build ASR
  systems for over 30 popular speech corpora (documentation is attached for each
  project),
   - \c misc – additional tools and supplies, not needed for proper
  Kaldi functionality,
   - \c src – Kaldi source code,
   - \c tools – useful components and external tools,
   - \c windows – tools for running Kaldi using Windows.
  
  The most important directory for you is obviously \c egs. Here you
  will create your own ASR system.
  
  \section kaldi_for_dummies_project Your exemplary project
  
For the purpose of this tutorial, imagine that you have the same simple set of
data as I do (described below, in the \ref kaldi_for_dummies_audio "Audio data" section).
Then try to 'transpose' every action I take onto your own project. If
you do not have any audio data at all, or you want to follow my tutorial in
an identical way, feel free to record your own tracks - it will be an even
better experience to play with ASR. Here we go.
  
  <h2>Your precondition</h2>
You have some amount of audio data that contains only digits spoken by at least
several different speakers. Each audio file is an entire spoken sentence
(e.g. 'one, nine, five').
  
  <h2>Your purpose</h2>
  You want to divide your data into train and test sets, set up an ASR system,
  train it, test it and get some decoding results.
  
  <h2>Your first task</h2>
Something to begin with - create a folder \c digits in the
\c kaldi/egs/ directory. This is the place where you will put all
the stuff related to your project.
  
  \section kaldi_for_dummies_data Data preparation
  
  \subsection kaldi_for_dummies_audio Audio data
  
I assume that you want to set up an ASR system based on your own audio data.
For example, let it be a set of 100 files. The file format is WAV. Each file
contains 3 spoken digits recorded in English, one after another. Each of
these audio files is named in a recognizable way (e.g. \c 1_5_6.wav,
which in my pattern means that the spoken sentence is 'one, five, six') and
placed in a recognizable folder representing a particular speaker during a
particular recording session (you may have recordings
of the same person but in two different quality/noise environments - put these
in separate folders). So to sum up, my exemplary dataset looks like this:
   - 10 different speakers (ASR systems must be trained and tested on different
  speakers, the more speakers you have the better),
   - each speaker says 10 sentences,
   - 100 sentences/utterances (in 100 *.wav files placed in 10 folders related to
  particular speakers - 10 *.wav files in each folder),
   - 300 words (digits from zero to nine),
 - each sentence/utterance consists of 3 words.
  
  Whatever your first dataset is, adjust my example to your particular case. Be
  careful with big datasets and complex grammars - start with something simple.
  Sentences that contain only digits are perfect in this case.
  
  <h2>Task</h2>
Go to the \c kaldi/egs/digits directory and create a
\c digits_audio folder. In \c kaldi/egs/digits/digits_audio
create two folders: \c train and \c test. Select one speaker
of your choice to represent the testing dataset. Use this speaker's 'speakerID' as
the name for another new folder in the \c kaldi/egs/digits/digits_audio/test
directory. Then put all the audio files related to that person there. Put the
rest (9 speakers) into the \c train folder - this will be your training
dataset. Also create subfolders for each speaker, as sketched below.
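
A minimal shell sketch of this folder layout. The speaker names are the ones
from my example (with \c cristine picked as the test speaker, purely for
illustration) - adjust them to your own data:

\code{.sh}
# Sketch only - create the folder structure and copy your recordings into it.
cd /home/{user}/kaldi/egs
mkdir -p digits/digits_audio/test/cristine           # the single test speaker
mkdir -p digits/digits_audio/train/{dad,josh,july}   # the train speakers, and so on
# cp /path/to/recordings/cristine/*.wav digits/digits_audio/test/cristine/
\endcode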
  
  \subsection kaldi_for_dummies_acoustic Acoustic data
  
Now you have to create some text files that will allow Kaldi to work with
your audio data. Consider these files as 'must be done'. Each file that you will
create in this section (and in the \ref kaldi_for_dummies_language "Language data"
section as well) is a plain text file with some number of entries
(each entry on a new line). These entries need to be sorted. If you
encounter any sorting issues you can use the Kaldi scripts for checking
(\c utils/validate_data_dir.sh) and fixing (\c utils/fix_data_dir.sh) the data order.
For your information - the \c utils directory will be attached to your project in
the \ref kaldi_for_dummies_tools "Tools attachment" section; a usage sketch follows below.
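
As a sketch of how those checking scripts can be used later (once the \c data
directories below exist and \c utils is attached; the options shown are the
standard ones but may vary between Kaldi versions):

\code{.sh}
# Sketch only - run from kaldi/egs/digits after the data files are in place.
export LC_ALL=C                                   # Kaldi expects C-locale sorting
utils/validate_data_dir.sh --no-feats data/train  # check the prepared files
utils/fix_data_dir.sh data/train                  # fix the ordering if needed
\endcode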
  
  <h2>Task</h2>
In the \c kaldi/egs/digits directory, create a folder \c data. Then create
\c test and \c train subfolders inside. In each subfolder create the following files
(so you have files named in <b>the same way in the \c test and \c train subfolders,
but relating to the two different datasets</b> that you created before):
  
  a.) \c spk2gender <br>
This file informs Kaldi about the speakers' gender. As we assumed, 'speakerID' is
a unique name for each speaker (in this case it is also a 'recordingID' - every
speaker has only one audio data folder from one recording session). In my example
there are 5 female and 5 male speakers (f = female, m = male).
  
  <b>Pattern:</b> <speakerID> <gender>
  \verbatim
  cristine f
  dad m
  josh m
  july f
  # and so on...
  \endverbatim
  
  b.) \c wav.scp <br>
This file connects every utterance (a sentence said by one person during a
particular recording session) with the audio file related to this utterance. If
you stick to my naming approach, 'utteranceID' is nothing more than the 'speakerID'
(speaker's folder name) glued to the *.wav file name without the '.wav' ending (see
the examples below).
  
<b>Pattern:</b> <utteranceID> <full_path_to_audio_file>
  \verbatim
  dad_4_4_2 /home/{user}/kaldi/egs/digits/digits_audio/train/dad/4_4_2.wav
  july_1_2_5 /home/{user}/kaldi/egs/digits/digits_audio/train/july/1_2_5.wav
  july_6_8_3 /home/{user}/kaldi/egs/digits/digits_audio/train/july/6_8_3.wav
  # and so on...
  \endverbatim
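
If your folder layout follows the naming pattern above, this file does not have
to be typed by hand. A minimal sketch, assuming the \c digits_audio layout from
this tutorial (run it from \c kaldi/egs/digits; repeat with \c test in place of
\c train for the test set):

\code{.sh}
# Hypothetical helper, not part of Kaldi - generates wav.scp from the speaker folders.
audio=/home/{user}/kaldi/egs/digits/digits_audio
for wav in $audio/train/*/*.wav; do
  spk=$(basename $(dirname $wav))       # speaker folder name = speakerID
  utt=${spk}_$(basename $wav .wav)      # utteranceID = speakerID + file name
  echo "$utt $wav"
done | LC_ALL=C sort > data/train/wav.scp
\endcode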
  
  c.) \c text <br>
  This file contains every utterance matched with its text transcription.
  
<b>Pattern:</b> <utteranceID> <text_transcription>
  \verbatim
  dad_4_4_2 four four two
  july_1_2_5 one two five
  july_6_8_3 six eight three
  # and so on...
  \endverbatim
  
  d.) \c utt2spk <br>
This file tells the ASR system which utterance belongs to which speaker.
  
<b>Pattern:</b> <utteranceID> <speakerID>
  \verbatim
  dad_4_4_2 dad
  july_1_2_5 july
  july_6_8_3 july
  # and so on...
  \endverbatim
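
Because the digits are encoded in the file names, \c text and \c utt2spk can
also be generated rather than typed. A rough sketch under the same assumptions
as the \c wav.scp sketch above (the digit-to-word mapping is mine, purely
illustrative):

\code{.sh}
# Hypothetical helper, not part of Kaldi - builds utt2spk and text for the train set.
words=(zero one two three four five six seven eight nine)
audio=/home/{user}/kaldi/egs/digits/digits_audio
rm -f data/train/utt2spk data/train/text
for wav in $audio/train/*/*.wav; do
  spk=$(basename $(dirname $wav))
  base=$(basename $wav .wav)                        # e.g. 1_5_6
  utt=${spk}_${base}
  echo "$utt $spk" >> data/train/utt2spk
  trn=""
  for d in ${base//_/ }; do trn="$trn ${words[$d]}"; done
  echo "$utt$trn" >> data/train/text
done
LC_ALL=C sort -o data/train/utt2spk data/train/utt2spk
LC_ALL=C sort -o data/train/text data/train/text
\endcode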
  
  e.) \c corpus.txt <br>
This file goes in a slightly different directory. In \c kaldi/egs/digits/data
create another folder \c local. In \c kaldi/egs/digits/data/local create a
file \c corpus.txt, which should contain every single utterance transcription
that can occur in your ASR system (in our case it will be 100 lines from the 100
audio files).
  
  <b>Pattern:</b> <text_transcription>
  \verbatim
  one two five
  six eight three
  four four two
  # and so on...
  \endverbatim
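
Once the \c text files exist, \c corpus.txt is just the transcriptions with the
utteranceIDs stripped off. A one-line sketch (assuming the files generated
above):

\code{.sh}
# Hypothetical helper - corpus.txt contains only the transcriptions.
mkdir -p data/local
cut -d' ' -f2- data/train/text data/test/text > data/local/corpus.txt
\endcode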
  
  \subsection kaldi_for_dummies_language Language data
  
This section covers the language modeling files that also need to be considered
as 'must be done'. Look for the syntax details here: \ref data_prep (each file
is described precisely). Also feel free to look at some examples in other \c egs
scripts. Now is the perfect time.
  
  <h2>Task</h2>
In the \c kaldi/egs/digits/data/local directory, create a folder \c dict. In
\c kaldi/egs/digits/data/local/dict create the following files:
  
  a.) \c lexicon.txt <br>
This file contains every word from your dictionary together with its 'phone
transcription' (the transcriptions below were taken from \c /egs/voxforge).
  
  <b>Pattern:</b> <word> <phone 1> <phone 2> ...
  \verbatim
  !SIL sil
  <UNK> spn
  eight ey t
  five f ay v
  four f ao r
  nine n ay n
  one hh w ah n
  one w ah n
  seven s eh v ah n
  six s ih k s
  three th r iy
  two t uw
  zero z ih r ow
  zero z iy r ow
  \endverbatim
  
  b.) \c nonsilence_phones.txt <br>
  This file lists nonsilence phones that are present in your project.
  
  <b>Pattern:</b> <phone>
  \verbatim
  ah
  ao
  ay
  eh
  ey
  f
  hh
  ih
  iy
  k
  n
  ow
  r
  s
  t
  th
  uw
  w
  v
  z
  \endverbatim
  
  c.) \c silence_phones.txt <br>
  This file lists silence phones.
  
  <b>Pattern:</b> <phone>
  \verbatim
  sil
  spn
  \endverbatim
  
  d.) \c optional_silence.txt <br>
  This file lists optional silence phones.
  
  <b>Pattern:</b> <phone>
  \verbatim
  sil
  \endverbatim
  
  \section kaldi_for_dummies_finalization Project finalization
  
This is the last chapter before creating the running scripts. Your project
structure will become complete.
  
  \subsection kaldi_for_dummies_tools Tools attachment
  
You need to attach the Kaldi tools that are widely used in the example scripts.
  
  <h2>Task</h2>
From \c kaldi/egs/wsj/s5 copy two folders (with their whole content) -
\c utils and \c steps - and put them in your
\c kaldi/egs/digits directory. You can also create links to these
directories instead. You may find such links in, for example,
\c kaldi/egs/voxforge/s5.
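
A sketch of one way to attach these tools, assuming the Kaldi root path used in
this tutorial (the symbolic-link variant is the one you can see in e.g.
\c kaldi/egs/voxforge/s5):

\code{.sh}
# Sketch only - either copy the directories or link them.
cd /home/{user}/kaldi/egs/digits
cp -r ../wsj/s5/utils ../wsj/s5/steps .
# ln -s ../wsj/s5/utils . ; ln -s ../wsj/s5/steps .   # alternative: symbolic links
\endcode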
  
  \subsection kaldi_for_dummies_scoring Scoring script
  
  This script will help you to get decoding results.
  
  <h2>Task</h2>
From \c kaldi/egs/voxforge/s5/local copy the script \c score.sh into a
similar location in your project (\c kaldi/egs/digits/local).
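
A minimal sketch, assuming the same directory layout as above:

\code{.sh}
# Sketch only - copy the scoring script into your project.
cd /home/{user}/kaldi/egs/digits
mkdir -p local
cp ../voxforge/s5/local/score.sh local/
\endcode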
  
  \subsection kaldi_for_dummies_srilm SRILM installation
  
You also need to install the language modelling toolkit used in my
example - the SRI Language Modeling Toolkit (SRILM).
  
  <h2>Task</h2>
  For detailed installation instructions go to
  \c kaldi/tools/install_srilm.sh (read all comments inside).
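
A sketch of a typical invocation (the script location and its prerequisites may
differ between Kaldi versions - read its comments first; you may need to
download the SRILM archive manually before running it):

\code{.sh}
# Sketch only - run the installer from the Kaldi tools directory.
cd /home/{user}/kaldi/tools
./install_srilm.sh
\endcode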
  
  \subsection kaldi_for_dummies_configuration Configuration files
  
It is not necessary to create configuration files, but it can be a good habit
for the future.
  
  <h2>Task</h2>
In \c kaldi/egs/digits create a folder \c conf. Inside
\c kaldi/egs/digits/conf create two files (with some configuration
modifications for the decoding and MFCC feature extraction processes - taken from
\c /egs/voxforge):
  
  a.) \c decode.config <br>
  \verbatim
  first_beam=10.0
  beam=13.0
  lattice_beam=6.0
  \endverbatim
  
  b.) \c mfcc.conf <br>
  \verbatim
  --use-energy=false
  \endverbatim
  
  \section kaldi_for_dummies_running Running scripts creation
  
Your first ASR system written in the Kaldi environment is almost ready. Your last
job is to prepare the running scripts that create the ASR system of your choice.
I put some comments in the prepared scripts for ease of understanding.
  
These scripts are based on the solution used in the \c /egs/voxforge directory. I
decided to use two different training methods:
  - MONO - monophone training,
  - TRI1 - simple triphone training (first triphone pass).
  
These two methods are enough to show noticeable differences in decoding results
using only a digits lexicon and a small training dataset.
  
  <h2>Task</h2>
In the \c kaldi/egs/digits directory create 3 scripts:
  
  a.) \c cmd.sh <br>
  \code{.sh}
  # Setting local system jobs (local CPU - no external clusters)
  export train_cmd=run.pl
  export decode_cmd=run.pl
  \endcode
  
  b.) \c path.sh <br>
  \code{.sh}
  # Defining Kaldi root directory
  export KALDI_ROOT=`pwd`/../..
  
  # Setting paths to useful tools
  export PATH=$PWD/utils/:$KALDI_ROOT/src/bin:$KALDI_ROOT/tools/openfst/bin:$KALDI_ROOT/src/fstbin/:$KALDI_ROOT/src/gmmbin/:$KALDI_ROOT/src/featbin/:$KALDI_ROOT/src/lmbin/:$KALDI_ROOT/src/sgmm2bin/:$KALDI_ROOT/src/fgmmbin/:$KALDI_ROOT/src/latbin/:$PWD:$PATH
  
  # Defining audio data directory (modify it for your installation directory!)
  export DATA_ROOT="/home/{user}/kaldi/egs/digits/digits_audio"
  
  # Enable SRILM
  . $KALDI_ROOT/tools/env.sh
  
  # Variable needed for proper data sorting
  export LC_ALL=C
  \endcode
  
  c.) \c run.sh
  \code{.sh}
  #!/bin/bash
  
  . ./path.sh || exit 1
  . ./cmd.sh || exit 1
  
  nj=1       # number of parallel jobs - 1 is perfect for such a small dataset
  lm_order=1 # language model order (n-gram quantity) - 1 is enough for digits grammar
  
# Safety mechanism (makes it possible to run this script with modified arguments)
  . utils/parse_options.sh || exit 1
  [[ $# -ge 1 ]] && { echo "Wrong arguments!"; exit 1; }
  
  # Removing previously created data (from last run.sh execution)
  rm -rf exp mfcc data/train/spk2utt data/train/cmvn.scp data/train/feats.scp data/train/split1 data/test/spk2utt data/test/cmvn.scp data/test/feats.scp data/test/split1 data/local/lang data/lang data/local/tmp data/local/dict/lexiconp.txt
  
  echo
  echo "===== PREPARING ACOUSTIC DATA ====="
  echo
  
# Needs to be prepared by hand (or using self-written scripts):
  #
  # spk2gender  [<speaker-id> <gender>]
# wav.scp     [<utteranceID> <full_path_to_audio_file>]
# text        [<utteranceID> <text_transcription>]
# utt2spk     [<utteranceID> <speakerID>]
  # corpus.txt  [<text_transcription>]
  
  # Making spk2utt files
  utils/utt2spk_to_spk2utt.pl data/train/utt2spk > data/train/spk2utt
  utils/utt2spk_to_spk2utt.pl data/test/utt2spk > data/test/spk2utt
  
  echo
  echo "===== FEATURES EXTRACTION ====="
  echo
  
  # Making feats.scp files
  mfccdir=mfcc
  # Uncomment and modify arguments in scripts below if you have any problems with data sorting
  # utils/validate_data_dir.sh data/train     # script for checking prepared data - here: for data/train directory
  # utils/fix_data_dir.sh data/train          # tool for data proper sorting if needed - here: for data/train directory
  steps/make_mfcc.sh --nj $nj --cmd "$train_cmd" data/train exp/make_mfcc/train $mfccdir
  steps/make_mfcc.sh --nj $nj --cmd "$train_cmd" data/test exp/make_mfcc/test $mfccdir
  
  # Making cmvn.scp files
  steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train $mfccdir
  steps/compute_cmvn_stats.sh data/test exp/make_mfcc/test $mfccdir
  
  echo
  echo "===== PREPARING LANGUAGE DATA ====="
  echo
  
# Needs to be prepared by hand (or using self-written scripts):
  #
  # lexicon.txt           [<word> <phone 1> <phone 2> ...]
  # nonsilence_phones.txt	[<phone>]
  # silence_phones.txt    [<phone>]
  # optional_silence.txt  [<phone>]
  
  # Preparing language data
  utils/prepare_lang.sh data/local/dict "<UNK>" data/local/lang data/lang
  
  echo
  echo "===== LANGUAGE MODEL CREATION ====="
  echo "===== MAKING lm.arpa ====="
  echo
  
loc=`which ngram-count`;
if [ -z "$loc" ]; then
  if uname -a | grep 64 >/dev/null; then
    sdir=$KALDI_ROOT/tools/srilm/bin/i686-m64
  else
    sdir=$KALDI_ROOT/tools/srilm/bin/i686
  fi
  if [ -f $sdir/ngram-count ]; then
    echo "Using SRILM language modelling tool from $sdir"
    export PATH=$PATH:$sdir
  else
    echo "SRILM toolkit is probably not installed.
          Instructions: tools/install_srilm.sh"
    exit 1
  fi
fi
  
  local=data/local
  mkdir $local/tmp
  ngram-count -order $lm_order -write-vocab $local/tmp/vocab-full.txt -wbdiscount -text $local/corpus.txt -lm $local/tmp/lm.arpa
  
  echo
  echo "===== MAKING G.fst ====="
  echo
  
  lang=data/lang
  arpa2fst --disambig-symbol=#0 --read-symbol-table=$lang/words.txt $local/tmp/lm.arpa $lang/G.fst
  
  echo
  echo "===== MONO TRAINING ====="
  echo
  
  steps/train_mono.sh --nj $nj --cmd "$train_cmd" data/train data/lang exp/mono  || exit 1
  
  echo
  echo "===== MONO DECODING ====="
  echo
  
  utils/mkgraph.sh --mono data/lang exp/mono exp/mono/graph || exit 1
  steps/decode.sh --config conf/decode.config --nj $nj --cmd "$decode_cmd" exp/mono/graph data/test exp/mono/decode
  
  echo
  echo "===== MONO ALIGNMENT ====="
  echo
  
  steps/align_si.sh --nj $nj --cmd "$train_cmd" data/train data/lang exp/mono exp/mono_ali || exit 1
  
  echo
  echo "===== TRI1 (first triphone pass) TRAINING ====="
  echo
  
  steps/train_deltas.sh --cmd "$train_cmd" 2000 11000 data/train data/lang exp/mono_ali exp/tri1 || exit 1
  
  echo
  echo "===== TRI1 (first triphone pass) DECODING ====="
  echo
  
  utils/mkgraph.sh data/lang exp/tri1 exp/tri1/graph || exit 1
  steps/decode.sh --config conf/decode.config --nj $nj --cmd "$decode_cmd" exp/tri1/graph data/test exp/tri1/decode
  
  echo
  echo "===== run.sh script is finished ====="
  echo
  \endcode
  
  \section kaldi_for_dummies_results Getting results
  
Now all you have to do is run the \c run.sh script. If I have made any mistakes
in this tutorial, the logs in the terminal should guide you on how to deal with
them.

Besides the decoding results that you will notice in the terminal window, go to
the newly created \c kaldi/egs/digits/exp directory. There you will find folders
with the \c mono and \c tri1 results - their directory structure is the same. Go
to the \c mono/decode directory. Here you will find the result files (named in
a <c>wer_{number}</c> way). Logs for the decoding process can be found in the
\c log folder (same directory).
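
A sketch of one way to print the best WER for each decoding directory, assuming
the \c utils directory attached earlier (the \c utils/best_wer.sh helper ships
with it):

\code{.sh}
# Sketch only - run from kaldi/egs/digits after run.sh has finished.
for x in exp/*/decode; do
  [ -d $x ] && grep WER $x/wer_* | utils/best_wer.sh
done
\endcode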
  
  \section kaldi_for_dummies_summary Summary
  
This is just an example. The point of this short tutorial is to show you how to
create 'anything' in Kaldi and to get a better understanding of how to think
while using this toolkit. Personally, I started by looking for tutorials made
by the Kaldi authors/developers. After a successful Kaldi installation I launched
some example scripts (Yesno, Voxforge, LibriSpeech - they are relatively easy
and have free acoustic/language data to download - I used these three as a base
for my own scripts).
  
Make sure you follow <a href="http://kaldi-asr.org/">
http://kaldi-asr.org/</a> - the official project website.
It contains two sections that are very useful for beginners: <br>
a.) \ref tutorial - an almost 'step by step' tutorial on how to set up an ASR
system; up to some point this can be done without the RM dataset. It is worth
reading, <br>
b.) \ref data_prep - a very detailed explanation of how to use your own data
in Kaldi.
  
More useful links about Kaldi that I found: <br>
  <a href="https://sites.google.com/site/dpovey/kaldi-lectures">
  https://sites.google.com/site/dpovey/kaldi-lectures
  </a> - Kaldi lectures created by the main author <br>
  <a href="http://www.superlectures.com/icassp2011/category.php?lang=en&id=131">
  http://www.superlectures.com/icassp2011/category.php?lang=en&id=131
  </a> - similar; video version <br>
  <a href="http://www.diplomovaprace.cz/133/thesis_oplatek.pdf">
  http://www.diplomovaprace.cz/133/thesis_oplatek.pdf
</a> - a master's thesis about speech recognition using Kaldi
  
  This is all from my side. Good luck!
  
  */