@article{IZKAB_2018_6-2_a16,
    author = {E. S. Izrailova},
    title = {Creating a database for modeling system for speech},
    journal = {News of the Kabardin-Balkar scientific center of RAS},
    pages = {181--186},
    publisher = {mathdoc},
    number = {6-2},
    year = {2018},
    language = {ru},
    url = {http://geodesic.mathdoc.fr/item/IZKAB_2018_6-2_a16/}
}
E. S. Izrailova. Creating a database for modeling system for speech. News of the Kabardin-Balkar scientific center of RAS, no. 6-2 (2018), pp. 181–186. http://geodesic.mathdoc.fr/item/IZKAB_2018_6-2_a16/
[1] E. S. Izrailova, “O sozdanii fonetiko-akusticheskoi bazy v ramkakh sinteza chechenskoi rechi” [On the creation of a phonetic-acoustic database within the framework of Chechen speech synthesis], Vestnik Voronezhskogo gosudarstvennogo universiteta. Seriya: Sistemnyi analiz i informatsionnye tekhnologii, 2017, no. 2, 111–115
[2] A. van den Oord, S. Dieleman, H. Zen, “WaveNet: A Generative Model for Raw Audio”, online resource, accessed 12.09.2018, https://deepmind.com/blog/wavenet-generative-model-raw-audio/
[3] J. Sotelo et al., “Char2Wav: End-to-end speech synthesis”, Proc. ICLR, 2017
[4] S. Arik et al., “Deep Voice: Real-time neural text-to-speech”, Proc. ICML, 2017, 195–204
[5] K. Cho et al., “Learning phrase representations using RNN encoder-decoder for statistical machine translation”, Proc. EMNLP, 2014, 1724–1734
[6] H. Tachibana, K. Uenoyama, S. Aihara, “Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention”, arXiv: 1710.08969, online resource, accessed 19.11.2018
[7] [Online resource], accessed 19.11.2018, https://www.radiomarsho.com/z/17557
[8] E. S. Izrailova, “Foneticheskii alfavit chechenskogo yazyka kak osnova sistemy sinteza rechi” [The phonetic alphabet of the Chechen language as the basis of a speech synthesis system], NTI, Ser. 2, Informatsionnye protsessy i sistemy, VINITI RAN, 2018, no. 2, 35–39