A generative model is a mathematical formulation that produces samples resembling real data. Many such models have been proposed using machine learning methods, including deep learning. Studying a good generative model serves both to characterize the nature of the system being modeled and to clarify the potential of machine learning itself. We study various time series models, including classical Markov models, grammatical models, Simon processes, random walks on networks, neural models, autoencoders, and adversarial methods, and examine their fundamental properties in terms of whether they can generate samples that resemble real data.
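
As a concrete illustration of one of the models listed above, the following is a minimal sketch (not the implementation used in this research) of a Simon process: at each step the sequence either introduces a new symbol with probability `alpha`, or repeats a symbol drawn uniformly from everything emitted so far, so that past symbols are reused in proportion to their frequency. The function name, `alpha`, and the seed are illustrative choices, not part of the original description.

```python
import random

def simon_process(length: int, alpha: float = 0.1, seed: int = 0) -> list[int]:
    """Generate a symbolic sequence of the given length via a Simon process."""
    rng = random.Random(seed)
    sequence = [0]      # start with a single initial symbol
    next_symbol = 1     # label for the next never-seen symbol
    while len(sequence) < length:
        if rng.random() < alpha:
            # introduce a brand-new symbol
            sequence.append(next_symbol)
            next_symbol += 1
        else:
            # reuse a past symbol; uniform choice over the emitted sequence
            # reuses each symbol in proportion to its current frequency
            sequence.append(rng.choice(sequence))
    return sequence

if __name__ == "__main__":
    sample = simon_process(10_000, alpha=0.1)
    # the vocabulary grows roughly like alpha * length in expectation
    print("vocabulary size:", len(set(sample)))
```

Even this simple process reproduces some statistical properties of real linguistic sequences, which is why it serves as a useful baseline when comparing generative models.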