Forecasting uncertainty in electricity demand
Tri K Wijaya, Mathieu Sinn, et al.
AAAI 2015
Generative Adversarial Networks (GANs) have become a widely popular framework for generative modelling of high-dimensional datasets. However, their training is known to be difficult. This work presents a rigorous statistical analysis of GANs, providing straightforward explanations for common training pathologies such as vanishing gradients. Furthermore, it proposes a new training objective, Kernel GANs, and demonstrates its practical effectiveness on real-world datasets. A key element in the analysis is the distinction between training with respect to the (unknown) data distribution and training with respect to its empirical counterpart. To overcome issues in GAN training, we pursue the idea of smoothing the Jensen-Shannon Divergence (JSD) by incorporating noise in the input distributions of the discriminator. As we show, this effectively yields an empirical version of the JSD in which the true and generator densities are replaced by kernel density estimates, which gives rise to Kernel GANs.
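The core idea in the abstract, smoothing the Jensen-Shannon Divergence by replacing the true and generator densities with kernel density estimates, can be illustrated with a short sketch. This is not the paper's implementation: the Gaussian kernel, the bandwidth factor, and the Monte Carlo estimator below are assumptions made purely for illustration.

# Illustrative sketch only (not the authors' code): an empirical
# Jensen-Shannon Divergence in which the true and generator densities
# are replaced by Gaussian kernel density estimates, mirroring the
# smoothing idea described in the abstract. The bandwidth factor and
# the Monte Carlo estimator are assumptions for illustration.
import numpy as np
from scipy.stats import gaussian_kde

def kernel_smoothed_jsd(real_samples, gen_samples, bandwidth=0.1):
    # Kernel density estimates of the data and generator distributions;
    # gaussian_kde expects arrays of shape (num_dims, num_samples).
    p = gaussian_kde(real_samples.T, bw_method=bandwidth)
    q = gaussian_kde(gen_samples.T, bw_method=bandwidth)

    def kl_to_mixture(kde_a, kde_b, samples):
        # Monte Carlo estimate of KL(a || m) with m = (a + b) / 2,
        # evaluated on samples drawn from distribution a.
        a_vals = kde_a(samples.T)
        m_vals = 0.5 * (a_vals + kde_b(samples.T))
        return np.mean(np.log(a_vals / m_vals))

    # JSD(p || q) = 0.5 * KL(p || m) + 0.5 * KL(q || m)
    return 0.5 * kl_to_mixture(p, q, real_samples) + 0.5 * kl_to_mixture(q, p, gen_samples)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(500, 1))  # stand-in for true data
    fake = rng.normal(2.0, 1.0, size=(500, 1))  # stand-in for generator output
    print("kernel-smoothed JSD estimate:", kernel_smoothed_jsd(real, fake))

Because the kernel-smoothed densities have overlapping support, the estimated divergence stays finite even when the real and generated samples are far apart, which is consistent with the abstract's claim that smoothing addresses pathologies such as vanishing gradients.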
Yan Liu, Xiaokang Chen, et al.
NeurIPS 2023
Buse Korkmaz, Rahul Nair, et al.
AAAI 2025
Byungchul Tak, Shu Tao, et al.
IC2E 2016