1. Data-limited scenarios
- Relevant non-linear transformations
- Kernel tricks and kernel regression
- Connection with support vector machines (SVMs)
- Random features and neural networks
- Few-shot learning
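The kernel-regression topic above can be illustrated with a minimal numpy sketch of kernel ridge regression with an RBF kernel; the function names and the toy sine-fitting task are illustrative, not part of the syllabus.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-6):
    # Solve (K + lam I) alpha = y; lam regularizes the ill-conditioned kernel matrix
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy data-limited task: 40 noiseless samples of sin(x)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0])
alpha = kernel_ridge_fit(X, y, gamma=0.5)
pred = kernel_ridge_predict(X, alpha, X, gamma=0.5)
```

The "kernel trick" is visible in that only the kernel matrix is ever formed; the non-linear feature map is never computed explicitly.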
2. Implementation issues and strategies in deep neural networks
- Non-convex optimization, gradient descent, and backpropagation
- Training issues in DNNs, vanishing and exploding gradient problems
- Training with different optimizers, e.g., SGD, RMSprop, AdaDelta, Adam
- Imbalanced data and useful tricks, such as data augmentation
- Cross-validation and model selection to address over-fitting, e.g., grid search, random search, k-fold and stratified k-fold cross-validation, early stopping, dropout, and the bias-variance trade-off for model monitoring
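As a concrete example of the optimizers listed above, the following is a minimal numpy sketch of the Adam update rule (first/second moment estimates with bias correction) applied to a 1-D quadratic; the function name and toy objective are illustrative.

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    # Adam: exponential moving averages of the gradient (m) and its square (v),
    # each bias-corrected, give a per-coordinate adaptive step size.
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)      # bias correction
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
xstar = adam_minimize(lambda x: 2 * (x - 3), [0.0])
```

Plain SGD would replace the whole moment machinery with `x -= lr * g`; the adaptive denominator is what makes Adam robust to poorly scaled gradients.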
3. Structured deep neural networks
- AlexNet, VGG-16, U-Net, ResNet, DenseNet, SciNet, etc.
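The defining structural idea behind ResNet in the list above is the residual (skip) connection. A minimal numpy sketch of a fully-connected residual block, with illustrative names:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # y = relu(x + W2 @ relu(W1 @ x)): the identity path x is added back
    # to the learned residual branch before the final activation.
    return relu(x + W2 @ relu(W1 @ x))

# With a zero-initialized residual branch, the block is the identity
# for non-negative input, which is why very deep residual stacks
# start out well-conditioned and remain trainable.
x = np.array([1.0, 2.0, 3.0])
W = np.zeros((3, 3))
out = residual_block(x, W, W)
```

DenseNet takes the same idea further by concatenating, rather than adding, earlier activations.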
4. Generative models
- Implicit and explicit models
- Generative Adversarial Networks (GANs)
- Autoencoders, e.g., the variational autoencoder (VAE) and the denoising autoencoder
- DCGAN, CycleGAN
- Normalizing flow models and likelihood computation
- RealNVP and Glow models
- Mixture flow models with expectation-maximization and gradient search
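The likelihood computation in flow models reduces to the change-of-variables formula: an invertible map plus the log-determinant of its Jacobian. A minimal numpy sketch for a single affine flow layer with a standard normal base density (names are illustrative):

```python
import numpy as np

def affine_flow_logpdf(x, mu, log_sigma):
    # Invertible map z = (x - mu) * exp(-log_sigma), base density N(0, I).
    # Change of variables: log p(x) = log N(z; 0, I) + log |det dz/dx|
    z = (x - mu) * np.exp(-log_sigma)
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))
    log_det = -log_sigma              # dz/dx = exp(-log_sigma) per dimension
    return float(np.sum(log_base + log_det))

# With mu = 0 and log_sigma = 0 the flow is the identity, so this must
# reproduce the standard normal log-density at 0, i.e. -0.5 * log(2*pi).
lp = affine_flow_logpdf(np.array([0.0]), 0.0, 0.0)
```

RealNVP and Glow stack many such invertible layers (with coupling structure so the Jacobian stays triangular), summing the per-layer log-determinants; this exact likelihood is what GANs, as implicit models, lack.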
5. Deep neural networks for dynamic signals
- Recurrent neural networks (RNNs), e.g., LSTM, reservoir computing
- Hidden Markov models (HMM)
- Normalizing-flow-based hidden Markov models
- Attention mechanism
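The attention mechanism listed above is, at its core, scaled dot-product attention. A minimal numpy sketch (single query, no masking or multiple heads; names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    # The 1/sqrt(d) scaling keeps logits in a range where softmax
    # gradients do not vanish as the key dimension d grows.
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

Q = np.array([[1.0, 0.0]])                   # one query
K = np.array([[10.0, 0.0], [0.0, 10.0]])     # first key aligns with the query
V = np.array([[1.0], [2.0]])
out, w = attention(Q, K, V)
```

Because the query aligns with the first key, nearly all attention weight falls on the first value, so the output is close to 1; this content-based addressing is what replaces recurrence in transformer-style models.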
6. Incremental learning and transfer learning
- Incremental learning and learning without forgetting (LwF)
- Transfer learning via pretraining, transfer learning with GANs
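Transfer learning via pretraining can be sketched in its simplest form: freeze a pretrained feature extractor and fit only a new linear head on the target task. The extractor below is a fixed random projection standing in for frozen early layers, and the labels are synthetic; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" feature extractor: a fixed projection + nonlinearity,
# standing in for the frozen early layers of a source-task network.
W_frozen = 0.3 * rng.normal(size=(5, 50))

def features(X):
    return np.tanh(X @ W_frozen)

# Target task: keep the extractor frozen, train only a new linear head.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=50)
y = features(X) @ w_true          # synthetic target labels for illustration

Phi = features(X)                 # frozen features: no gradient through W_frozen
head, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ head
```

Since only the 50-parameter head is fit, far less target data is needed than for training the whole network, which is the practical appeal of pretraining-based transfer.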
