Tuning Neural Networks - Recap

Key Takeaways

The key takeaways from this section include:

Tuning Neural Networks

  • Validation and test sets are used when iteratively building deep neural networks: the validation set guides tuning decisions, while the test set gives a final, unbiased estimate of performance
  • As with traditional machine learning models, we need to watch out for the bias-variance trade-off when building deep learning models
  • Alternatives to plain gradient descent include RMSprop, Adam, and gradient descent with momentum
  • Hyperparameter tuning is of crucial importance when working with deep learning models, as choosing good hyperparameter values can lead to great improvements in model performance (see the sketch after this list)
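
As a concrete illustration, here is a minimal Keras sketch (not part of the original lesson) showing a validation split, an optimizer other than plain gradient descent, and a simple learning-rate sweep. The data shapes, layer sizes, and candidate learning rates are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1000 samples, 20 features, binary labels (assumed shapes)
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000,))

def build_model(learning_rate=0.001):
    """Build a small classifier; a fresh model per trial keeps runs comparable."""
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    # Adam is one alternative to plain gradient descent; others include
    # keras.optimizers.RMSprop() and keras.optimizers.SGD(momentum=0.9)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# validation_split holds out 20% of the data so we can monitor
# performance on unseen examples after each epoch
history = build_model().fit(X, y, epochs=5, batch_size=32,
                            validation_split=0.2, verbose=0)

# Hyperparameter tuning in its simplest form: sweep one setting and keep
# the value with the best validation score
for lr in [0.001, 0.01, 0.1]:
    h = build_model(lr).fit(X, y, epochs=5, batch_size=32,
                            validation_split=0.2, verbose=0)
    print(lr, h.history["val_accuracy"][-1])
```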

Regularization

  • Several regularization techniques can help us limit overfitting: L1 regularization, L2 regularization, dropout regularization, etc. (see the sketch after this list)
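
A minimal sketch, assuming a Keras workflow, of how an L2 penalty and dropout might be added to a network; the layer sizes, penalty strength, and dropout rate are illustrative choices, and regularizers.l1(...) works the same way for an L1 penalty.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    keras.Input(shape=(20,)),
    # The L2 penalty shrinks large weights toward zero during training
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    # Dropout randomly zeroes 30% of activations during training,
    # preventing the network from relying too heavily on any one unit
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```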

Normalization

  • Training deep neural networks can be sped up by using normalized inputs (see the sketch after this list)
  • Normalized inputs can also help mitigate the common problem of vanishing or exploding gradients
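
A minimal sketch, assuming scikit-learn is available, of standardizing inputs to zero mean and unit variance before training; the array shapes and value ranges are illustrative. Note that the scaler is fit on the training data only, so no information leaks from the held-out set.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(800, 20) * 100   # illustrative unscaled data
X_test = np.random.rand(200, 20) * 100

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std from training data
X_test_scaled = scaler.transform(X_test)        # reuse the same statistics
```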
