Note on Double Descent in Human Learning via chris-said.io
The way double descent is normally presented, increasing the number of model parameters can make performance worse before it gets better. But there is another, even more shocking phenomenon called data double descent, where increasing the number of training samples can make performance worse before it gets better. The two phenomena are essentially mirror images of each other: the spike in test error depends on the ratio of parameters to training samples, peaking near the interpolation threshold where the two are roughly equal, so you can hit the bad regime by growing either quantity toward the other.
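A minimal sketch, not from the note itself, of how data double descent can be reproduced with plain minimum-norm least squares: the feature count `p` is held fixed while the number of training samples grows, and test error typically spikes near `n_train ≈ p` before falling again. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 50                       # fixed number of parameters (features)
w_true = rng.normal(size=p)  # ground-truth weights
noise = 0.5

def test_error(n_train, n_test=2000):
    """Fit minimum-norm least squares on n_train samples; return test MSE."""
    X = rng.normal(size=(n_train, p))
    y = X @ w_true + noise * rng.normal(size=n_train)
    # np.linalg.lstsq returns the minimum-norm solution when n_train < p
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    X_test = rng.normal(size=(n_test, p))
    y_test = X_test @ w_true + noise * rng.normal(size=n_test)
    return np.mean((X_test @ w_hat - y_test) ** 2)

for n in [10, 25, 40, 50, 60, 100, 200]:
    errs = [test_error(n) for _ in range(20)]
    print(f"n_train={n:4d}  test MSE ~ {np.mean(errs):.2f}")

# Test error tends to spike near n_train == p (the interpolation threshold)
# and then falls again as n_train grows well past p.
```

The same script, with the roles of parameters and samples swapped (fixing the sample count and growing the feature count), traces out the more familiar parameter-wise double descent curve.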