Chinese-English edition of Andrew Ng's *Machine Learning Yearning* (in progress): [Chapters 1–57](Machine Learning Yearning(by Andrew NG).pdf)

Official website:

TODO
- The Chinese translation is still being compiled and has not yet been released.
- Chapter 1. Why Machine Learning Strategy
- Chapter 2. How to use this book to help your team
- Chapter 3. Prerequisites and Notation
- Chapter 4. Scale drives machine learning progress
- Chapter 5. Your development and test sets
- Chapter 6. Your dev and test sets should come from the same distribution
- Chapter 7. How large do the dev/test sets need to be?
- Chapter 8. Establish a single-number evaluation metric for your team to optimize
- Chapter 9. Optimizing and satisficing metrics
- Chapter 10. Having a dev set and metric speeds up iterations
- Chapter 11. When to change dev/test sets and metrics
- Chapter 12. Takeaways: Setting up development and test sets
- Chapter 13. Build your first system quickly, then iterate
- Chapter 14. Error analysis: Look at dev set examples to evaluate ideas
- Chapter 15. Evaluate multiple ideas in parallel during error analysis
- Chapter 16. Cleaning up mislabeled dev and test set examples
- Chapter 17. If you have a large dev set, split it into two subsets, only one of which you look at
- Chapter 18. How big should the Eyeball and Blackbox dev sets be?
- Chapter 19. Takeaways: Basic error analysis
- Chapter 20. Bias and Variance: The two big sources of error
- Chapter 21. Examples of Bias and Variance
- Chapter 22. Comparing to the optimal error rate
- Chapter 23. Addressing Bias and Variance
- Chapter 24. Bias vs. Variance tradeoff
- Chapter 25. Techniques for reducing avoidable bias
- Chapter 26. Techniques for reducing Variance
- Chapter 27. Error analysis on the training set
- Chapter 28. Diagnosing bias and variance: Learning curves
- Chapter 29. Plotting training error
- Chapter 30. Interpreting learning curves: High bias
- Chapter 31. Interpreting learning curves: Other cases
- Chapter 32. Plotting learning curves
- Chapter 33. Why we compare to human-level performance
- Chapter 34. How to define human-level performance
- Chapter 35. Surpassing human-level performance
- Chapter 36. Why train and test on different distributions
- Chapter 37. Whether to use all your data
- Chapter 38. Whether to include inconsistent data
- Chapter 39. Weighting data
- Chapter 40. Generalizing from the training set to the dev set
- Chapter 41. Identifying Bias, Variance, and Data Mismatch Errors
- Chapter 42. Addressing data mismatch
- Chapter 43. Artificial data synthesis
- Chapter 44. The Optimization Verification test
- Chapter 45. General form of Optimization Verification test
- Chapter 46. Reinforcement learning example
- Chapter 47. The rise of end-to-end learning
- Chapter 48. More end-to-end learning examples
- Chapter 49. Pros and cons of end-to-end learning
- Chapter 50. Choosing pipeline components: Data availability
- Chapter 51. Choosing pipeline components: Task simplicity
- Chapter 52. Directly learning rich outputs
- Chapter 53. Error Analysis by Parts
- Chapter 54. Beyond supervised learning: What’s next?
- Chapter 55. Building a superhero team - Get your teammates to read this
- Chapter 56. Big picture
- Chapter 57. Credits