qi
All about learning, relentlessly

Notes on Machine Learning 2: Decision trees
(ML 2.1) Classification trees (CART) CART (Classification And Regression Trees) by Breiman et al. (see: https://rafalab.github.io/pages/649/section11.pdf) is a conceptually very simple approach to classification and regression. It can be extremely powerful, especially when coupled with a randomization technique, and can give essentially the best performance. Main idea: form a binary tree (by binary splits), and...
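The "binary splits" idea from the excerpt can be sketched in a few lines. Below is a minimal, hypothetical illustration (helper names `gini` and `best_split` are mine, not from the notes) of how a single CART-style split is chosen on a one-dimensional feature by minimizing the weighted Gini impurity of the two child nodes:

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return (threshold, impurity) of the best binary split 'x <= t' on feature xs."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        # Weighted average impurity of the two children.
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: two well-separated classes.
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # the split at x <= 3.0 separates the classes perfectly
```

A full tree just applies this split search recursively to each child node until a stopping criterion (depth, node size, purity) is met.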

Notes on Deep Learning (Book)
On this page I summarize in a succinct and straightforward fashion what I learn from the book Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville, along with my own thoughts and related resources. I will update this page frequently, roughly every week, until it’s complete. Acronyms DL: Deep...

Notes on Convex Optimization
On this page I summarize in a succinct and straightforward fashion what I learn from the Convex Optimization course by Stephen Boyd, along with my own thoughts and related resources. I will update this page frequently, roughly every week, until it’s complete. Acronyms LA: Linear Algebra Preceding materials to be added...

Notes on Probability Primer 1: Measure theory
(PP 1.1) Measure theory: Why measure theory / The Banach–Tarski Paradox Why measure theory? A bit more detailed explanation of the Banach–Tarski paradox here: The Banach–Tarski Paradox. (PP 1.2) Measure theory: $\sigma$-algebras Definition. Given a set $\Omega$, a $\sigma$-algebra on $\Omega$ is a nonempty collection $\mathcal{A} \subset 2^{\Omega}$ s.t. closed...
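The definition in the excerpt is cut off at "closed...". For reference, the standard textbook statement (this completion is mine, not taken from the excerpt) is:

```latex
% Standard definition of a $\sigma$-algebra (the excerpt above is truncated;
% this is the usual full statement).
Given a set $\Omega$, a $\sigma$-algebra on $\Omega$ is a nonempty collection
$\mathcal{A} \subset 2^{\Omega}$ such that $\mathcal{A}$ is
\begin{enumerate}
  \item closed under complements: $A \in \mathcal{A} \implies A^{c} \in \mathcal{A}$;
  \item closed under countable unions:
        $A_1, A_2, \ldots \in \mathcal{A} \implies \bigcup_{i=1}^{\infty} A_i \in \mathcal{A}$.
\end{enumerate}
```

Nonemptiness together with these two closure properties implies $\Omega \in \mathcal{A}$, $\emptyset \in \mathcal{A}$, and closure under countable intersections.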

Notes on Probability Primer (master page)
This is the master page for the Notes on Probability Primer posts, in which I summarize in a succinct and straightforward fashion what I learn from the Probability Primer course by Mathematical Monk, along with my own thoughts and related resources. Acronyms RV: random variable Notes on Probability Primer 1: Measure theory...