• Notes on Information Theory (master page)

    This is the master page for Notes on Machine Learning posts, in which I summarize in a succinct and straightforward fashion what I learn from the Information Theory course by Mathematical Monk, along with my own thoughts and related resources. Notes on Machine Learning 1: Information theory and Coding To be...


  • Notes on Machine Learning 4: Maximum Likelihood Estimation

    (ML 4.1) (ML 4.2) Maximum Likelihood Estimation (MLE) (part 1, 2) Setup. Given data $D = (x_1, \ldots, x_n)$ where $x_i \in \mathbb{R}^d$. Assume a family of distributions $\{p_\theta : \theta \in \Theta\}$ on $\mathbb{R}^d$, with $p_\theta(x) = p(x \vert \theta)$. Assume $D$ is a sample from $X_1, \ldots,...
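    The MLE setup above can be sketched concretely for the univariate Gaussian family $\{\mathcal{N}(\mu, \sigma^2)\}$, where the maximizer of the likelihood has a closed form (sample mean and biased sample variance). This is an illustrative sketch, not code from the course; the function name is made up.

```python
# Illustrative MLE sketch for a Gaussian family, assuming i.i.d. data
# D = (x_1, ..., x_n). The closed-form MLE is:
#   mu_hat    = (1/n) * sum(x_i)
#   sigma2_hat = (1/n) * sum((x_i - mu_hat)^2)   # divides by n, not n-1
def gaussian_mle(data):
    """Return (mu_hat, sigma2_hat), the MLE under the Gaussian family."""
    n = len(data)
    mu_hat = sum(data) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n
    return mu_hat, sigma2_hat

mu_hat, sigma2_hat = gaussian_mle([1.0, 2.0, 3.0, 4.0])
```

    For the data above, $\hat{\mu} = 2.5$ and $\hat{\sigma}^2 = 1.25$ (note the division by $n$, which makes the variance estimate biased).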


  • Notes on Probability Primer 2: Conditional probability & independence

    (PP 2.1) Conditional Probability Conditional probability and independence are critical topics in applications of probability. Notation “Suppress” $(\Omega, \mathcal{A})$. Whenever we write $P(E)$, we are implicitly assuming some underlying probability measure space $(\Omega, \mathcal{A}, P)$. Terminology event = measurable set = set in $\mathcal{A}$ sample space = $\Omega$ Definition Assuming $P(B)...
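    The definition $P(A \mid B) = P(A \cap B)/P(B)$ can be checked on a tiny finite sample space. Below is a sketch on a fair six-sided die, assuming events are subsets of $\Omega = \{1, \ldots, 6\}$; the names and the example events are mine, not from the notes.

```python
from fractions import Fraction

# Finite sample space for a fair die: each outcome has probability 1/6.
OMEGA = frozenset(range(1, 7))

def prob(event):
    """P(E) = |E| / |Omega| under the uniform measure."""
    return Fraction(len(event & OMEGA), len(OMEGA))

def cond_prob(A, B):
    """P(A | B) = P(A & B) / P(B), defined only when P(B) > 0."""
    return prob(A & B) / prob(B)

A = {2, 4, 6}   # "even outcome"
B = {4, 5, 6}   # "outcome >= 4"
# P(A | B) = P({4, 6}) / P({4, 5, 6}) = (2/6) / (3/6) = 2/3
```

    Here $P(A \cap B) = 1/3$ while $P(A)P(B) = 1/4$, so $A$ and $B$ are not independent.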


  • Notes on Machine Learning 3: Decision theory

    (ML 3.1) Decision theory (Basic Framework) Idea. “Minimize expected loss” Example. Spam (classification): $x, y, \hat{y}$ Loss function $L(y, \hat{y}) \in \mathbb{R}$ Loss can be thought of as reward or utility depending on the sign of the value. General framework: State $s$ (unknown) Observation (known), e.g., $x$ Action $a$ Loss...
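    The “minimize expected loss” idea can be sketched for the spam example: given a posterior $p(y \mid x)$ over the true label, pick the action $\hat{y}$ minimizing $\mathbb{E}[L(y, \hat{y})]$. The loss values and posterior below are made-up numbers for illustration, assuming an asymmetric loss where filtering real mail is worse than missing spam.

```python
# L(y, y_hat): true label vs. action. Assumed, asymmetric values:
# missing spam costs 1, filtering a real message costs 10.
LOSS = {
    ("spam", "spam"): 0.0, ("spam", "ham"): 1.0,
    ("ham", "spam"): 10.0, ("ham", "ham"): 0.0,
}

def bayes_action(posterior):
    """Return the action minimizing expected loss under p(y | x)."""
    actions = {a for (_, a) in LOSS}
    def expected_loss(a):
        return sum(p * LOSS[(y, a)] for y, p in posterior.items())
    return min(actions, key=expected_loss)

# With p(spam | x) = 0.7:
#   E[L | predict spam] = 0.3 * 10 = 3.0
#   E[L | predict ham]  = 0.7 * 1  = 0.7
choice = bayes_action({"spam": 0.7, "ham": 0.3})  # -> "ham"
```

    Even though spam is more likely, the asymmetric loss makes “ham” the optimal action; only a much more confident posterior would flip the decision.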


  • Notes on Deep Reinforcement Learning

    In this page I summarize in a succinct and straightforward fashion what I learn from the Deep Reinforcement Learning course by Sergey Levine, along with my own thoughts and related resources. I will update this page frequently, roughly every week, until it’s complete. Acronyms RL: Reinforcement Learning DRL: Deep Reinforcement Learning...