qi
All about learning, relentlessly

Notes on Machine Learning 9: Linear regression
(ML 9.1) Linear regression. Nonlinearity via basis functions. “It’s truly a workhorse of statistics!” “It’s not just about lines & planes!” Setup. Given $D = ((x_1, y_1), \ldots, (x_n, y_n))$ with $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$. Goal. Select a “good” $f : \mathbb{R}^d \rightarrow \mathbb{R}$ for predicting $y$...
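The "nonlinearity via basis functions" idea can be sketched as follows: the model stays linear in the weights even though the fitted curve is nonlinear in $x$. This is a minimal illustration with a made-up quadratic target and a hypothetical polynomial basis $\phi(x) = (1, x, x^2)$, not the post's own example.

```python
import numpy as np

# Minimal sketch: nonlinear regression via a polynomial basis.
# The model f(x) = w0 + w1*x + w2*x^2 is nonlinear in x but linear
# in the weights w, so ordinary least squares still applies.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.1 * rng.standard_normal(50)  # toy data

Phi = np.column_stack([np.ones_like(x), x, x**2])  # design matrix of basis functions
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # least-squares weights

def f(x_new):
    """Predict y at new inputs using the fitted basis expansion."""
    return w[0] + w[1] * x_new + w[2] * x_new**2
```

The same least-squares machinery works for any choice of basis (splines, radial basis functions, etc.); only the columns of `Phi` change.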

Notes on Probability Primer 5: Multiple random variables
(PP 5.1) Multiple discrete random variables Definition. Given $(\Omega, \mathscr{A}, P)$, a random vector is a measurable function $X : \Omega \rightarrow \mathbb{R}^d$, where $d \in \mathbb{N}$. Definition. A discrete random vector $X \in \mathbb{R}^d$ is s.t. $X(\Omega)$ is countable. Definition. The (joint) PMF (or joint distribution) of a discrete random vector $X \in \mathbb{R}^d$...
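The joint PMF of a discrete random vector can be made concrete with a tiny finite example. This is a hypothetical sketch, assuming a made-up sample space of equally likely outcomes for a pair $(X, Y)$; it also shows how a marginal PMF falls out of the joint by summing.

```python
from collections import Counter

# Sketch: joint PMF p(x, y) = P(X = x, Y = y) for a discrete random
# vector (X, Y), computed exactly on a toy, equally-likely sample space.
omega = [(0, 0), (0, 1), (1, 0), (1, 0), (1, 1), (1, 1)]  # made-up outcomes
n = len(omega)
joint = {xy: c / n for xy, c in Counter(omega).items()}

# Marginal PMF of X: sum the joint over all values of y.
p_x = {}
for (x_val, _), p in joint.items():
    p_x[x_val] = p_x.get(x_val, 0.0) + p
```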

Notes on Probability Primer 4: Expectations, etc.
(PP 4.1) Expectation for discrete random variables (“Average value”) Let $\mathscr{X} = X(\Omega)$. Definition. The expectation of a random variable $X$ with PMF $p$ is $E(X) = \sum_{x \in \mathscr{X}} x \, p(x)$ when this sum is ‘well-defined.’ Otherwise, the expectation does not exist. “Well-defined means well-defined as a sum of an infinite series.” Definition. Let $a_1, a_2, \ldots...
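The defining sum $E(X) = \sum_{x \in \mathscr{X}} x\, p(x)$ is easy to compute directly when the support is finite (so well-definedness is automatic). A minimal sketch with a made-up PMF:

```python
# Sketch: E[X] = sum over the support of x * p(x).
# The support here is finite, so the sum is trivially well-defined.
pmf = {1: 0.5, 2: 0.25, 3: 0.25}  # toy PMF on support {1, 2, 3}

expectation = sum(x * p for x, p in pmf.items())
# 1*0.5 + 2*0.25 + 3*0.25 = 1.75
```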

Notes on Machine Learning 8: Naive Bayes
(ML 8.1) Naive Bayes classification Naive Bayes is a family of models that is not necessarily a “Bayesian” method! Setup: Given data $D = ((x^{(1)}, y_1), \ldots, (x^{(n)}, y_n))$ with $x^{(i)} = (x_1^{(i)}, \ldots, x_d^{(i)}) \in \mathbb{R}^d$ and $y_i \in \mathcal{Y} = \{1, \ldots, m\}$. Assume a family of joint...
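The "naive" assumption is that the class-conditional density factorizes coordinate-wise: $p(x \mid y) = \prod_{j=1}^{d} p(x_j \mid y)$. A minimal sketch of one member of this model family, assuming per-coordinate Gaussian class conditionals and made-up toy data (the post itself does not fix a specific conditional family here):

```python
import math

# Sketch of Gaussian Naive Bayes: fit a 1-D Gaussian to each coordinate
# per class, then classify by the largest log posterior score
# log p(y) + sum_j log p(x_j | y).
def fit(X, y):
    stats = {}
    for c in set(y):
        Xc = [x for x, yi in zip(X, y) if yi == c]
        prior = len(Xc) / len(X)
        means = [sum(col) / len(col) for col in zip(*Xc)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9  # variance floor
                 for col, m in zip(zip(*Xc), means)]
        stats[c] = (prior, means, vars_)
    return stats

def predict(stats, x):
    def log_score(c):
        prior, means, vars_ = stats[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
                 for xj, m, v in zip(x, means, vars_))
        return math.log(prior) + ll
    return max(stats, key=log_score)

# Toy, made-up data: class 1 near the origin, class 2 near (5, 5).
X = [(0.0, 0.0), (0.1, -0.1), (5.0, 5.0), (4.9, 5.1)]
y = [1, 1, 2, 2]
model = fit(X, y)
```

Because of the factorization, fitting is just $d$ independent 1-D estimates per class, which is what makes Naive Bayes cheap even in high dimensions.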

RLPG discussions #1 (DQN)
These notes are being written through discussions among the RLPG group members: 권휘 김경환(부랩짱, 윈짱) 김민지(맥짱) 류주영 박창규 백병인 이규복 이승재 전효정 조동헌(우짱) 이일구(코짱) 정재윤 최윤규 Bellman equation Given an arbitrary policy $\pi$ in a reinforcement learning problem stated as an MDP, the Bellman equation relates the corresponding value functions $v_\pi: \mathscr{S} \rightarrow \mathbb{R}$ and $q_\pi:...
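The Bellman equation for $v_\pi$ can be solved numerically by fixed-point iteration (iterative policy evaluation). A minimal sketch on a made-up 2-state, 2-action MDP with a uniform random policy; all transition probabilities and rewards below are invented for illustration:

```python
# Sketch: iterative policy evaluation of the Bellman equation
#   v_pi(s) = sum_a pi(a|s) sum_{s'} P(s'|s,a) [r + gamma * v_pi(s')]
# on a toy 2-state MDP. Since gamma < 1, the Bellman operator is a
# contraction, so repeated application converges to the fixed point v_pi.
gamma = 0.9
# P[s][a] = list of (probability, next_state, reward) triples (made up)
P = {
    0: {0: [(1.0, 1, 1.0)], 1: [(1.0, 0, 0.0)]},
    1: {0: [(1.0, 0, 2.0)], 1: [(1.0, 1, 0.0)]},
}
pi = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}  # uniform random policy

v = {s: 0.0 for s in P}
for _ in range(1000):  # synchronous sweeps until (numerical) convergence
    v = {s: sum(pi[s][a] * sum(p * (r + gamma * v[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}
```

The same structure, with a `max` over actions in place of the policy average, gives the Bellman optimality backup that DQN approximates with a neural network.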