KL Regularization
Reference: [[Regularization, Normalization, Standardization, Generalization]]

This can be understood as the Regularization Error of a [[VAE]]. The per-sample loss is

$$ L_i(\phi, \theta, x_i) = -\mathbb{E}_{q_\phi(z|x_i)}\left[\log p(x_i \mid g_\theta(z))\right] + \mathrm{KL}\left(q_\phi(z|x_i) \,\|\, p(z)\right) $$

The second term of this loss is the Regularization Error: it regularizes the approximate posterior toward a single sampling distribution, the prior $p(z)$, so as to reduce high variance as much as possible.
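As a concrete sketch (not from the original note): when $q_\phi(z|x_i)$ is a diagonal Gaussian $\mathcal{N}(\mu, \sigma^2)$ and the prior $p(z)$ is a standard normal, the KL term has the well-known closed form $-\frac{1}{2}\sum\left(1 + \log\sigma^2 - \mu^2 - \sigma^2\right)$. A minimal PyTorch illustration, where the function name and the `mu`/`logvar` encoder outputs are hypothetical placeholders:

```python
import torch

def kl_regularization(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Closed-form KL(q_phi(z|x) || p(z)) for a diagonal Gaussian posterior
    q = N(mu, exp(logvar)) against a standard normal prior p(z) = N(0, I)."""
    # KL = -1/2 * sum(1 + log(sigma^2) - mu^2 - sigma^2), summed over latent dims
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)

# Hypothetical encoder outputs: batch of 4 samples, 8 latent dimensions
mu = torch.randn(4, 8)
logvar = torch.randn(4, 8)

kl = kl_regularization(mu, logvar)  # shape: (4,), one KL value per sample
# Per-sample VAE loss = reconstruction error + this KL regularization term
```

Minimizing this term pulls every $q_\phi(z|x_i)$ toward the single prior $p(z)$, which is the variance-reducing effect the note describes.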