통계 (Statistics) (19)

Monotone Mapping

A mapping \(\phi\) is monotone if, for all \(x, y \in \operatorname{dom} \phi\),

$$(x - y)^T \left( \phi(x) - \phi(y) \right) \geq 0$$

A small numerical check of this definition is sketched after the listing.

References:
Boyd, S. P., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
Khanh, P., Luong, H. C., Mordukhovich, B., & Tran, D. (2024). Fundamental convergence analysis of sharpness-aware minimization. Advances in Neural Information Processing Systems, 37, 13149-13182.

Measure theoretic description of change-of-variables

1. Settings

Two measurable spaces:

$$(X, \mathcal{A}, \mu) \quad \text{and} \quad (Y, \mathcal{B}, \mu_T)$$

Consider a transform \(T\) such that

$$T : X \rightarrow Y$$

where \(T\) is a measurable map, i.e., \(T^{-1}(B) \in \mathcal{A}\) for all \(B \in \mathcal{B}\). \(T\) induces a pushforward measure \(\mu_T\):

$$\mu_T(B) = \mu(T^{-1}(B))$$

A Monte Carlo check of this identity is sketched after the listing.

2. Change-of-variables

Let \(\nu\) and \(\mu_T\) be measures defined on \(Y\) …

Measure theoretic description of KL divergence

In this post, I consider the KL divergence from a measure-theoretic viewpoint. Suppose \(P\) and \(Q\) are probability measures on a measurable space \((\Omega, \mathcal{F})\), and \(P \ll Q\). That is, \(P\) is absolutely continuous with respect to \(Q\) (or, equivalently, \(Q\) dominates \(P\)). The KL divergence is defined as below:

$$\mathcal{D}_{KL}(P \,\|\, Q) := E_P \left[ \log \frac{dP}{dQ} \right]$$

… A Monte Carlo sketch of this definition, checked against a closed form, follows the listing.

limsup <=> inf{sup}

The definition I already knew:

$$\limsup_n x_n := \lim_{n \rightarrow \infty} \sup_{k \geq n} x_k$$

A definition I came across recently:

$$\limsup_n x_n := \inf_m \left\{ \sup_{n \geq m} x_n \right\}$$

Proof that the two are equivalent: let \(X_m = \sup_{n \geq m} x_n\). Since \(X_{m+1} \leq X_m\), \((X_m)\) is a non-increasing sequence (each term either decreases or stays the same), and a monotone non-increasing sequence converges to its infimum. Therefore

$$\lim_{m \rightarrow \infty} X_m = \inf_m X_m$$

$$\Rightarrow \lim_{m \rightarrow \infty} \sup_{n \geq m} x_n = \inf_m \left\{ \sup_{n \geq m} x_n \right\}$$

… A numerical illustration of the tail suprema follows the listing.

Series Convergence Tests

1. Partial Sums

$$\left\{ S_n = \sum_{i=1}^{n} a_i \right\}_{n=1}^{\infty} \text{ converges} \;\rightarrow\; \sum_{i=1}^{\infty} a_i \text{ converges}$$

$$\left\{ S_n = \sum_{i=1}^{n} a_i \right\}_{n=1}^{\infty} \text{ diverges} \;\rightarrow\; \sum_{i=1}^{\infty} a_i \text{ diverges}$$

2. Cauchy Criterion

$$\forall \epsilon > 0, \; \exists N \quad s.t. \quad n > m > N \;\rightarrow\; \left| \sum_{i=m+1}^{n} a_i \right| < \epsilon$$

… A numerical illustration of the Cauchy criterion follows the listing.

Sequence Convergence Tests

1. Direct Method (epsilon-N)

$$\forall \epsilon > 0, \; \exists N \in \mathbb{Z}^+ \quad s.t. \quad n > N \;\rightarrow\; |a_n - c| < \epsilon$$

2. Box Method

(i) Bounded below (above) and (ii) monotone decreasing (increasing) \(\rightarrow\) \(a_n\) converges.

3. Cauchy Criterion

$$\forall \epsilon > 0, \; \exists N \in \mathbb{Z}^+ \quad s.t. \quad m, n > N \;\rightarrow\; |a_m - a_n| < \epsilon$$

LASSO, Ridge regression

LASSO

$$\min_\beta \; (y - X\beta)^T (y - X\beta) \quad \text{subject to} \quad \|\beta\|_1 \leq t$$

The \(\ell_1\) constraint allows individual \(\beta_i\) to become exactly 0; by zeroing them out, it seems to perform variable selection as well.

Ridge regression

$$\min_\beta \; (y - X\beta)^T (y - X\beta) \quad \text{subject to} \quad \|\beta\|_2 \leq c$$

Lagrangian

Naturally, in both cases the constraint can be folded into the objective with a Lagrangian, giving the penalized forms with \(\lambda \|\beta\|_1\) and \(\lambda \|\beta\|_2^2\). A scikit-learn sketch of the two penalties follows the listing.

References:
https://en.wikipedia.org/wiki/Lasso_(statistics)
https://en.wikipedia.org/…

2d symmetric KL-divergence Implementation

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D   # for the 3D surface plots further down in the post

def normal2d(mu: np.ndarray, sigma):
    # pdf of an isotropic 2D Gaussian N(mu, sigma^2 I); mu is a (2, 1) column vector
    def normal2d_(x, y):
        v = np.array([x, y]).reshape((2, 1))
        I = np.eye(2)
        V = sigma**2 * I                              # covariance matrix
        V_inv = np.linalg.inv(V)
        mul = np.linalg.det(2 * np.pi * V) ** (-0.5)  # normalizing constant
        px = -0.5 * (v - mu).T @ V_inv @ (v - mu)     # quadratic form in the exponent
        return (mul * np.exp(px)).item()
    return normal2d_

def calcul…

A hedged sketch of how such a computation can be completed follows the listing.
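Not from the original posts: a minimal numerical sanity check of the Monotone Mapping definition above. It uses the standard fact that \(\phi(x) = Ax\) with \(A\) positive semidefinite is monotone; the helper name is_monotone_at is my own.

import numpy as np

rng = np.random.default_rng(0)

def is_monotone_at(phi, x, y):
    # one instance of the defining inequality: (x - y)^T (phi(x) - phi(y)) >= 0
    return (x - y) @ (phi(x) - phi(y)) >= -1e-12   # small tolerance for floating point

# phi(x) = A x with A = M^T M (positive semidefinite) is a monotone mapping,
# since (x - y)^T A (x - y) = ||M (x - y)||^2 >= 0
M = rng.standard_normal((3, 3))
A = M.T @ M
phi = lambda x: A @ x

for _ in range(1000):
    x, y = rng.standard_normal((2, 3))
    assert is_monotone_at(phi, x, y)
print("monotonicity held at 1000 random pairs")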
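For the change-of-variables entry: the pushforward identity \(\mu_T(B) = \mu(T^{-1}(B))\) can be sanity-checked by Monte Carlo. This is a sketch of my own, assuming \(\mu\) is the standard normal on \(X = \mathbb{R}\), \(T(x) = x^2\), and \(B = [0, 1]\), so that \(T^{-1}(B) = [-1, 1]\).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)      # draws from mu = N(0, 1) on X

Tx = x**2                               # push the samples through T(x) = x^2

# left side: mu_T(B) for B = [0, 1], estimated from the pushed-forward samples
lhs = np.mean((Tx >= 0.0) & (Tx <= 1.0))

# right side: mu(T^{-1}(B)) = mu([-1, 1]), computed exactly from the normal CDF
rhs = stats.norm.cdf(1.0) - stats.norm.cdf(-1.0)

print(lhs, rhs)                         # both approximately 0.6827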
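For the KL divergence entry: when both measures have Lebesgue densities \(p\) and \(q\) and \(P \ll Q\), the Radon-Nikodym derivative \(dP/dQ\) reduces to the density ratio \(p/q\), so \(E_P[\log dP/dQ]\) can be estimated by sampling from \(P\). A sketch with two univariate Gaussians, checked against the well-known closed form; the variable names are mine.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# P = N(0, 1), Q = N(1, 2^2); both dominated by Lebesgue measure, and P << Q
m1, s1, m2, s2 = 0.0, 1.0, 1.0, 2.0
x = rng.normal(m1, s1, size=1_000_000)   # samples from P

# E_P[log dP/dQ] with dP/dQ = p(x) / q(x)
log_ratio = stats.norm.logpdf(x, m1, s1) - stats.norm.logpdf(x, m2, s2)
kl_mc = log_ratio.mean()

# closed form for KL between two univariate Gaussians
kl_exact = np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

print(kl_mc, kl_exact)                   # both approximately 0.443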
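For the limsup entry: a small numerical illustration (mine, not the author's) of the tail suprema \(X_m = \sup_{n \geq m} x_n\) being non-increasing, using \(x_n = (-1)^n (1 + 1/n)\), whose limsup is 1. The tails are truncated at a finite N, so this is only suggestive.

import numpy as np

N = 10_000
n = np.arange(1, N + 1)
x = (-1.0)**n * (1.0 + 1.0 / n)     # limsup = 1 (even n), liminf = -1 (odd n)

# X_m = sup_{n >= m} x_n, truncated at N and sampled every 1000 indices
tail_sups = np.array([x[m:].max() for m in range(0, N, 1000)])

print(tail_sups)        # non-increasing: 1.5, 1.000998, 1.0005, ...
print(tail_sups.min())  # inf_m X_m over the sampled m, approaching limsup = 1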
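For the series-convergence entry: a quick illustration (my own) of the Cauchy criterion. For the convergent series \(\sum 1/i^2\) the blocks \(\left|\sum_{i=m+1}^{n} a_i\right|\) shrink as \(m\) grows, while for the divergent harmonic series the blocks with \(n = 2m\) stay near \(\log 2\) no matter how large \(m\) is.

import numpy as np

def block(a, m, n):
    # |sum_{i=m+1}^{n} a_i| for a term function a(i)
    i = np.arange(m + 1, n + 1)
    return abs(a(i).sum())

# convergent: a_i = 1/i^2, blocks can be made arbitrarily small
print(block(lambda i: 1.0 / i**2, 10, 20))             # ~0.046
print(block(lambda i: 1.0 / i**2, 1000, 2000))         # ~0.0005

# divergent: harmonic series, blocks with n = 2m do not shrink
print(block(lambda i: 1.0 / i, 1000, 2000))            # ~0.693
print(block(lambda i: 1.0 / i, 1_000_000, 2_000_000))  # still ~0.693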
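For the LASSO/Ridge entry: the Lagrangian (penalized) forms are what scikit-learn's Lasso and Ridge estimators solve, with alpha playing the role of the multiplier. A sketch on synthetic data, assuming scikit-learn is installed, showing the \(\ell_1\) penalty producing exact zeros (variable selection) while the \(\ell_2\) penalty only shrinks.

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)

# y depends only on the first 3 of 10 features
X = rng.standard_normal((200, 10))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ beta_true + 0.1 * rng.standard_normal(200)

# penalized (Lagrangian) forms: alpha weights the l1 / l2 penalty
lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

print(np.round(lasso.coef_, 3))  # irrelevant coefficients are exactly 0
print(np.round(ridge.coef_, 3))  # shrunk toward 0, but not exactly 0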
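Finally, for the truncated 2d symmetric KL-divergence excerpt: the original implementation is cut off, so this is a hedged sketch of my own rather than the author's code. It approximates \(\mathcal{D}_{KL}(P \| Q) + \mathcal{D}_{KL}(Q \| P)\) for two isotropic 2D Gaussians by a Riemann sum of \((p - q)\log(p/q)\) on a grid, and compares against the closed form \(\|\mu_1 - \mu_2\|^2 / \sigma^2\) for equal covariances.

import numpy as np

def normal2d_pdf(mu, sigma, X, Y):
    # isotropic 2D Gaussian density N(mu, sigma^2 I), evaluated on a meshgrid
    d2 = (X - mu[0])**2 + (Y - mu[1])**2
    return np.exp(-0.5 * d2 / sigma**2) / (2.0 * np.pi * sigma**2)

# grid wide enough that both densities are negligible at the boundary
xs = np.linspace(-8.0, 8.0, 801)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)

p = normal2d_pdf(np.array([0.0, 0.0]), 1.0, X, Y)
q = normal2d_pdf(np.array([2.0, 0.0]), 1.0, X, Y)

# symmetric KL = KL(P||Q) + KL(Q||P) = integral of (p - q) * log(p / q)
sym_kl = np.sum((p - q) * (np.log(p) - np.log(q))) * dx * dx
print(sym_kl)   # ~4.0; closed form ||mu1 - mu2||^2 / sigma^2 = 4 here

A vectorized density is used in place of looping the pointwise normal2d from the excerpt over the grid; the quadratic form is the same, just evaluated elementwise for speed.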