DP-SGD Technique

'''Differential Privacy – Stochastic Gradient Descent (DP-SGD)'''
* SGD is the representative method for training AI models: the input data is split into small subsets (mini-batches) and training proceeds batch by batch.
** Here, training an AI model is defined as the process in which the numerical values inside the model (its node weights, etc.) are optimized as new data is fed in.
* DP-SGD carries out the same training as SGD, but with a differential-privacy mechanism applied.
** For each mini-batch, each example's gradient is clipped to a fixed norm and calibrated noise is added before the parameter update, so that no single example dominates the update (see the sketch below).
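The sketch below shows what one such update looks like in plain NumPy, assuming a simple logistic-regression model on synthetic data. The function and parameter names (dp_sgd_step, clip_norm, noise_multiplier) are illustrative assumptions, not definitions from this article; production systems would normally use a dedicated DP library rather than hand-rolled code.

<syntaxhighlight lang="python">
import numpy as np

# Minimal DP-SGD sketch (illustrative only): per-example gradient clipping
# plus Gaussian noise, for logistic regression on synthetic data.
rng = np.random.default_rng(0)

def per_example_grads(w, X, y):
    """Logistic-loss gradient for each example separately (shape: n x d)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (p - y)[:, None] * X          # one gradient row per example

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    g = per_example_grads(w, X_batch, y_batch)
    # 1) Clip each per-example gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # 2) Sum the clipped gradients and add Gaussian noise scaled to the clip norm
    noisy_sum = g.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    # 3) Average over the mini-batch and take a gradient step
    return w - lr * noisy_sum / len(X_batch)

# Toy data and training loop (illustrative assumptions throughout)
X = rng.normal(size=(256, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
w = np.zeros(5)
for epoch in range(20):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), 32):    # mini-batches of 32
        b = idx[start:start + 32]
        w = dp_sgd_step(w, X[b], y[b])
print("learned weights:", w)
</syntaxhighlight>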