All Posts
27 articles published
#linux (8) · #cli (8) · #transformer (5) · #NLP (5) · #markdown (3) · #prompt-optimization (3) · #attention (3) · #nextjs (2) · #blog (2) · #convention (2) · #RAG (2) · #git (2) · #writing (2) · #DSPy (2) · #self-attention (2) · #google-search-console (1) · #seo (1) · #github-pages (1) · #sitemap (1) · #claude-code (1) · #skill (1) · #opensource (1) · #pptx (1) · #automation (1) · #giscus (1) · #github-discussions (1) · #comments (1) · #python (1) · #ruff (1) · #linter (1) · #pre-commit (1) · #benchmark (1) · #evaluation (1) · #retrieval (1) · #sparse (1) · #SIGIR-2024 (1) · #text-processing (1) · #ssh (1) · #rsync (1) · #network (1) · #permissions (1) · #monitoring (1) · #logging (1) · #workflow (1) · #github (1) · #GFM (1) · #MDX (1) · #survey (1) · #GEPA (1) · #evolutionary-algorithm (1) · #meta-learning (1) · #NeurIPS-2025 (1) · #FFN (1) · #feed-forward (1) · #GELU (1) · #SwiGLU (1) · #MoE (1) · #layer-norm (1) · #residual-connection (1) · #pre-ln (1) · #post-ln (1) · #deep-learning (1) · #positional-encoding (1) · #sin-cos (1) · #embeddings (1) · #cross-attention (1) · #multi-head (1) · #encoder (1) · #decoder (1) · #Q/K/V (1)
Study
2026-02-05 14:30
6 min read
Conquering Markdown (1): Basic Syntax in One Go
Part 1 of the Markdown series. A one-pass summary of the basic syntax that works in any environment: headings, emphasis, lists, links, and blockquotes.
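As a quick taste of what this post covers, here is a minimal sketch of the basic syntax named in the summary; the link URL is a placeholder, and exact rendering can vary slightly by Markdown flavor:

```markdown
# Heading level 1
## Heading level 2

**bold**, *italic*, and `inline code`

- unordered list item
1. ordered list item

[link text](https://example.com)

> blockquote
```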
Study
2024-07-01 09:00
9 min read
[NLP] A Closer Look at the Transformer's Three Attention Mechanisms (Encoder/Decoder Self-Attention, Cross-Attention, Multi-Head)
A walkthrough of how each of the Transformer's three attention mechanisms from the previous post (encoder self-attention, decoder masked self-attention, encoder-decoder attention) operates, and why Multi-Head Attention is needed.
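For reference, all three attention variants named in this summary are instances of the same scaled dot-product attention from Vaswani et al. (2017), differing only in where the queries $Q$, keys $K$, and values $V$ come from:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\,W^{O}, \qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q},\, KW_i^{K},\, VW_i^{V})$$

In encoder self-attention, $Q$, $K$, $V$ all come from the encoder input; in decoder masked self-attention, they come from the decoder input with future positions masked out; in encoder-decoder (cross-) attention, $Q$ comes from the decoder and $K$, $V$ from the encoder output.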