Hello Potato World

[ํฌํ…Œ์ดํ†  ์Šคํ„ฐ๋””] LRP: Layer-wise Relevance Propagation ๋ณธ๋ฌธ

Study🥔/XAI

[ํฌํ…Œ์ดํ†  ์Šคํ„ฐ๋””] LRP: Layer-wise Relevance Propagation

Heosuab 2021. 8. 2. 01:40

 


[XAI study_ Interpretable Machine Learning]

20210802 XAI study presentation material (see References for the source blogs)

 

 


  LRP: Layer-wise Relevance Propagation


 

Explains the prediction by computing a relevance score for each of the d input dimensions.

LRP: a method that makes a neural network's output understandable through Explanation by Decomposition.

If the input x consists of d dimensions, LRP assumes that each of the d features exerts a different influence on the final output, and analyzes the prediction by computing each feature's contribution (its relevance score).

 

 

  • $x$ : sample image
  • $f(x)$ : the prediction for image x, "Rooster"
  • $R_i$ : how much each pixel of image x contributes to the prediction $f(x)$ (the relevance score of each dimension)
  • LRP output heatmap : the relevance score of each pixel of image x, shown as colors
    => we can see that the model looked at the upper-right region (the rooster's beak and head) to predict "Rooster" for x

 


  Intuition & Mathematically


2-1. Intuition

LRP: Layer-wise Relevance Propagation (top-down)
  : computes relevance scores from the output layer toward the input layer, redistributing their shares along the way

 

  • Every neuron carries its own contribution (a certain relevance)
  • Relevance is redistributed in a top-down manner
  • Relevance is conserved during redistribution
    (ex) If the predicted probability of "Rooster" was 0.9, then after redistributing relevance scores over the neurons, the relevance scores within each layer must still sum to 0.9.
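The conservation property can be sketched numerically. Below is a minimal toy example (the weights, input, and relevance values are made up for illustration, not taken from the post) where one layer's relevance is redistributed to its inputs in proportion to their contributions, and the total stays at 0.9:

```python
import numpy as np

# Minimal sketch of the conservation property (toy values): a neuron's
# relevance is split among its inputs in proportion to their contributions
# z_ij = x_i * w_ij, so the total relevance is preserved layer to layer.
def redistribute(x, W, R_out, eps=1e-9):
    z = x[:, None] * W             # contribution of input i to output neuron j
    s = z.sum(axis=0) + eps        # total contribution per output neuron
    return (z / s) @ R_out         # relevance flowing back to each input

x = np.array([1.0, 2.0, 0.5])
W = np.array([[ 0.5, -0.2,  0.1,  0.3],
              [ 0.1,  0.4, -0.3,  0.2],
              [-0.2,  0.3,  0.5, -0.1]])
R_out = np.array([0.4, 0.3, 0.1, 0.1])   # "Rooster" score 0.9 spread over 4 neurons

R_in = redistribute(x, W, R_out)
print(R_in.sum())                        # ≈ 0.9: total relevance is conserved
```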

 

2-2. Mathematically

 

How to mathematically decompose the deep neural network's prediction $f(x)$ and define each neuron's certain relevance
  => use the relationship between each neuron's input and output
(Relevance score : how much the output changes as the input changes)

A neuron with a 2-dimensional input (2 weights, 1 bias)
$y=f(x)=f(x_1,x_2)$
To estimate the relevance score from how $f(x)$ changes as x changes, we use the derivative (the rate of change).

 

Expressing the contribution of each input $x_1, x_2$ to the output $f(x)$

To derive an expression that relates these contributions of $x_1, x_2$ to $f(x)$, we introduce the Taylor series.

 


 Taylor Series


Taylor Series : an infinite series that represents a function, infinitely differentiable at a point, in terms of its derivative values at that point

Setting the terms with second- and higher-order derivatives as the error ($\epsilon$) gives the first-order Taylor series.

In the example above the inputs are $x_1, x_2$ (real neural networks have many more variables), so we use the Taylor series for multivariate functions.
Example of the Taylor series in 2 dimensions

Likewise, the first-order Taylor series written with the error term

The middle term determines the relevance score and tells us how $f(x)$ changes as x changes.
To eliminate the unneeded terms $f(a)$ and $\epsilon$, we approximate them to 0.

  • $f(a)=0$
  • approximate $\epsilon=0$ using a property of the ReLU activation function

 


  $\epsilon=0$ via the ReLU property


The ReLU function of a neuron with two inputs $x_1, x_2$, as in the example figure

  • case 1 : the value is already 0, so there is nothing to expand
  • case 2 : expressed as a Taylor series

Comparing the Taylor-series expression with the weight form of $f(x)$ shows that $w_1, w_2$ can be written as the first-order partial derivatives with respect to $x_1, x_2$

Since all second- and higher-order partial derivatives come out as 0 (ReLU is piecewise linear, so its slope never changes within a region), the $\epsilon$ term built from those derivatives is 0
  => $\epsilon=0$
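This exactness is easy to check numerically. The sketch below uses a toy ReLU neuron (assumed weights and bias, chosen only for illustration) and shows that the first-order Taylor expansion around an active point a reproduces $f(x)$ with zero error:

```python
import numpy as np

# Sketch: for a ReLU neuron f(x) = max(0, w·x + b), the first-order Taylor
# expansion around any point a in the active region is exact, because all
# higher-order derivatives of a piecewise-linear function are zero.
w = np.array([2.0, -1.0]); b = 0.5      # toy weights and bias
f = lambda x: max(0.0, w @ x + b)

x = np.array([1.0, 0.5])     # f(x) = 2.0  (active region)
a = np.array([0.5, 1.0])     # f(a) = 0.5  (also active)
grad_a = w                    # gradient of f at a is simply w when active

taylor = f(a) + grad_a @ (x - a)   # first-order expansion, no error term
print(f(x), taylor)                # both 2.0: the error epsilon is 0
```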

 


  Finding an a with $f(a)=0$


ํ•˜๋‚˜์˜ Neuron์—์„œ, ๋‘ ๊ฐœ์˜ ์ž…๋ ฅ๊ฐ’ $x_1, x_2$๊ณผ ReLU ํ†ต๊ณผ ํ›„์˜ ์ถœ๋ ฅ๊ฐ’์„ ๋„์‹ํ™”ํ•œ ๊ทธ๋ฆผ (์ž…์ถœ๋ ฅ์˜ ๊ด€๊ณ„)

  • ํฐ์ƒ‰์— ๊ฐ€๊นŒ์šธ์ˆ˜๋ก 0์— ๊ฐ€๊น๊ณ  ๋นจ๊ฐ„์ƒ‰์— ๊ฐ€๊นŒ์šธ์ˆ˜๋ก ํฐ ๊ฐ’
  • ์‹ค์„ ํ‘œ์‹œ ์ง€์  : ๋ชจ๋“  ๊ฐ’์ด 0์ธ ์ง€์ 
  • ์ ์„ ํ‘œ์‹œ ์ง€์  : ๋™์ผํ•œ ๊ฐ’์„ ๊ฐ–๋Š” ๋“ฑ๊ณ ์„ 

 

How to find a using the $w^2$-rule

 

Vector representation of the arrows in the figure (their relationship)

Setting up the constraint needed to find an a with $f(a)=0$

Using the solved value of t to express the vector again (substituting into $a = x+tw$)

Therefore, redefining $f(x)$ (substituting a into the expression for $f(x)$)


Besides the $w^2$-rule, other techniques such as the $z$-rule and $z^+$-rule can also be used to find a
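Concretely, the $w^2$-rule searches for the root point along the weight direction. The sketch below (toy weights and input, assumed only for illustration) solves the constraint for t, checks that $f(a)=0$, and decomposes $f(x)$ into per-input relevances:

```python
import numpy as np

# Sketch of the w^2-rule (toy values): find the root point a = x + t·w with
# w·a + b = 0, then decompose f(x) into per-input relevances
# R_i = w_i(x_i - a_i) = w_i^2 / ||w||^2 · (w·x + b).
w = np.array([2.0, -1.0]); b = 0.5
x = np.array([1.0, 0.5])
z = w @ x + b                      # pre-activation = 2.0 (neuron is active)

t = -z / (w @ w)                   # solve w·(x + t·w) + b = 0 for t
a = x + t * w                      # root point, f(a) = 0
R = w * (x - a)                    # per-input relevance scores
print(a, w @ a + b)                # w·a + b ≈ 0
print(R, R.sum())                  # relevances sum back to f(x) = 2.0
```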

 


  Relevance Propagation Rule


 

By finding a suitable a, the output $f(x)$ of a single neuron is decomposed

Apply the per-neuron computation above to the entire neural network

  • Forward pass : the neural network's final output $x_f$ for the input $x_p$
  • Relevance propagation : set the relevance score $R_f$ equal to $x_f$, then continue the top-down computation
    => the relevance score of every neuron in the neural network can be computed.
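Putting the pieces together, the sketch below runs this procedure on a tiny two-layer ReLU network. The weights are toy values, and it uses the $z^+$-rule variant (redistributing only positive contributions); neither comes from the post's figures:

```python
import numpy as np

# Sketch of a full LRP backward pass over a tiny two-layer ReLU network
# using the z+-rule (only positive contributions are redistributed).
def lrp_backward(activations, weights, R, eps=1e-9):
    # Walk the layers top-down, redistributing relevance at every step.
    for x, W in zip(reversed(activations), reversed(weights)):
        zp = x[:, None] * np.clip(W, 0, None)       # positive contributions
        R = (zp / (zp.sum(axis=0) + eps)) @ R       # redistribute to inputs
    return R

W1 = np.array([[1.0, -0.5],
               [0.5,  1.0]])
W2 = np.array([[1.0],
               [0.5]])

x0 = np.array([1.0, 2.0])            # network input
x1 = np.maximum(0, x0 @ W1)          # hidden activations (forward pass)
out = np.maximum(0, x1 @ W2)         # final output f(x)

R_in = lrp_backward([x0, x1], [W1, W2], out)
print(R_in, R_in.sum())              # per-input relevance; total equals f(x)
```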

 


  Decomposition


Decomposition : breaking down how much each input feature influences the result
(ex) When classifying image x as "cat", the contributions computed at each hidden layer are used to visualize, as a heatmap, how the model perceived the features of the input image x

  • features with a positive influence : red
  • features with a negative influence : blue
    => we can see that the pixels around the forehead, nose, and mouth strongly influenced the result

LRP is therefore a method that dissects a model using relevance propagation and decomposition

 

 


  Image Application Example


LRP can be applied to various kinds of data, such as images and text.

(Result of applying LRP to an image-classification model)

 

 


  References


[1] Explaining NonLinear Classification Decisions with Deep Taylor Decomposition, Montavon et al., 2015

[2] XAI: Explainable Artificial Intelligence, Dissecting AI (in Korean), Jaehyun Ahn, 2020

[3] https://angeloyeo.github.io/2019/08/17/Layerwise_Relevance_Propagation.html

[4] Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey, Arun Das and Paul Rad, 2020
