Assumptions Behind GNN Explainers | How Does Explanation Become Computationally Possible? #10616
AnkitSSaxena started this conversation in Show and tell
Assumptions play a central role in explainer tools: they make the explanation process computationally feasible, and they shape what an explainer can (and cannot) reveal. An explainer can adopt many different assumptions, and which ones are appropriate depends on the explainer's goal.
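To make this concrete, here is a minimal, self-contained sketch of one such assumption: many perturbation-based explainers assume that an edge's importance can be measured by how much the model's prediction changes when that edge is removed. Everything below is illustrative (a toy one-layer sum-aggregation "GNN" in plain NumPy, made-up graph and weights), not the method from the video:

```python
import numpy as np

def predict(adj, x, w):
    """Toy one-layer GNN: each node's prediction is the weighted
    sum of its neighbors' features (sum aggregation)."""
    return (adj @ x) @ w

def edge_importance(adj, x, w, node):
    """Perturbation assumption: score each edge by the absolute change
    in the target node's prediction when that edge is deleted."""
    base = predict(adj, x, w)[node]
    scores = {}
    for i, j in zip(*np.nonzero(np.triu(adj))):  # each undirected edge once
        a = adj.copy()
        a[i, j] = a[j, i] = 0.0  # remove the edge in both directions
        scores[(int(i), int(j))] = float(abs(predict(a, x, w)[node] - base))
    return scores

# Toy 4-node graph: edges (0,1), (0,2), (2,3); node features; scalar weight.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([[1.0], [2.0], [3.0], [0.0]])
w = np.array([1.0])

scores = edge_importance(adj, x, w, node=0)
print(scores)
```

Note what the assumption buys us: with a one-layer model, edge (2, 3) lies outside node 0's receptive field, so removing it changes nothing and it scores zero. This locality assumption is exactly what keeps perturbation explainers computationally tractable, since only edges near the target node need to be tested.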
In my new YouTube video, I break down the most common assumptions used by explainer tools for Graph Neural Networks (GNNs).
Do watch and share your thoughts in the comments.
https://youtu.be/B4E7ir--34M