A Coding Guide Implementing SHAP Explainability Workflows with Explainer Comparisons, Maskers, Interactions, Drift, and Black-Box Models

```python
print("\n" + "=" * 72)
print("PART 3: Interaction decomposition")
print("=" * 72)

# SHAP interaction values: array of shape (n_samples, n_features, n_features)
inter = tree_expl.shap_interaction_values(X_te.iloc[:500])
inter_abs = np.abs(inter).mean(0)

# Diagonal entries are main effects; off-diagonal entries are pairwise interactions
diag = np.diagonal(inter_abs).copy()
off = inter_abs.copy()
np.fill_diagonal(off, 0)

main_share = diag.sum() / (diag.sum() + off.sum())
print(f"Total attribution mass: {main_share*100:.1f}% main effects, "
      f"{(1-main_share)*100:.1f}% interactions")

# Rank feature pairs by mean absolute interaction strength (descending)
pairs = [(X.columns[i], X.columns[j], off[i, j])
         for i in range(X.shape[1])
         for j in range(i + 1, X.shape[1])]
pairs.sort(key=lambda t: t[2], reverse=True)
```
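The decomposition above can be sketched in isolation. The snippet below is a minimal, self-contained illustration (it uses a small hand-made matrix and hypothetical feature names `f0`–`f2`, not the article's model or data): it splits a mean |SHAP interaction| matrix into diagonal (main-effect) versus off-diagonal (interaction) mass and ranks the feature pairs, mirroring the logic in Part 3.

```python
import numpy as np

# Hypothetical mean |SHAP interaction| matrix for three features.
# Symmetric by construction, as SHAP interaction values are.
feature_names = ["f0", "f1", "f2"]
inter_abs = np.array([
    [0.50, 0.05, 0.10],
    [0.05, 0.30, 0.02],
    [0.10, 0.02, 0.20],
])

# Diagonal = main effects; zero it out in a copy to isolate interactions
diag = np.diagonal(inter_abs).copy()
off = inter_abs.copy()
np.fill_diagonal(off, 0)

# Share of total attribution mass carried by main effects
main_share = diag.sum() / (diag.sum() + off.sum())
print(f"main effects: {main_share*100:.1f}%")

# Rank upper-triangle pairs by interaction strength, strongest first
pairs = [(feature_names[i], feature_names[j], off[i, j])
         for i in range(len(feature_names))
         for j in range(i + 1, len(feature_names))]
pairs.sort(key=lambda t: t[2], reverse=True)
print("strongest pair:", pairs[0][:2])
```

Because each off-diagonal value appears twice in the symmetric matrix, the interaction mass here counts both halves; the article's code does the same, so the shares are directly comparable.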
