**Kim, Okuno, Fukui, and Shimodaira (IJCAI2019, to appear)** propose the weighted inner product similarity (WIPS) $\langle \boldsymbol{f}_{\text{NN}}(\boldsymbol{x}), \boldsymbol{f}_{\text{NN}}(\boldsymbol{x}') \rangle_{\boldsymbol{\lambda}}$, where $\langle \boldsymbol{y},\boldsymbol{y}'\rangle_{\boldsymbol{\lambda}}:=\sum_{k=1}^{K}\lambda_k y_k y_k'$ denotes the weighted inner product and $\boldsymbol{\lambda}=(\lambda_1,\lambda_2,\ldots,\lambda_K) \in \mathbb{R}^K$ is a weight vector to be estimated.
WIPS is capable of approximating general similarities $g(\boldsymbol{x},\boldsymbol{x}')$, thereby overcoming the limitation of SIPS proposed by **Okuno, Kim, and Shimodaira (AISTATS2019)** and **Okuno and Shimodaira (ICML2018 TADGM-WS)**.
An important point of WIPS is that the weight vector can take arbitrary real values, including negative ones. Intuitively, the weight $\lambda_k$ and the $k$-th entry of the NN output $\boldsymbol{f}_{\text{NN}}(\boldsymbol{x})$ capture the $k$-th eigenvalue and eigenfunction of the kernel $g(\boldsymbol{x},\boldsymbol{x}')$, respectively.
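The eigenvalue intuition can be checked in the finite-dimensional case (a NumPy sketch, not the paper's training procedure): for a symmetric similarity matrix that is not positive semi-definite, an eigendecomposition yields real-valued weights, some negative, that reproduce every entry exactly via the weighted inner product:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric similarity matrix that is generally NOT positive
# semi-definite, so an unweighted inner product of real embeddings
# cannot represent it exactly.
A = rng.normal(size=(5, 5))
G = (A + A.T) / 2

# Eigendecomposition G = V diag(w) V^T: the eigenvalues w play the
# role of the weight vector lambda (possibly negative), and row i of
# V plays the role of the embedding f_NN(x_i).
w, V = np.linalg.eigh(G)

# Reconstruct each entry with the weighted inner product <V[i], V[j]>_w.
G_hat = np.array([[np.sum(w * V[i] * V[j]) for j in range(5)]
                  for i in range(5)])
```

Here `G_hat` matches `G` up to floating-point error, which would be impossible with nonnegative weights whenever `G` has a negative eigenvalue.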

Extensive experiments on a large-scale network dataset show that WIPS outperforms other similarity models, including SIPS.