GATConv

class dgNN.layers.GATConv(in_feats, out_feats, num_heads, feat_drop=0., attn_drop=0., negative_slope=0.2, residual=False, activation=None, bias=True)

Graph attention layer from the paper Graph Attention Networks.

\[h_{i}^{(l+1)}=\sum_{j \in \mathcal{N}(i)} \alpha_{i, j} W^{(l)} h_{j}^{(l)}\]
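where \(\alpha_{i,j}\) is the attention coefficient between node \(i\) and its neighbor \(j\). For reference, the standard GAT formulation of these coefficients is sketched below (the exact placement of dropout and bias inside dgNN's fused kernels may differ):

\[e_{i,j}^{(l)}=\mathrm{LeakyReLU}\left(\vec{a}^{\,T}\left[W^{(l)} h_{i}^{(l)} \,\|\, W^{(l)} h_{j}^{(l)}\right]\right)\]

\[\alpha_{i,j}=\frac{\exp\left(e_{i,j}^{(l)}\right)}{\sum_{k \in \mathcal{N}(i)} \exp\left(e_{i,k}^{(l)}\right)}\]
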
Parameters
  • in_feats (int) – input feature size.

  • out_feats (int) – output feature size.

  • num_heads (int) – number of heads in multi-head attention.

  • feat_drop (float) – dropout rate on node features.

  • attn_drop (float) – dropout rate on attention weights.

  • negative_slope (float) – negative slope for LeakyReLU.

  • residual (bool) – whether to use a residual connection.

  • activation (callable, optional) – activation function to apply to the output feature; None means no activation.

  • bias (bool) – whether to learn a bias term.

A minimal construction example is shown after this list.
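
The sketch below assumes the layer is importable as dgNN.layers.GATConv, matching the qualified name above; the hyperparameter values are illustrative only.

```python
import torch
from dgNN.layers import GATConv  # import path assumed from the qualified class name above

# 4-head attention layer mapping 64-dim node features to 16-dim outputs per head.
conv = GATConv(in_feats=64, out_feats=16, num_heads=4,
               feat_drop=0.1, attn_drop=0.1,
               negative_slope=0.2, residual=True)
```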

forward(row_ptr, col_ind, col_ptr, row_ind, feat)
Parameters
  • row_ptr (torch.Tensor) – CSR format index pointer tensor of shape \((N+1)\), where \(N\) is the number of nodes.

  • col_ind (torch.Tensor) – CSR format index tensor of shape \((E)\), where \(E\) is the number of edges.

  • col_ptr (torch.Tensor) – CSC format index pointer tensor of shape \((N+1)\).

  • row_ind (torch.Tensor) – CSC format index tensor of shape \((E)\).

  • feat (torch.Tensor) – the input feature of shape \((N, F_{in})\), where \(F_{in}\) is the input feature size.

Returns

the output feature of shape \((N, H, F_{out})\), where \(F_{out}\) is the output feature size and \(H\) is the number of attention heads.

Return type

torch.Tensor
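
A sketch of an end-to-end forward call, under the assumptions that GATConv is importable from dgNN.layers, that the graph index tensors are int32, and that the dgNN kernels run on a CUDA device:

```python
import torch
from dgNN.layers import GATConv  # import path assumed from the qualified class name above

# Toy directed graph with N = 3 nodes and E = 4 edges: 0->1, 0->2, 1->2, 2->0.
N, E, F_in = 3, 4, 64

# CSR view (edges grouped by source node): row_ptr has N+1 entries, col_ind has E entries.
row_ptr = torch.tensor([0, 2, 3, 4], dtype=torch.int32)
col_ind = torch.tensor([1, 2, 2, 0], dtype=torch.int32)

# CSC view of the same graph (edges grouped by destination node):
# node 0 <- {2}, node 1 <- {0}, node 2 <- {0, 1}.
col_ptr = torch.tensor([0, 1, 2, 4], dtype=torch.int32)
row_ind = torch.tensor([2, 0, 0, 1], dtype=torch.int32)

feat = torch.rand(N, F_in)

conv = GATConv(in_feats=F_in, out_feats=16, num_heads=4)

# dgNN's fused kernels are CUDA-based, so the module and tensors are moved to the GPU here.
device = torch.device("cuda")
conv = conv.to(device)
out = conv(row_ptr.to(device), col_ind.to(device),
           col_ptr.to(device), row_ind.to(device),
           feat.to(device))

print(out.shape)  # (N, H, F_out) == (3, 4, 16)
```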