nvidia nim

“I forgot that every little action of the common day makes or unmakes character,” Oscar Wilde wrote in De Profundis.

went down a rabbit hole into nvidia software.

also did a bit of reading on graph neural networks.

main takeaway is you can create a graph structure from text and images, but graphs are more useful for heterogeneous structures where the number of neighbors per node is variable (as opposed to fixed for text and images).

there are three main predictive tasks for GNNs (a toy sketch follows the list)

(1) graph-level

  • ex: molecule as a graph, predict its smell or probability of binding to a receptor
  • analogy for image: classify an entire image
  • analogy for text: label sentiment of an entire sentence

(2) node-level

  • ex: predict identity/role of each node
  • analogy for image: image segmentation
  • analogy for text: part-of-speech tagging

(3) edge-level

  • ex: image scene understanding: given nodes that represent objects in an image, predict which of those nodes share an edge or what the value of that edge is
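to make the three levels concrete, here's a minimal numpy sketch (my own toy example; H stands in for the per-node embeddings a GNN would produce after message passing, and the weights/sizes are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 8                       # toy sizes: 5 nodes, 8-dim embeddings
    H = rng.normal(size=(n, d))       # stand-in for per-node GNN embeddings

    # (1) graph-level: pool every node embedding into one vector, then classify
    graph_vec = H.mean(axis=0)        # shape (d,)

    # (2) node-level: one prediction per node, straight from each embedding
    W_node = rng.normal(size=(d, 3))  # hypothetical 3-class node head
    node_logits = H @ W_node          # shape (n, 3)

    # (3) edge-level: score a pair of nodes, e.g. a dot product for "is there an edge?"
    edge_score = H[0] @ H[1]          # scalar score for the pair (n_0, n_1)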

the challenge of graphs in ML: representing graphs in a form neural networks can consume

graphs have 4 types of info

  1. nodes
  2. edges
  3. global-context
  4. connectivity (the hard part)

the first three are straightforward: we create a node matrix N, where each node has an index i and row i stores the features for node_i
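for example, a node matrix in numpy might look like this (made-up features):

    import numpy as np

    # toy graph with 4 nodes, each carrying a 3-dim feature vector
    N = np.array([
        [1.0, 0.0, 0.2],   # features for node_0
        [0.5, 1.0, 0.0],   # features for node_1
        [0.0, 0.3, 1.0],   # features for node_2
        [0.9, 0.9, 0.1],   # features for node_3
    ])

    node_2_features = N[2]   # row i holds the features for node_i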

connectivity is more complicated. first, adjacency matrices are sparse and space-inefficient. second, many different adjacency matrices can encode the same graph (just relabel the nodes), but there is no guarantee a network fed those matrices will produce the same result for each one (they are not permutation invariant)
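quick sketch of the permutation problem (my example): relabeling the nodes of the same graph yields a different matrix, so a network consuming the flattened matrix sees two different inputs:

    import numpy as np

    # the same 3-node path graph (edges 0-1 and 1-2) under two node orderings
    A1 = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])

    perm = [2, 0, 1]                 # relabel the nodes
    A2 = A1[np.ix_(perm, perm)]      # adjacency matrix after relabeling

    print(np.array_equal(A1, A2))    # False: same graph, different matrix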

solution: represent the sparse adjacency matrix as an adjacency list

they describe the connectivity of edge e_k between nodes n_i and n_j as a tuple (i, j) stored in the kth entry of the list.
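in code, the adjacency list is just a list of index pairs (toy edges made up here):

    # the kth entry is the tuple (i, j) for edge e_k between nodes n_i and n_j
    adjacency_list = [
        (0, 1),   # e_0 connects n_0 and n_1
        (0, 2),   # e_1 connects n_0 and n_2
        (2, 3),   # e_2 connects n_2 and n_3
    ]

    # storage scales with the number of edges (2 numbers per edge),
    # not with n^2 like the dense adjacency matrix, so sparse graphs stay cheap
    num_stored = 2 * len(adjacency_list)   # 6 numbers vs 16 for a 4x4 matrix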

conceptual

applied

found these datasets to work with

10/14/2024
