5 simple questions by Eric Jang for understanding ML papers quickly.
- What are the inputs to the function approximator?
- What are the outputs of the function approximator?
- What loss supervises the output predictions? What assumptions about the world does this particular objective make?
- Once trained, what can the model generalize to, with respect to input/output pairs it hasn’t seen before?
- Are the claims in the paper falsifiable?
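To make the first three questions concrete, here is a minimal sketch of how they map onto a plain softmax classifier. This example is purely illustrative and not from the source: the model, `f`, `cross_entropy`, and all dimensions are hypothetical choices for demonstration.

```python
import numpy as np

# Q1: inputs  — feature vectors x of dimension d (a hypothetical choice)
# Q2: outputs — a probability distribution over k classes
# Q3: loss    — cross-entropy, which assumes labels are drawn from a
#               categorical distribution conditioned on x

rng = np.random.default_rng(0)
d, k = 4, 3
W = rng.normal(size=(d, k))  # the function approximator's parameters

def f(x):
    """Map an input vector to class probabilities (softmax of a linear map)."""
    logits = x @ W
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under the model."""
    return -np.log(probs[label])

x = rng.normal(size=d)                # Q1: one input example
probs = f(x)                          # Q2: one output prediction
loss = cross_entropy(probs, label=1)  # Q3: the supervision signal
```

Answering the remaining two questions requires held-out data (what inputs does `f` still handle sensibly?) and a testable claim (what measurement would show the paper is wrong?), so they live outside the code.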