This interactive visualization demonstrates how various convolution parameters affect the shapes of, and data dependencies between, the input, weight, and output matrices. Hover over an input cell to highlight the output cells it affects (or over an output cell to highlight the inputs it depends on), or hover over a weight to highlight which inputs were used to compute an output.
In a convolution operation, a weight matrix (or kernel) slides over an input matrix to produce an output matrix. Each output value is computed as a weighted sum of input values covered by the kernel.
The operation visualized here is technically a cross-correlation, not a convolution: a true convolution flips the kernel before sliding it. However, most deep learning frameworks still call the operation a convolution.
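The sliding-window computation described above can be sketched directly in NumPy. This is a minimal illustrative implementation, not the code behind the visualization; the parameter names (`stride`, `padding`, `dilation`) are assumptions chosen to match common framework conventions, and the kernel is deliberately not flipped, making this the cross-correlation that frameworks call "convolution":

```python
import numpy as np

def cross_correlate2d(inp, kernel, stride=1, padding=0, dilation=1):
    """Direct 2D cross-correlation (a 'convolution' in the deep learning
    sense: the kernel is not flipped). Minimal sketch, single channel."""
    inp = np.pad(inp, padding)
    kh, kw = kernel.shape
    # Effective kernel extent once dilation spreads the taps apart.
    eff_kh = dilation * (kh - 1) + 1
    eff_kw = dilation * (kw - 1) + 1
    out_h = (inp.shape[0] - eff_kh) // stride + 1
    out_w = (inp.shape[1] - eff_kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output is a weighted sum of the input values
            # currently covered by the (possibly dilated) kernel.
            patch = inp[i * stride : i * stride + eff_kh : dilation,
                        j * stride : j * stride + eff_kw : dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

inp = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3))
out = cross_correlate2d(inp, kernel)  # output shape: (2, 2)
```

A true convolution would be obtained by flipping the kernel first, e.g. `cross_correlate2d(inp, kernel[::-1, ::-1])`; for symmetric kernels the two operations coincide.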
Parameters explained: