A series of graphs shows how training and test performance vary with epochs as different types of regularisation are applied. Although the regularisation does allow the model to train longer without over-fitting, this would be far easier to see if the graphs were not spread across successive pages: at present, one must pick an epoch, read off the train and test error, then flip back to earlier graphs to compare. Needing to show both train and test performance complicates things a little, but I really think there is a strong case for getting all the curves onto a single graph.

This could perhaps best be done not by reducing the number of graphs, but by keeping them where they are and adding the following to each:

- Picking distinct line types for training vs. test (e.g. dashed vs. dotted)
- Keeping the original (non-regularised) results the same colour in each graph
- Fading out the lines for previously-shown regularisation methods (reduce their opacity to 50% or lower?)
- Having the latest method in a new colour, and at full opacity

For example, the graph in the drop-out section could show the original results in the same dark blue currently used for all results, earlier regularisation results as faded (low-opacity) versions of themselves, and of course the drop-out lines in full colour (I like tangerine!). If the graph looks cluttered, simply lower the opacity further on the intermediate results. The key thing is that a single glance then allows the reader to compare all methods.