Please post errors in the published version of Deep Learning with R here. If necessary, we'll publish a comprehensive list for everyone's convenience. Thank you!

The code for the naive_relu() and naive_add() functions needs to iterate over the full range of row and column indices. The correct code is:

naive_relu <- function(x) {
  for (i in 1:nrow(x))
    for (j in 1:ncol(x))
      x[i, j] <- max(x[i, j], 0)
  x
}

naive_add <- function(x, y) {
  for (i in 1:nrow(x))
    for (j in 1:ncol(x))
      x[i, j] <- x[i, j] + y[i, j]
  x
}

This code appears in section 2.3.1 Element-wise operations.
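For anyone who wants to verify the corrected loops, they can be checked against R's built-in vectorized operations (a minimal sketch: pmax() and + compute the same results without explicit loops):

```r
# Corrected loop-based implementations from section 2.3.1
naive_relu <- function(x) {
  for (i in 1:nrow(x))
    for (j in 1:ncol(x))
      x[i, j] <- max(x[i, j], 0)
  x
}

naive_add <- function(x, y) {
  for (i in 1:nrow(x))
    for (j in 1:ncol(x))
      x[i, j] <- x[i, j] + y[i, j]
  x
}

x <- matrix(c(-1, 2, -3, 4), nrow = 2)
y <- matrix(1, nrow = 2, ncol = 2)

# The loops should agree with the vectorized equivalents
stopifnot(all(naive_relu(x) == pmax(x, 0)))
stopifnot(all(naive_add(x, y) == x + y))
```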

The PDF/eBook version of the book has an incorrect Figure 5.16 ("Training and validation metrics for fine-tuning"). The figure was modified during editing, but only the black-and-white version used in the print book was updated.

The correct color version is attached to this message.

I read the book straight through this past week. Page 240 is the last page of Chapter 7, but the next page is 289, midway through Chapter 8 at section 8.5.5. Chapter 9 then begins a few pages later at page 293, so I'm missing most of Chapter 8. And that's not all: Chapter 8 appears again after the index, once more starting partway through at 8.5.5, followed by Chapter 9, the appendix, and the index a second time.

Update 2018-03-26: Manning customer support replaced my print book copy, and it arrived today in perfect condition.

PDF version, page 197: "This yields an MAE of 0.29."
Actually running the code consistently gives me an MAE of about 0.277, perhaps because the data have changed?
EDIT: Around 0.2775 after applying the other correction below.

In section 6.3 there is an error in the definition of the generator: a +1 needs to be added in the seq() call that builds `indices` (shown below), otherwise the samples returned always lack the second-to-last observation, because the sequence spans one more position than intended (its length is effectively length.out + 1). I don't think it changes the section's conclusions.

generator <- function(data, lookback, delay, min_index, max_index,
                      shuffle = FALSE, batch_size = 128, step = 6) {
  if (is.null(max_index))
    max_index <- nrow(data) - delay - 1
  i <- min_index + lookback
  function() {
    if (shuffle) {
      rows <- sample(c((min_index+lookback):max_index), size = batch_size)
    } else {
      if (i + batch_size >= max_index)
        i <<- min_index + lookback
      rows <- c(i:min(i+batch_size, max_index))
      i <<- i + length(rows)
    }
    samples <- array(0, dim = c(length(rows),
                                lookback / step,
                                dim(data)[[-1]]))
    targets <- array(0, dim = c(length(rows)))
    for (j in 1:length(rows)) {
      indices <- seq(rows[[j]] - lookback + 1, rows[[j]],
                     length.out = dim(samples)[[2]])
      samples[j,,] <- data[indices,]
      targets[[j]] <- data[rows[[j]] + delay, 2]
    }
    list(samples, targets)
  }
}

An alternate fix to the generator issue mentioned above is below. It also addresses another issue with the selection of rows, so I believe it is preferable:

generator <- function(data, lookback, delay, min_index, max_index,
                      shuffle = FALSE, batch_size = 128, step = 6) {
  if (is.null(max_index))
    max_index <- nrow(data) - delay - 1
  i <- min_index + lookback
  function() {
    if (shuffle) {
      rows <- sample(c((min_index+lookback):max_index), size = batch_size)
    } else {
      if (i + batch_size >= max_index)
        i <<- min_index + lookback
      rows <- c(i:min(i+batch_size-1, max_index))
      i <<- i + length(rows)
    }
    samples <- array(0, dim = c(length(rows),
                                lookback / step,
                                dim(data)[[-1]]))
    targets <- array(0, dim = c(length(rows)))
    for (j in 1:length(rows)) {
      indices <- seq(rows[[j]] - lookback, rows[[j]]-1,
                     length.out = dim(samples)[[2]])
      samples[j,,] <- data[indices,]
      targets[[j]] <- data[rows[[j]] + delay, 2]
    }
    list(samples, targets)
  }
}

The change is basically to subtract 1 in the row-selection expression, "rows <- c(i:min(i + batch_size - 1, max_index))", and to also subtract 1 in the upper bound of the seq() call, "seq(rows[[j]] - lookback, rows[[j]] - 1, ...)".
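The off-by-one in the row selection is easy to see in isolation. A small sketch using the same expressions (the values of i, batch_size, and max_index are made up for illustration):

```r
# Hypothetical values for illustration only
i <- 1
batch_size <- 128
max_index <- 100000

# Original expression: i:min(i + batch_size, max_index) yields
# batch_size + 1 rows per batch
rows_orig <- c(i:min(i + batch_size, max_index))

# Fixed expression: exactly batch_size rows per batch
rows_fixed <- c(i:min(i + batch_size - 1, max_index))

stopifnot(length(rows_orig) == batch_size + 1)
stopifnot(length(rows_fixed) == batch_size)
```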

Figure 4.2 in the print book (p. 90) regarding 3-fold cross-validation: There should be just one validation fold in each row. So while the shading is correct, the labels for the first partition in folds 2 and 3 are incorrect (it reads "Validation" instead of "Training").
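The intended layout is standard k-fold cross-validation: each fold serves as the validation set exactly once, and the remaining folds are used for training. A quick sketch of a 3-fold split (the indices are illustrative):

```r
n <- 9
k <- 3
# Split indices 1..n into k equal, contiguous folds
folds <- split(1:n, rep(1:k, each = n / k))

# Count how often each data point is used for validation across the k runs
validation_counts <- integer(n)
for (fold in folds) {
  validation_indices <- fold
  training_indices <- setdiff(1:n, fold)  # everything else trains
  validation_counts[validation_indices] <-
    validation_counts[validation_indices] + 1
}

# Each point should appear in exactly one validation fold
stopifnot(all(validation_counts == 1))
```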


get_layer(model, index = 1) %>%
  set_weights(list(embedding_matrix)) %>%
  freeze_weights()

The error is this:

Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: You called `set_weights(weights)` on layer "flatten_1" with a weight list of length 1, but the layer was expecting 0 weights. Provided weights: [array([[ 0. , 0. , 0. , ..., 0....
Detailed traceback:
File "C:\ANACON~1\lib\site-packages\keras\engine\base_layer.py", line 1062, in set_weights
str(weights)[:50] + '...')

Thanks for reporting the error with the pretrained word embeddings. The index argument to get_layer() should be 1-based, but it was originally implemented as 0-based (so index = 1 selected the flatten layer rather than the embedding layer). If you update to the latest development version of keras for R, the code should work as expected.

On page 262, in listing 8.10, the calculation of the loss should be updated as follows:

# This code is no longer required since we don't use layer_dict below
# layer_dict <- model$layers
# names(layer_dict) <- lapply(layer_dict, function(layer) layer$name)
loss <- k_variable(0)
for (layer_name in names(layer_contributions)) {
  coeff <- layer_contributions[[layer_name]]
  activation <- get_layer(model, layer_name)$output
  scaling <- k_prod(k_cast(k_shape(activation), "float32"))
  loss <- loss + (coeff * k_sum(k_square(activation)) / scaling)
}
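Since k_shape(activation) returns the activation tensor's dimensions, the scaling term is just the tensor's element count, so each layer contributes its mean squared activation to the loss. A plain-R sketch of the arithmetic (no keras needed; the array shape here is made up for illustration):

```r
set.seed(1)
# Stand-in for a layer activation tensor (shape is illustrative)
activation <- array(rnorm(2 * 4 * 4 * 8), dim = c(2, 4, 4, 8))

# prod(dim(.)) plays the role of
# k_prod(k_cast(k_shape(activation), "float32")): the element count
scaling <- prod(dim(activation))

# Sum of squares divided by the element count is the mean square
contribution <- sum(activation^2) / scaling
stopifnot(isTRUE(all.equal(contribution, mean(activation^2))))
```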