553084
#1
I have an NVIDIA graphics card and have CUDA properly set up under Windows 10. To install keras in R, I used the commands
library(keras)
install_keras(tensorflow = "gpu")

and it installed successfully. I have currently reached section 3.5.5 in the book, but monitoring the hardware while running the code, I notice that all the training in the examples so far is executed on the CPU instead of the GPU.

Since I have the GPU version installed, do I need to set some parameter to ensure that training happens on the GPU?

Thanks for any suggestion!
256385
#2
TensorFlow should use the GPU by default if one is available. You can use this preamble to see which device each op is mapped to:

library(tensorflow)
# create a session that logs the device each op is placed on
sess <- tf$Session(config = tf$ConfigProto(log_device_placement = TRUE))

library(keras)
# tell Keras to use that session
k_set_session(sess)
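
Then run a small op and watch the console output. Something along these lines should work (adapted from the "Logging Device placement" example in the TensorFlow programmer's guide; note the placement log may appear in the terminal that launched R rather than in the R console):

a <- tf$constant(c(1, 2, 3, 4, 5, 6), shape = c(2L, 3L), name = "a")
b <- tf$constant(c(1, 2, 3, 4, 5, 6), shape = c(3L, 2L), name = "b")
ab <- tf$matmul(a, b)
# running the op prints lines showing the device (e.g. GPU:0) each op maps to
sess$run(ab)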
553084
#3
Hi, thanks for the answer!

Running `library(tensorflow)` returns
Warning message:
package ‘tensorflow’ was built under R version 3.4.4

Running `sess <- tf$Session(config = tf$ConfigProto(log_device_placement=TRUE))` returns
G:\Anaconda\envs\r-tensorflow\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters

Then looking at `sess` just gives
<tensorflow.python.client.session.Session>

Then executing `library(keras)` returns
Using TensorFlow backend.
Warning message:
package ‘keras’ was built under R version 3.4.4

And finally `k_set_session(sess)` does not return anything.

None of the above commands mentioned the GPU, though...

After that I tried running the complete training example for the Reuters dataset from sections 3.5.1-3.5.5, and training was still executed on the CPU.

Perhaps there is something else I can do?
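
Is there a direct way to ask TensorFlow whether it sees the GPU at all? If I am reading the Python docs right, TF 1.x has something like this (untested on my machine):

library(tensorflow)
# should return TRUE if TensorFlow can use a CUDA-capable GPU
tf$test$is_gpu_available()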


EDIT:

I also tried something like
tf$print(sess.run())

to emulate what is done in the "Logging Device placement" example here: https://www.tensorflow.org/programmers_guide/using_gpu
but R just returned

Error in py_get_attr_impl(x, name, silent) :
AttributeError: module 'tensorflow' has no attribute 'print'
553084
#4
OK, I think I have come closer to the core of the problem. It turns out that the first time I ran the IMDB example, it was executed on the GPU, but it filled the GPU memory completely. Running the fit function again after that crashed the R console and the GPU (the NVIDIA GPU could not be used even for games afterwards, even after a Windows restart). I had to disable and re-enable the GPU in Device Manager to get its functionality back.

The GPU does get used by R tensorflow now, since I see memory usage and copy activity on it when a fit command is executed. However, I still get the same crashes periodically, and sometimes a blue screen with a "Driver_Power_State_Failure" message (certainly correlated with the R tensorflow activity, since I had never seen this blue screen on my system before).

My impression is that R tensorflow's graphics card support is not stable (e.g. it does not free GPU memory after a fit run). Running a separate TensorFlow installation from Python never produces such GPU problems for me.
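
If anyone else runs into this, one thing I plan to try is keeping TensorFlow from grabbing all GPU memory up front. As far as I can tell, the TF 1.x ConfigProto has gpu_options for this (a sketch, untested here):

library(tensorflow)
# ask TF to allocate GPU memory incrementally instead of all at once
config <- tf$ConfigProto(gpu_options = tf$GPUOptions(allow_growth = TRUE))
sess <- tf$Session(config = config)

library(keras)
k_set_session(sess)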
256385
#5
The R behavior of fit should be identical to that of Python (it's an extremely thin wrapper over the Python fit). The issue here may be that in R you are executing the fit within an interactive session (where the TF graph, etc. is retained across executions of `source`), whereas in Python you are executing the script in its own process, which allows the memory to be freed after the process exits.

In R you can use the `k_clear_session()` function to clear out the TF graph between runs.
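
For example (a sketch; `model`, `x_train`, and `y_train` stand in for whatever you built in section 3.5):

library(keras)
history <- model %>% fit(x_train, y_train, epochs = 20, batch_size = 512)
# releases the TF graph (and the GPU memory tied to it) before the next model
k_clear_session()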
553084
#6
Thanks for the suggestion! I'll give this a try.