I want to train my model on several GPUs at once, with each GPU processing a different batch of data (data parallelism). Can I do something like this with `model.fit()`? Is there any other alternative?
Try using the `make_parallel` function from https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py (it works only with the TensorFlow backend).
As of Keras 2.0.9, data parallelism is built in via `keras.utils.multi_gpu_model` (TensorFlow backend only).
Docs: https://keras.io/getting-started/faq/#how-can-i-run-a-keras-model-on-multiple-gpus (see also https://datascience.stackexchange.com/a/25737).
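A minimal sketch of the `multi_gpu_model` approach. The model architecture, data shapes, and GPU count are illustrative assumptions, not from the question; the guards let the same script fall back to a single device (or skip entirely) when old-style Keras or multiple GPUs are not available.

```python
import numpy as np

# keras.utils.multi_gpu_model exists in Keras 2.0.9+ with the TensorFlow
# backend; on newer Keras releases this import fails, so we guard it.
try:
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import multi_gpu_model
    KERAS_AVAILABLE = True
except ImportError:
    KERAS_AVAILABLE = False

if KERAS_AVAILABLE:
    # Toy single-device model (hypothetical architecture for illustration).
    model = Sequential([Dense(10, activation="softmax", input_shape=(100,))])
    try:
        # Replicates the model on 2 GPUs: each replica gets a different
        # slice of every batch, and the results are merged on the CPU.
        model = multi_gpu_model(model, gpus=2)
    except (ValueError, RuntimeError):
        pass  # fewer than 2 GPUs visible: keep the single-device model

    model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
    x = np.random.random((256, 100))
    y = np.random.random((256, 10))
    # Same fit() call as single-GPU training; batches are split across replicas.
    model.fit(x, y, epochs=1, batch_size=64, verbose=0)
```

Note that `multi_gpu_model` parallelizes only the forward/backward pass per batch; weights stay synchronized because gradients are computed from the merged outputs.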