Deploying Keras models through Google Cloud ML

I want to use Google Cloud ML to host my Keras models so that I can call the API and make predictions. I am running into some problems on the Keras side.

So far, I have managed to create a model using TensorFlow and deploy it to CloudML. To do this, I had to make some changes to my base TF code. The changes are described here: https://cloud.google.com/ml/docs/how-tos/preparing-models#code_changes

I was also able to train a similar model with Keras, and I can even save the model in the same export / export.meta format as with TF:

    from keras import backend as K
    import tensorflow as tf

    saver = tf.train.Saver()
    session = K.get_session()
    saver.save(session, 'export')

The part that I am missing: how do I add placeholders for input and output to the graph that I built in Keras?

Tags: tensorflow, keras, google-cloud-platform, google-cloud-ml

3 answers

After training your model in Google Cloud ML Engine (check out this awesome tutorial), I named the input and output of my graph with:

    signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                      outputs={'NAME_YOUR_OUTPUT': new_Model.output})

You can see the full export example for an already trained Keras model (model.h5) below.

    import keras.backend as K
    import tensorflow as tf
    from keras.models import load_model, Sequential
    from tensorflow.python.saved_model import builder as saved_model_builder
    from tensorflow.python.saved_model import tag_constants, signature_constants
    from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

    # reset session
    K.clear_session()
    sess = tf.Session()
    K.set_session(sess)

    # disable loading of learning nodes
    K.set_learning_phase(0)

    # load model
    model = load_model('model.h5')
    config = model.get_config()
    weights = model.get_weights()
    new_Model = Sequential.from_config(config)
    new_Model.set_weights(weights)

    # export saved model
    export_path = 'YOUR_EXPORT_PATH' + '/export'
    builder = saved_model_builder.SavedModelBuilder(export_path)

    signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                      outputs={'NAME_YOUR_OUTPUT': new_Model.output})

    with K.get_session() as sess:
        builder.add_meta_graph_and_variables(sess=sess,
                                             tags=[tag_constants.SERVING],
                                             signature_def_map={
                                                 signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
        builder.save()

You can also see my full implementation.
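Once the SavedModel is exported and deployed, the remaining step from the question — calling the API for predictions — can be done with the Google API Python client. A minimal sketch, not part of the original answer: YOUR_PROJECT, YOUR_MODEL, and the sample input are placeholders, and the request key must match whatever you used for NAME_YOUR_INPUT in predict_signature_def.

    # Minimal prediction sketch (placeholder names): assumes the exported
    # model has already been deployed as a Cloud ML Engine model/version.
    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')
    name = 'projects/YOUR_PROJECT/models/YOUR_MODEL/versions/v1'

    response = ml.projects().predict(
        name=name,
        # the key must match the name given in predict_signature_def
        body={'instances': [{'NAME_YOUR_INPUT': [0.1, 0.2, 0.3]}]}
    ).execute()

    print(response['predictions'])

The model and version themselves can be created from the export directory with gcloud ml-engine models create and gcloud ml-engine versions create --origin pointing at the exported directory on GCS.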

Edit: if my answer solves your problem, just leave me an upvote here :)


I found that to use Keras on Google Cloud you need to install it via a setup.py script, placed in the same directory as the one where you run the gcloud command:

    ├── setup.py
    └── trainer
        ├── __init__.py
        ├── cloudml-gpu.yaml
        ├── example5-keras.py

In setup.py you put content such as:

    from setuptools import setup, find_packages

    setup(name='example5',
          version='0.1',
          packages=find_packages(),
          description='example to run keras on gcloud ml-engine',
          author='Fuyang Liu',
          author_email='fuyang.liu@example.com',
          license='MIT',
          install_requires=[
              'keras',
              'h5py'
          ],
          zip_safe=False)
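For completeness, trainer/example5-keras.py is just an ordinary Keras training script. A minimal hypothetical sketch, assuming the pickled data format implied by the --train-file flag used in the gcloud command below (the model itself is a placeholder); the only Cloud-specific part is using file_io so gs:// paths can be read:

    # Hypothetical minimal trainer/example5-keras.py -- data format and model
    # are placeholder assumptions; file_io is what makes gs:// paths readable.
    import argparse
    import pickle

    from keras.layers import Dense
    from keras.models import Sequential
    from tensorflow.python.lib.io import file_io

    parser = argparse.ArgumentParser()
    parser.add_argument('--train-file', required=True)
    parser.add_argument('--job-dir', required=True)  # gcloud forwards --job-dir here
    args = parser.parse_args()

    # file_io.FileIO reads both local and gs:// paths
    with file_io.FileIO(args.train_file, mode='rb') as f:
        train_x, train_y = pickle.load(f)

    model = Sequential([Dense(1, input_dim=train_x.shape[1], activation='sigmoid')])
    model.compile(loss='binary_crossentropy', optimizer='adam')
    model.fit(train_x, train_y, epochs=5)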

Then you can launch your job on gcloud, for example:

    export BUCKET_NAME=tf-learn-simple-sentiment
    export JOB_NAME="example_5_train_$(date +%Y%m%d_%H%M%S)"
    export JOB_DIR=gs://$BUCKET_NAME/$JOB_NAME
    export REGION=europe-west1

    gcloud ml-engine jobs submit training $JOB_NAME \
      --job-dir gs://$BUCKET_NAME/$JOB_NAME \
      --runtime-version 1.0 \
      --module-name trainer.example5-keras \
      --package-path ./trainer \
      --region $REGION \
      --config=trainer/cloudml-gpu.yaml \
      -- \
      --train-file gs://tf-learn-simple-sentiment/sentiment_set.pickle
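Once the job is submitted, you can follow its progress with gcloud ml-engine jobs describe $JOB_NAME or stream its logs with gcloud ml-engine jobs stream-logs $JOB_NAME, or watch it in the ML Engine section of the Cloud Console.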

To use a GPU, add the cloudml-gpu.yaml file to the trainer folder with the following contents:

    trainingInput:
      scaleTier: CUSTOM
      # standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs
      masterType: standard_gpu
      runtimeVersion: "1.0"

I don't know anything about Keras. I have consulted with some experts and the following should work:

    import json

    from keras import backend as K
    import tensorflow as tf

    # Build the model first
    model = ...

    # Declare the inputs and outputs for CloudML
    inputs = dict(zip((layer.name for layer in model.input_layers),
                      (t.name for t in model.inputs)))
    tf.add_to_collection('inputs', json.dumps(inputs))

    outputs = dict(zip((layer.name for layer in model.output_layers),
                       (t.name for t in model.outputs)))
    tf.add_to_collection('outputs', json.dumps(outputs))

    # Fit/train the model
    model.fit(...)

    # Export the model
    saver = tf.train.Saver()
    session = K.get_session()
    saver.save(session, 'export')

Some important points:

  • You must call tf.add_to_collection after creating the model, but before you ever call K.get_session() or save.
  • You must be sure to name the input and output layers when you add them to the graph, because you will need to refer to those names when you submit prediction requests (see the sketch below).
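As a sanity check, you can read those collections back out of the exported graph to confirm the names that prediction requests must use. A minimal hypothetical sketch, assuming the export/export.meta files produced by the code above:

    # Sanity-check sketch (not from the answer): reload the exported graph
    # and print the input/output name mappings written above.
    import json

    import tensorflow as tf

    with tf.Session() as sess:
        saver = tf.train.import_meta_graph('export.meta')
        saver.restore(sess, 'export')
        print(json.loads(tf.get_collection('inputs')[0]))   # {"layer name": "tensor name", ...}
        print(json.loads(tf.get_collection('outputs')[0]))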
