asked    Eden     2018-07-06       python       131 views        1 Answer

[SOLVED] Unable to make predictions on google cloud ml, whereas same model is working on the local machine

I am trying to train a machine learning model using the TensorFlow library on Google Cloud. I am able to train the model in the cloud after creating a bucket. The issue appears when I try to make predictions using the existing model. The code and the data are available in the following GitHub repository.

The TensorFlow version in the cloud is 1.8, and the TensorFlow version on my system is also 1.8.

I tried to make predictions by running the following command: "gcloud ml-engine predict --model=earnings --version=v8 --json-instances=sample_input_prescaled.json"

It failed with the following error: "{ "error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.FAILED_PRECONDITION, details=\"Attempting to use uninitialized value output/biases4\n\t [[Node: output/biases4/read = IdentityT=DT_FLOAT, _output_shapes=[[1]], _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"]]\")" }"

  1 Answer  

        answered    Mary     2018-07-06      

The error message indicates that not all variables have been initialized. There is sample code in the CloudML samples that demonstrates how to take care of initialization (link). Also, on newer versions of TF I recommend using tf.saved_model.simple_save. Try the following changes to your code:

# These helpers live in TF's internal modules in 1.x (public equivalents:
# tf.local_variables_initializer, tf.tables_initializer, tf.group).
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import lookup_ops
from tensorflow.python.ops import variables

def main_op():
  # Initialize local variables and lookup tables when the SavedModel is loaded for serving.
  init_local = variables.local_variables_initializer()
  init_tables = lookup_ops.tables_initializer()
  return control_flow_ops.group(init_local, init_tables)

[...snip...]    

# This replaces everything from your SavedModelBuilder code onward.
# `session`, `X` and `prediction` are the session and tensors from your existing export script.
tf.saved_model.simple_save(
    session,
    export_dir='exported_model',
    inputs={'input': X},
    outputs={'earnings': prediction},
    legacy_init_op=main_op())  # This line is important: pass the op returned by main_op()
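
Before pushing a new version to ML Engine, it is also worth loading the exported SavedModel into a fresh session locally and running one prediction; if all variables restore correctly there, the "uninitialized value" error is not coming from the export itself. A rough sketch (the 'exported_model' directory and the 'input'/'earnings' keys come from the simple_save call above; the dummy zero instance is only a placeholder for one of your real pre-scaled rows):

import numpy as np
import tensorflow as tf

# Load the SavedModel into a brand-new graph/session, which forces variable
# restoration in the same way the ML Engine prediction service does.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'exported_model')
    signature = meta_graph.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    x = sess.graph.get_tensor_by_name(signature.inputs['input'].name)
    y = sess.graph.get_tensor_by_name(signature.outputs['earnings'].name)

    # Dummy instance just to exercise the graph; replace it with a real line
    # from sample_input_prescaled.json. Assumes the feature dimension is static.
    n_features = x.shape.as_list()[1]
    print(sess.run(y, feed_dict={x: np.zeros((1, n_features), dtype=np.float32)}))

You can run a similar check without writing any Python via "saved_model_cli show --dir exported_model --all", which prints the saved signatures and confirms the variables were written alongside the graph.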



