asked    York     2018-07-10       python       273 views        1 Answer

[SOLVED] Error: 'NoneType' object has no attribute '_inbound_nodes'

I am trying to build a parallel ANN network (see the image linked below [1]). My plan is to:

  • Input a 120x120 image.
  • Split it into nine 40x40 tiles (see the NumPy sketch after this list).
  • Run a convolutional net on each tile.
  • Merge the outputs in the same spatial pattern.
  • Run another conv net on the merged layer.
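
To make the tiling concrete, here is a small NumPy-only sketch of the split/merge pattern I have in mind (illustrative only; the actual network works on Keras tensors):

    # Illustrative tiling check in plain NumPy: split a 120x120 array into
    # nine 40x40 tiles and stitch them back together in the same layout.
    import numpy as np

    img = np.arange(120 * 120).reshape(120, 120)
    tiles = [[img[i:i + 40, j:j + 40] for j in range(0, 120, 40)]
             for i in range(0, 120, 40)]        # 3x3 grid of 40x40 tiles
    rebuilt = np.block(tiles)                   # merge in the same pattern
    assert np.array_equal(rebuilt, img)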

import keras
from keras.layers import Input, Dense, Conv2D, Flatten
from keras.models import Model

def conv_net():
    input_shape = [120,120,1]
    inp=Input(shape=input_shape)
    print(type(inp))
    print(inp.shape)
    row_layers  = []
    col_layers  = []

    # fn = lambda x: self.conv(x)
    for i in range(0, 120, 40):
        row_layers = []

        for j in range(0, 120, 40):
            # out = (self.conv(inp[:,i:i+39,j:j+39]))
            # plain tensor slicing -- this is the line that triggers the error below
            inputs = inp[:, i:i + 40, j:j + 40]

            x = Dense(64, activation='relu')(inputs)
            out = Dense(64, activation='relu')(x)
            print(out.shape)
            row_layers.append(out)
        col_layers.append(keras.layers.concatenate(row_layers, axis=2))
        print((len(col_layers)))
    merged = keras.layers.concatenate(col_layers, axis=1)
    print(merged.shape)
    con = Conv2D(1, kernel_size=5, strides=2, padding='same', activation='relu')(merged)
    print(con.shape)
    output = Flatten()(con)
    output = Dense(1)(output)
    print(output.shape)

    model = Model(inputs=inp, outputs=output)
    # plot_model(model,to_file='model.png')
    return model

I am getting the error 'NoneType' object has no attribute '_inbound_nodes'.

I debugged a little, and the error is caused by this line:

inputs = inp[:,i:i+40,j:j+40]

Error:

Traceback (most recent call last):
  File "C:/Users/Todd Letcher/machine_learning_examples/unsupervised_class3/slicing_img.py", line 83, in <module>
    conv_net()
  File "C:/Users/Todd Letcher/machine_learning_examples/unsupervised_class3/slicing_img.py", line 80, in conv_net
    model = Model(inputs=inp, outputs = output)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 91, in __init__
    self._init_graph_network(*args, **kwargs)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 235, in _init_graph_network
    self.inputs, self.outputs)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1406, in _map_graph_network
    tensor_index=tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1393, in build_map
    node_index, tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1393, in build_map
    node_index, tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1393, in build_map
    node_index, tensor_index)
  File "C:\Users\Todd Letcher\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1365, in build_map
    node = layer._inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
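
For reference, here is a stripped-down sketch of the problem (just one slice feeding a single Dense head); as far as I can tell it fails in exactly the same way:

    # Minimal reproduction sketch (illustrative): slicing the Input tensor with
    # plain indexing gives a raw backend tensor, and building the Model then
    # fails with the same '_inbound_nodes' AttributeError.
    from keras.layers import Input, Dense, Flatten
    from keras.models import Model

    inp = Input(shape=(120, 120, 1))
    tile = inp[:, 0:40, 0:40]                 # plain slicing, no Keras layer involved
    out = Dense(1)(Flatten()(tile))
    model = Model(inputs=inp, outputs=out)    # raises the AttributeError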

Any help is appreciated. Thank you.

P.S.: If I remove the slicing line inp[:,i:i+39,j:j+39], the code runs fine.

The image shows what I intend to do. The only difference is that I want to split the image into 9 tiles; in the image, the same input is fed to all of the parallel conv nets.

[1]: https://i.stack.imgur.com/Z7nt0.png

  1 Answer  

        answered    Tammy     2018-07-10      

I finally arrived at an answer. Although I am still wondering why my previous code threw the error, I simply added a Lambda layer to do the slicing.

import keras
from keras.layers import Input, Conv2D, Dense, Flatten, Lambda, BatchNormalization
from keras.models import Model
from keras.utils import plot_model

def conv_net(self):  # add dropout if overfitting
    input_shape = [120, 120, 1]

    inp = Input(shape=input_shape)
    col_layers = []

    # slice one 40x40 tile out of the input; wrapped in a Lambda below so that
    # Keras treats the slicing as a layer in the graph
    def sliced(x, i, j):
        return x[:, i:i + 40, j:j + 40]

    for i in range(0, 120, 40):
        row_layers = []

        for j in range(0, 120, 40):
            # out = (self.conv(inp[:,i:i+39,j:j+39]))
            inputs = Lambda(sliced, arguments={'i': i, 'j': j})(inp)
            # inputs = Input(shape=input_shape_small)

            out = self.conv(inputs)   # self.conv builds the per-tile conv branch
            print(out.shape)
            row_layers.append(out)

        col_layers.append(keras.layers.concatenate(row_layers, axis=2))
        print(len(col_layers))

    merged = keras.layers.concatenate(col_layers, axis=1)
    print(merged.shape)

    # merged = Reshape((3,3,1))(merged)
    print(merged.shape)

    con = Conv2D(1, kernel_size=5, strides=2, padding='same', activation='relu')(merged)
    con = BatchNormalization(momentum=0.8)(con)
    print(con.shape)
    # con = Conv2D(1,kernel_size=5,strides=2,padding='same',activation='relu')(inp)
    output = Flatten()(con)
    output = Dense(1)(output)
    print(output.shape)

    model = Model(inputs=inp, outputs=output)
    # plot_model(model, to_file='model.png')
    print(model.summary())

    plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
    return model

This works with no errors.
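
My best guess at why the original code failed: the plain slice inp[:, i:i+40, j:j+40] runs a raw backend op, so the resulting tensor carries no Keras layer metadata, and Model() hits a None layer when it traces the graph back to the Input. Wrapping the slice in a Lambda turns it into a real layer in the graph. For anyone who wants to try it, a quick usage sketch (net below stands in for an instance of my class, and the compile settings are only placeholders):

    # Illustrative usage with placeholder names and settings: build the model,
    # compile it, and sanity-check it on random data.
    import numpy as np

    model = net.conv_net()                    # 'net' is an instance of my class
    model.compile(optimizer='adam', loss='mse')
    dummy_x = np.random.rand(4, 120, 120, 1)
    dummy_y = np.random.rand(4, 1)
    model.fit(dummy_x, dummy_y, epochs=1, batch_size=2)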




