
Keras seq2seq - word embedding

I am working on a generative chatbot based on seq2seq in Keras. I used code from this site: https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/

My models look like this:

from keras.models import Model
from keras.layers import Input, LSTM, Dense

# define training encoder
encoder_inputs = Input(shape=(None, n_input))
encoder = LSTM(n_units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

# define training decoder
decoder_inputs = Input(shape=(None, n_output))
decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(n_output, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# define inference encoder
encoder_model = Model(encoder_inputs, encoder_states)

# define inference decoder
decoder_state_input_h = Input(shape=(n_units,))
decoder_state_input_c = Input(shape=(n_units,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)

This neural network is designed to work with one-hot vectors. For example, the input to this network looks like this:

[[[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]]

 [[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]]]
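
For reference, a batch like this can be built from integer-encoded sentences with keras.utils.to_categorical. A minimal sketch (the vocabulary size of 51 and the word indices below are assumptions, chosen to reproduce the array above):

import numpy as np
from keras.utils import to_categorical

vocab_size = 51  # assumed: matches the 51-dimensional vectors above

# two sentences of three word indices each (made-up example data)
sentences = np.array([[4, 7, 2],
                      [4, 19, 19]])

# shape (2, 3, 51): a batch of one-hot encoded time steps
one_hot = to_categorical(sentences, num_classes=vocab_size)
print(one_hot.shape)  # (2, 3, 51)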

How can I rebuild these models to work with words? I would like to use a word embedding layer, but I have no idea how to connect an embedding layer to these models.

My input should be [[1,5,6,7,4], [4,5,7,5,4], [7,5,4,2,1]], where the integers are representations of words.
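
For concreteness, index sequences of this shape are usually produced with Keras's Tokenizer and pad_sequences. A minimal sketch (the corpus and the fixed length are made-up assumptions):

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

texts = ["how are you", "i am fine thanks", "see you later"]  # made-up corpus

tokenizer = Tokenizer()            # index 0 stays reserved for padding
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

sentenceLength = 5                 # assumed fixed sentence length
padded = pad_sequences(sequences, maxlen=sentenceLength, padding='post')
vocab_size = len(tokenizer.word_index) + 1   # +1 for the padding index
print(padded)                      # rows of integer word indices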

I have tried everything, but I still keep getting errors. Can you please help me?


I finally got it working. Here is the code:

from keras.models import Model
from keras.layers import Input, LSTM, Dense, Embedding

# shared embedding layer: maps integer word indices to dense vectors;
# 'embedding' holds the embedding dimensionality
Shared_Embedding = Embedding(output_dim=embedding, input_dim=vocab_size, name="Embedding")

# training encoder: embeds the input sentence and keeps the final LSTM states
encoder_inputs = Input(shape=(sentenceLength,), name="Encoder_input")
encoder = LSTM(n_units, return_state=True, name='Encoder_lstm')
Word_embedding_context = Shared_Embedding(encoder_inputs)
encoder_outputs, state_h, state_c = encoder(Word_embedding_context)
encoder_states = [state_h, state_c]
decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True, name="Decoder_lstm")

# training decoder: initialised with the encoder states, softmax over the vocabulary
decoder_inputs = Input(shape=(sentenceLength,), name="Decoder_input")
Word_embedding_answer = Shared_Embedding(decoder_inputs)
decoder_outputs, _, _ = decoder_lstm(Word_embedding_answer, initial_state=encoder_states)
decoder_dense = Dense(vocab_size, activation='softmax', name="Dense_layer")
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# inference encoder
encoder_model = Model(encoder_inputs, encoder_states)

# inference decoder: the states are fed in explicitly at each call
decoder_state_input_h = Input(shape=(n_units,), name="H_state_input")
decoder_state_input_c = Input(shape=(n_units,), name="C_state_input")
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(Word_embedding_answer, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)

decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)

"model" ist ein Trainingsmodell. encoder_model und decoder_model sind Inferenzmodelle


At the bottom, in the FAQ section of this example, you will find an example of using embeddings with seq2seq. I am still figuring out the inference step myself; I will post here once I have it. https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html
