
How do I display custom images in TensorBoard using Keras?

I am working on a segmentation problem in Keras and want to display segmentation results at the end of every training epoch.

I want something similar to Tensorflow: How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots), but using Keras. I know Keras has the TensorBoard callback, but it seems limited for this purpose.

I know this would break the Keras backend abstraction, but I'm interested in using the TensorFlow backend anyway.

Is it possible to achieve this with Keras + TensorFlow?

13 · Fábio Perez

So the following solution works well for me:

import tensorflow as tf
import keras
from skimage import data
import skimage.util

def make_image(tensor):
    """
    Convert an numpy representation image to Image protobuf.
    Copied from https://github.com/lanpa/tensorboard-pytorch/
    """
    from PIL import Image
    height, width, channel = tensor.shape
    image = Image.fromarray(tensor)
    import io
    output = io.BytesIO()
    image.save(output, format='PNG')
    image_string = output.getvalue()
    output.close()
    return tf.Summary.Image(height=height,
                         width=width,
                         colorspace=channel,
                         encoded_image_string=image_string)

class TensorBoardImage(keras.callbacks.Callback):
    def __init__(self, tag):
        super().__init__() 
        self.tag = tag

    def on_epoch_end(self, epoch, logs=None):
        # Load image
        img = data.astronaut()
        # Do something to the image
        img = (255 * skimage.util.random_noise(img)).astype('uint8')

        image = make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        return

tbi_callback = TensorBoardImage('Image Example')

Just pass the callback to fit or fit_generator.
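
For instance (a minimal sketch; `model`, `x_train`, and `y_train` stand in for your own model and training data; the callbacks argument expects a list):

model.fit(x_train, y_train,
          epochs=10,
          callbacks=[tbi_callback])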

Note that you can also perform operations with the model variable inside the callback. For example, you can run the model on some images to check its performance.

[Screenshot: the logged image shown in TensorBoard]
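
A minimal sketch of the idea mentioned above, i.e. running self.model on held-out data inside the callback and logging the prediction. The TensorBoardPrediction class and the validation_images array are hypothetical; the sketch reuses make_image from above and assumes the model outputs one map per image scaled to [0, 1]:

class TensorBoardPrediction(keras.callbacks.Callback):
    def __init__(self, validation_images, log_dir='./logs'):
        super().__init__()
        self.validation_images = validation_images
        self.writer = tf.summary.FileWriter(log_dir)

    def on_epoch_end(self, epoch, logs=None):
        # self.model is set by Keras before training starts
        preds = self.model.predict(self.validation_images)
        pred_img = (255 * preds[0]).astype('uint8')
        # make_image() expects an (H, W, C) array, so expand grayscale maps to 3 channels
        if pred_img.ndim == 2:
            pred_img = pred_img[..., None]
        if pred_img.shape[-1] == 1:
            pred_img = pred_img.repeat(3, axis=-1)
        summary = tf.Summary(value=[
            tf.Summary.Value(tag='prediction', image=make_image(pred_img))
        ])
        self.writer.add_summary(summary, epoch)
        self.writer.flush()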

19 · Fábio Perez

Along the same lines, you might want to try tf-matplotlib. Here is a scatter plot:

import tensorflow as tf
import numpy as np

import tfmpl

@tfmpl.figure_tensor
def draw_scatter(scaled, colors): 
    '''Draw scatter plots. One for each color.'''  
    figs = tfmpl.create_figures(len(colors), figsize=(4,4))
    for idx, f in enumerate(figs):
        ax = f.add_subplot(111)
        ax.axis('off')
        ax.scatter(scaled[:, 0], scaled[:, 1], c=colors[idx])
        f.tight_layout()

    return figs

with tf.Session(graph=tf.Graph()) as sess:

    # A point cloud that can be scaled by the user
    points = tf.constant(
        np.random.normal(loc=0.0, scale=1.0, size=(100, 2)).astype(np.float32)
    )
    scale = tf.placeholder(tf.float32)        
    scaled = points*scale

    # Note, `scaled` above is a tensor. It's being passed to `draw_scatter` below.
    # However, when `draw_scatter` is invoked, the tensor will have been evaluated and a
    # numpy array representing its content is provided.
    image_tensor = draw_scatter(scaled, ['r', 'g'])
    image_summary = tf.summary.image('scatter', image_tensor)      
    all_summaries = tf.summary.merge_all() 

    writer = tf.summary.FileWriter('log', sess.graph)
    summary = sess.run(all_summaries, feed_dict={scale: 2.})
    writer.add_summary(summary, global_step=0)

When run, this produces the corresponding scatter plots in TensorBoard.

Note that tf-matplotlib takes care of evaluating any tensor inputs, avoids pyplot threading issues, and supports blitting for runtime-critical plotting.

2 · cheind

I was trying to display matplotlib plots in TensorBoard (useful for plotting statistics, heatmaps, etc.). It can be used for the general case as well.

import tensorflow as tf
import tfmpl
import keras
from keras import backend as K

class AttentionLogger(keras.callbacks.Callback):
    def __init__(self, val_data, logsdir):
        super(AttentionLogger, self).__init__()
        self.logsdir = logsdir  # where the event files will be written
        self.validation_data = val_data  # validation data generator
        self.writer = tf.summary.FileWriter(self.logsdir)  # creating the summary writer

    @tfmpl.figure_tensor
    def attention_matplotlib(self, gen_images):
        '''
        Creates a matplotlib figure and writes it to tensorboard using tf-matplotlib
        gen_images: the image tensor of shape (batchsize, width, height, channels) you want to write to tensorboard
        '''
        r, c = 5, 5  # write 25 images as a 5x5 matplotlib subplot grid in TensorBoard
        figs = tfmpl.create_figures(1, figsize=(15, 15))
        cnt = 0
        for idx, f in enumerate(figs):
            for i in range(r):
                for j in range(c):
                    ax = f.add_subplot(r, c, cnt + 1)
                    ax.set_yticklabels([])
                    ax.set_xticklabels([])
                    ax.imshow(gen_images[cnt])  # writes the image at index cnt to the 5x5 grid
                    cnt += 1
            f.tight_layout()
        return figs

    def on_train_begin(self, logs=None):  # when training begins (run only once)
        image_summary = []  # list of the summaries needed (can be scalars, images, histograms, etc.)
        for index in range(len(self.model.output)):  # self.model is accessible within the callback
            img_sum = tf.summary.image('img{}'.format(index), self.attention_matplotlib(self.model.output[index]))
            image_summary.append(img_sum)
        self.total_summary = tf.summary.merge(image_summary)

    def on_epoch_end(self, epoch, logs=None):  # run at the end of each epoch
        logs = logs or {}
        x, y = next(self.validation_data)  # get data from the generator
        # get the backend session and run the merged summary with the appropriate feed_dict
        sess_run_summary = K.get_session().run(self.total_summary, feed_dict={self.model.input: x['encoder_input']})
        self.writer.add_summary(sess_run_summary, global_step=epoch)  # finally write the summary!

Then you need to pass it as an argument to fit/fit_generator:

#val_generator is the validation data generator
callback_image = AttentionLogger(logsdir='./tensorboard', val_data=val_generator)
... # define the model and generators

# autoencoder is the model; note how the callback is supplied to fit_generator
autoencoder.fit_generator(generator=train_generator,
                          validation_data=val_generator,
                          callbacks=[callback_image])

In my case, where I display attention maps (as heatmaps) in TensorBoard, this is the output:

[Screenshot: attention heatmaps in TensorBoard]

2 · Lokesh Kumar

Based on the answers above and my own searching, I provide the following code to accomplish the following things using TensorBoard in Keras:


  • problem statement: predicting the disparity map in binocular stereo matching;
  • feeding the model with the input left image x and the ground-truth disparity map gt;
  • displaying the input x and the ground truth gt at some iteration;
  • displaying the output y of your model at some iteration.

  1. First, you have to create your customized callback class derived from Callback. Note that a callback has access to its associated model through the class property self.model. Also note: you have to feed the input to the model with feed_dict if you want to retrieve and display the output of your model.

    from keras.callbacks import Callback
    import numpy as np
    from keras import backend as K
    import tensorflow as tf
    import cv2

    # Make the 1-channel input image or disparity map look good within this color map.
    # This function is not necessary for the TensorBoard problem shown above; it is just a
    # helper used in my own research project.
    def colormap_jet(img):
        return cv2.cvtColor(cv2.applyColorMap(np.uint8(img), 2), cv2.COLOR_BGR2RGB)

    class customModelCheckpoint(Callback):
        def __init__(self, log_dir='./logs/tmp/', feed_inputs_display=None):
            super(customModelCheckpoint, self).__init__()
            self.seen = 0
            self.feed_inputs_display = feed_inputs_display
            self.writer = tf.summary.FileWriter(log_dir)

        # This function sets the feeding data for TensorBoard visualization;
        # arguments:
        #  * feed_inputs_display : [(input_yourModelNeed, disparity_gt, left_image), ...], i.e., a list
        #    of tuples of numpy arrays: what your model needs as input and what you want to display
        #    using TensorBoard. Note: you have to feed the input to the model with feed_dict if you
        #    want to get and display the output of your model.
        def custom_set_feed_input_to_display(self, feed_inputs_display):
            self.feed_inputs_display = feed_inputs_display

        # copied from the above answers;
        def make_image(self, numpy_img):
            from PIL import Image
            import io
            height, width, channel = numpy_img.shape
            image = Image.fromarray(numpy_img)
            output = io.BytesIO()
            image.save(output, format='PNG')
            image_string = output.getvalue()
            output.close()
            return tf.Summary.Image(height=height, width=width, colorspace=channel,
                                    encoded_image_string=image_string)

        # A callback has access to its associated model through the class property self.model.
        def on_batch_end(self, batch, logs=None):
            logs = logs or {}
            self.seen += 1
            if self.seen % 200 == 0:  # every 200 iterations or batches, plot the customized images using TensorBoard
                summary_str = []
                for i in range(len(self.feed_inputs_display)):
                    feature, disp_gt, imgl = self.feed_inputs_display[i]
                    disp_pred = np.squeeze(K.get_session().run(self.model.output,
                                                               feed_dict={self.model.input: feature}), axis=0)
                    # disp_pred = np.squeeze(self.model.predict_on_batch(feature), axis=0)
                    summary_str.append(tf.Summary.Value(tag='plot/img0/{}'.format(i),
                                                        image=self.make_image(colormap_jet(imgl))))  # colormap_jet() is defined above
                    summary_str.append(tf.Summary.Value(tag='plot/disp_gt/{}'.format(i),
                                                        image=self.make_image(colormap_jet(disp_gt))))
                    summary_str.append(tf.Summary.Value(tag='plot/disp/{}'.format(i),
                                                        image=self.make_image(colormap_jet(disp_pred))))

                self.writer.add_summary(tf.Summary(value=summary_str), global_step=self.seen)
    
  2. Next, pass this callback object to fit_generator() for your model, e.g.:

       feed_inputs_4_display = some_function_you_wrote()
       callback_mc = customModelCheckpoint(log_dir=log_save_path, feed_inputs_display=feed_inputs_4_display)
       # or
       callback_mc.custom_set_feed_input_to_display(feed_inputs_4_display)
       yourModel.fit_generator(..., callbacks=[callback_mc])
       ...
    
  3. Now you can run the code and go to the TensorBoard host to see the customized image display. For example, this is what I got using the code above: [Screenshot: customized images in TensorBoard]


    Done! Enjoy!

1 · ccj5351

import pickle
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.callbacks import Callback, TensorBoard

class customModelCheckpoint(Callback):
    def __init__(self, log_dir='../logs/', feed_inputs_display=None):
        super(customModelCheckpoint, self).__init__()
        self.seen = 0
        self.feed_inputs_display = feed_inputs_display
        self.writer = tf.summary.FileWriter(log_dir)

    def custom_set_feed_input_to_display(self, feed_inputs_display):
        self.feed_inputs_display = feed_inputs_display

    # A callback has access to its associated model through the class property self.model.
    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.seen += 1
        if self.seen % 8 == 0:  # every 8 batches, plot the customized images using TensorBoard
            summary_str = []
            feature = self.feed_inputs_display[0][0]
            disp_gt = self.feed_inputs_display[0][1]
            disp_pred = self.model.predict_on_batch(feature)

            summary_str.append(tf.summary.image('disp_input/{}'.format(self.seen), feature, max_outputs=4))
            summary_str.append(tf.summary.image('disp_gt/{}'.format(self.seen), disp_gt, max_outputs=4))
            summary_str.append(tf.summary.image('disp_pred/{}'.format(self.seen), disp_pred, max_outputs=4))

            summary_st = tf.summary.merge(summary_str)
            summary_s = K.get_session().run(summary_st)
            self.writer.add_summary(summary_s, global_step=self.seen)
            self.writer.flush()

# a and b come from the data generator below; aug, args, fr1, fr2 and model are defined elsewhere
callback_mc = customModelCheckpoint(log_dir='../logs/', feed_inputs_display=[(a, b)])
callback_tb = TensorBoard(log_dir='../logs/', histogram_freq=0, write_graph=True, write_images=True)
callback = []

def data_gen(fr1, fr2):
    while True:
        hdr_arr = []
        ldr_arr = []
        for i in range(args['batch_size']):
            try:
                ldr = pickle.load(fr2)
                hdr = pickle.load(fr1)
            except EOFError:
                fr1 = open(args['data_h_hdr'], 'rb')
                fr2 = open(args['data_h_ldr'], 'rb')
            hdr_arr.append(hdr)
            ldr_arr.append(ldr)
        hdr_h = np.array(hdr_arr)
        ldr_h = np.array(ldr_arr)
        gen = aug.flow(hdr_h, ldr_h, batch_size=args['batch_size'])
        out = gen.next()
        a = out[0]
        b = out[1]
        callback_mc.custom_set_feed_input_to_display(feed_inputs_display=[(a, b)])
        yield [a, b]

callback.append(callback_tb)
callback.append(callback_mc)
H = model.fit_generator(data_gen(fr1, fr2), steps_per_epoch=100, epochs=args['epoch'], callbacks=callback)

[Screenshot: resulting images in TensorBoard]

0 · ziyi liu

Here is an example of how to draw landmarks on an image:

import numpy as np
import tensorflow as tf
import keras

class CustomCallback(keras.callbacks.Callback):
    def __init__(self, model, generator):
        super().__init__()
        self.generator = generator
        self.model = model

    def tf_summary_image(self, tensor):
        import io
        from PIL import Image

        tensor = tensor.astype(np.uint8)

        height, width, channel = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channel,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        frames_arr, landmarks = next(self.generator)

        # Take just the 1st sample from the batch
        frames_arr = frames_arr[0:1, ...]

        y_pred = self.model.predict(frames_arr)

        # Get the last frame for which we have made predictions
        img = frames_arr[0, -1, :, :]

        img = img * 255
        img = img[:, :, ::-1]
        img = np.copy(img)

        landmarks_gt = landmarks[-1].reshape(-1, 2)
        landmarks_pred = y_pred.reshape(-1, 2)

        # draw_landmarks is a user-defined helper; a possible implementation is sketched below
        img = draw_landmarks(img, landmarks_gt, (0, 255, 0))
        img = draw_landmarks(img, landmarks_pred, (0, 0, 255))

        image = self.tf_summary_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='landmarks', image=image)])  # a tag lets TensorBoard group the images
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()
        return
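
The draw_landmarks helper used above is not defined in this answer; a minimal sketch of what it could look like with OpenCV, assuming the landmarks are pixel (x, y) pairs and img is a standard HxWx3 array:

import cv2

def draw_landmarks(img, landmarks, color):
    """Draw one small filled circle per (x, y) landmark onto a copy of img."""
    img = img.copy()
    for x, y in landmarks:
        cv2.circle(img, (int(round(x)), int(round(y))), 2, color, -1)
    return img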
0 · mrgloom

I think I found a better way to log such custom images to TensorBoard using tf-matplotlib. Here is how...

import tensorflow as tf

class TensorBoardDTW(tf.keras.callbacks.TensorBoard):
    def __init__(self, **kwargs):
        super(TensorBoardDTW, self).__init__(**kwargs)
        self.dtw_image_summary = None

    def _make_histogram_ops(self, model):
        super(TensorBoardDTW, self)._make_histogram_ops(model)
        tf.summary.image('dtw-cost', create_dtw_image(model.output))

You only need to override the _make_histogram_ops method of the TensorBoard callback class to add the custom summary. In my case, create_dtw_image is a function that creates an image using tf-matplotlib.
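
create_dtw_image itself is not included in the answer; a rough sketch of what such a function could look like, following the tf-matplotlib pattern shown earlier in this thread (the heatmap content is purely illustrative and assumes the input evaluates to a 2-D cost matrix):

import tfmpl

@tfmpl.figure_tensor
def create_dtw_image(cost_matrix):
    '''Render the DTW cost matrix (evaluated to a numpy array by tf-matplotlib) as a heatmap.'''
    figs = tfmpl.create_figures(1, figsize=(6, 6))
    for f in figs:
        ax = f.add_subplot(111)
        ax.imshow(cost_matrix, aspect='auto', origin='lower')
        ax.set_title('DTW cost')
        f.tight_layout()
    return figs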

Regards.