
When training my CNN model, the prediction results depend on the random initialization of the weights: with the same training and test data, I get different results every time I run the code. By tracking the loss, I can tell whether a run will be acceptable. Based on this, I want to know if there is a way to stop training when the loss starts above a desired value, so that I can re-run the model with a fresh initialization. The min_delta parameter of EarlyStopping does not handle this case.

Thanks in advance

1 Answer


You can extend the base Keras implementation of callbacks with a custom on_epoch_end method which compares your metric of interest against a threshold for early stopping.

The linked article provides a code sample with a custom callback class and shows how to pass it to model.fit:

import tensorflow as tf

# Implement a callback to stop training
# when accuracy reaches ACCURACY_THRESHOLD
ACCURACY_THRESHOLD = 0.95

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Note: with metrics=['accuracy'], the key in logs is
        # 'accuracy' (the original article used 'acc').
        if logs.get('accuracy', 0) > ACCURACY_THRESHOLD:
            print("\nReached %2.2f%% accuracy, so stopping training!" % (ACCURACY_THRESHOLD * 100))
            self.model.stop_training = True

# Instantiate a callback object
callbacks = myCallback()

# Load the Fashion-MNIST dataset
mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale data to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build a simple dense model (the article calls it a conv dnn,
# but it contains only Flatten and Dense layers)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=20, callbacks=[callbacks])
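The same pattern can be adapted to your exact case: instead of stopping on high accuracy, stop when the loss after the first epoch is still above a ceiling, so you can re-run with a fresh initialization. A minimal sketch (the class name, the LOSS_CEILING value, and the first-epoch-only check are my own choices, not from the article):

```python
import tensorflow as tf

LOSS_CEILING = 2.0  # assumed value; pick one that suits your problem

class StopOnHighInitialLoss(tf.keras.callbacks.Callback):
    """Abort the run if the loss after epoch 0 exceeds a ceiling."""

    def __init__(self, ceiling=LOSS_CEILING):
        super().__init__()
        self.ceiling = ceiling

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        loss = logs.get('loss')
        # Only inspect the very first epoch: if the loss starts too
        # high, stop so the model can be re-initialized and re-run.
        if epoch == 0 and loss is not None and loss > self.ceiling:
            print(f"\nInitial loss {loss:.4f} > {self.ceiling}, stopping this run.")
            self.model.stop_training = True
```

Then pass it the same way: model.fit(x_train, y_train, epochs=20, callbacks=[StopOnHighInitialLoss()]). You could also check a few early epochs instead of just the first if your loss is noisy.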

Check the linked article for more details:

https://towardsdatascience.com/neural-network-with-tensorflow-how-to-stop-training-using-callback-5c8d575c18a9

Brandon Donehoo