You can extend the base Keras `tf.keras.callbacks.Callback` class with a custom `on_epoch_end` method that compares your metric of interest against a threshold and stops training early once it is reached.
The linked article provides a code sample with a custom callback class and a call to `model.fit` that uses it:
```python
import tensorflow as tf

ACCURACY_THRESHOLD = 0.95

# Callback that stops training when accuracy reaches ACCURACY_THRESHOLD
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # With metrics=['accuracy'], the log key is 'accuracy' in TF 2.x ('acc' in older versions)
        if logs.get('accuracy', 0) > ACCURACY_THRESHOLD:
            print("\nReached %2.2f%% accuracy, so stopping training!!" % (ACCURACY_THRESHOLD * 100))
            self.model.stop_training = True

# Instantiate a callback object
callbacks = myCallback()

# Load the Fashion MNIST dataset
mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale pixel values to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build a simple fully connected model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=20, callbacks=[callbacks])
```
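
Note that this stops on the training-set accuracy. If you would rather stop once validation accuracy crosses the threshold, the same pattern works by passing validation data to `model.fit` and reading the `val_accuracy` key instead. Here is a minimal sketch; the class name `ValAccuracyStop` is just illustrative, and the `val_accuracy` key name assumes TF 2.x metric naming with `metrics=['accuracy']`:

```python
# Minimal sketch (assumes TF 2.x log key names): stop on validation accuracy
class ValAccuracyStop(tf.keras.callbacks.Callback):
    def __init__(self, threshold=0.95):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        # 'val_accuracy' is only present when validation data is supplied to fit()
        val_acc = (logs or {}).get('val_accuracy')
        if val_acc is not None and val_acc > self.threshold:
            print("\nReached %.2f%% validation accuracy, stopping training." % (self.threshold * 100))
            self.model.stop_training = True

model.fit(x_train, y_train,
          epochs=20,
          validation_data=(x_test, y_test),
          callbacks=[ValAccuracyStop(0.90)])
```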
Check the linked article for more details:
https://towardsdatascience.com/neural-network-with-tensorflow-how-to-stop-training-using-callback-5c8d575c18a9