
In my research on image encryption, I use the metrics below to evaluate the level of distortion between the original image ( $I$ ) and its encrypted version ( $I'$ ):

  • MSE (Mean Squared Error)
  • MAE (Mean Absolute Error)
  • PSNR (Peak Signal-to-Noise Ratio)

I’m aware that in the context of image compression or reconstruction, the goal is to minimize MSE and MAE, and maximize PSNR to ensure fidelity.

However, in the context of encryption, the aim is quite the opposite: the encrypted image should be visually and statistically as different as possible from the original image. Therefore, we typically expect:

  • High values of MSE and MAE,
  • Low values of PSNR (generally well below $20 dB$).

Here are the mathematical definitions of the metrics I am using:

$$\mathrm{MAE} = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \left| I(i,j) - I'(i,j) \right|$$

$$\mathrm{PSNR} = 10 \cdot \log_{10} \left( \frac{255^2}{\mathrm{MSE}} \right)$$
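
For completeness, the MSE term inside the PSNR formula follows the same per-pixel averaging (the standard definition):

$$\mathrm{MSE} = \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \left( I(i,j) - I'(i,j) \right)^2$$

In case it helps, this is roughly how I compute all three (a minimal NumPy sketch, assuming both images are equally sized 8-bit grayscale arrays):

import numpy as np

def distortion_metrics(original: np.ndarray, encrypted: np.ndarray):
    """MSE, MAE and PSNR between two equally sized 8-bit images."""
    diff = original.astype(np.float64) - encrypted.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    mae = float(np.mean(np.abs(diff)))
    psnr = 10 * np.log10(255.0 ** 2 / mse)  # undefined if the images are identical (mse == 0)
    return mse, mae, psnr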

My question is: Are there any commonly accepted or theoretically justified ranges for these metrics (MSE, MAE, PSNR) when evaluating image encryption algorithms?

For instance, would a PSNR below $10 dB$ or an MSE above $7000$ be considered appropriate or even desirable in encryption quality evaluation?

  • I'm not sure if I understand your question. Properly encrypted data more or less resemble random data, no matter if the input was text, image, video, compressed data, uncompressed data, ... . For this content-agnostic encryption what you ask makes no sense. If you refer to an encryption which is instead specific to images and does not have these properties, then please make clear what encryption algorithm you refer to. – Steffen Ullrich Apr 12 '25 at 17:49
  • If this goes very deep into cryptography, it might also be a good idea to move this question to the Crypto Stack Exchange. – Ja1024 Apr 12 '25 at 18:57
  • Consensus on Crypto-SE is that "image encryption" (on digital computers, with the objective of making the image unintelligible) needs not be studied as a distinct subject, because modern encryption encrypts any digital data securely. Here are questions about that. Accordingly, reputable cryptography journals (including those of IACR) do not publish articles on image encryption. See this (self ref.) for a discussion on why there are so many articles on image encryption elsewhere. – fgrieu Apr 15 '25 at 19:58
  • @fgrieu Apart from trash publications, OP is researching image encryption and wants to measure how "different" an encrypted image is from the original image. Note that: In typical image processing (like compression or reconstruction), we want the image to stay as close as possible to the original — so we minimize errors.

    But in encryption, it's the opposite: we want the encrypted image to be totally scrambled, looking nothing like the original — so we maximize errors.

    – Mario Apr 17 '25 at 13:58
  • OP's original question is about acceptable metric ranges in image encryption, imagine $I$ is the original image (a cat) and $I'$ is the encrypted image (a jumble of noise). – Mario Apr 17 '25 at 13:58

4 Answers

2

In my research on image encryption, I use the metrics below to evaluate…

You can stop right there: the entire field of "image encryption" is a gigantic scam, and unfortunately whoever misled you into it is in on the scam and probably preying on you (possibly through a cycle of abuse and scamming that they were led into by someone else).

The only legitimate research on "image encryption" is on the sociology and incentive structures and citation metrics that lead to vast swaths of fundamentally bullshit publications in a field that has no technical reason whatsoever to exist.

Modern encryption works on any type of data as the plaintext—text, image, audio, code, or any sequence of bits. There is never any need for encryption algorithms specialized to images.

If there are any patterns in the plaintext that could be detected through the ciphertexts of a putative encryption scheme, such as images versus text, the putative encryption scheme would be broken. The modern standard of security* is ciphertext indistinguishability: adversaries cannot detect such patterns in the ciphertext, even when the plaintexts are of their own choosing.
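
As a toy sketch of that indistinguishability game (illustrative only; the real definition quantifies over all efficient adversaries, and the one-time pad below merely stands in for any secure scheme):

import os
import secrets

def encrypt(message: bytes) -> bytes:
    # One-time pad with a fresh random key: the ciphertext carries no trace of the plaintext.
    pad = os.urandom(len(message))
    return bytes(m ^ p for m, p in zip(message, pad))

def random_guess_adversary(ciphertext: bytes) -> int:
    # With a secure scheme, no adversary (however clever) does better than this coin flip.
    return secrets.randbits(1)

def indistinguishability_round(m0: bytes, m1: bytes) -> bool:
    # The adversary submits two equal-length plaintexts of its choice
    # (say, an image and a block of text); the challenger encrypts one of them.
    assert len(m0) == len(m1)
    b = secrets.randbits(1)
    guess = random_guess_adversary(encrypt(m1 if b else m0))
    return guess == b  # a secure scheme caps the win rate at essentially 1/2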

Every author, journal, publisher, or editor who writes or endorses anything about "image encryption" with a straight face is fundamentally clueless about cryptography. You can immediately discount anything they say about the entire field of cryptography and move on. Journals that publish papers on "image encryption" are practically guaranteed to be predatory scams, publication mills that exist only to create the simulacrum of academic publications with no substance or editorial merit.

Unfortunately, many of these authors are academics at institutions whose funding depends on churning out publications and citation counts. And you may be a student of such an academic, or you may be an academic yourself at such an institution, who is impelled by these incentives to contribute to the bullshit. But a modicum of critical thinking can cut through the bullshit. Don't be a chump: get away from "image encryption" as soon as you can.


* This is the standard for confidentiality of an encryption scheme; there are many types of cryptosystems other than encryption schemes, like authenticated encryption schemes or signature schemes, and different standards of security for them like IND-CCA2 and EUF-CMA and so on.

2

No comparison between the original and the encrypted image is sensible. If the encryption scheme is at all decent, the encrypted image will be indistinguishable from random noise no matter what the original is.

Furthermore, it is impossible to make an image encryption algorithm that is better than generic encryption. Suppose that there existed a function C which encrypted images. We could use this function to encrypt generic data D with no overhead by using C to encrypt the grayscale image in which the nth pixel is the nth byte of D.
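
A sketch of that reduction in NumPy (the `image_cipher` argument is a placeholder for the hypothetical function C; nothing is assumed about it beyond taking a grayscale image):

import numpy as np

def encrypt_bytes_as_image(data: bytes, image_cipher):
    # Lay the nth byte of D out as the nth pixel of a grayscale "image"...
    pixels = np.frombuffer(data, dtype=np.uint8).reshape(1, -1)
    # ...and hand that image to the putative image-encryption function C.
    return image_cipher(pixels)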

Oscar Smith
2

My question is: Are there any commonly accepted or theoretically justified ranges for these metrics (MSE, MAE, PSNR) when evaluating image encryption algorithms?

The problem is that these are emergent properties of the output rather than valid metrics for checking whether image encryption has succeeded. Any ciphertext is expected to be indistinguishable from random within the output domain. However, that doesn't mean that a random-looking ciphertext is necessarily secure: many insecure ciphers produce a well-distributed output.

At best, these values might let you spot a bad cipher. However, you'd expect those values to be produced by a function that maps the output of a known-good cipher to the output domain required for images. So even if this emergent property doesn't look good, the cause may be the mapping function rather than the cipher.
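
As a rough sketch of that idea (assuming the `cryptography` package and an 8-bit grayscale image loaded as a NumPy array `original`), encrypting the raw pixel bytes with a standard cipher and mapping the ciphertext back into the image domain already produces the kind of output the question's metrics would be computed on:

import os
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr_encrypt_image(original: np.ndarray) -> np.ndarray:
    # Encrypt the raw pixel bytes with a standard, known-good cipher...
    key, nonce = os.urandom(32), os.urandom(16)   # throwaway key material for the demo
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(original.tobytes()) + encryptor.finalize()
    # ...then map the ciphertext back into the output domain required for images:
    # same shape, same 8-bit range.
    return np.frombuffer(ciphertext, dtype=np.uint8).reshape(original.shape)

Any MSE/MAE/PSNR numbers computed on such an output say more about this trivial byte-to-pixel mapping than about the quality of the cipher itself.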

Maarten Bodewes
-3

In the context of image encryption, high distortion between the original image ($I$) and its encrypted version ($I'$) is desired. Therefore:

  • ✅ High MSE (e.g., $> 6000$)

  • ✅ High MAE (e.g., $> 100$)

  • ✅ Low PSNR (e.g., $< 10–15 dB$ )

These values suggest that the encrypted image is statistically and visually very different from the original — which is exactly what you want for strong/good encryption.

References Supporting These Ranges:

  • Al-Husainy (2009) – "PSNR below 10 dB and high MSE (≥ 6000) are desirable for secure encryption."

  • Khan et al. (2014) – "Low PSNR (< 10–15 dB) and high MSE indicate effective image encryption."

  • Patidar et al. (2009) – Used MSE ≈ 8000–10000 and PSNR < 10 dB in encrypted images.


Are there any commonly accepted or theoretically justified ranges for these metrics (MSE, MAE, PSNR) when evaluating image encryption algorithms?

There are no strict standards (at least none that I'm aware of), but these ranges are commonly reported in the literature as signs of good encryption.

...For instance, would a PSNR below 10 dB or an MSE above 7000 be considered appropriate or even desirable in encryption quality evaluation?

# Example: Ideal for encryption (grayscale, 8-bit)
MSE = 7000         # high → good
MAE = 120          # high → good
PSNR = 9.6         # low  → good

Note: These metrics are not sufficient alone to assess encryption security. They should be complemented with statistical tests (entropy, histogram analysis, correlation) and cryptographic analyses (key sensitivity, brute-force resistance).
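
For instance, two of those complementary checks (Shannon entropy and adjacent-pixel correlation) can be sketched as follows for an 8-bit grayscale array img (a rough illustration, not a substitute for proper cryptanalysis):

import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))         # close to 8.0 for a well-scrambled 8-bit image

def horizontal_correlation(img: np.ndarray) -> float:
    left = img[:, :-1].ravel().astype(np.float64)
    right = img[:, 1:].ravel().astype(np.float64)
    return float(np.corrcoef(left, right)[0, 1])  # close to 0 for a well-scrambled image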

However, I experimented with this on a similar task (the images shown below) and calculated the metrics with the following Python code:

from PIL import Image
import numpy as np

# Load the image and convert it to grayscale for simplicity
image_path = "/content/task2.PNG"
original_img = Image.open(image_path).convert("L")
original_array = np.array(original_img)

# Simple pixel manipulation standing in for "encryption": invert pixel values
encrypted_array = 255 - original_array  # basic inversion
encrypted_img = Image.fromarray(encrypted_array)

# Save and display the encrypted image
encrypted_image_path = "/content/encrypted_task2.png"
encrypted_img.save(encrypted_image_path)
encrypted_img.show()

# Distortion metrics between the original and the "encrypted" image
diff = original_array.astype(np.float64) - encrypted_array.astype(np.float64)
mse = np.mean(diff ** 2)
mae = np.mean(np.abs(diff))
psnr = 10 * np.log10(255 ** 2 / mse)
print(f"MSE = {mse:.1f}, MAE = {mae:.1f}, PSNR = {psnr:.2f} dB")


Edit:

Clarifying the Use of "Encryption"

@Steffen Ullrich is right: the pixel inversion used above,

encrypted_array = 255 - original_array  # basic inversion

...is not true encryption, but rather a simple deterministic transformation used here for demonstration purposes only.

  • It uses no key, so it's not secure.

  • It's reversible without any secret.

  • It may produce high distortion, but does not offer any confidentiality.

What Proper Image Encryption Involves

Real image encryption should involve:

  • A secret key

  • Nonlinear operations (e.g., substitution-diffusion)

  • Possibly chaotic maps or cryptographic primitives (e.g., AES)

  • Resistance to attacks (statistical, differential, brute-force)

Revised Example (Optional)

If you want to demonstrate real encryption-like behavior with a key, even a basic XOR with a pseudorandom key is a better toy example:

np.random.seed(42)  # for reproducibility
key = np.random.randint(0, 256, size=original_array.shape, dtype=np.uint8)
encrypted_array = np.bitwise_xor(original_array, key)

This:

  • Uses a key

  • Is reversible only with the key (shown below)

  • Resembles real encryption behavior more closely
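
For example, applying the same XOR with the key recovers the image bit for bit, which is what "reversible only with the key" means here:

decrypted_array = np.bitwise_xor(encrypted_array, key)  # XOR is its own inverse
assert np.array_equal(decrypted_array, original_array)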

TL;DR:

Pixel inversion is not real encryption; it was used only to highlight the behavior of distortion metrics. For actual encryption evaluation, use key-dependent, secure transformations. Some authors demonstrate these metrics using example images, but it's essential to use key-based transformations for any realistic evaluation.


outputs:

Fig. 1: Original input image ($I$). Fig. 2: After pixel manipulation [encrypted image ($I'$)].
Mario