
I have typically heard computer performance discussed in terms of FLOPS.

However, I have recently seen multiple references instead using OPS, i.e. operations per second, typically in the context of Big Data.

What is the difference between FLOPS and OPS? Why use one over the other?

I appreciate that they are both metrics for measuring performance, but are there cases where non-floating-point operations have large overheads or are non-negligible compared to floating-point operations? What is an example of a non-floating-point operation?

Thanks

user1887919

1 Answer


What is the difference between FLOPS and OPS?

  • FLOPS is floating-point operations per second
  • OPS is operations per second

The difference should be obvious from the names: one counts all operations per second, the other counts only floating-point operations per second.

Why use one over the other?

If you want to know the floating-point performance, you would measure FLOPS; if you want to know the performance over all kinds of operations, you would measure OPS.
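
To make that concrete, here is a rough Python sketch of how one might estimate each figure by timing a floating-point loop and an integer loop. The workload sizes are arbitrary, and in an interpreted language the result mostly reflects interpreter overhead rather than the hardware's peak rate; serious FLOPS figures come from compiled benchmarks such as LINPACK.

    import time

    N = 10_000_000  # iteration count, chosen arbitrarily for illustration

    # Floating-point workload: one multiply and one add per iteration.
    acc = 0.0
    start = time.perf_counter()
    for _ in range(N):
        acc = acc * 0.5 + 1.0           # 2 floating-point operations
    float_time = time.perf_counter() - start
    print(f"~{2 * N / float_time:,.0f} floating-point operations per second (FLOPS)")

    # Integer workload: one add and one bitwise AND per iteration.
    iacc = 0
    start = time.perf_counter()
    for _ in range(N):
        iacc = (iacc + 12345) & 0xFFFF  # 2 integer operations
    int_time = time.perf_counter() - start
    print(f"~{2 * N / int_time:,.0f} integer operations per second (OPS)")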

Floating-point operations are just not terribly interesting for most use cases. In fact, in the past, floating-point operations were implemented on a separate chip sitting in its own socket on the motherboard. This was done for two reasons. First, floating-point operations are pretty complex, slow, and power-hungry, so it was simply not practical to put a full Floating-Point Unit (FPU) on the same die as the CPU. Second, only a few people needed high floating-point performance, so a separate chip let them buy an FPU only if they actually needed it, while everybody else avoided wasting money, complexity, and power on an FPU they would rarely use.

FLOPS are just not a terribly interesting metric for most use cases. Both parts of the metric, actually: the FLO part (floating-point) and the PS part (time).

If you are building a supercomputer for military applications, then yes, FLOPS is interesting to you. However, if you are not building a supercomputer, it is highly likely that you don't actually care about floating-point operations at all. And even if you are building a supercomputer for a company, you do care about floating-point operations, but you care even more about floating-point operations per dollar (cost), per watt (not just energy cost, but also thermal management, cooling, waste heat, etc.), and per cubic meter (rack space, real estate, property taxes, etc.)
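
To illustrate those derived metrics, here is a back-of-the-envelope calculation with entirely made-up numbers for a hypothetical machine (none of these figures describe any real system):

    # All numbers below are invented purely to show how the derived metrics work.
    peak_flops  = 2.0e15   # hypothetical peak: 2 PFLOPS
    price_usd   = 5.0e6    # hypothetical purchase price
    power_watts = 1.0e6    # hypothetical power draw: 1 MW
    volume_m3   = 50.0     # hypothetical rack volume

    print(f"{peak_flops / price_usd:.2e} FLOPS per dollar")
    print(f"{peak_flops / power_watts:.2e} FLOPS per watt")
    print(f"{peak_flops / volume_m3:.2e} FLOPS per cubic meter")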

Really, only the military cares about brute-force performance with no regard to cost, energy, or size.

For my mobile phone, I care about performance-per-cost, performance-per-watt (both battery life and heat), and of course size. For my desktop, size is a little less important, but cost and energy still are. (And who has desktops anymore?) Even extreme gamers care about waste heat and thermal management!

Crypto miners are all about performance per watt, since energy dominates the cost of mining. That's why regions with lots of wind, solar, hydro, and geothermal energy are popular with miners. (Or regions with less-than-strict environmental laws – apparently, miners have bought or leased and reactivated coal and gas plants that were in the process of being shut down in favor of alternative energy sources.)

What is an example of a non-floating point operation?

  • Integer operations
  • Fixed-point operations
  • Rational operations
  • Complex operations
  • Decimal operations
  • Money operations (nobody in their right mind would use floating-point for money; see the sketch below this list)
  • [literally every single kind of number that is not a floating-point number] operations
  • Text operations
  • Boolean operations
  • Binary operations
  • Cryptographic operations

Basically, most of the operations we use in our everyday usage of computers.
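
As a quick illustration of the money item in the list above, here is a small Python sketch showing why binary floating point is a poor fit for currency, along with two exact alternatives; the amounts are made up:

    from decimal import Decimal

    # Binary floating point cannot represent 0.10 exactly, so tiny errors creep in.
    print(0.1 + 0.2)                     # 0.30000000000000004, not 0.3
    print(sum(0.1 for _ in range(100)))  # close to, but not exactly, 10.0

    # Exact alternative 1: decimal arithmetic.
    print(Decimal("0.10") + Decimal("0.20"))   # 0.30

    # Exact alternative 2: integer counts of the smallest unit (e.g. cents).
    total_cents = sum(10 for _ in range(100))  # one hundred 10-cent items
    print(f"{total_cents // 100}.{total_cents % 100:02d}")  # 10.00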

Jörg W Mittag