Imagine the following situation. There is an application for a microcontroller-driven system. The profits are made from manufacturing and selling the device itself. However, it is expected that users can later download software updates from a public website and install them (over a USB connection, for example).
Now, the problem is that if the updates were released as raw compiled binaries, anyone could simply copy the hardware and spare themselves the cost of a long software development effort.
Now, let the update be encrypted in some way, and let the bootloader in the microcontroller (which is of course not part of the update and will never be changed) decrypt it. The whole program sits in internal flash, which is protected against reading.
The problem is that there is so little space in the bootloader (a few dozen words, at best maybe a hundred or so) that no cryptographically secure decryption algorithm can be implemented there.
So, let's release an update which has valid but random, nonsensical instructions inserted at random but pre-determined places. The bootloader knows these positions and will remove the junk before flashing. The idea is that even if an attacker knew about this method (but did not know the positions where the instructions are inserted), it would be near impossible to recover the real program: you cannot know you have the right one until you have tested it (and even then, hidden bugs might still be lurking if the "cracking" was not perfect).
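To make the idea concrete, here is a minimal sketch of what the bootloader-side filter could look like. Everything here is made up for illustration: the position table `JUNK_POS`, the word size, and the HAL call `flash_write_word` are hypothetical, not from any real device:

```c
#include <stdint.h>
#include <stddef.h>

/* Provided by the device's flash driver (hypothetical). */
extern void flash_write_word(size_t word_index, uint16_t value);

#define N_JUNK 4  /* example value */

/* Pre-determined indices (sorted ascending) of the filler words in the
 * incoming update image.  This table lives in the read-protected flash
 * together with the bootloader itself. */
static const uint32_t JUNK_POS[N_JUNK] = { 7, 42, 100, 257 };

/* Copy the update image into application flash, dropping the junk
 * words.  Returns the number of real words written. */
size_t strip_and_flash(const uint16_t *image, size_t image_words)
{
    size_t next_junk = 0;   /* index into JUNK_POS */
    size_t written   = 0;   /* real words written so far */

    for (size_t i = 0; i < image_words; i++) {
        if (next_junk < N_JUNK && i == JUNK_POS[next_junk]) {
            next_junk++;    /* filler word: skip it */
            continue;
        }
        flash_write_word(written++, image[i]);
    }
    return written;
}
```

Note that this loop itself only costs a handful of words plus the position table, which is the whole point of the scheme: the "secret" is the table, not an algorithm.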
The main objective in this scenario would be to make stealing the software more time-consuming for an attacker than developing it from scratch.
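To put a (made-up) number on that: if the image were, say, n = 10,000 words with k = 500 junk words inserted, an attacker who knows the method but not the positions would face

    C(n, k) = 10000! / (500! · 9500!) ≈ 10^861

candidate position sets, which seems hopeless to enumerate, at least as long as the junk words are statistically indistinguishable from real code.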
Is there an obvious flaw in this way of thinking?