When developing algorithms in quantum computing, I've noticed that there are two primary models in which this is done. Some algorithms - such as for the Hamiltonian NAND tree problem (Farhi, Goldstone, Gutmann) - work by designing a Hamiltonian and some initial state, then letting the system evolve according to the Schrödinger equation for some time $t$ before performing a measurement.
Other algorithms - such as Shor's algorithm for factoring - work by designing a sequence of unitary transformations (analogous to gates) and applying these transformations one at a time to some initial state before performing a measurement.
My question is, as a novice in quantum computing: what is the relationship between the Hamiltonian model and the unitary transformation model? Some algorithms, like the one for the NAND tree problem, have since been adapted to work as a sequence of unitary transformations (Childs, Cleve, Jordan, Yonge-Mallo). Can every algorithm in one model be transformed into a corresponding algorithm in the other? For example, given a sequence of unitary transformations that solves a particular problem, is it possible to design a Hamiltonian and solve the problem in that model instead? What about the other direction? If so, what is the relationship between the time for which the system must evolve and the number of unitary transformations (gates) required to solve the problem?
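To make the question concrete, here is a small numerical sketch of the correspondence I have in mind (the choice of Pauli $X$ as the Hamiltonian is just an arbitrary example): evolving under a Hamiltonian $H$ for time $t$ yields a unitary $U = e^{-iHt}$, and conversely any unitary $U$ can be written as $e^{-iH}$ for some Hermitian $H$ via the matrix logarithm.

```python
import numpy as np
from scipy.linalg import expm, logm

# Pauli X as a toy Hamiltonian (an arbitrary choice for illustration).
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Direction 1: evolving under H for time t gives a unitary U = e^{-iHt}.
t = 0.7
U = expm(-1j * X * t)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # U is unitary

# Direction 2: any unitary U equals e^{-iH} for the Hermitian H = i log U
# (principal matrix logarithm).
H = 1j * logm(U)
print(np.allclose(H, H.conj().T))              # H is Hermitian
print(np.allclose(expm(-1j * H), U))           # evolving under H for t = 1 recovers U
```

This shows the two pictures agree for a single matrix, but it says nothing about *efficiency* - whether an efficiently implementable Hamiltonian yields an efficient circuit and vice versa - which is really what I am asking about.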
I have found several other problems for which this seems to be the case, but no clear-cut argument or proof indicating that it is always possible, or even always true. Perhaps it's because I don't know what this problem is called, so I am unsure what to search for.