The consensus is: a hash generally expects binary bits as input (practically, most implementations therefore handle that input as binary bytes, aka 8-bit unsigned chars in the range 0x00-0xFF) and it will generally output binary bits as well (which most implementations likewise emit as a series of binary bytes).
Now, since a hash generally does not handle any text encoding, this means you will practically have to convert text in encodings like UTF-16 (a multi-byte representation) or UTF-8 (which also partly uses multi-byte chars) to raw binary bytes before feeding it to the hashing function. Not doing this can and will result in glitches in your implementation, as one can frequently find in online tools which try to offer, for example, JavaScript SHA-256 hashing and the like, but fail to produce the correct hash values due to wrong or missing conversion of the text encoding. Again: you have to feed your hash a series of binary bits, and practically, the majority of implementations handle those input and output bits as 8-bit binary bytes (aka 8-bit unsigned chars, aka UINT8), as that is what most modern-day devices work with natively/internally.
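To make that concrete, here is a minimal sketch (assuming a Node.js environment with its built-in crypto module; the helper name sha256Hex is my own): the same string, encoded two different ways, produces two entirely different SHA-256 digests, because the hash only ever sees the resulting bytes.

```typescript
// The hash never sees "text", only the bytes the encoding step produced.
import { createHash } from "node:crypto";

function sha256Hex(bytes: Buffer): string {
  return createHash("sha256").update(bytes).digest("hex");
}

const text = "héllo"; // contains one non-ASCII character (é = U+00E9)

// UTF-8:    68 c3 a9 6c 6c 6f             (é becomes two bytes)
// UTF-16LE: 68 00 e9 00 6c 00 6c 00 6f 00 (every char becomes two bytes)
const utf8Bytes = Buffer.from(text, "utf8");
const utf16Bytes = Buffer.from(text, "utf16le");

console.log(sha256Hex(utf8Bytes));  // one digest...
console.log(sha256Hex(utf16Bytes)); // ...and a completely different one
```

This is exactly the failure mode behind those broken online tools: two implementations hashing "the same string" under different encodings will disagree on the digest.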
The same goes for the output: any hex representation of a hash is just the result of the hash output being converted from a set of bits (mostly implemented as a series of 8-bit unsigned chars) to its equivalent hex representation, two hex digits per byte.
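For illustration (again assuming Node.js), the hex string is nothing more than each output byte printed as two hex digits; converting by hand gives the same result as asking the library for hex directly:

```typescript
import { createHash } from "node:crypto";

// SHA-256 of the ASCII bytes of "abc" (a well-known test vector).
const digest = createHash("sha256").update(Buffer.from("abc", "utf8")).digest();

// Manual byte-to-hex conversion: two hex digits per output byte.
const hex = Array.from(digest)
  .map((byte) => byte.toString(16).padStart(2, "0"))
  .join("");

console.log(hex);
// ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
console.log(hex === createHash("sha256").update("abc").digest("hex")); // true
```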
As for the programming part of your question: hashes do not care what you hash... at all. How you process data before or after hashing is a purely programming-related question and out of scope for this site.
As for which text encoding to choose: that is entirely up to you and strictly depends on your individual scenario. From a programmatic point of view, I would point to UTF-8, but one might equally argue that UTF-16 is the way to go, as it offers a more compact encoding for (e.g.) many Asian scripts. Also, it is what languages like JavaScript and Java use internally for strings. In the end, it depends on your project and how you want to handle your data... but that decision is, as I noted, not a cryptographic one.
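If it helps the decision, here is a tiny sketch (Node.js assumed) of that size trade-off; it only prints byte counts and is not a recommendation either way:

```typescript
// CJK text is more compact in UTF-16 (2 bytes/char vs. 3 in UTF-8),
// while plain ASCII is more compact in UTF-8 (1 byte/char vs. 2).
console.log(Buffer.byteLength("日本語", "utf8"));    // 9
console.log(Buffer.byteLength("日本語", "utf16le")); // 6
console.log(Buffer.byteLength("hello", "utf8"));     // 5
console.log(Buffer.byteLength("hello", "utf16le"));  // 10
```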
TL;DR: Formally, almost all cryptographic hashes operate on a sequence of binary bits. This is why hash functions like MD5, SHA-1, the SHA-2 family, and SHA-3 all take binary input, operate on binary data internally, and produce binary output. How you handle your (text) data before or after hashing is up to you and your programming goals or the individual standards you follow (web standards mostly point to UTF-8, for example), but how you handle your strings in your program is a programming question, not a cryptographic one.