When I use SHA-256 to hash '103' and '1000' I get the following hexadecimal digests:
Using Python to compare the hexadecimal digest strings, SHA-256('103') > SHA-256('1000') evaluates to True.
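For reference, here is a minimal sketch of the comparison I'm doing (hashing the ASCII strings, not the numbers). Note that Python's `>` on strings is a lexicographic (character-by-character) comparison, and since both hex digests have the same length and the same lowercase hex alphabet, that ordering coincides with comparing the digests as 256-bit integers:

```python
import hashlib

# Hash the ASCII strings '103' and '1000' (not the integers)
h1 = hashlib.sha256(b'103').hexdigest()
h2 = hashlib.sha256(b'1000').hexdigest()

# '>' on strings compares character by character (lexicographically)
print(h1 > h2)

# For equal-length lowercase hex strings, lexicographic order
# agrees with numeric order of the underlying 256-bit values
print((h1 > h2) == (int(h1, 16) > int(h2, 16)))
```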
I understand why subtle differences in input messages produce vastly different digests, that the output length of every SHA-256 hash is fixed, and that in this example I'm inputting strings, not numbers.
My question is: why is one digest 'greater' than another, and how is this determined? I stumbled across this problem while looking at the target hash value that a node on the Bitcoin blockchain is required to produce (or undershoot) in order to demonstrate Proof of Work.
Any advice is really appreciated!