Is this definition of Difficulty correct?
Yes.
If yes, is it accurate to think about Difficulty as a type of proportion or ratio?
Yes, it is how many times harder current blocks are to mine than blocks in the first difficulty epoch were.
For reference, difficulty 1 corresponds to (very close to) 4295032833 hash attempts per block. It has a target of (exactly) 26959535291011309493156476344723991336010898738574164086137773096960.
Block 822627 has a target of (exactly) 374406027949793378682501776760424667692437142927048704.
The ratio between these two huge numbers is (approximately) 72006146478567.1. That is the difficulty of block 822627. It is roughly 72 trillion times harder to mine a block today than it was in January 2009.
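To make this concrete, here is a minimal Python sketch that reproduces these numbers from the two targets quoted above (Python integers are arbitrary-precision, so nothing is rounded until the final division):

```python
# Difficulty-1 target (the highest target the protocol allows), quoted above.
TARGET_DIFF1 = 26959535291011309493156476344723991336010898738574164086137773096960

# Target of block 822627, quoted above.
TARGET_822627 = 374406027949793378682501776760424667692437142927048704

# Expected hash attempts per block at a given target: 2^256 / (target + 1).
print(2**256 // (TARGET_DIFF1 + 1))   # 4295032833

# Difficulty is the ratio of the difficulty-1 target to the block's target.
print(TARGET_DIFF1 / TARGET_822627)   # ~72006146478567.1
```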
And if it is a floating point number, what is its precision?
Whatever precision you calculate it to. Typical implementations use a floating-point data type in the computation, which necessarily limits its precision.
But it’s really just the ratio of two huge numbers. You can calculate it to much higher precision too. Block 822627 has difficulty
72006146478567.09941095874369093485414495333341140374503764945604440…
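For instance, a sketch using Python's decimal module to compute that same ratio to 60 significant digits, rather than relying on a 64-bit float:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # keep 60 significant digits

TARGET_DIFF1 = Decimal(26959535291011309493156476344723991336010898738574164086137773096960)
TARGET_822627 = Decimal(374406027949793378682501776760424667692437142927048704)

# Prints the difficulty of block 822627 to 60 significant digits,
# matching the leading digits of the value quoted above.
print(TARGET_DIFF1 / TARGET_822627)
```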
That much precision isn’t needed, as the difficulty is really just used for presenting the hardness of mining a block to humans. Internally the software uses the target, which is exact.
Lastly, could someone please confirm the formula used to compare a hash against the Target, to see if it mines a valid block?
The requirement is: block_hash ≤ target (where block_hash is the block hash interpreted as a 256-bit little-endian number). The lower the target is, the harder it is to land below it. In a way, the difficulty is a measure of “target lowness”.
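A minimal sketch of that check in Python: the block hash is the double SHA-256 of the 80-byte block header, and reading the raw digest as little-endian gives the number that is compared against the target:

```python
import hashlib

def is_valid_pow(header: bytes, target: int) -> bool:
    """Proof-of-work check: double-SHA256 the 80-byte block header and
    compare the digest, read as a little-endian integer, to the target."""
    block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(block_hash, "little") <= target
```

(The familiar hex form of a block hash, with its run of leading zeros, is this same digest printed byte-reversed.)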
For more practical use, how would you represent this formula using Difficulty instead of the Target?
Perhaps something like block_hash_diff ≥ difficulty, where block_hash_diff is the ratio between the first block’s target (given above) and the block’s actual hash (interpreted as a number)?
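A sketch of that equivalence, assuming difficulty is carried as the exact ratio target₁/target rather than a rounded float (exact rational arithmetic avoids blurring the comparison right at the boundary):

```python
from fractions import Fraction

TARGET_DIFF1 = 26959535291011309493156476344723991336010898738574164086137773096960

def is_valid_pow_by_difficulty(block_hash: int, difficulty: Fraction) -> bool:
    """Equivalent check via difficulty: since target = TARGET_DIFF1 / difficulty,
    block_hash <= target is the same as TARGET_DIFF1 / block_hash >= difficulty
    (assuming block_hash > 0)."""
    block_hash_diff = Fraction(TARGET_DIFF1, block_hash)
    return block_hash_diff >= difficulty
```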