In the problem from the currently running Huawei contest (Accuracy-Preserving Summation Algorithm), part of the task is to choose between the IEEE-754 binary64, binary32 and binary16 floating-point formats for summing the numbers.
Apparently, the people in charge of the contest don't know that both Intel and AMD have had hardware support for fp16 conversions since 2011-2012 (AMD Bulldozer and Intel Ivy Bridge), and that it's available in GCC 12+ and Clang 15+ as _Float16 and, since C++23, as std::float16_t.
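For illustration, here's a tiny example of the native type (my own snippet, unrelated to the contest; it assumes GCC 12+ or Clang 15+ on x86-64):

```cpp
#include <cstdio>

int main() {
    // _Float16 is a native IEEE-754 binary16 type in GCC 12+ and Clang 15+;
    // C++23 spells the same format as std::float16_t in <stdfloat>.
    _Float16 h = (_Float16)0.1;       // conversion from double rounds to the nearest fp16 value
    printf("%.13g\n", (double)h);      // prints 0.0999755859375, the closest binary16 to 0.1
}
```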
So for the checker they wrote their own implementation, which differs from IEEE in two places:
the exponent range is [-16; +16] instead of [-15; +15];
during conversion from fp64, the least significant bits are simply thrown away instead of being rounded (a rough sketch of such a conversion follows after this list).
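The sketch below is just my reconstruction of those two differences (the name to_hw_fp16 is made up, and the subnormal end of the range is ignored), not the checker's actual code:

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

// Returns the "Huawei fp16" value of x, carried in a double (every fp16 value fits in fp64).
double to_hw_fp16(double x) {
    if (x == 0.0 || std::isnan(x)) return x;
    int e2;
    double m = std::frexp(std::fabs(x), &e2);    // |x| = m * 2^e2, m in [0.5, 1)
    int e = e2 - 1;                              // exponent in the usual 1.xxx * 2^e form
    // Exponent range [-16; +16] instead of IEEE's [-15; +15]: above +16 we overflow.
    if (e > 16) return std::copysign(std::numeric_limits<double>::infinity(), x);
    // Keep 11 significand bits (1 implicit + 10 stored) and just drop the rest -- no rounding.
    double kept = std::floor(std::ldexp(m, 11));         // integer in [1024, 2047]
    return std::copysign(std::ldexp(kept, e2 - 11), x);  // scale back
}

int main() {
    printf("%g\n", to_hw_fp16(131071.99999999999));  // 131008: largest value that still fits
    printf("%g\n", to_hw_fp16(131072.0));             // inf: 2^17 would need exponent +17
}
```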
Considering its importance in the problem, I decided to check which range of fp64 values doesn't overflow when converted to fp16, and what difference the aforementioned deviations make. So I wrote a little program that prints the ranges and decided to share the results here with anyone interested. It shows the ranges for "Huawei FP16", IEEE, and "Corrected Huawei" (still no rounding, but with the correct exponent range).
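Just to illustrate the idea (this is not that program), one can binary-search the IEEE overflow boundary directly with the native _Float16 type; the output should match the IEEE-754 FP16 block below:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Invariant: lo converts to a finite fp16, hi overflows to infinity.
    double lo = 0.0, hi = 1e6;
    while (std::nextafter(lo, hi) < hi) {            // stop once lo and hi are adjacent doubles
        double mid = lo + (hi - lo) / 2;
        if (std::isinf((double)(_Float16)mid)) hi = mid;
        else                                   lo = mid;
    }
    printf("max fp64 : %.17g (%a)\n", lo, lo);
    printf("as fp16  : %g (%a)\n", (double)(_Float16)lo, (double)(_Float16)lo);
    printf("overflow : %.17g (%a)\n", hi, hi);
}
```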
Here are the results:
Huawei FP16
MAX
fp64: 131071.99999999999 (0x1.fffffffffffffp+16)
fp16: 131008 (0x1.ffcp+16)
OVERFLOW
fp64: 131072 (0x1p+17)
fp16: inf
RANGE
fp64: (-131072; 131072)
fp16: [-131008; 131008]
IEEE-754 FP16
MAX
fp64: 65519.999999999993 (0x1.ffdffffffffffp+15)
fp16: 65504 (0x1.ffcp+15)
OVERFLOW
fp64: 65520 (0x1.ffep+15)
fp16: inf
RANGE
fp64: (-65520; 65520)
fp16: [-65504; 65504]
Huawei FP16 with correct range
MAX
fp64: 65535.999999999993 (0x1.fffffffffffffp+15)
fp16: 65504 (0x1.ffcp+15)
OVERFLOW
fp64: 65536 (0x1p+16)
fp16: inf
RANGE
fp64: (-65536; 65536)
fp16: [-65504; 65504]
https://godbolt.org/z/3s8GoKe8j
I think it's kinda sloppy that the problem statement is inconsistent with the checker, but at least they've shared the checker's code.