Description: Half precision is an IEEE 754 floating-point format that has seen wide adoption recently, especially in machine learning and AI. It has been standardized as _Float16 in the C23 standard, bringing its support to the same level as the float and double types. The goal of this project is to implement the C23 half-precision math functions in the LLVM libc library.
Expected Results:
- Set up the generated headers properly so that the type and the functions can be used with various compilers (and compiler versions) and architectures.
- Implement generic basic math operations supporting half-precision types that work on the supported architectures: x86_64, ARM (32- and 64-bit), RISC-V (32- and 64-bit), and GPUs.
- Implement specializations using compiler builtins or dedicated hardware instructions to improve performance wherever possible.
- If time permits, start investigating higher math functions for half precision.
Project Size: Large
Requirement: Basic C & C++ skills, plus an interest in learning more about the subtleties of floating-point formats.
Difficulty: Easy/Medium
Confirmed Mentors: Tue Ly, Joseph Huber