LLVM, the open source compiler framework that powers everything from Mozilla's Rust language to Apple's Swift, is emerging in yet another powerful role: as an enabler of code deployment systems that target multiple classes of hardware to speed up jobs like machine learning.

To write code that can run on CPUs, GPUs, ASICs, and FPGAs alike, which is hugely useful for machine learning applications, it's best to use something like OpenCL, which allows a program to be written once and then automatically deployed across all of those different types of hardware.