AI inference – efficiently executing pre-trained neural networks.
an AI inference ASIC – à la Google’s TPU family –
accurate inferencing can be performed with relatively low-precision
datatypes like INT8 (and sometimes lower), whereas most training
currently requires FP16 or higher precision.
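To illustrate why INT8 inference can work, here is a minimal sketch of symmetric per-tensor quantization in NumPy: float weights are mapped to 8-bit integers plus a single float scale, and the round-trip error stays within half a quantization step. The function names and the toy data are illustrative, not any vendor's actual API.

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization: scale so the largest
    # magnitude maps to 127, then round to the nearest integer.
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Worst-case reconstruction error is bounded by half a step (s / 2),
# which is why inference often tolerates INT8 well.
err = np.max(np.abs(w - w_hat))
```

Training, by contrast, accumulates many tiny gradient updates, which is why it generally needs the dynamic range of FP16 or wider formats.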
Intel’s forthcoming Xe GPUs, which are coming out of the
company’s recently rebuilt GPU division