FEATURE: EDGE COMPUTING

based on new data inputs. This inference stage often happens in the cloud and requires significant, though less extreme, computing power compared with the training phase.
Finally, to serve end-users worldwide with minimal latency, the model's inference outputs need to be distributed globally. Extensive content delivery networks (CDNs) with Points-of-Presence (PoPs) across the world (our own has more than 180 PoPs) are best positioned to assist with this, delivering AI at the Edge close to end-users. The closer an end customer is to a PoP, the faster they can interact with the AI model.
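The routing idea above — send each user's request to the PoP with the lowest latency — can be sketched as follows. This is a minimal illustration, not any particular CDN's implementation; the PoP names and latency figures are hypothetical.

```python
# Hypothetical sketch: route an inference request to the lowest-latency
# Point-of-Presence (PoP). PoP names and latencies below are illustrative.
def nearest_pop(latencies_ms: dict) -> str:
    """Return the PoP with the smallest measured round-trip latency (ms)."""
    return min(latencies_ms, key=latencies_ms.get)

# Example: latencies measured from a client to three candidate PoPs
measured = {"frankfurt": 18.0, "london": 9.5, "ashburn": 92.3}
print(nearest_pop(measured))  # → london
```

In practice, CDNs typically make this decision via anycast routing or DNS-based steering rather than client-side measurement, but the principle — minimise the network distance between the end-user and the serving PoP — is the same.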
THE PRESSURE IS NOW ON CIOS TO EMBRACE AI MORE BROADLY ACROSS APPLICATIONS AND BUSINESS PROCESSES.
www.intelligentcio.com INTELLIGENTCIO EUROPE 39