Intelligent Edge Computing: Making Intelligence Ubiquitous
Yunxin Liu, Guoqiang Professor and Chief Researcher, Institute for AI Industry Research (AIR), Tsinghua University

Computing paradigm shifts
- Mainframe (centralized) → personal computing (distributed) → intelligent cloud (centralized) → intelligent cloud + edge (distributed)

Distributed devices and data
(Figure: data generated at the edge. Smart city: 250 PB/day; stadium: 200 TB/game; connected factory: 1 PB/day; autonomous vehicle: 5 TB/day; smart office: 150 GB/day; smart home: 50 GB/day; a person: 1.5 GB/day; smart devices: 20B IoT devices.)

The call for intelligence (DL) on the edge
- Data explosion from fast-growing edge devices, e.g., smart surveillance cameras and self-driving cars
- Strong need for on-device intelligence: low latency, high availability and reliability, strong privacy protection, low cost
- Edge devices are becoming increasingly powerful: emerging high-performance, low-power, low-cost AI ASICs
- Intelligent cloud → intelligent edge

Empower every app and device with AI/DL
- Affordable AI models tailored for diverse hardware
- Highly optimized software stack and efficient hardware for AI (AI chips: Edge TPU, VPU, NPU, KPU, HPU)
- Security and privacy, model protection, explainable AI, debugging
- On-device, continuous, collaborative learning loop
- AI-empowered devices and applications everywhere

Innovations of the on-device DL stack
- Efficient neural network (NN) design
- Edge NN frameworks
5、stackManual Design NASPruningNN Design Design Space:#of layers,op structure,channel,constraints(e.g.,FLOPs)Model DeploymentModel Framework opt.e.g.op fusionConvBNReLuRe-quantizeRe-quantizeRe-quantizeQuantizationDequantizationCPUGPUDSPTPUNPUConvBNReLuCurrent NN design does not consider platform featu
6、resGapNN design and deploymentEdgeTPU209M FLOPs990M FLOPs MobileNetV3Latency:4 msModel accuracy:74.7%MobileNetEdgeTPULatency:3.6 msModel accuracy:75.6%Less FLOPs less latency,but can harm model accuracy.Does less FLOPs mean less latency?CortexA76 CPUVPUMobileNetV3MobileNetV225%fasterMobileNetV3Mobil
7、eNetV271%fasterDoes a fast model run fast on every hardware?To Bridge Neural Network Design and Real-World Performance:A Behavior Study for Neural NetworksPaper published at MLSys 2021 Measurement study to answer the following 3 questions:1.What are the behavior characteristics that show an inconsis
8、tent latency response to the change of OPs and memory accesses of a configuration in the design space?2.What are the root causes for these unexpected characteristics?3.What are the implications of these characteristics for efficient-NN design?Goal Profiling on 7 edge AI platforms:Measurement Tool:DS
Methodology
- Profiling on 7 edge AI platforms with the corresponding measurement tools:
  - Cortex CPU: TFLite
  - Adreno GPU: TFLite
  - DSP: SNPE
  - NPU: RKNN
  - KPU: NNCASE
  - Edge TPU: TFLite
  - VPU: OpenVINO
- Pipeline: generate a single-block model in TF → convert it to the target graph and precision → profile on the target device → collect the timing results (see the sketch below).

Covered design dimensions (the scaling of each NN design dimension):
- Operator/block type: normal operators (Conv, FC), elementwise ops (Add, Pooling), activations (ReLU, Sigmoid, Swish), blocks (MobileNet/ShuffleNet blocks, ...)
- Kernel size (K): 1, 3, 5, 7
- Stride (S): 1, 2
- Height (H) / width (W): 3, ..., 224
- # of Conv channels (Cin/Cout): 3, ..., 1000
- Precision: INT8, FP16, ...
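A minimal sketch of the single-block generation step, assuming the TF/TFLite toolchain named above; the block shape, FP16 conversion recipe, and file name are illustrative:

```python
import tensorflow as tf

def single_conv_block(h, w, cin, cout, k, stride):
    # one Conv block, the unit the study profiles in isolation
    inp = tf.keras.Input(shape=(h, w, cin))
    out = tf.keras.layers.Conv2D(cout, k, strides=stride, padding="same")(inp)
    return tf.keras.Model(inp, out)

model = single_conv_block(h=28, w=28, cin=320, cout=480, k=3, stride=1)

# convert to the target precision (FP16 here; INT8 would additionally
# need a representative dataset)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
with open("conv_28x28x320.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file can then be timed on the target device, e.g., with TFLite's benchmark_model tool, and the timings collected per configuration.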
Finding 1: The latency of Conv increases in a step pattern, not linearly, with the number of output channels
(Figure: x axis: output channel number; y axis: latency. Input feature map: 28x28; input channels: 320; kernel: 3x3; stride: 1.)
- Question: do more Conv channels always increase latency?
- Cause: the input tensors are padded to fully utilize the hardware's data-level parallelism (SIMD units on the CPU, vector units on the DSP, SIMT on the GPU, etc.). In the matrix-multiplication implementation of Conv, the convolution kernel (K² × Cin by Cout) and feature maps (H × W) are padded so that Cout becomes a multiple of the basic block (e.g., the 8x1x1x8 block of the CPU's SIMD units).
- Implication: for potentially higher accuracy, keep only the largest channel number within each latency step of the design space and skip the others (e.g., 8, 16, ... instead of every candidate 6, 8, 10, 12, 14, 16, 18, 20, ...). In MetaPruning, this reduces the channel search space from 30^14 to 4^14 (14 layers, 30 channel candidates per layer).
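A small sketch of the channel-pruning implication, assuming an 8-wide padding unit as in the CPU example above:

```python
import math

def padded(cout, unit=8):
    # the backend pads the channel dimension up to the SIMD width
    return math.ceil(cout / unit) * unit

# every cout inside one latency step costs the same, so keep only the
# largest candidate per step and skip the rest
best_in_step = {}
for c in range(3, 1001):
    best_in_step[padded(c)] = c       # a larger c overwrites smaller ones
reduced = sorted(best_in_step.values())  # 8, 16, 24, ..., 1000
```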
Finding 2: The relative latency of a building block varies greatly across platforms
(Figure: latency of DenseBlock, MobileNetV2Block+SE, MobileNetV2Block, and ShuffleNetV2Block relative to MobileNetV1, alongside their FLOPs and data sizes, on CPU, GPU, VPU, DSP, TPU, and KPU; the relative latency reaches 318.95x.)
- Question: does a building block have similar relative latency on different NN platforms?
- Cause 1: the mismatch between computation and memory bandwidth is severe. On the Snapdragon 855 (Mi 9): memory bandwidth 23 GFloat/s, CPU 22.7 GFLOP/s, GPU 508 GFLOP/s. Data reuse rates: ShuffleNetBlock 0.81, MobileNetV2Block 4.73, MobileNetV2Block+SE 7.58, DenseBlock 44.51.
- Cause 2: support for non-Conv operators is weak on every NN platform except the CPU. In the Squeeze-and-Excitation block (global pooling → FC+ReLU → FC+Sigmoid → multiply, attached to a 3x3 DWConv+BN+ReLU6), pooling takes 70% of the time.

Summary of major findings
- INT8: accelerators achieve an 11x speedup, while the CPU achieves only 3.6x; however, INT8 can dramatically decrease the inference accuracy of various models.
- General: considering overall support, accuracy, and latency, the CPU is still a good choice for inference.
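To see why the data reuse rate decides which blocks benefit from an accelerator, here is a rough roofline-style estimate using the Snapdragon 855 figures above; the function names and the FLOPs-per-float accounting are our assumptions:

```python
# Snapdragon 855 (Mi 9) figures from the slide
GPU_FLOPS = 508e9   # GFLOP/s
MEM_BW    = 23e9    # floats/s ("23 GFloat/s")

def reuse_rate(flops, floats_accessed):
    # FLOPs per float moved: ShuffleNetBlock ~0.81, DenseBlock ~44.51
    return flops / floats_accessed

def roofline_latency(flops, floats_accessed):
    # memory-bound when reuse_rate < GPU_FLOPS / MEM_BW (~22 on this SoC)
    return max(flops / GPU_FLOPS, floats_accessed / MEM_BW)
```

With the slide's reuse rates, ShuffleNetBlock (0.81) and MobileNetV2Block (4.73) sit far below the ~22 FLOPs/float ridge point and are memory-bound on this GPU, while DenseBlock (44.51) is compute-bound, which is one reason relative latencies diverge so much across platforms.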
How to get a good model? Efficient NN design must consider hardware characteristics
- Profiling and modeling produce HW-specific predictors of latency and energy (Edge TPU, VPU, NPU, KPU, HPU).
- NN design (manual design, NAS, pruning; design space: # of layers, op structure, channels, constraints such as FLOPs) produces models; model deployment feeds latency and energy back into the design loop.
- Together: efficient NN design for diverse edge hardware.

nn-Meter: Towards Accurate Latency Prediction of Deep-Learning Model Inference on Diverse Edge Devices (Cortex CPU, Adreno GPU, VPU; paper published at MobiSys 2021, Best Paper Award)

Existing work on latency prediction
- FLOPs-based prediction. Pros: very simple. Cons: FLOPs is not a direct metric of inference latency.
- Operator-level prediction. Pros: stable primitive operators (conv2d, pooling, activations, ...). Cons: unaware of graph-level optimizations.
- Model-level prediction. Pros: learns graph-level optimizations automatically. Cons: cannot generalize to unseen model structures.
- nn-Meter: build an accurate latency predictor that takes graph-level optimizations into consideration and generalizes to unseen models.

Challenge: framework optimizations
- Backend-independent optimizations: constant folding, common subexpression elimination, ...
- Backend-dependent optimizations: operator fusion, ... A designed model passes through backend-independent and then backend-dependent optimization before reaching a concrete backend (e.g., Eigen or NNPACK on the CPU, OpenCL on the GPU, the Movidius backend on the VPU).
- Operator fusion has a great impact on inference latency, as the kernels below illustrate.
Model graph → backend implementation: a 1x1 Conv followed by an activation, unfused vs. fused (the slide's kernels, with the extraction-mangled loops restored):

```c
/* unfused: two kernels; the intermediate tensor is written to memory
   by the Conv kernel and re-read by the activation kernel */
_kernel void conv_2d_1x1() {
    for (int i = 0; i < out.row; i++)
        for (int j = 0; j < out.col; j++)
            for (int cout = 0; cout < out.chan; cout++)
                for (int cin = 0; cin < in.chan; cin++)
                    out[i][j][cout] += in[i][j][cin] * filter[cout][cin];
}

_kernel void active() {
    for (int i = 0; i < out.row; i++)
        for (int j = 0; j < out.col; j++)
            for (int c = 0; c < out.chan; c++)
                out[i][j][c] = active(in[i][j][c]);
}

/* fused: the activation is applied while the Conv output is still
   in registers, saving a full round trip to memory */
_kernel void conv_2d_1x1_active() {
    for (int i = 0; i < out.row; i++)
        for (int j = 0; j < out.col; j++)
            for (int cout = 0; cout < out.chan; cout++) {
                for (int cin = 0; cin < in.chan; cin++)
                    out[i][j][cout] += in[i][j][cin] * filter[cout][cin];
                out[i][j][cout] = active(out[i][j][cout]);
            }
}
```
nn-Meter tech #1: automatic kernel detector
- Fusion rule detection for black-box devices: for every pair of operators (Op1, Op2), generate 3 test-case graphs (Op1 alone, Op2 alone, and Op1→Op2 connected) and compare the measured latencies τ1, τ2, and τ(1,2). The pair is judged fused when τ1 + τ2 − τ(1,2) > min(τ1, τ2).
- Kernel search by the fusion rules: apply the detected rules to find the maximal sets of fused operators in the target model (e.g., a ResNet18 block).
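A sketch of the pairwise test, assuming `measure(graph)` returns the profiled latency of a generated test graph and `make_single`/`make_pair` build the three test cases (our naming):

```python
def is_fused(t_op1, t_op2, t_pair):
    """Detection rule from the slide: Op1 -> Op2 is fused by the backend when
    the connected graph saves more than the cheaper operator's own latency."""
    return t_op1 + t_op2 - t_pair > min(t_op1, t_op2)

def detect_fusion_rules(operators, measure, make_single, make_pair):
    rules = set()
    single = {op: measure(make_single(op)) for op in operators}
    for op1 in operators:
        for op2 in operators:
            if is_fused(single[op1], single[op2], measure(make_pair(op1, op2))):
                rules.add((op1, op2))
    return rules
```

The detected rules are then applied transitively to the target model graph to group operators into the largest fused kernels.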
Kernel-latency prediction: challenges
- Large sample space: across 24 widely used CNN models collected from the PyTorch model zoo, Conv alone has an enormous number of configurations to sample.
- Non-linear latency on edge devices: random sampling misses the crucial data points.

nn-Meter tech #2: adaptive data sampler
- Sample the most beneficial data (kernel configurations) instead of sampling at random:
  1. Sample configurations that are likely to be considered in model design, using a prior probability distribution learned from the model zoo.
  2. Fine-grained sampling around data the regression model predicts inaccurately; data with large errors feed back into the sampler.
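A sketch of the sampling loop under those two ideas; the regressor choice, loop constants, and jitter-based fine-grained sampling are our assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def adaptive_sampler(sample_prior, measure, rounds=3, n_init=500, n_fine=100):
    """Configurations are feature vectors (e.g. [H, W, Cin, Cout, K, S]).
    sample_prior() draws from the model-zoo prior; measure(cfg) profiles it."""
    X = np.array([sample_prior() for _ in range(n_init)], dtype=float)
    y = np.array([measure(c) for c in X])
    reg = RandomForestRegressor(n_estimators=100)
    for _ in range(rounds):
        reg.fit(X, y)
        rel_err = np.abs(reg.predict(X) - y) / y
        worst = X[np.argsort(rel_err)[-n_fine:]]
        # fine-grained sampling: jitter the badly predicted configurations
        # (in practice, round/clip back to valid integer ranges)
        fine = worst * np.random.uniform(0.9, 1.1, worst.shape)
        X = np.vstack([X, fine])
        y = np.concatenate([y, [measure(c) for c in fine]])
    return reg
```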
nn-Meter evaluation
- Kernel-latency prediction accuracy: 99.0% (CPU), 99.1% (Adreno 640 GPU), 99.0% (Adreno 630 GPU), and 83.4% (Intel VPU).
- Generalization to unseen model graphs, against the baselines FLOPs, FLOPs+MAC, and BRP-NAS (GCN): on average, nn-Meter achieves 89.2% prediction accuracy, significantly better than FLOPs (22.1%), FLOPs+MAC (17.1%), and BRP-NAS (8.5%).

With HW-specific predictors of latency and energy plugged into NN design (manual design, NAS, pruning), we get efficient NN design for diverse edge hardware (Edge TPU, VPU, NPU, KPU, HPU).

We got a good model. How does it run on real devices? Are computing resources fully utilized?
(Figure: average CPU usage during CNN inference on an ARM big.LITTLE CPU is highly unbalanced between the big and little core clusters, 30% vs. 90%; Adreno GPU ALU utilization for CNN: 84%.)
Low hardware utilization results in poor inference speed.

AsyMo: Scalable and Efficient Deep-Learning Inference on Asymmetric Mobile CPUs (paper published at MobiCom 2021)
Why is utilization low on the CPU?
- Unbalanced task distribution by the OS, both across and within core clusters (big core cluster B0-B3, little core cluster L0-L3).

Execution flow of matrix multiplication (an M×K parameter matrix times a K×N feature map):
1. Block partition for parallelism: split the matrices into mc × kc (params) and kc × nc (feature map) blocks.
2. Copy the blocks into contiguous memory.
3. Schedule the tasks to per-thread queues (Q0, Q1, ..., Q#) in the thread pool.

Why is distribution unbalanced on the CPU? This flow:
- ignores hardware asymmetry;
- performs redundant data copies;
- ignores data locality;
- ignores resource constraints;
- ignores the interference-prone environment.

AsyMo: optimize DL inference on big.LITTLE CPUs, accelerating edge DL inference at lower energy cost
- One-run initialization (per CNN/RNN model): cost-model-directed block partition, data-reuse-based CPU frequency setting, and a prearranged memory layout for parameters.
- Inference: the partition strategy plus asymmetry-aware scheduling (task → thread ID), memory handling, an efficient frequency, and an intra-op thread pool.
Cost-model-based block partition
- Cost of one task: computation + memory access.
- Cost of a sequential unit: Cost_seq; cost of the parallel part: (number of parallel tasks) × Cost_seq.
- Other costs: unparallelized work + task scheduling + framework overhead.
- Total cost = (computation and memory access cost) × (degree of parallelism) + (task scheduling and framework cost); AsyMo picks the block sizes that minimize it.
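A sketch of how such a cost model can drive the partition choice; the cost terms follow the slide, while the constants, the candidate enumeration, and the throughput-proportional column split are our assumptions:

```python
def total_cost(n_tasks, comp, mem, sched_per_task, framework, serial):
    """Total = parallel tasks x per-task (compute + memory + scheduling) cost
    + unparallelized work + framework overhead."""
    return n_tasks * (comp + mem + sched_per_task) + serial + framework

def pick_partition(M, K, N, candidates, task_cost, sched_per_task=1e-6,
                   framework=1e-4):
    best = None
    for mc, kc, nc in candidates:            # candidate block sizes
        n_tasks = -(-M // mc) * -(-N // nc)  # ceil-division task count
        comp, mem = task_cost(mc, kc, nc)    # per-task compute/memory cost
        c = total_cost(n_tasks, comp, mem, sched_per_task, framework, 0.0)
        if best is None or c < best[0]:
            best = (c, (mc, kc, nc))
    return best[1]

def split_columns(N, perf_big, perf_little):
    """Asymmetry-aware split: give each cluster a share of the N columns
    proportional to its measured throughput."""
    n_big = round(N * perf_big / (perf_big + perf_little))
    return n_big, N - n_big
```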
Optimized execution flow of matrix multiplication
- One-run initialization computes the block partition and parameter layout once, so each inference run only copies features and schedules tasks.
- The N dimension is split into N_big and N_little between the big and little core clusters; each thread is pinned to a core, there is no work stealing from big cores to little cores, and data locality improves.
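A minimal sketch of pinned per-cluster workers, assuming Linux (where a thread's native id can be passed to sched_setaffinity); the core numbering and queueing logic are illustrative, not AsyMo's implementation:

```python
import os
import queue
import threading

def worker(core_id, tasks):
    # pin this thread to one core: no migration and, because each cluster
    # has its own queues, no work stealing from big cores to little cores
    os.sched_setaffinity(threading.get_native_id(), {core_id})
    while True:
        task = tasks.get()
        if task is None:
            break
        task()

big_queues = {c: queue.Queue() for c in (4, 5, 6, 7)}     # e.g. big cores 4-7
little_queues = {c: queue.Queue() for c in (0, 1, 2, 3)}  # little cores 0-3
for c, q in {**big_queues, **little_queues}.items():
    threading.Thread(target=worker, args=(c, q), daemon=True).start()
```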
Total performance and energy improvement (AsyMo vs. TensorFlow on Kirin 970 + Android 9 Pie)
- Both at max CPU frequency: 1.85x performance and 1.33x energy efficiency on average, with per-model maxima around 9.87x and 18.5x in the figure. Pre-copied parameters enable the parallel implementation.
- Against TensorFlow with the OS frequency setting (schedutil): 1.63x performance and 1.72x energy efficiency, since AsyMo picks an efficient CPU frequency.
SparseFlow: unleash the full potential of sparsity in deep learning (joint work with Chen Zhang et al.)

Today's DNN models are huge
- GPT-3: 175B parameters, ~$12M training cost.
- MT-NLG: 530B parameters, trained on 560 DGX A100 servers.

Computation is the engine behind AI's success, and we still need more
(Figure: performance in op/s, 1960-2019. Moore's law delivered ~10^8x, from ENIAC's 5 Kops to a Xeon E5's 500 Gops; dedicated hardware adds ~10^5x: TPUv1 90 Tops, V100 125 Tops, TPUv3 360 Tops. What comes next?)

Piling up hardware is not sustainable: the energy-efficiency wall
(Figure: giga-operations per joule, 1995-2020, showing CPU, GPU, and TPU energy-efficiency walls; neither Moore's law nor dedicated hardware escapes them.)

Sparsity is the key to the human brain's efficiency
- We do not look at everything in our visual scope.
- Simple geometric shapes are enough for us to recognize a cat.
Weight pruning (Han, Song, et al., "Learning both Weights and Connections for Efficient Neural Networks", NIPS'15)
- Pruning away small weights yields unstructured sparse matrices, turning MxV into SpMxV, which is difficult to accelerate.

Accuracy vs. speedup trade-off
- Fine-grained/irregular sparsity. Pros: high model accuracy, high compression ratio. Cons: irregular pattern, difficult to accelerate.
- Coarse-grained/regular sparsity. Pros: regular pattern, easy to accelerate. Cons: low model accuracy, low compression ratio.

How to achieve both? (S. Cao et al., "Efficient and Effective Sparse LSTM on FPGA with Bank-Balanced Sparsity", FPGA'19)
- Model accuracy: add as few constraints on the sparsity pattern as possible.
- Speedup: partition the matrix for parallel computing; eliminate irregular computation and memory accesses.
Bank-Balanced Sparsity (BBS)
- Bank partitioning for parallel computing; fine-grained pruning inside each bank to maintain accuracy.

Bank-balanced pruning
- Split each dense matrix row into equal-size banks, traverse all rows, and prune fine-grained inside each bank with a threshold percentage, so every bank ends with an identical sparsity ratio.
(Figure: a dense row such as [0.8, -0.1, 0.2, 1.5, 1.0, 0.3, -0.4, -1.4, 0.7, 2.0, 0.9, -0.5, 1.2, -1.3, 2.1, 0.2] is bank-split into four banks and pruned to its largest-magnitude weights per bank: 0.8, 1.5 | 1.0, -1.4 | 2.0, 0.9 | -1.3, 2.1.)
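A sketch of bank-balanced pruning that reproduces the slide's example; the function name and API are ours:

```python
import numpy as np

def bank_balanced_prune(W, num_banks, sparsity):
    """Split every row into `num_banks` equal banks and keep the largest
    (1 - sparsity) fraction of weights *per bank*, so all banks end up
    with an identical sparsity ratio."""
    out = np.zeros_like(W)
    bank_len = W.shape[1] // num_banks
    keep = max(1, int(round(bank_len * (1.0 - sparsity))))
    for r in range(W.shape[0]):
        for b in range(num_banks):
            lo = b * bank_len
            bank = W[r, lo:lo + bank_len]
            idx = np.argsort(-np.abs(bank))[:keep]  # largest-magnitude weights
            out[r, lo + idx] = bank[idx]
    return out

row = np.array([[0.8, -0.1, 0.2, 1.5, 1.0, 0.3, -0.4, -1.4,
                 0.7, 2.0, 0.9, -0.5, 1.2, -1.3, 2.1, 0.2]])
print(bank_balanced_prune(row, num_banks=4, sparsity=0.5))
# keeps 0.8, 1.5 | 1.0, -1.4 | 2.0, 0.9 | -1.3, 2.1 -- the slide's example
```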
Sparse matrix-vector multiplication (SpMxV) with BBS
- Both inter-row and inter-bank parallelism.
- Load balancing across rows and banks: every bank holds the same number of non-zeros.
- Conflict-free vector accesses: each bank reads only its own partition of the dense vector (V0-V11 split across banks 0-3).

Our CSB (Compressed Sparse Banks) format
- Data rearrangement for inter-bank parallelization: the k-th non-zero of every bank is stored contiguously and mapped to physical BRAM addresses.
- Stores values plus bank-internal indices; specifically designed for BBS to eliminate decoding overheads.

Accelerator overview
(Figure: FPGA design with SpMxV PEs (multipliers and adder trees), element-wise operation (EWOP) and activation (ACT) units, a controller with an instruction buffer, DMA, private vector buffers, vector/matrix memories holding indices and values, and off-chip DRAM plus PCIe controllers connecting to the host server.)
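A software sketch of the CSB layout and the bank-parallel SpMxV it enables; the encoding functions are our illustration of the format, not the FPGA implementation:

```python
import numpy as np

def csb_encode(W_bbs, num_banks):
    """Per row, interleave the k-th non-zero of every bank so all banks can
    be fetched together, and store bank-internal column indices so no
    run-time decoding is needed."""
    bank_len = W_bbs.shape[1] // num_banks
    rows = []
    for r in range(W_bbs.shape[0]):
        banks = np.split(W_bbs[r], num_banks)
        nz = [np.nonzero(b)[0] for b in banks]  # equal counts by construction
        vals, idxs = [], []
        for k in range(len(nz[0])):
            for b in range(num_banks):
                vals.append(banks[b][nz[b][k]])
                idxs.append(nz[b][k])           # bank-internal index
        rows.append((np.array(vals), np.array(idxs)))
    return rows, bank_len

def bbs_spmv(csb_rows, num_banks, x):
    """Each bank reads only its own partition of x: conflict-free accesses."""
    xb = np.split(x, num_banks)
    y = np.zeros(len(csb_rows))
    for r, (vals, idxs) in enumerate(csb_rows):
        for k in range(0, len(vals), num_banks):
            for b in range(num_banks):
                y[r] += vals[k + b] * xb[b][idxs[k + b]]
    return y
```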
Model accuracy and hardware efficiency
- Speech recognition on the TIMIT dataset and language modeling on the PTB dataset: BBS accuracy is very close to the dense baselines.
- (Figure: hardware efficiency improves by 34x and 7x over the compared designs.)

SeerNet: Predicting CNN Feature-Map Sparsity through Low-Bit Quantization (S. Cao et al., "SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity through Low-Bit Quantization", CVPR'19)
- In a CNN (Conv → ReLU or max-pooling → ... → Softmax), ReLU (y = max(0, x)) and max-pooling (y = max(x_i | i = 1, 2, ..., n)) discard many convolution outputs; feature-map sparsity ranges from 45% to 95%.
- Idea: accelerate model inference by exploiting feature-map sparsity, since convolving to produce output pixels that ReLU will zero out results in wasted computation.
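A toy sketch of the idea with a matrix multiply standing in for convolution; the quantizer, names, and sizes are our assumptions. A cheap low-bit pass predicts the ReLU sign pattern, and full-precision compute is spent only on the predicted non-zero outputs:

```python
import numpy as np

def quantize(t, bits=4):
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(t / scale), scale

def predict_mask(X, W, bits=4):
    # cheap low-bit pass predicts which outputs ReLU would zero out
    Xq, sx = quantize(X, bits)
    Wq, sw = quantize(W, bits)
    return (Xq @ Wq) * (sx * sw) > 0

def sparse_forward(X, W):
    mask = predict_mask(X, W)
    Y = np.zeros(mask.shape)
    rows, cols = np.nonzero(mask)
    for r, c in zip(rows, cols):             # full precision only where needed
        Y[r, c] = max(0.0, float(X[r] @ W[:, c]))
    return Y

X = np.random.randn(8, 16)
W = np.random.randn(16, 32)
dense = np.maximum(X @ W, 0.0)
print(np.mean(dense == 0))                   # observed feature-map sparsity
```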