Description
I am trying to run the model on the GPU like this:
python get_face_boxes_gluoncv.py --gpus 0
I get this:
[19:46:58] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
Inference time:16.246319ms
My GPU is an RTX 2060. When I run on the default CPU (Intel i5) instead, I get this:
python get_face_boxes_gluoncv.py
Inference time:18.076420ms
So why is there almost no difference between them?