**4. Results**

For training and evaluating the final build, we used a subset of the COCO dataset, a state-of-the-art public dataset. COCO is a collection of 100 K images with diverse object instances; from it we used a subset of the person instances with annotated keypoints. We trained our model on 3 K images, cross-validated on 1100 images, and tested on 568 images. The evaluation metric is OKS, which stands for Object Keypoint Similarity. The COCO evaluation is based on mean average precision (mAP) computed over different OKS thresholds, the minimum threshold being 0.5. We only consider keypoints that lie within 2.77 standard deviations (**Figure 5**).
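For reference, the sketch below shows how OKS is computed in the standard COCO keypoint evaluation, OKS = Σᵢ exp(−dᵢ²/(2s²kᵢ²))·δ(vᵢ>0) / Σᵢ δ(vᵢ>0), where dᵢ is the distance between the predicted and ground-truth keypoint, s² is the object's segment area, kᵢ is a per-keypoint constant, and vᵢ is the visibility flag. This is a minimal sketch assuming NumPy and the 17-keypoint person layout; the function name and constants table are illustrative and not part of our codebase.

```python
import numpy as np

# Per-keypoint constants k_i = 2 * sigma_i, using the sigmas from the
# official COCO keypoint evaluation for the 17 person keypoints.
COCO_KAPPAS = 2.0 * np.array([
    0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
    0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])

def object_keypoint_similarity(pred, gt, visibility, area, kappas=COCO_KAPPAS):
    """OKS between one predicted and one ground-truth person instance.

    pred, gt   : (17, 2) arrays of (x, y) keypoint coordinates
    visibility : (17,) ground-truth visibility flags (v_i > 0 means labelled)
    area       : ground-truth segment area, used as the scale term s**2
    """
    d2 = np.sum((pred - gt) ** 2, axis=-1)      # squared distances d_i**2
    e = d2 / (2.0 * area * kappas ** 2)          # Gaussian exponent per keypoint
    labelled = visibility > 0
    if not labelled.any():
        return 0.0
    return float(np.mean(np.exp(-e[labelled])))  # average over labelled keypoints

# mAP is then averaged over OKS thresholds 0.50:0.05:0.95,
# 0.5 being the loosest threshold mentioned above.
OKS_THRESHOLDS = np.arange(0.5, 1.0, 0.05)
```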

**Table 1** compares the mAP performance of our model with other state-of-the-art models on a testing dataset of 568 images. We can see clearly that our approach outperforms the previous keypoint benchmarks.

### **Figure 5.**

*Convergence of training losses for both the heat maps (L) and greedy part vectors (R).*


### **Table 1.**

*mAP performance comparison of our model with others on a testing dataset of 568 images.*


### **Table 2.**

*Performance comparison on a complete testing dataset of 1000 images.*

Our model achieved a significant rise of 6.5% in mean average precision, and our inference time is three orders of magnitude lower. **Table 2** presents the performance comparison on the complete testing dataset of 1000 images. Here again our model outperforms the rest, achieving a rise of almost 2.5% in mean average precision compared to the other models. This comparison with earlier state-of-the-art bottom-up approaches demonstrates the significance of our model.
