Commit 09133f79 authored by MaoXianxin

3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding

Parent 40347e32
@@ -61,4 +61,10 @@
Link: https://pan.baidu.com/s/15LQPvcW0EkEEjN_2Lu2T3g
Extraction code: 956t
![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603140942.png)
\ No newline at end of file
# Papers
## 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding
The ability to understand the ways to interact with objects from visual cues, a.k.a. visual affordance, is essential to vision-guided robotic research. This involves categorizing, segmenting, and reasoning about visual affordances. Relevant studies in the 2D and 2.5D image domains have been made previously; however, a truly functional understanding of object affordance requires learning and prediction in the 3D physical domain, which is still absent in the community. In this work, we present the 3D AffordanceNet dataset, a benchmark of 23k shapes from 23 semantic object categories, annotated with 18 visual affordance categories. Based on this dataset, we provide three benchmarking tasks for evaluating visual affordance understanding: full-shape, partial-view, and rotation-invariant affordance estimation. Three state-of-the-art point cloud deep learning networks are evaluated on all tasks. In addition, we investigate a semi-supervised learning setup to explore the possibility of benefiting from unlabeled data. Comprehensive results on our contributed dataset show the promise of visual affordance understanding as a valuable yet challenging benchmark.
![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210608112105.png)
\ No newline at end of file
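The abstract frames affordance understanding as per-point estimation on object point clouds: each shape is a set of 3D points, and a network predicts a score for each of the 18 affordance categories at every point. Below is a minimal sketch of that setup; the array shapes, the 0.5 score threshold, and the IoU-style metric are illustrative assumptions, not the paper's official data format or evaluation protocol.

```python
import numpy as np

# Assumed layout: one shape = a point cloud with per-point scores
# for each of the 18 affordance categories (e.g. grasp, sit, pour).
NUM_POINTS = 2048
NUM_AFFORDANCES = 18

rng = np.random.default_rng(0)
points = rng.standard_normal((NUM_POINTS, 3))         # xyz coordinates
gt = rng.random((NUM_POINTS, NUM_AFFORDANCES)) > 0.9  # toy ground-truth masks
scores = rng.random((NUM_POINTS, NUM_AFFORDANCES))    # toy network outputs in [0, 1]

def per_affordance_iou(scores, gt, threshold=0.5):
    """IoU between thresholded per-point predictions and ground-truth
    masks, computed separately per affordance category (illustrative)."""
    pred = scores >= threshold
    inter = np.logical_and(pred, gt).sum(axis=0).astype(float)
    union = np.logical_or(pred, gt).sum(axis=0).astype(float)
    # An affordance absent from both prediction and ground truth counts as 1.0.
    return np.where(union > 0, inter / np.maximum(union, 1.0), 1.0)

ious = per_affordance_iou(scores, gt)
print(f"mean IoU over {NUM_AFFORDANCES} affordances: {ious.mean():.3f}")
```

Under this framing, the three benchmark tasks differ mainly in the input point cloud: the full shape, a partial view of it, or a rotated copy.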