https://scholars.lib.ntu.edu.tw/handle/123456789/638057
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Chia Sheng | en_US |
dc.contributor.author | Yeh, Jia Fong | en_US |
dc.contributor.author | Hsu, Hao | en_US |
dc.contributor.author | Su, Hung Ting | en_US |
dc.contributor.author | Lee, Ming-Sui | en_US |
dc.contributor.author | Hsu, Winston | en_US |
dc.date.accessioned | 2023-12-21T01:44:56Z | - |
dc.date.available | 2023-12-21T01:44:56Z | - |
dc.date.issued | 2023-01-01 | - |
dc.identifier.isbn | 9781728163277 | - |
dc.identifier.issn | 1520-6149 | - |
dc.identifier.uri | https://scholars.lib.ntu.edu.tw/handle/123456789/638057 | - |
dc.description.abstract | The large amount of data collected by LiDAR sensors raises the issue of LiDAR point cloud compression (PCC). Previous works on LiDAR PCC have used range image representations and followed the predictive coding paradigm to create a basic prototype of a coding framework. However, their prediction methods yield inaccurate results because they neglect invalid pixels in range images and omit future frames in the time step. Moreover, their handcrafted residual coding methods cannot fully exploit spatial redundancy. To remedy this, we propose a coding framework, BIRD-PCC. Our prediction module is aware of the coordinates of invalid pixels in range images and adopts a bidirectional scheme. We also introduce a deep-learned residual coding module that further exploits spatial redundancy within a residual frame. Experiments conducted on the SemanticKITTI and KITTI-360 datasets show that BIRD-PCC outperforms other methods under most bitrate conditions and generalizes well to unseen environments. | en_US |
dc.relation.ispartof | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | en_US |
dc.subject | Compression; Deep Learning; LiDAR; Point Clouds; Range Image | en_US |
dc.title | BIRD-PCC: Bi-Directional Range Image-Based Deep Lidar Point Cloud Compression | en_US |
dc.type | conference paper | en_US |
dc.identifier.doi | 10.1109/ICASSP49357.2023.10095458 | - |
dc.identifier.scopus | 2-s2.0-85177602138 | - |
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/85177602138 | - |
dc.relation.journalvolume | 2023-June | en_US |
dc.relation.pageend | 5 | en_US |
item.fulltext | no fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | - |
item.cerifentitytype | Publications | - |
item.openairetype | conference paper | - |
item.grantfulltext | none | - |
crisitem.author.dept | Networking and Multimedia | - |
crisitem.author.dept | Computer Science and Information Engineering | - |
crisitem.author.dept | Networking and Multimedia | - |
crisitem.author.dept | Computer Science and Information Engineering | - |
crisitem.author.dept | MediaTek-NTU Research Center | - |
crisitem.author.orcid | 0000-0002-6699-6694 | - |
crisitem.author.orcid | 0000-0002-3330-0638 | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
Appears in Collections: | Department of Computer Science and Information Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated.