
Query:

Scholar name: Xue Jianru

Temporality-enhanced knowledge memory network for factoid question answering EI SCIE Scopus CSCD
Journal Article | 2018, 19(1), 104-115 | FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING
WoS CC Cited Count: 1 SCOPUS Cited Count: 2
Abstract:

Question answering is an important problem that aims to deliver specific answers to questions posed by humans in natural language. How to efficiently identify the exact answer with respect to a given question has become an active line of research. Previous approaches in factoid question answering tasks typically focus on modeling the semantic relevance or syntactic relationship between a given question and its corresponding answer. Most of these models suffer when a question contains very little content that is indicative of the answer. In this paper, we devise an architecture named the temporality-enhanced knowledge memory network (TE-KMN) and apply the model to a factoid question answering dataset from a trivia competition called quiz bowl. Unlike most existing approaches, our model encodes not only the content of questions and answers, but also the temporal cues in a sequence of ordered sentences that gradually reveal the answer. Moreover, our model collaboratively uses external knowledge for a better understanding of a given question. The experimental results demonstrate that our method achieves better performance than several state-of-the-art methods.
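
For intuition, here is a minimal numpy sketch of the two ingredients this abstract describes: a recurrent encoder consumes the ordered question sentences so that its state accumulates the clues revealed so far, and dot-product attention over an external knowledge memory enriches that state before candidate answers are scored. All dimensions, weights, and the attention form are illustrative assumptions, not the TE-KMN implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 16                                   # hidden/embedding size (illustrative)
sent_vecs = rng.normal(size=(4, d))      # 4 ordered question sentences, pre-encoded
memory = rng.normal(size=(10, d))        # 10 external knowledge entries
answers = rng.normal(size=(5, d))        # candidate answer embeddings

# Temporal encoding: a plain RNN consumes the sentences in order, so the
# state after sentence t reflects every clue revealed up to that point.
Wx = rng.normal(size=(d, d)) * 0.1
Wh = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for x in sent_vecs:
    h = np.tanh(Wx @ x + Wh @ h)

# Knowledge memory: attend over the external entries and mix them into the state.
att = softmax(memory @ h)                # relevance of each knowledge entry
h = h + att @ memory                     # knowledge-augmented question encoding

# Score the candidate answers against the enriched encoding.
scores = softmax(answers @ h)
print("predicted answer index:", scores.argmax())
```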

Keyword:

Temporality interaction; Question answering; Knowledge memory

Cite:

GB/T 7714 Duan, Xin-yu, Tang, Si-liang, Zhang, Sheng-yu, et al. Temporality-enhanced knowledge memory network for factoid question answering [J]. FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2018, 19(1): 104-115.
MLA Duan, Xin-yu, et al. "Temporality-enhanced knowledge memory network for factoid question answering." FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING 19.1 (2018): 104-115.
APA Duan, Xin-yu, Tang, Si-liang, Zhang, Sheng-yu, Zhang, Yin, Zhao, Zhou, Xue, Jian-ru, et al. Temporality-enhanced knowledge memory network for factoid question answering. FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2018, 19(1), 104-115.
Large-scale vocabularies with local graph diffusion and mode seeking EI SCIE Scopus
Journal Article | 2018, 63, 1-8 | SIGNAL PROCESSING-IMAGE COMMUNICATION
SCOPUS Cited Count: 1
Abstract:

In this work, we propose a large-scale clustering method that captures the intrinsic manifold structure of local features by graph diffusion for image retrieval. The proposed method is a mode-seeking-like algorithm: it finds the mode of each data point using a stochastic matrix produced by the same local graph diffusion process. While mode seeking algorithms are normally costly, our method generates large-scale vocabularies efficiently because it is not iterative and its major computational steps are done in parallel. Furthermore, unlike other clustering methods such as k-means and spectral clustering, the proposed algorithm does not need the number of clusters to be appointed empirically beforehand, and its time complexity is independent of the number of clusters. Experimental results on standard image retrieval datasets demonstrate that the proposed method compares favorably to previous large-scale clustering methods. (C) 2018 Elsevier B.V. All rights reserved.
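
To make the recipe concrete, here is a hedged numpy sketch: build a k-NN affinity graph over the features, row-normalize it into a stochastic matrix, run a short diffusion to estimate density, and point each sample at the neighbor with the highest diffused density; following these parent pointers yields clusters without fixing their number in advance, and each point's parent can be computed independently (hence in parallel). The kernel, diffusion depth, and density definition are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian blobs stand in for local feature descriptors.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
n, k, sigma = len(X), 8, 0.5

# k-NN affinity graph -> row-stochastic transition matrix.
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * sigma**2))
idx = np.argsort(d2, axis=1)[:, 1:k + 1]         # k nearest neighbours (skip self)
A = np.zeros_like(W)
A[np.arange(n)[:, None], idx] = W[np.arange(n)[:, None], idx]
P = A / A.sum(1, keepdims=True)

# A short diffusion spreads density along the manifold.
density = np.linalg.matrix_power(P, 3).sum(0)    # diffused density per point

# Mode seeking: each point's parent is the neighbour (or itself) with the
# highest diffused density; self-parented points are the modes.
parent = np.empty(n, dtype=int)
for i in range(n):
    cand = np.append(idx[i], i)
    parent[i] = cand[np.argmax(density[cand])]
labels = parent.copy()
for _ in range(n):                               # follow pointers to a fixed point
    labels = parent[labels]
print("clusters found:", len(set(labels.tolist())))
```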

Keyword:

Image retrieval; Large-scale clustering; Local graph diffusion; Mode-seeking

Cite:

GB/T 7714 Pang, Shanmin, Xue, Jianru, Gao, Zhanning, et al. Large-scale vocabularies with local graph diffusion and mode seeking [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 63: 1-8.
MLA Pang, Shanmin, et al. "Large-scale vocabularies with local graph diffusion and mode seeking." SIGNAL PROCESSING-IMAGE COMMUNICATION 63 (2018): 1-8.
APA Pang, Shanmin, Xue, Jianru, Gao, Zhanning, Zheng, Lihong, Zhu, Li. Large-scale vocabularies with local graph diffusion and mode seeking. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 63, 1-8.
Adding attentiveness to the neurons in recurrent neural networks EI Scopus
Conference Paper | 2018, 11213 LNCS, 136-152 | 15th European Conference on Computer Vision, ECCV 2018
Abstract:

Recurrent neural networks (RNNs) are capable of modeling the temporal dynamics of complex sequential information. However, the structures of existing RNN neurons mainly focus on controlling the contributions of current and historical information but do not explore the different importance levels of different elements in an input vector of a time slot. We propose adding a simple yet effective Element-wise-Attention Gate (EleAttG) to an RNN block (e.g., all RNN neurons in a network layer) that empowers the RNN neurons to have the attentiveness capability. For an RNN block, an EleAttG is added to adaptively modulate the input by assigning different levels of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Specifically, the modulation of the input is content adaptive and is performed at fine granularity, being element-wise rather than input-wise. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structures, e.g., standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to the action recognition tasks on both 3D human skeleton data and RGB videos. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly boosts the power of RNNs. © Springer Nature Switzerland AG 2018.
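
The gate itself is simple enough to sketch. Below, a vanilla RNN cell is augmented with an element-wise attention gate computed from the current input and the previous hidden state; the gate rescales each dimension of the input before the usual recurrent update, which is the core EleAttG operation (weights and sizes here are illustrative; the paper applies the same gate to standard RNN, LSTM, and GRU blocks).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d_in, d_h, T = 8, 16, 5
xs = rng.normal(size=(T, d_in))             # an input sequence (illustrative)

Wx = rng.normal(size=(d_h, d_in)) * 0.1     # standard RNN weights
Wh = rng.normal(size=(d_h, d_h)) * 0.1
Wax = rng.normal(size=(d_in, d_in)) * 0.1   # gate: input -> attention
Wah = rng.normal(size=(d_in, d_h)) * 0.1    # gate: hidden -> attention

h = np.zeros(d_h)
for x in xs:
    # Element-wise attention: one importance weight per input dimension,
    # conditioned on both the current input and the previous hidden state.
    a = sigmoid(Wax @ x + Wah @ h)
    x_mod = a * x                           # modulate the input element-wise
    h = np.tanh(Wx @ x_mod + Wh @ h)        # then the usual RNN update
print("final hidden state norm:", np.linalg.norm(h).round(3))
```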

Keyword:

Action recognition; Element-wise-Attention Gate (EleAttG); Historical information; Recurrent neural networks (RNNs); RGB video; Sequential information; Skeleton; Temporal dynamics

Cite:

GB/T 7714 Xue, Jianru, Lan, Cuiling, Zeng, Wenjun, et al. Adding attentiveness to the neurons in recurrent neural networks [C]. 2018: 136-152.
MLA Xue, Jianru, et al. "Adding attentiveness to the neurons in recurrent neural networks." (2018): 136-152.
APA Xue, Jianru, Lan, Cuiling, Zeng, Wenjun, Gao, Zhanning, Zheng, Nanning. Adding attentiveness to the neurons in recurrent neural networks. (2018): 136-152.
Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization EI
Conference Paper | 2018, 2018-June, 734-739 | 2018 IEEE Intelligent Vehicles Symposium, IV 2018
Abstract:

In this paper, we propose a robust point set registration algorithm that combines correntropy with the point-to-plane distance and can register rigid point sets contaminated by noise and outliers. First, since correntropy performs well on data with non-Gaussian noise, we introduce it to model the rigid point set registration problem based on the point-to-plane distance. Second, we propose an iterative algorithm to solve this problem, which alternately computes the correspondences and the transformation parameters, each in closed form. Experimental results on simulated data demonstrate the high precision and robustness of the proposed algorithm. In addition, LiDAR-based localization experiments on an automated vehicle show satisfactory localization accuracy and time consumption. © 2018 IEEE.
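
A toy numpy sketch of one plausible reading of the algorithm: alternate nearest-neighbor correspondence with a closed-form (here linearized, small-angle) solve of the point-to-plane objective, where a Gaussian-kernel correntropy weight on each residual suppresses outliers. The surface, kernel width, and solver details are illustrative assumptions, not the authors' exact derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

# Toy target: a gently curved sheet with analytic normals.
xy = rng.uniform(-1, 1, (200, 2))
Q = np.column_stack([xy, 0.2 * (xy**2).sum(1)])
N = np.column_stack([-0.4 * xy, np.ones(200)])
N /= np.linalg.norm(N, axis=1, keepdims=True)

# Source: a translated copy of the target plus a few gross outliers.
t_true = np.array([0.05, -0.03, 0.08])
P = Q + t_true
P[:10] += rng.normal(0, 1.0, (10, 3))

R, t, sigma = np.eye(3), np.zeros(3), 0.1
for _ in range(20):
    Pt = P @ R.T + t
    j = ((Pt[:, None] - Q[None]) ** 2).sum(-1).argmin(1)   # correspondences
    q, n = Q[j], N[j]
    r = ((Pt - q) * n).sum(1)                  # point-to-plane residuals
    w = np.exp(-r**2 / (2 * sigma**2))         # correntropy weights: outliers -> ~0
    # Weighted, linearized point-to-plane solve for [rotation; translation].
    A = np.hstack([np.cross(Pt, n), n])
    x = np.linalg.lstsq(A * w[:, None], -r * w, rcond=None)[0]
    R = (np.eye(3) + skew(x[:3])) @ R          # small-angle rotation update
    t = t + x[3:]
# The estimate should land near -t_true (the correction that undoes the offset).
print("estimated correction:", t.round(3), "expected ~", -t_true)
```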

Keyword:

Automated vehicles; Closed form solutions; Iterative algorithm; Localization accuracy; Non-Gaussian noise; Point-set registrations; Time consumption; Transformation parameters

Cite:

GB/T 7714 Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization [C]. 2018: 734-739.
MLA Xu, Guanglin, et al. "Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization." (2018): 734-739.
APA Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, Zhang, Sirui, Chen, Badong, Zhang, Xuetao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization. (2018): 734-739.
Tunnel Brightness Compensation With Spatial-Temporal Visual-Content Preservation EI SCIE Scopus
Journal Article | 2018, 65(11), 9005-9015 | IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
WoS CC Cited Count: 1 SCOPUS Cited Count: 1
Abstract:

Tunnels cause many traffic accidents in China every year, which is related to inappropriate driving behavior combined with reduced visual comfort when driving through tunnels. The most common phenomenon when driving through the inlet and outlet sections of a tunnel is abruptly varying visual brightness, which causes short-term blindness for the driver, known as the "black hole" and "white hole" effects, respectively. In this paper, we propose a tunnel brightness compensation model with spatial-temporal visual-content preservation. The highlights of this paper are twofold. 1) We analyze the visual brightness variation by introducing spatio-temporal orientation energy for a stable characterization of the visual comfort of drivers, in view of the distinguishable visual content caused by a sequential brightness variation. 2) We construct a tunnel brightness compensation model that preserves the spatial-temporal visual content by visual-content matching across multiple frames. Our method can markedly improve the brightness quality while maintaining the scene content. Extensive visual brightness compensation experiments on 60 visual clips of tunnel inlet and outlet sections demonstrate that the proposed method achieves state-of-the-art performance.
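
The paper's model rests on spatio-temporal orientation energy and multi-frame visual-content matching; as a deliberately simple stand-in, the sketch below only illustrates the trade-off it manages: a per-frame multiplicative gain compensates an abrupt brightness drop at a tunnel inlet, and temporally smoothing that gain avoids flicker so the scene content stays stable. All numbers are invented, and this is not the paper's compensation model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic luminance clip in [0, 1]: a mid-gray scene that abruptly darkens
# at frame 4, mimicking the "black hole" effect at a tunnel inlet.
frames = [np.full((48, 64), 0.55) + rng.normal(0, 0.02, (48, 64))
          for _ in range(10)]
for f in frames[4:]:
    f *= 0.25                                 # sudden brightness drop

target, alpha, gain = 0.5, 0.8, 1.0           # desired mean, smoothing, gain state
out = []
for f in frames:
    inst = target / max(f.mean(), 1e-6)       # instantaneous gain for this frame
    gain = alpha * gain + (1 - alpha) * inst  # temporal smoothing avoids flicker
    out.append(np.clip(f * gain, 0.0, 1.0))   # multiplicative gain preserves
                                              # relative contrast (scene content)
print("compensated frame means:", [round(f.mean(), 2) for f in out])
```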

Keyword:

Tunnel safety; Brightness compensation; Spatio-temporal content-preserving

Cite:

GB/T 7714 Fang, Jianwu, Xue, Jianru. Tunnel Brightness Compensation With Spatial-Temporal Visual-Content Preservation [J]. IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2018, 65(11): 9005-9015.
MLA Fang, Jianwu, et al. "Tunnel Brightness Compensation With Spatial-Temporal Visual-Content Preservation." IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS 65.11 (2018): 9005-9015.
APA Fang, Jianwu, Xue, Jianru. Tunnel Brightness Compensation With Spatial-Temporal Visual-Content Preservation. IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2018, 65(11), 9005-9015.
Video Object Discovery and Co-Segmentation with Extremely Weak Supervision EI SCIE Scopus
Journal Article | 2017, 39(10), 2074-2088 | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | IF: 9.455
WoS CC Cited Count: 5
Abstract:

We present a spatio-temporal energy minimization formulation for simultaneous video object discovery and co-segmentation across multiple videos containing irrelevant frames. Our approach overcomes a limitation that most existing video co-segmentation methods possess, i.e., they perform poorly when dealing with practical videos in which the target objects are not present in many frames. Our formulation incorporates a spatio-temporal auto-context model, which is combined with appearance modeling for superpixel labeling. The superpixel-level labels are propagated to the frame level through a multiple instance boosting algorithm with spatial reasoning, based on which frames containing the target object are identified. Our method only needs to be bootstrapped with the frame-level labels for a few video frames (e.g., usually 1 to 3) to indicate if they contain the target objects or not. Extensive experiments on four datasets validate the efficacy of our proposed method: 1) object segmentation from a single video on the SegTrack dataset, 2) object co-segmentation from multiple videos on a video co-segmentation dataset, and 3) joint object discovery and co-segmentation from multiple videos containing irrelevant frames on the MOViCS dataset and XJTU-Stevens, a new dataset that we introduce in this paper. The proposed method compares favorably with the state-of-the-art in all of these experiments.
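
The formulation above is an energy minimization over superpixel labels that combines an appearance (unary) term with spatio-temporal smoothness (pairwise) terms. The toy sketch below minimizes an energy of that shape on a 1-D chain of "superpixels" with iterated conditional modes; it is only meant to make the energy structure concrete and omits the auto-context model, multiple-instance boosting, and frame-level reasoning entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "superpixel" appearance scores: high -> looks like the target object.
# A contiguous object region sits in the middle of a noisy background.
score = np.r_[rng.normal(0.2, 0.1, 8), rng.normal(0.8, 0.1, 6),
              rng.normal(0.2, 0.1, 8)]
lam = 0.3                                    # smoothness weight

def energy(labels):
    # Unary: cost of disagreeing with appearance; pairwise: label changes
    # between neighbouring superpixels are penalised.
    unary = np.where(labels == 1, 1 - score, score).sum()
    pairwise = lam * (labels[1:] != labels[:-1]).sum()
    return unary + pairwise

labels = (score > 0.5).astype(int)           # initialise from appearance alone
changed = True
while changed:                               # iterated conditional modes
    changed = False
    for i in range(len(labels)):
        for v in (0, 1):
            trial = labels.copy()
            trial[i] = v
            if energy(trial) < energy(labels):
                labels, changed = trial, True
print("object superpixels:", np.flatnonzero(labels))
```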

Keyword:

Video object discovery; Spatial-MILBoost; Spatio-temporal auto-context model; Video object co-segmentation

Cite:

GB/T 7714 Wang, Le, Hua, Gang, Sukthankar, Rahul, et al. Video Object Discovery and Co-Segmentation with Extremely Weak Supervision [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39(10): 2074-2088.
MLA Wang, Le, et al. "Video Object Discovery and Co-Segmentation with Extremely Weak Supervision." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 39.10 (2017): 2074-2088.
APA Wang, Le, Hua, Gang, Sukthankar, Rahul, Xue, Jianru, Niu, Zhenxing, Zheng, Nanning. Video Object Discovery and Co-Segmentation with Extremely Weak Supervision. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39(10), 2074-2088.
View Adaptive Recurrent Neural Networks for High Performance Human Action Recognition from Skeleton Data EI CPCI-S Scopus
Conference Paper | 2017, 2136-2145 | 16th IEEE International Conference on Computer Vision (ICCV)
WoS CC Cited Count: 2
Abstract:

Skeleton-based human action recognition has recently attracted increasing attention due to the popularity of 3D skeleton data. One main challenge lies in the large view variations in captured human actions. We propose a novel view adaptation scheme to automatically regulate observation viewpoints during the occurrence of an action. Rather than re-positioning the skeletons based on a human defined prior criterion, we design a view adaptive recurrent neural network (RNN) with LSTM architecture, which enables the network itself to adapt to the most suitable observation viewpoints from end to end. Extensive experiment analyses show that the proposed view adaptive RNN model strives to (1) transform the skeletons of various views to much more consistent viewpoints and (2) maintain the continuity of the action rather than transforming every frame to the same position with the same body orientation. Our model achieves significant improvement over the state-of-the-art approaches on three benchmark datasets.
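
The central operation is a learned, per-frame re-observation of the skeleton. In the paper, a small LSTM branch regresses the view parameters end to end; the sketch below fakes those outputs with random values and simply shows how per-frame rotations and translations would be applied to the joints before the main classification RNN sees them (all shapes and magnitudes are illustrative).

```python
import numpy as np

def rot_xyz(a, b, c):
    """Rotation matrix from Euler angles (radians) about x, y, z."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0],
                   [np.sin(c), np.cos(c), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

rng = np.random.default_rng(0)
T, J = 20, 15
skeleton = rng.normal(size=(T, J, 3))        # T frames of J 3-D joints

# Stand-ins for the view-adaptation subnetwork's per-frame outputs, which
# the real model learns jointly with the classifier.
angles = rng.normal(0, 0.1, size=(T, 3))     # per-frame rotation angles
trans = rng.normal(0, 0.1, size=(T, 3))      # per-frame translations

adapted = np.stack([(skeleton[t] - trans[t]) @ rot_xyz(*angles[t]).T
                    for t in range(T)])
# 'adapted' would feed the main classification LSTM in place of raw joints.
print("adapted skeleton tensor:", adapted.shape)
```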

Cite:

GB/T 7714 Zhang, Pengfei, Lan, Cuiling, Xing, Junliang, et al. View Adaptive Recurrent Neural Networks for High Performance Human Action Recognition from Skeleton Data [C]. 2017: 2136-2145.
MLA Zhang, Pengfei, et al. "View Adaptive Recurrent Neural Networks for High Performance Human Action Recognition from Skeleton Data." (2017): 2136-2145.
APA Zhang, Pengfei, Lan, Cuiling, Xing, Junliang, Zeng, Wenjun, Xue, Jianru, Zheng, Nanning. View Adaptive Recurrent Neural Networks for High Performance Human Action Recognition from Skeleton Data. (2017): 2136-2145.
Multi-Saliency Detection via Instance Specific Element Homology EI CPCI-S Scopus
Conference Paper | 2017, 661-668 | International Conference on Digital Image Computing - Techniques and Applications (DICTA)
Abstract:

Saliency detection aims to find the useful and attractive regions of an image. In many situations there may be multiple objects in the image, and these objects may be equally attractive. Moreover, the appearance of pixels within one object may vary considerably, which can cause the loss of object integrity when detecting saliency. To this end, this paper proposes a multi-saliency detection model via Instance Specific Element Homology (ISEH), in which the integrity of an object is also considered. ISEH is formulated as the belongingness probability that two elements (e.g., pixels or superpixels) belong to the same object, which simultaneously exploits the linking of elements within a given proposal and the objectness of that proposal. This work incorporates ISEH into the foreground and background estimation and a final optimization, and achieves superior multi-saliency detection performance compared with the state of the art.
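
One plausible way to realize the belongingness probability described above: two elements are likely to lie on the same object when a large share of the high-objectness proposal mass covers both of them. The box representation and the normalization below are illustrative choices, not the paper's formula.

```python
import numpy as np

# Illustrative object proposals as boxes (x0, y0, x1, y1) with objectness scores.
proposals = np.array([[0, 0, 4, 4], [3, 3, 9, 9], [0, 0, 9, 9]], float)
objectness = np.array([0.9, 0.8, 0.3])

def contains(box, p):
    return box[0] <= p[0] <= box[2] and box[1] <= p[1] <= box[3]

def belongingness(p, q):
    """Score that elements p and q belong to the same object: objectness mass
    of proposals covering both, normalised by the mass covering either."""
    both = sum(o for b, o in zip(proposals, objectness)
               if contains(b, p) and contains(b, q))
    either = sum(o for b, o in zip(proposals, objectness)
                 if contains(b, p) or contains(b, q))
    return both / either if either else 0.0

print(belongingness((1, 1), (2, 2)))   # same small object -> high score (1.0)
print(belongingness((1, 1), (8, 8)))   # different objects -> low score (0.15)
```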

Cite:

GB/T 7714 Zhou, Li, Fang, Jianwu, Ju, Yongfeng, et al. Multi-Saliency Detection via Instance Specific Element Homology [C]. 2017: 661-668.
MLA Zhou, Li, et al. "Multi-Saliency Detection via Instance Specific Element Homology." (2017): 661-668.
APA Zhou, Li, Fang, Jianwu, Ju, Yongfeng, Xue, Jianru. Multi-Saliency Detection via Instance Specific Element Homology. (2017): 661-668.
Boosting CNN-based pedestrian detection via 3d lidar fusion in autonomous driving EI Scopus
Conference Paper | 2017, 10667 LNCS, 3-13 | 9th International Conference on Image and Graphics, ICIG 2017
Abstract:

Robust pedestrian detection has been treated as one of the main pursuits of excellent autonomous driving. Recently, some convolutional neural network (CNN) based detectors, such as Faster R-CNN, have made great progress toward this goal. However, their performance still leaves much room for improvement, even with complex learning architectures. In this paper, we introduce a 3D LiDAR sensor to boost CNN-based pedestrian detection. Facing the heterogeneous and asynchronous properties of the two sensors, we first introduce an accurate calibration method for the visual and LiDAR sensors. Then, geometric clues acquired by the 3D LiDAR are exploited to eliminate erroneous pedestrian proposals generated by state-of-the-art CNN-based detectors. Extensive experiments verify the superiority of the proposed method. © Springer International Publishing AG 2017.
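
A hedged sketch of how "geometric clues" from the LiDAR might prune proposals: project the 3D points into each CNN proposal box and keep the box only if the points falling inside span a plausible pedestrian height. The intrinsic matrix, the point clusters, and the height thresholds are invented for illustration; the paper's calibration and clues are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1.0]])  # intrinsics (made up)

def project(pts):
    """Project points in the camera frame (metres) to pixel coordinates."""
    uv = (K @ pts.T).T
    return uv[:, :2] / uv[:, 2:3]

# A pedestrian-sized cluster and a short pole-like cluster, both 10 m ahead
# (camera frame: x right, y down, z forward; y spans the object's height).
ped = np.column_stack([rng.normal(0, 0.2, 200),
                       rng.uniform(-1.7, 0, 200), np.full(200, 10.0)])
pole = np.column_stack([rng.normal(2, 0.05, 50),
                        rng.uniform(-0.4, 0, 50), np.full(50, 10.0)])

def plausible(box, pts, h_range=(0.8, 2.3)):
    """Keep a proposal only if the LiDAR points inside it span a human height."""
    uv = project(pts)
    inside = ((uv[:, 0] >= box[0]) & (uv[:, 0] <= box[2]) &
              (uv[:, 1] >= box[1]) & (uv[:, 1] <= box[3]))
    if inside.sum() < 10:
        return False                       # too little LiDAR evidence
    height = pts[inside, 1].max() - pts[inside, 1].min()  # vertical extent (m)
    return h_range[0] <= height <= h_range[1]

for name, pts in [("pedestrian", ped), ("pole", pole)]:
    uv = project(pts)
    box = [uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()]
    print(name, "proposal kept:", plausible(box, pts))
```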

Keyword:

Autonomous driving; Calibration method; Complex learning; Convolutional Neural Networks (CNN); Large spaces; LiDAR sensors; Pedestrian detection; State of the art

Cite:

GB/T 7714 Dou, Jian, Fang, Jianwu, Li, Tao, et al. Boosting CNN-based pedestrian detection via 3d lidar fusion in autonomous driving [C]. 2017: 3-13.
MLA Dou, Jian, et al. "Boosting CNN-based pedestrian detection via 3d lidar fusion in autonomous driving." (2017): 3-13.
APA Dou, Jian, Fang, Jianwu, Li, Tao, Xue, Jianru. Boosting CNN-based pedestrian detection via 3d lidar fusion in autonomous driving. (2017): 3-13.
Online high-accurate calibration of RGB+3D-LiDAR for autonomous driving EI Scopus
Conference Paper | 2017, 10668 LNCS, 254-263 | 9th International Conference on Image and Graphics, ICIG 2017
Abstract:

Vision+X has become a promising direction for scene understanding in autonomous driving, where X may be another, non-vision sensor. However, it is difficult to exploit the full capabilities of different sensors, mainly because of their heterogeneous and asynchronous properties. To this end, this paper calibrates the commonly used RGB+3D-LiDAR data by synchronization and an online spatial structure alignment, and obtains highly accurate calibration performance. The main highlights are that (1) we rectify the 3D points with the aid of a differential inertial measurement unit (IMU) and increase the frequency of the 3D laser data to match that of the RGB data, and (2) the method updates the external calibration parameters online with high accuracy through a more reliable spatial-structure matching of the RGB and 3D-LiDAR data. In-depth experimental analysis validates the superiority of the proposed method. © Springer International Publishing AG 2017.
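
The online-refinement idea can be illustrated with a toy alignment score: project the LiDAR points under a candidate extrinsic offset, and measure how strongly the projected depth discontinuity lands on an image edge, keeping the best-scoring candidate. The scene, the 1-D search over a single translation component, and the scoring function are all illustrative simplifications of spatial-structure matching.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1.0]])  # intrinsics (made up)

# Toy scene: a soft vertical image edge at column u0, and LiDAR points whose
# depth jumps at the physical boundary that projects to that same column.
u0 = 295.0
edge_map = np.tile(np.exp(-(np.arange(640) - u0) ** 2 / 50.0), (480, 1))
x = rng.uniform(-2, 2, 500)
pts = np.column_stack([x, rng.uniform(-1, 1, 500),
                       np.where(x < -0.2857, 8.0, 15.0)])  # depth discontinuity

def alignment_score(t_x):
    """Project points shifted by a candidate x-offset and sum the edge response
    where neighbouring projected depths disagree (a depth discontinuity)."""
    p = pts + np.array([t_x, 0.0, 0.0])
    uv = (K @ p.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    order = np.argsort(uv[:, 0])
    uv, z = uv[order], p[order, 2]
    jump = np.abs(np.diff(z)) > 1.0
    return edge_map[uv[:-1][jump, 1], uv[:-1][jump, 0]].sum()

# Online refinement as a 1-D grid search over the extrinsic x-translation.
cands = np.linspace(-0.3, 0.3, 61)
best = cands[np.argmax([alignment_score(c) for c in cands])]
print("refined x-offset:", round(float(best), 2), "(true offset: 0.0)")
```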

Keyword:

3D LiDARs; Autonomous driving; In-depth analysis; Inertial Measurement Unit (IMU); Laser data; Non-vision sensors; Scene understanding; Spatial structure

Cite:

GB/T 7714 Li, Tao, Fang, Jianwu, Zhong, Yang, et al. Online high-accurate calibration of RGB+3D-LiDAR for autonomous driving [C]. 2017: 254-263.
MLA Li, Tao, et al. "Online high-accurate calibration of RGB+3D-LiDAR for autonomous driving." (2017): 254-263.
APA Li, Tao, Fang, Jianwu, Zhong, Yang, Wang, Di, Xue, Jianru. Online high-accurate calibration of RGB+3D-LiDAR for autonomous driving. (2017): 254-263.