
Query:

Scholar name (学者姓名): Wang Jinjun (王进军)

Semi-supervised person Re-Identification using multi-view clustering EI
Journal article | 2019, 88: 285-297 | Pattern Recognition

Abstract:

Person Re-Identification (Re-Id) is a challenging task focusing on identifying the same person among disjoint camera views. A number of deep learning algorithms have been reported for this task in a fully-supervised fashion, which requires a large amount of labeled training data, while obtaining high-quality labels for Re-Id is extremely time-consuming. To address this problem, we propose a semi-supervised Re-Id framework that uses only a small portion of labeled data and some additional unlabeled samples. This paper approaches the problem by constructing a set of heterogeneous Convolutional Neural Networks (CNNs) fine-tuned using the labeled portion, and then propagating the labels to the unlabeled portion to further fine-tune the overall system. In this work, label estimation is a key component of the propagation process. We propose a novel multi-view clustering method which integrates features of multiple heterogeneous CNNs to cluster and generate pseudo labels for unlabeled samples. We then fine-tune each of the heterogeneous CNNs by minimizing an identification loss and a verification loss simultaneously, using training data with both true labels and pseudo labels. The procedure is iterated until the estimation of the pseudo labels no longer changes. Extensive experiments on three large-scale person Re-Id datasets demonstrate the effectiveness of the proposed method. © 2018 Elsevier Ltd
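The pseudo-label propagation loop described in this abstract can be sketched as follows. This is a minimal illustration only: it assumes plain k-means over concatenated per-view features as the clustering step, omits the CNN fine-tuning between rounds, and the function names (`kmeans`, `propagate_pseudo_labels`) are hypothetical, not from the paper.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization;
    stands in for the paper's multi-view clustering step."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def propagate_pseudo_labels(view_feats, k, max_rounds=10):
    """One reading of the label-propagation loop: fuse per-view CNN
    features by concatenation, cluster, and stop once the pseudo labels
    no longer change (the paper's stopping criterion). In the full
    framework each CNN would be fine-tuned on true + pseudo labels
    between rounds; that step is omitted here."""
    prev = None
    for _ in range(max_rounds):
        fused = np.concatenate(view_feats, axis=1)  # simple multi-view fusion
        pseudo = kmeans(fused, k)
        if prev is not None and np.array_equal(pseudo, prev):
            break  # pseudo labels stabilized
        prev = pseudo
    return pseudo
```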

Keyword:

Convolutional neural network; Labeled training data; Multi-view clustering; Person re-identification; Propagation process; Semi-supervised learning; Semi-supervised; Unlabeled samples

Cite:

GB/T 7714: Xin, Xiaomeng, Wang, Jinjun, Xie, Ruji, et al. Semi-supervised person Re-Identification using multi-view clustering [J]. Pattern Recognition, 2019, 88: 285-297.
MLA: Xin, Xiaomeng, et al. "Semi-supervised person Re-Identification using multi-view clustering." Pattern Recognition 88 (2019): 285-297.
APA: Xin, Xiaomeng, Wang, Jinjun, Xie, Ruji, Zhou, Sanping, Huang, Wenli, & Zheng, Nanning. Semi-supervised person Re-Identification using multi-view clustering. Pattern Recognition, 2019, 88, 285-297.
Improving CNN Performance Accuracies With Min-Max Objective EI SCIE Scopus
Journal article | 2018, 29(7): 2872-2885 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
WoS CC Cited Count: 1 | Scopus Cited Count: 3

Abstract:

We propose a novel method for improving the performance accuracy of a convolutional neural network (CNN) without the need to increase the network complexity. We accomplish this goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with an insignificant increase in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, with both image classification and face verification tasks, reveal that employing the proposed Min-Max objective in the training process can remarkably improve the performance accuracy of a CNN model in comparison with the same model trained without this objective.
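A toy version of such a within/between-manifold penalty, evaluated on a batch of feature vectors, might look like the following. This is an illustrative sketch under stated assumptions, not the paper's exact formulation; the function name `min_max_objective` and the hinge-with-margin form for the between-manifold term are assumptions.

```python
import numpy as np

def min_max_objective(feats, labels, margin=1.0):
    """Penalty on features from a layer below the output layer:
    within-class ("within-manifold") squared distances should be small,
    between-class distances large (here via a hinge with a margin)."""
    feats = np.asarray(feats, float)
    labels = np.asarray(labels)
    within, between = [], []
    n = len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((feats[i] - feats[j]) ** 2)
            (within if labels[i] == labels[j] else between).append(d)
    # minimize within-manifold distance; penalize between-manifold
    # distances that fall inside the margin
    return np.mean(within) + np.mean(np.maximum(0.0, margin - np.array(between)))
```

Features that form compact, well-separated class manifolds score lower than the same features with classes mixed together, which is the property the training objective pushes toward.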

Keyword:

Image classification; Convolutional neural network (CNN); Incremental minibatch training procedure; Face verification; Min-max objective

Cite:

GB/T 7714: Shi, Weiwei, Gong, Yihong, Tao, Xiaoyu, et al. Improving CNN Performance Accuracies With Min-Max Objective [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29(7): 2872-2885.
MLA: Shi, Weiwei, et al. "Improving CNN Performance Accuracies With Min-Max Objective." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 29.7 (2018): 2872-2885.
APA: Shi, Weiwei, Gong, Yihong, Tao, Xiaoyu, Wang, Jinjun, & Zheng, Nanning. Improving CNN Performance Accuracies With Min-Max Objective. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29(7), 2872-2885.
Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification EI SCIE Scopus
Journal article | 2018, 20(3): 593-604 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 1

Abstract:

Person reidentification aims at matching images of the same person across disjoint camera views, which is a challenging problem in the multimedia analysis, multimedia editing, and content-based media retrieval communities. The major challenge lies in how to preserve the similarity of the same person across video footage with large appearance variations, while discriminating different individuals. To address this problem, conventional methods usually consider the pairwise similarity between persons by only measuring the point-to-point distance. In this paper, we propose using a deep learning technique to model a novel set-to-set (S2S) distance, in which the underlying objective focuses on preserving the compactness of intraclass samples for each camera view, while maximizing the margin between the intraclass set and interclass set. The S2S distance metric consists of three terms, namely, the class-identity term, the relative distance term, and the regularization term. The class-identity term keeps the intraclass samples within each camera view gathered together, the relative distance term maximizes the distance between the intraclass set and interclass set across different camera views, and the regularization term smooths the parameters of the deep convolutional neural network. As a result, the final learned deep model can effectively find the match to the probe object among various candidates in the video gallery by learning discriminative and stable feature representations. Using the CUHK01, CUHK03, PRID2011, and Market1501 benchmark datasets, we conducted extensive comparative evaluations to demonstrate the advantages of our method over the state-of-the-art approaches.
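The interplay of the class-identity term (set compactness) and the relative-distance term (a set-to-set margin) can be illustrated with a small sketch. The regularization term over network parameters is omitted, and the exact form below (per-set variance plus a hinge on the minimum cross-set distance) is an assumption for illustration, not the paper's S2S metric.

```python
import numpy as np

def s2s_margin_loss(set_a, set_b, margin=1.0):
    """Set-to-set comparison sketch: keep each camera-view set compact
    (class-identity term) while enforcing a margin between the two sets'
    closest members (relative-distance term)."""
    set_a = np.asarray(set_a, float)
    set_b = np.asarray(set_b, float)
    # compactness of each set: sum of per-dimension variances
    compact = set_a.var(axis=0).sum() + set_b.var(axis=0).sum()
    # all pairwise cross-set distances; hinge on the closest pair
    cross = np.linalg.norm(set_a[:, None] - set_b[None], axis=2)
    return compact + max(0.0, margin - cross.min())
```

Two tight, well-separated sets incur almost no loss, while overlapping sets are penalized through the margin term.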

Keyword:

Deep learning; Metric learning; Person re-identification; Set-to-set similarity comparison

Cite:

GB/T 7714: Zhou, Sanping, Wang, Jinjun, Shi, Rui, et al. Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20(3): 593-604.
MLA: Zhou, Sanping, et al. "Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification." IEEE TRANSACTIONS ON MULTIMEDIA 20.3 (2018): 593-604.
APA: Zhou, Sanping, Wang, Jinjun, Shi, Rui, Hou, Qiqi, Gong, Yihong, & Zheng, Nanning. Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification. IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20(3), 593-604.
Deep self-paced learning for person re-identification EI SCIE Scopus
Journal article | 2018, 76: 739-751 | PATTERN RECOGNITION
WoS CC Cited Count: 1 | Scopus Cited Count: 2

Abstract:

Person re-identification (Re-ID) usually suffers from noisy samples with background clutter and mutual occlusion, which make it extremely difficult to distinguish different individuals across disjoint camera views. In this paper, we propose a novel deep self-paced learning (DSPL) algorithm to alleviate this problem, in which we apply a self-paced constraint and symmetric regularization to help the relative distance metric train the deep neural network, so as to learn stable and discriminative features for person Re-ID. First, we propose a soft polynomial regularizer term that derives adaptive weights for samples based on both the training loss and the model age. As a result, high-confidence fidelity samples are emphasized and low-confidence noisy samples are suppressed at the early stage of the training process. Such a learning regime is naturally implemented under a self-paced learning (SPL) framework, in which sample weights are adaptively updated based on both model age and sample loss using an alternating optimization method. Second, we introduce a symmetric regularizer term to revise the asymmetric gradient back-propagation derived from the relative distance metric, so as to simultaneously minimize the intra-class distance and maximize the inter-class distance in each triplet unit. Finally, we build a part-based deep neural network in which the features of different body parts are first learned discriminatively in the lower convolutional layers and then fused in the higher fully connected layers. Experiments on several benchmark datasets have demonstrated the superior performance of our method compared with the state-of-the-art approaches. (C) 2017 Published by Elsevier Ltd.
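The self-paced weighting idea — suppress high-loss (likely noisy) samples while the model is young, admit them as it ages — can be sketched with a simple soft weighting. The linear form below is a generic SPL weighting chosen for clarity; it is not the paper's exact soft polynomial regularizer.

```python
import numpy as np

def spl_weights(losses, age):
    """Soft self-paced weights. `age` grows during training, admitting
    harder (higher-loss) samples over time; low-confidence noisy samples
    get weight ~0 early on, while easy samples always weigh close to 1."""
    losses = np.asarray(losses, float)
    return np.clip(1.0 - losses / age, 0.0, 1.0)
```

Samples with loss above the current age are excluded entirely; as the age parameter increases between epochs, previously suppressed samples re-enter training with small weights.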

Keyword:

Self-paced learning; Convolutional neural network; Person re-identification; Metric learning

Cite:

GB/T 7714: Zhou, Sanping, Wang, Jinjun, Meng, Deyu, et al. Deep self-paced learning for person re-identification [J]. PATTERN RECOGNITION, 2018, 76: 739-751.
MLA: Zhou, Sanping, et al. "Deep self-paced learning for person re-identification." PATTERN RECOGNITION 76 (2018): 739-751.
APA: Zhou, Sanping, Wang, Jinjun, Meng, Deyu, Xin, Xiaomeng, Li, Yubing, Gong, Yihong, et al. Deep self-paced learning for person re-identification. PATTERN RECOGNITION, 2018, 76, 739-751.
Deep ranking model by large adaptive margin learning for person re-identification EI SCIE Scopus
Journal article | 2018, 74: 241-252 | PATTERN RECOGNITION
WoS CC Cited Count: 1 | Scopus Cited Count: 3

Abstract:

Person re-identification aims to match images of the same person across disjoint camera views, which is a challenging problem in video surveillance. The major challenge of this task lies in how to preserve the similarity of the same person against large variations caused by complex backgrounds, mutual occlusions and different illuminations, while discriminating different individuals. In this paper, we present a novel deep ranking model with feature learning and fusion, which learns a large adaptive margin between the intra-class distance and inter-class distance to solve the person re-identification problem. Specifically, we organize the training images into a batch of pairwise samples. Treating these pairwise samples as inputs, we build a novel part-based deep convolutional neural network (CNN) to learn the layered feature representations while preserving a large adaptive margin. As a result, the final learned model can effectively find the match to the anchor image among the candidates in the gallery image set by learning discriminative and stable feature representations. Overcoming the weaknesses of conventional fixed-margin loss functions, our adaptive margin loss function is more appropriate for the dynamic feature space. On four benchmark datasets, PRID2011, Market1501, CUHK01 and 3DPeS, we conducted extensive comparative evaluations to demonstrate the advantages of the proposed method over the state-of-the-art approaches in person re-identification. (C) 2017 Published by Elsevier Ltd.
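One way to read the "large adaptive margin" idea is a triplet-style hinge whose margin tracks the current feature space rather than being a fixed constant. The specific choice below (margin proportional to the mean of the anchor-positive and anchor-negative distances) is purely illustrative and not claimed to be the paper's formulation.

```python
import numpy as np

def adaptive_margin_loss(anchor, pos, neg, scale=1.0):
    """Hinge on (intra-class distance - inter-class distance + margin),
    where the margin adapts to the current feature distances instead of
    being a fixed constant (illustrative choice)."""
    d_ap = np.linalg.norm(anchor - pos)   # intra-class distance
    d_an = np.linalg.norm(anchor - neg)   # inter-class distance
    margin = scale * 0.5 * (d_ap + d_an)  # margin adapts as features evolve
    return max(0.0, d_ap - d_an + margin)
```

A well-separated triplet yields zero loss, while a negative that sits close to the anchor is penalized even though the absolute distances are small.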

Keyword:

Deep ranking model; Person re-identification; Metric learning

Cite:

GB/T 7714: Wang, Jiayun, Zhou, Sanping, Wang, Jinjun, et al. Deep ranking model by large adaptive margin learning for person re-identification [J]. PATTERN RECOGNITION, 2018, 74: 241-252.
MLA: Wang, Jiayun, et al. "Deep ranking model by large adaptive margin learning for person re-identification." PATTERN RECOGNITION 74 (2018): 241-252.
APA: Wang, Jiayun, Zhou, Sanping, Wang, Jinjun, & Hou, Qiqi. Deep ranking model by large adaptive margin learning for person re-identification. PATTERN RECOGNITION, 2018, 74, 241-252.
TEXTURE-CENTRALIZED DEEP CONVOLUTIONAL NEURAL NETWORK FOR SINGLE IMAGE SUPER RESOLUTION EI CPCI-S Scopus
Conference paper | 2017: 3707-3710 | Chinese Automation Congress (CAC)
WoS CC Cited Count: 1

Abstract:

There has been significant progress in single-image super-resolution (SR) using deep convolutional neural networks. In this paper, we propose a modified deep convolutional neural network model incorporating image texture priors for single-image SR. The model consists of a particular feature extraction layer followed by an image reconstruction process, aiming to concentrate on image texture information so as to make the overall SR task more effective. The proposal is compared with current state-of-the-art methods on standard images. Our experimental results confirm that incorporating image texture prior information into the conventional high-resolution image reconstruction process leads to both better performance and faster convergence.

Keyword:

Super-resolution; Image texture prior; Deep convolutional neural network

Cite:

GB/T 7714: Li, Chengqi, Ren, Zhigang, Yang, Bo, et al. TEXTURE-CENTRALIZED DEEP CONVOLUTIONAL NEURAL NETWORK FOR SINGLE IMAGE SUPER RESOLUTION [C]. 2017: 3707-3710.
MLA: Li, Chengqi, et al. "TEXTURE-CENTRALIZED DEEP CONVOLUTIONAL NEURAL NETWORK FOR SINGLE IMAGE SUPER RESOLUTION." (2017): 3707-3710.
APA: Li, Chengqi, Ren, Zhigang, Yang, Bo, Wan, Xingyu, & Wang, Jinjun. TEXTURE-CENTRALIZED DEEP CONVOLUTIONAL NEURAL NETWORK FOR SINGLE IMAGE SUPER RESOLUTION. (2017): 3707-3710.
Adaptive Level Set Model for Image Segmentation Based on Tensor Diffusion EI CPCI-S Scopus
Conference paper | 2017: 5026-5030 | Chinese Automation Congress (CAC)

Abstract:

Based on tensor diffusion theory, this paper proposes a novel active contour model for image segmentation in the level set formulation. First, we define an external energy term using the trace-based tensor diffusion equation, which adaptively drives the evolution curve into the neighborhood of the target boundary and thus enhances the model's robustness to contour initialization. Second, an internal energy term is established based on the eigenvalue information of the image structure tensor, which pushes the evolution curve toward the target boundary so as to further improve segmentation accuracy. Experimental results on both synthetic and real images show the desirable performance of our method.

Cite:

GB/T 7714: Li, Chengqi, Ren, Zhigang, Yang, Bo, et al. Adaptive Level Set Model for Image Segmentation Based on Tensor Diffusion [C]. 2017: 5026-5030.
MLA: Li, Chengqi, et al. "Adaptive Level Set Model for Image Segmentation Based on Tensor Diffusion." (2017): 5026-5030.
APA: Li, Chengqi, Ren, Zhigang, Yang, Bo, Chen, Chuyang, & Wang, Jinjun. Adaptive Level Set Model for Image Segmentation Based on Tensor Diffusion. (2017): 5026-5030.
Single Image Super-Resolution with a Parameter Economic Residual-Like Convolutional Neural Network EI CPCI-S Scopus
Conference paper | 2017, 10132: 353-364 | 23rd International Conference on MultiMedia Modeling (MMM)
WoS CC Cited Count: 1 | Scopus Cited Count: 4

Abstract:

Recent years have witnessed great success of convolutional neural networks (CNNs) for various problems in both low- and high-level vision. Especially noteworthy is the residual network, which was originally proposed to handle high-level vision problems and enjoys several merits. This paper aims to extend the merits of the residual network, such as the fast training induced by skip connections, to a typical low-level vision problem, i.e., single-image super-resolution. In general, the two main challenges of existing deep CNNs for super-resolution are the gradient exploding/vanishing problem and the large number of parameters or high computational cost as the CNN goes deeper. Correspondingly, skip connections or identity-mapping shortcuts are utilized to avoid the gradient exploding/vanishing problem. To tackle the second problem, a parameter-economic CNN architecture with carefully designed width, depth and skip connections is proposed. Experimental results demonstrate that the proposed CNN model can not only achieve state-of-the-art PSNR and SSIM results for single-image super-resolution but also produce visually pleasant results.
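The identity-mapping shortcut that the abstract credits with easing gradient exploding/vanishing can be shown in a few lines. This is a toy fully-connected stand-in for the convolutional residual blocks, not the paper's architecture.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Identity-shortcut block: output = x + F(x). The skip connection
    adds the input back, so gradients have a direct path through the
    block regardless of how small F's weights are."""
    h = np.maximum(0.0, x @ w1)   # ReLU non-linearity
    return x + h @ w2             # skip connection: add the input back
```

With zero weights the block reduces exactly to the identity mapping, which is what makes very deep stacks of such blocks trainable.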

Keyword:

Super-resolution; Amount of parameters; Skip connections; Deep residual-like convolutional neural network

Cite:

GB/T 7714: Yang, Ze, Zhang, Kai, Liang, Yudong, et al. Single Image Super-Resolution with a Parameter Economic Residual-Like Convolutional Neural Network [C]. 2017: 353-364.
MLA: Yang, Ze, et al. "Single Image Super-Resolution with a Parameter Economic Residual-Like Convolutional Neural Network." (2017): 353-364.
APA: Yang, Ze, Zhang, Kai, Liang, Yudong, & Wang, Jinjun. Single Image Super-Resolution with a Parameter Economic Residual-Like Convolutional Neural Network. (2017): 353-364.
Constructing Deep Sparse Coding Network for image classification EI SCIE Scopus
Journal article | 2017, 64: 130-140 | PATTERN RECOGNITION | IF: 3.962
WoS CC Cited Count: 13 | Scopus Cited Count: 13

Abstract:

This paper introduces a deep model called the Deep Sparse-Coding Network (DeepSCNet), which combines the advantages of Convolutional Neural Networks (CNNs) and sparse-coding techniques for image feature representation. DeepSCNet consists of four types of basic layers: the sparse-coding layer performs generalized linear coding for the local patch within the receptive field by replacing the convolution operation in a CNN with sparse coding; the pooling layer and the normalization layer perform operations identical to those in a CNN; and finally, the map-reduction layer reduces CPU/memory consumption by reducing the number of feature maps before stacking with the following layers. These four types of layers can be easily stacked to construct a deep model for image feature learning. The paper further discusses the multi-scale, multi-locality extension to the basic DeepSCNet, and the overall approach is fully unsupervised. Compared to a CNN, training DeepSCNet is relatively easy even with a training set of moderate size. Experiments show that DeepSCNet can automatically discover highly discriminative features directly from raw image pixels.
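The sparse-coding layer's per-patch operation — a generalized linear code in place of a convolution — can be sketched with ISTA over a fixed dictionary. The solver choice and the function name `sparse_code` are assumptions for illustration; the paper's layer may use a different sparse-coding algorithm.

```python
import numpy as np

def sparse_code(patch, dictionary, lam=0.1, iters=100):
    """Sparse code of one local patch: approximately solve
    min_z 0.5*||patch - D z||^2 + lam*||z||_1 via ISTA
    (gradient step followed by soft-thresholding)."""
    D = dictionary
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ z - patch)        # gradient of the quadratic term
        z = z - g / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z
```

In the network, the resulting sparse vector `z` plays the role that a convolution response plays in a CNN, one code per receptive-field patch.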

Keyword:

Image classification; Multi-locality; Deep model; Multi-scale; Sparse coding

Cite:

GB/T 7714: Zhang, Shizhou, Wang, Jinjun, Tao, Xiaoyu, et al. Constructing Deep Sparse Coding Network for image classification [J]. PATTERN RECOGNITION, 2017, 64: 130-140.
MLA: Zhang, Shizhou, et al. "Constructing Deep Sparse Coding Network for image classification." PATTERN RECOGNITION 64 (2017): 130-140.
APA: Zhang, Shizhou, Wang, Jinjun, Tao, Xiaoyu, Gong, Yihong, & Zheng, Nanning. Constructing Deep Sparse Coding Network for image classification. PATTERN RECOGNITION, 2017, 64, 130-140.
Combining local and global hypotheses in deep neural network for multi-label image classification EI SCIE Scopus
Journal article | 2017, 235: 38-45 | NEUROCOMPUTING | IF: 3.241
WoS CC Cited Count: 5 | Scopus Cited Count: 7

Abstract:

Multi-label image classification is a challenging problem in computer vision. Motivated by recent improvements in image classification performance using Deep Neural Networks, in this work we propose a flexible deep Convolutional Neural Network (CNN) framework, called Local-Global-CNN (LGC), to improve multi-label image classification performance. LGC consists, first, of a local-level multi-label classifier that takes object segment hypotheses as inputs to a local CNN. The outputs for these local hypotheses are aggregated with max-pooling and then re-weighted, using a graphical model in the label space, to account for label co-occurrence and inter-dependency information. LGC also utilizes a global CNN, trained on multi-label images, to directly predict the multiple labels from the input. The predictions of the local- and global-level classifiers are finally fused to obtain the MAP estimate of the final multi-label prediction. The LGC framework can benefit from pre-training with a large-scale single-label image dataset, e.g., ImageNet. Experimental results show that the proposed framework achieves promising performance on the Pascal VOC2007 and VOC2012 multi-label image datasets.
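The local/global fusion step can be sketched as max-pooling per-label scores over segment hypotheses and blending with the global CNN's prediction. The label co-occurrence re-weighting is omitted here, and the linear blend with weight `alpha` is an assumed simplification of the paper's MAP fusion.

```python
import numpy as np

def lgc_fuse(local_scores, global_scores, alpha=0.5):
    """LGC-style prediction sketch: max-pool per-label scores over the
    local object-segment hypotheses (rows), then blend with the global
    CNN's scores. local_scores has shape (hypotheses, labels)."""
    local_pooled = np.max(np.asarray(local_scores, float), axis=0)
    return alpha * local_pooled + (1.0 - alpha) * np.asarray(global_scores, float)
```

Max-pooling means a label is supported if any single segment hypothesis strongly supports it, while the global branch provides whole-image context.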

Keyword:

Multi-label classification; Deep learning; Convolutional neural network

Cite:

GB/T 7714: Yu, Qinghua, Wang, Jinjun, Zhang, Shizhou, et al. Combining local and global hypotheses in deep neural network for multi-label image classification [J]. NEUROCOMPUTING, 2017, 235: 38-45.
MLA: Yu, Qinghua, et al. "Combining local and global hypotheses in deep neural network for multi-label image classification." NEUROCOMPUTING 235 (2017): 38-45.
APA: Yu, Qinghua, Wang, Jinjun, Zhang, Shizhou, Gong, Yihong, & Zhao, Jizhong. Combining local and global hypotheses in deep neural network for multi-label image classification. NEUROCOMPUTING, 2017, 235, 38-45.

Address: XI'AN JIAOTONG UNIVERSITY LIBRARY (No. 28, Xianning West Road, Xi'an, Shaanxi, Post Code: 710049). Contact: 029-82667865.
Copyright: XI'AN JIAOTONG UNIVERSITY LIBRARY. Technical Support: Beijing Aegean Software Co., Ltd.