
Query: Scholar name: Xu Zongben (徐宗本)

Results: 41 pages in total.
Model-driven deep-learning SCIE Scopus CSCD
Journal article | 2018, 5 (1), 22-24 | NATIONAL SCIENCE REVIEW
WoS CC Cited Count: 1 | SCOPUS Cited Count: 2

Cite:


GB/T 7714: Xu, Zongben, Sun, Jian. Model-driven deep-learning [J]. NATIONAL SCIENCE REVIEW, 2018, 5 (1): 22-24.
MLA: Xu, Zongben, et al. "Model-driven deep-learning." NATIONAL SCIENCE REVIEW 5.1 (2018): 22-24.
APA: Xu, Zongben, Sun, Jian. Model-driven deep-learning. NATIONAL SCIENCE REVIEW, 2018, 5 (1), 22-24.
Learning Through Deterministic Assignment of Hidden Parameters EI
Journal article | 2018 | IEEE Transactions on Cybernetics

Abstract:

Supervised learning frequently boils down to determining hidden and bright parameters in a parameterized hypothesis space based on finite input-output samples. The hidden parameters determine the nonlinear mechanism of an estimator, while the bright parameters characterize the linear mechanism. In the traditional learning paradigm, hidden and bright parameters are not distinguished and are trained simultaneously in one learning process. Such one-stage learning (OSL) brings benefits for theoretical analysis but suffers from a high computational burden. In this paper, we propose a two-stage learning scheme, learning through deterministic assignment of hidden parameters (LtDaHP), which deterministically generates the hidden parameters using minimal Riesz energy points on a sphere and equally spaced points in an interval. We theoretically show that with such a deterministic assignment of hidden parameters, LtDaHP with a neural network realization almost shares the same generalization performance as OSL. LtDaHP thus provides an effective way to overcome the high computational burden of OSL. We present a series of simulations and application examples to support the advantages of LtDaHP.
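The two-stage idea is small enough to sketch. Stage one assigns the hidden (nonlinear) parameters deterministically on an equally spaced grid, with no training; stage two solves a linear least-squares problem for the bright parameters. This is a minimal 1-D illustration, not the paper's algorithm: the names `ltdahp_fit`, `predict`, and `solve`, the tanh units, the grid ranges, and the ridge term are all assumptions of this sketch.

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def ltdahp_fit(xs, ys, n_hidden=20):
    """Stage 1: hidden parameters (w_j, b_j) assigned deterministically on an
    equally spaced grid (no training). Stage 2: bright parameters obtained by
    ridge-stabilized linear least squares on the induced tanh features."""
    ws = [1.0 + 4.0 * j / (n_hidden - 1) for j in range(n_hidden)]
    bs = [-2.0 + 4.0 * j / (n_hidden - 1) for j in range(n_hidden)]
    feats = [[math.tanh(w * x + b) for w, b in zip(ws, bs)] for x in xs]
    ata = [[sum(f[i] * f[j] for f in feats) + (1e-6 if i == j else 0.0)
            for j in range(n_hidden)] for i in range(n_hidden)]
    aty = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(n_hidden)]
    return ws, bs, solve(ata, aty)

def predict(ws, bs, c, x):
    return sum(ci * math.tanh(w * x + b) for ci, w, b in zip(c, ws, bs))
```

Only the linear (bright) stage touches the data, which is where the claimed computational saving over one-stage training comes from.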

Keyword:

Application examples; Bright parameters; Generalization performance; hidden parameters; Learning rates; Nonlinear mechanisms; Traditional learning; Uncertainty

Cite:


GB/T 7714: Fang, Jian, Lin, Shaobo, Xu, Zongben. Learning Through Deterministic Assignment of Hidden Parameters [J]. IEEE Transactions on Cybernetics, 2018.
MLA: Fang, Jian, et al. "Learning Through Deterministic Assignment of Hidden Parameters." IEEE Transactions on Cybernetics (2018).
APA: Fang, Jian, Lin, Shaobo, Xu, Zongben. Learning Through Deterministic Assignment of Hidden Parameters. IEEE Transactions on Cybernetics, 2018.
Robust subspace clustering via penalized mixture of Gaussians EI SCIE Scopus
Journal article | 2018, 278, 4-11 | NEUROCOMPUTING

Abstract:

Many problems in computer vision and pattern recognition can be posed as learning low-dimensional subspace structures from high-dimensional data. Subspace clustering is a commonly utilized subspace learning strategy. Existing subspace clustering models mainly adopt a deterministic loss function to describe a certain noise type between an observed data matrix and its self-expressed form. However, the noise embedded in practical high-dimensional data is generally non-Gaussian and has much more complex structure. To address this issue, this paper proposes a robust subspace clustering model that embeds the Mixture of Gaussians (MoG) noise modeling strategy into the low-rank representation (LRR) subspace clustering model. Owing to the universal approximation capability of MoG, the proposed model adapts to a wider range of noise distributions than current methods. Additionally, a penalized likelihood method is encoded into the model to select the number of mixture components automatically. A modified Expectation Maximization (EM) algorithm is also designed to infer the parameters of the resulting PMoG-LRR model. The superiority of our method is demonstrated by extensive experiments on face clustering and motion segmentation datasets.
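The MoG noise model at the heart of the approach can be illustrated with a minimal EM loop on 1-D data; the paper applies it to the self-expression residual of a data matrix and adds a penalized likelihood to choose the number of components, both omitted here. The name `mog_em`, the initialization at the data extremes, and the variance floor are assumptions of this sketch.

```python
import math

def mog_em(xs, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture. E-step: posterior
    responsibilities of each component for each point. M-step: re-estimate
    mixing weights, means, and (floored) variances."""
    lo, hi = min(xs), max(xs)
    mu = [lo + (hi - lo) * j / (k - 1) for j in range(k)]   # init at extremes
    var = [1.0] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        resp = []
        for x in xs:
            dens = [pi[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
    return pi, mu, var
```

The universal-approximation argument in the abstract rests on exactly this machinery: with enough components, such a mixture can approximate a wide range of noise densities.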

Keyword:

Mixture of Gaussians; Subspace clustering; Expectation maximization; Low-rank representation

Cite:


GB/T 7714: Yao, Jing, Cao, Xiangyong, Zhao, Qian, et al. Robust subspace clustering via penalized mixture of Gaussians [J]. NEUROCOMPUTING, 2018, 278: 4-11.
MLA: Yao, Jing, et al. "Robust subspace clustering via penalized mixture of Gaussians." NEUROCOMPUTING 278 (2018): 4-11.
APA: Yao, Jing, Cao, Xiangyong, Zhao, Qian, Meng, Deyu, Xu, Zongben. Robust subspace clustering via penalized mixture of Gaussians. NEUROCOMPUTING, 2018, 278, 4-11.
Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery EI SCIE Scopus
Journal article | 2018, 40 (8), 1888-1902 | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

Abstract:

As a promising way of analyzing data, sparse modeling has achieved great success throughout science and engineering. It is well known that the sparsity of a vector can be rationally measured by the number of its nonzero entries (the l0 norm), and the low-rankness of a matrix by the number of its nonzero singular values (the rank). However, data from real applications are often generated by the interaction of multiple factors, which cannot be sufficiently represented by a vector or matrix, whereas a high-order tensor is expected to provide a more faithful representation of the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high-order sparsity measure for tensors is a relatively harder task. To this end, in this paper we propose a measure of tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR for short), which encodes the sparsity insights delivered by both the Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions of a general tensor. We then study the KBR regularization minimization (KBRM) problem and design an effective ADMM algorithm for solving it, in which each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks such as tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion, and background subtraction, substantiate the superiority of the proposed methods over the state of the art.

Keyword:

tensor completion; Tensor sparsity; CANDECOMP/PARAFAC decomposition; Tucker decomposition; multi-spectral image restoration

Cite:


GB/T 7714: Xie, Qi, Zhao, Qian, Meng, Deyu, et al. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (8): 1888-1902.
MLA: Xie, Qi, et al. "Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 40.8 (2018): 1888-1902.
APA: Xie, Qi, Zhao, Qian, Meng, Deyu, Xu, Zongben. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (8), 1888-1902.
Convergence of multi-block Bregman ADMM for nonconvex composite problems EI SCIE Scopus CSCD
Journal article | 2018, 61 (12) | SCIENCE CHINA-INFORMATION SCIENCES

Abstract:

The alternating direction method of multipliers (ADMM) is one of the most powerful and successful methods for solving various composite problems. The convergence of the conventional (i.e., 2-block) ADMM for convex objective functions has long been established, whereas its convergence for nonconvex objective functions was established only recently. The multi-block ADMM, a natural extension of ADMM, is a widely used scheme that has also proved very useful for solving various nonconvex optimization problems, so it is natural to seek convergence guarantees for the multi-block ADMM in nonconvex settings. In this paper, we first establish the convergence of the 3-block Bregman ADMM. We then extend these results to the N-block case (N >= 3), which underlines the feasibility of multi-block ADMM applications in nonconvex settings. Finally, we present a simulation study and a real-world application to support the correctness of the obtained theoretical assertions.
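For intuition about the scheme being generalized, here is the conventional 2-block ADMM that the abstract contrasts with, on the scalar toy problem min 0.5(x-b)^2 + mu|z| subject to x = z (whose solution is the soft-thresholded b). The paper's multi-block, nonconvex Bregman analysis is far more general; `admm_lasso` and the parameter choices are assumptions of this sketch.

```python
def soft(v, t):
    """Soft-thresholding: the proximal map of t*|.|."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def admm_lasso(b, mu=1.0, rho=1.0, iters=200):
    """Conventional 2-block ADMM on min 0.5*(x-b)^2 + mu*|z| s.t. x = z:
    alternate a quadratic prox in x, an l1 prox in z, then dual ascent."""
    x = z = lam = 0.0
    for _ in range(iters):
        x = (b + rho * z - lam) / (1.0 + rho)   # x-block update
        z = soft(x + lam / rho, mu / rho)        # z-block update
        lam += rho * (x - z)                     # multiplier update
    return z
```

The multi-block variant studied in the paper simply cycles through more than two primal blocks before the multiplier step; its convergence is exactly what the nonconvex analysis has to secure.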

Keyword:

subanalytic function; Bregman distance; alternating direction method; nonconvex regularization; K-L inequality

Cite:


GB/T 7714: Wang, Fenghui, Cao, Wenfei, Xu, Zongben. Convergence of multi-block Bregman ADMM for nonconvex composite problems [J]. SCIENCE CHINA-INFORMATION SCIENCES, 2018, 61 (12).
MLA: Wang, Fenghui, et al. "Convergence of multi-block Bregman ADMM for nonconvex composite problems." SCIENCE CHINA-INFORMATION SCIENCES 61.12 (2018).
APA: Wang, Fenghui, Cao, Wenfei, Xu, Zongben. Convergence of multi-block Bregman ADMM for nonconvex composite problems. SCIENCE CHINA-INFORMATION SCIENCES, 2018, 61 (12).
Greedy Criterion in Orthogonal Greedy Learning EI SCIE Scopus
Journal article | 2018, 48 (3), 955-966 | IEEE TRANSACTIONS ON CYBERNETICS

Abstract:

Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via the steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we show that SGD is not the only possible greedy criterion and introduce a new one, called the "delta-greedy threshold," for learning. Based on this new greedy criterion, we derive a straightforward termination rule for OGL. Our theoretical study shows that the new learning scheme can achieve the existing (almost) optimal learning rate of OGL. Numerical experiments are also provided to show that this new scheme achieves almost optimal generalization performance while requiring less computation than OGL.
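The greedy-selection-plus-orthogonal-projection loop can be sketched in a few lines: atoms are picked by correlation with the residual, the fit is updated by orthogonal projection (here via Gram-Schmidt), and the loop stops once no correlation exceeds a threshold delta. The stopping rule below merely stands in for the paper's "delta-greedy threshold" criterion; `ogl` and the vector helpers are names invented for this sketch.

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def sub(u, v): return [a - b for a, b in zip(u, v)]
def scale(u, s): return [a * s for a in u]
def norm(u): return dot(u, u) ** 0.5

def ogl(y, atoms, delta=0.1):
    """Greedy step: select the atom most correlated with the residual.
    Projection step: orthogonalize it against the selected set and subtract
    its component from the residual. Terminate when every normalized
    correlation drops to delta or below."""
    residual, basis, selected = y[:], [], []
    while True:
        scores = [abs(dot(residual, a)) / norm(a) for a in atoms]
        best = max(range(len(atoms)), key=lambda i: scores[i])
        if scores[best] <= delta:
            break
        q = atoms[best][:]
        for b in basis:                      # Gram-Schmidt against basis
            q = sub(q, scale(b, dot(q, b)))
        q = scale(q, 1.0 / norm(q))
        basis.append(q)
        selected.append(best)
        residual = sub(residual, scale(q, dot(residual, q)))
    return selected, residual
```

The computational saving claimed in the abstract comes from this kind of early termination: the loop exits as soon as the threshold test fails rather than running a fixed number of steps.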

Keyword:

Generalization performance; orthogonal greedy learning (OGL); supervised learning; greedy criterion; greedy algorithms

Cite:


GB/T 7714: Xu, Lin, Lin, Shaobo, Zeng, Jinshan, et al. Greedy Criterion in Orthogonal Greedy Learning [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48 (3): 955-966.
MLA: Xu, Lin, et al. "Greedy Criterion in Orthogonal Greedy Learning." IEEE TRANSACTIONS ON CYBERNETICS 48.3 (2018): 955-966.
APA: Xu, Lin, Lin, Shaobo, Zeng, Jinshan, Liu, Xia, Fang, Yi, Xu, Zongben. Greedy Criterion in Orthogonal Greedy Learning. IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48 (3), 955-966.
Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network EI SCIE Scopus
Journal article | 2018, 27 (5), 2354-2367 | IEEE TRANSACTIONS ON IMAGE PROCESSING
WoS CC Cited Count: 3 | SCOPUS Cited Count: 6

Abstract:

This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSIs) that integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions, using a patch-wise training strategy to better exploit the spatial information. Spatial information is further incorporated by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
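The effect of the spatial smoothness prior can be illustrated with iterated conditional modes (ICM) on a 1-D chain of pixels: each label maximizes its log class probability plus a Potts-style reward for agreeing with its neighbors. The paper instead optimizes labels with alpha-expansion graph cuts on the 2-D image grid; `icm_smooth`, the chain neighborhood, and the parameter values are simplifications assumed for this sketch.

```python
import math

def icm_smooth(probs, beta=1.0, iters=5):
    """ICM sweeps over a 1-D chain: each pixel's label maximizes
    log p(class | pixel) + beta * (number of neighbors with the same label).
    probs[i][c] is the per-pixel class probability (e.g., from a CNN)."""
    labels = [max(range(len(p)), key=lambda c: p[c]) for p in probs]
    for _ in range(iters):
        for i, p in enumerate(probs):
            def score(c):
                s = math.log(p[c])
                if i > 0:
                    s += beta * (labels[i - 1] == c)
                if i < len(probs) - 1:
                    s += beta * (labels[i + 1] == c)
                return s
            labels[i] = max(range(len(p)), key=score)
    return labels
```

A lone pixel whose class probabilities mildly disagree with both neighbors gets relabeled, which is exactly the denoising behavior the smoothness prior is meant to supply.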

Keyword:

Hyperspectral image classification; Markov random fields; convolutional neural networks

Cite:


GB/T 7714: Cao, Xiangyong, Zhou, Feng, Xu, Lin, et al. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (5): 2354-2367.
MLA: Cao, Xiangyong, et al. "Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network." IEEE TRANSACTIONS ON IMAGE PROCESSING 27.5 (2018): 2354-2367.
APA: Cao, Xiangyong, Zhou, Feng, Xu, Lin, Meng, Deyu, Xu, Zongben, Paisley, John. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (5), 2354-2367.
PolSAR Image Classification using Discriminative Clustering EI CPCI-S Scopus
Conference paper | 2017 | International Workshop on Remote Sensing with Intelligent Processing (RSIP)

Abstract:

This paper presents a novel unsupervised image classification method for polarimetric synthetic aperture radar (PolSAR) data. The proposed method is based on a discriminative clustering framework that explicitly relies on a discriminative supervised classification technique to perform unsupervised clustering. To implement this idea, we design an energy function for unsupervised PolSAR image classification by combining a supervised softmax regression model with a Markov Random Field (MRF) smoothness constraint. In this model, both the pixel-wise class labels and the classifiers are taken as unknown variables to be optimized. Starting from initial class labels generated by the Cloude-Pottier decomposition and the K-Wishart distribution hypothesis, we iteratively optimize the classifiers and class labels by alternately minimizing the energy function with respect to each. Finally, the optimized class labels are taken as the classification result, and the classifiers for the different classes are also derived as a by-product. We apply this approach to real PolSAR benchmark data. Extensive experiments confirm that our approach can effectively classify PolSAR images in an unsupervised way and produces higher accuracies than the compared state-of-the-art methods.
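The alternating structure (fit a classifier to the current labels, then re-label every point with that classifier) is easiest to see in its simplest instance, where the "classifier" is nearest-centroid and the scheme reduces to k-means. The paper's version fits a softmax regression and smooths the label update with an MRF; `discriminative_cluster` and the 1-D setup below are assumptions of this sketch.

```python
def discriminative_cluster(xs, k=2, iters=20):
    """Alternating minimization sketch: the 'label update' assigns each point
    to the nearest centroid, and the 'classifier fit' re-estimates each class
    centroid from its current members. Deterministic init at the data range."""
    centers = [min(xs), max(xs)] if k == 2 else xs[:k]
    labels = [0] * len(xs)
    for _ in range(iters):
        # Label update: classify each point with the current "classifier".
        labels = [min(range(k), key=lambda c: (x - centers[c]) ** 2)
                  for x in xs]
        # Classifier fit: refit centroids to the points they now own.
        for c in range(k):
            pts = [x for x, l in zip(xs, labels) if l == c]
            if pts:
                centers[c] = sum(pts) / len(pts)
    return labels, centers
```

Both steps decrease the same energy, which is the property the paper's alternating scheme also relies on for its convergence behavior.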

Keyword:

PolSAR image classification; softmax regression model; MRF; discriminative clustering

Cite:


GB/T 7714: Bi, Haixia, Sun, Jian, Xu, Zongben. PolSAR Image Classification using Discriminative Clustering [C]//International Workshop on Remote Sensing with Intelligent Processing (RSIP), 2017.
MLA: Bi, Haixia, et al. "PolSAR Image Classification using Discriminative Clustering." International Workshop on Remote Sensing with Intelligent Processing (RSIP), 2017.
APA: Bi, Haixia, Sun, Jian, Xu, Zongben. PolSAR Image Classification using Discriminative Clustering. International Workshop on Remote Sensing with Intelligent Processing (RSIP), 2017.
Multimodal 2D+3D Facial Expression Recognition With Deep Fusion Convolutional Neural Network EI SCIE Scopus
Journal article | 2017, 19 (12), 2816-2831 | IEEE TRANSACTIONS ON MULTIMEDIA | IF: 3.977
WoS CC Cited Count: 1

Abstract:

This paper presents a novel and efficient deep fusion convolutional neural network (DF-CNN) for multimodal 2D+3D facial expression recognition (FER). DF-CNN comprises a feature extraction subnet, a feature fusion subnet, and a softmax layer. In particular, each textured three-dimensional (3D) face scan is represented as six types of 2D facial attribute maps (i.e., a geometry map, three normal maps, a curvature map, and a texture map), all of which are jointly fed into DF-CNN for feature learning and fusion learning, resulting in a highly concentrated (32-dimensional) facial representation. Expression prediction is performed in two ways: 1) learning linear support vector machine classifiers on the 32-dimensional fused deep features, or 2) directly performing softmax prediction using the six-dimensional expression probability vectors. Unlike existing 3D FER methods, DF-CNN combines feature learning and fusion learning into a single end-to-end training framework. To demonstrate the effectiveness of DF-CNN, we conducted comprehensive experiments comparing DF-CNN with handcrafted features, pre-trained deep features, fine-tuned deep features, and state-of-the-art methods on three 3D face datasets (BU-3DFE Subset I, BU-3DFE Subset II, and Bosphorus Subset). In all cases, DF-CNN consistently achieved the best results. To the best of our knowledge, this is the first work to introduce deep CNNs to 3D FER and deep-learning-based feature-level fusion to multimodal 2D+3D FER.

Keyword:

facial expression recognition (FER); multimodal; Deep fusion convolutional neural network (DF-CNN); textured three-dimensional (3D) face scan

Cite:


GB/T 7714: Li, Huibin, Sun, Jian, Xu, Zongben, et al. Multimodal 2D+3D Facial Expression Recognition With Deep Fusion Convolutional Neural Network [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2017, 19 (12): 2816-2831.
MLA: Li, Huibin, et al. "Multimodal 2D+3D Facial Expression Recognition With Deep Fusion Convolutional Neural Network." IEEE TRANSACTIONS ON MULTIMEDIA 19.12 (2017): 2816-2831.
APA: Li, Huibin, Sun, Jian, Xu, Zongben, Chen, Liming. Multimodal 2D+3D Facial Expression Recognition With Deep Fusion Convolutional Neural Network. IEEE TRANSACTIONS ON MULTIMEDIA, 2017, 19 (12), 2816-2831.
Shrinkage Degree in L-2-Rescale Boosting for Regression EI SCIE Scopus
Journal article | 2017, 28 (8), 1851-1864 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS | IF: 7.982

Abstract:

L2-rescale boosting (L2-RBoosting) is a variant of L2-Boosting that can essentially improve the generalization performance of L2-Boosting. The key feature of L2-RBoosting lies in introducing a shrinkage degree to rescale the ensemble estimate in each iteration; the shrinkage degree therefore determines the performance of L2-RBoosting. The aim of this paper is to develop a concrete analysis of how to determine the shrinkage degree in L2-RBoosting. We propose two feasible ways to select it: the first parameterizes the shrinkage degree, and the other develops a data-driven approach. After rigorously analyzing the importance of the shrinkage degree in L2-RBoosting, we compare the pros and cons of the proposed methods. We find that although both approaches reach the same learning rates, the structure of the final estimator of the parameterized approach is better, which sometimes yields better generalization capability when the number of samples is finite. We therefore recommend parameterizing the shrinkage degree of L2-RBoosting. We also present an adaptive parameter-selection strategy for the shrinkage degree and verify its feasibility through both theoretical analysis and numerical experiments. The obtained results enhance the understanding of L2-RBoosting and give guidance on how to use it for regression tasks.
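The rescaling step is small enough to sketch: each iteration shrinks the current ensemble by a parameterized degree theta_t = 2/(t+2) (one plausible parameterization; the paper's analysis concerns how this degree should actually be chosen) and then adds the atom best correlated with the residual. The name `l2_rboost` and the unit-norm dictionary assumption are inventions of this sketch.

```python
def l2_rboost(y, atoms, iters=200):
    """L2-rescale boosting sketch on a finite dictionary of unit-norm atoms:
    rescale the ensemble estimate F by (1 - theta_t), then take a greedy
    L2-Boosting step with the optimal (inner-product) coefficient."""
    n = len(y)
    F = [0.0] * n
    for t in range(iters):
        theta = 2.0 / (t + 2)                 # parameterized shrinkage degree
        F = [(1 - theta) * f for f in F]      # rescale the ensemble estimate
        r = [yi - fi for yi, fi in zip(y, F)]
        corr = [sum(ri * ai for ri, ai in zip(r, a)) for a in atoms]
        k = max(range(len(atoms)), key=lambda j: abs(corr[j]))
        F = [fi + corr[k] * ai for fi, ai in zip(F, atoms[k])]
    return F
```

With theta = 0 this reduces to plain L2-Boosting; the shrinkage slows each step but, as the abstract argues, controls the structure of the final estimator.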

Keyword:

shrinkage degree; generalization capability; L-2-rescale boosting (L-2-RBoosting); regression; Boosting

Cite:


GB/T 7714: Xu, Lin, Lin, Shaobo, Wang, Yao, et al. Shrinkage Degree in L-2-Rescale Boosting for Regression [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (8): 1851-1864.
MLA: Xu, Lin, et al. "Shrinkage Degree in L-2-Rescale Boosting for Regression." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 28.8 (2017): 1851-1864.
APA: Xu, Lin, Lin, Shaobo, Wang, Yao, Xu, Zongben. Shrinkage Degree in L-2-Rescale Boosting for Regression. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (8), 1851-1864.

Address: Xi'an Jiaotong University Library, No. 28 Xianning West Road, Xi'an, Shaanxi, 710049. Contact: 029-82667865.
Copyright: Xi'an Jiaotong University Library. Technical support: Beijing Aegean Software Co., Ltd.