
# Relevant TOCs

## IEEE Transactions on Image Processing - new TOC (2017 September 21) [Website]

Bin Liang;Lihong Zheng; "Specificity and Latent Correlation Learning for Action Recognition Using Synthetic Multi-View Data From Depth Maps," vol.26(12), pp.5560-5574, Dec. 2017. This paper presents a novel approach to action recognition using synthetic multi-view data from depth maps. Specifically, multiple views are first generated by rotating 3D point clouds from depth maps. A pyramid multi-view depth motion template is then adopted for multi-view action representation, characterizing the multi-scale motion and shape patterns in 3D. Empirically, in addition to the view-specific information, the latent information shared between multiple views often provides important cues for action recognition. Building on this observation and motivated by the success of the dictionary learning framework, this paper proposes to explicitly learn a view-specific dictionary (called specificity) for each view, and simultaneously learn a latent dictionary (called latent correlation) across multiple views. Thus, a novel method, specificity and latent correlation learning, is put forward to learn the specificity that captures the most discriminative features of each view, and the latent correlation that contributes the inherent 3D information to multiple views. In this way, a compact and discriminative dictionary is constructed from specificity and latent correlation for feature representation of actions. The proposed method is evaluated on the MSR Action3D, the MSR Gesture3D, the MSR Action Pairs, and the ChaLearn multi-modal data sets, consistently achieving promising results compared with the state-of-the-art methods based on depth data.
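As a rough illustration of the view-synthesis step above (rotating 3D point clouds to obtain new viewpoints), a minimal sketch follows; the y-axis rotation and the toy three-point cloud are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def rotate_y(points, angle_deg):
    """Rotate a list of (x, y, z) points about the y-axis."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def synthetic_views(points, angles):
    """Generate one rotated copy of the point cloud per viewing angle."""
    return {ang: rotate_y(points, ang) for ang in angles}

# Toy three-point cloud standing in for points lifted from a depth map.
cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
views = synthetic_views(cloud, [0, 90, 180])
```

Each rotated cloud would then be re-projected to a depth image before feature extraction.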

Shan Gao;Qixiang Ye;Junliang Xing;Arjan Kuijper;Zhenjun Han;Jianbin Jiao;Xiangyang Ji; "Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation," vol.26(12), pp.5575-5589, Dec. 2017. Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurate in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets’ spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy-variance in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.

Chao Chen;Alina Zare;Huy N. Trinh;Gbenga O. Omotara;James Tory Cobb;Timotius A. Lagaunne; "Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation," vol.26(12), pp.5590-5602, Dec. 2017. Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
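The core move from crisp to soft segmentation is letting a visual word carry partial memberships across several topics. A toy illustration of that idea (simple likelihood normalization onto the simplex, not the PM-LDA model or its parameter estimation algorithm):

```python
def soft_memberships(likelihoods):
    """Turn per-topic likelihoods of a visual word into partial
    memberships that sum to 1, instead of one crisp topic label."""
    total = sum(likelihoods.values())
    return {topic: value / total for topic, value in likelihoods.items()}

# A patch from a foggy sky/ground transition region supports two topics
# at once; the topic names and likelihood values are made up.
m = soft_memberships({"sky": 0.6, "ground": 0.2, "water": 0.2})
```

A crisp model would be forced to pick "sky" outright; the soft memberships retain the ambiguity of the transition region.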

Lazaros Zafeiriou;Yannis Panagakis;Maja Pantic;Stefanos Zafeiriou; "Nonnegative Decompositions for Dynamic Visual Data Analysis," vol.26(12), pp.5603-5617, Dec. 2017. The analysis of high-dimensional, possibly temporally misaligned, and time-varying visual data is a fundamental task in disciplines such as image, vision, and behavior computing. In this paper, we focus on dynamic facial behavior analysis and in particular on the analysis of facial expressions. Distinct from previous approaches, where sets of facial landmarks are used for face representation, raw pixel intensities are exploited for: 1) unsupervised analysis of the temporal phases of facial expressions and facial action units (AUs) and 2) temporal alignment of a certain facial behavior displayed by two different persons. To this end, slow features nonnegative matrix factorization (SFNMF) is proposed in order to learn slowly varying parts-based representations of time-varying sequences, capturing the underlying dynamics of temporal phenomena such as facial expressions. Moreover, SFNMF is extended to handle two temporally misaligned data sequences depicting the same visual phenomena. To do so, dynamic time warping is incorporated into SFNMF, allowing the temporal alignment of the data sets onto the subspace spanned by the estimated nonnegative shared latent features between the two visual sequences. Extensive experimental results on two video databases demonstrate the effectiveness of the proposed methods in: 1) unsupervised detection of the temporal phases of posed and spontaneous facial events and 2) temporal alignment of facial expressions, outperforming the compared state-of-the-art methods by a large margin.
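The temporal-alignment component rests on classic dynamic time warping. A minimal, self-contained DTW distance on scalar sequences (not on the SFNMF latent features) shows the mechanism:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-time-warping distance between two sequences:
    fill a cumulative cost table, allowing match/insert/delete moves."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # skip a frame of a
                              D[i][j - 1],      # skip a frame of b
                              D[i - 1][j - 1])  # match frames
    return D[n][m]

# The same "expression" displayed at two different speeds aligns at
# zero cost, even though the sequences have different lengths.
d = dtw_distance([0, 1, 2, 3], [0, 0, 1, 1, 2, 2, 3, 3])
```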

Long Bao;Shuang Yi;Yicong Zhou; "Combination of Sharing Matrix and Image Encryption for Lossless $(k,n)$-Secret Image Sharing," vol.26(12), pp.5618-5631, Dec. 2017. This paper first introduces a $(k,n)$-sharing matrix $S^{(k,n)}$ and its generation algorithm. Mathematical analysis is provided to show its potential for secret image sharing. Combining the sharing matrix with image encryption, we further propose a lossless $(k,n)$-secret image sharing scheme (SMIE-SIS). Only when no fewer than $k$ shares are available can all the ciphertext information and the security key be reconstructed, resulting in lossless recovery of the original information, as proved by the correctness and security analysis. Performance evaluation and security analysis demonstrate that the proposed SMIE-SIS with arbitrary settings of $k$ and $n$ has at least five advantages: 1) it is able to fully recover the original image without any distortion; 2) it has much lower pixel expansion than many existing methods; 3) its computation cost is much lower than that of the polynomial-based secret image sharing methods; 4) it is able to verify and detect a fake share; and 5) even using the same original image with the same initial settings of parameters, every execution of SMIE-SIS is able to generate completely different secret shares that are unpredictable and non-repetitive. This property offers SMIE-SIS a high level of security to withstand many different attacks.
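A $(k,n)$ threshold scheme hinges on the property that any $k$ shares suffice while any $k-1$ do not. A hedged sketch of one covering-style reading of that property for a binary matrix (illustrative intuition only, not the paper's sharing-matrix construction or generation algorithm):

```python
from itertools import combinations

def covers(rows):
    """OR the selected rows column-wise; True if every column is hit."""
    return all(any(r[j] for r in rows) for j in range(len(rows[0])))

def is_kn_sharing(matrix, k):
    """Check a covering property a threshold matrix needs: any k rows
    jointly cover all columns, while any k-1 rows leave a gap."""
    ok_k = all(covers(sel) for sel in combinations(matrix, k))
    ok_fewer = all(not covers(sel) for sel in combinations(matrix, k - 1))
    return ok_k and ok_fewer

# n = 3 shares with threshold k = 2: each row misses exactly one column,
# so no single share covers everything but any pair does.
S = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
```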

Tao Sun;Hao Jiang;Lizhi Cheng; "Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing," vol.26(12), pp.5632-5644, Dec. 2017. The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka–Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.
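One common form of a proximal iteratively reweighted nuclear-norm step applies weighted soft-thresholding to the singular values, with the weights re-derived from the current iterate. A sketch operating on a given vector of singular values (the Schatten-$p$ weight rule and the parameter values are illustrative assumptions, not the paper's exact algorithm):

```python
def reweighted_sv_threshold(sigmas, lam=0.5, p=0.5, iters=20, eps=1e-6):
    """Iteratively reweighted soft-thresholding of singular values:
    weights come from the derivative of sigma**p at the current iterate,
    so small singular values get large weights and are driven to zero."""
    s = list(sigmas)
    for _ in range(iters):
        w = [p * (si + eps) ** (p - 1) for si in s]
        s = [max(si - lam * wi, 0.0) for si, wi in zip(sigmas, w)]
    return s

# Large singular values survive nearly intact; the small one vanishes.
out = reweighted_sv_threshold([5.0, 1.0, 0.05])
```

In a full matrix algorithm this step sits between an SVD of the current iterate and the reconstruction of the low-rank estimate.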

Wenguan Wang;Jianbing Shen;Fatih Porikli; "Selective Video Object Cutout," vol.26(12), pp.5645-5655, Dec. 2017. Conventional video segmentation approaches rely heavily on appearance models. Such methods often use appearance descriptors that have limited discriminative power under complex scenarios. To improve the segmentation performance, this paper presents a pyramid histogram-based confidence map that incorporates structure information into appearance statistics. It also combines geodesic distance-based dynamic models. Then, it employs an efficient measure of uncertainty propagation using local classifiers to determine the image regions, where the object labels might be ambiguous. The final foreground cutout is obtained by refining on the uncertain regions. Additionally, to reduce manual labeling, our method determines the frames to be labeled by the human operator in a principled manner, which further boosts the segmentation performance and minimizes the labeling effort. Our extensive experimental analyses on two big benchmarks demonstrate that our solution achieves superior performance, favorable computational efficiency, and reduced manual labeling in comparison to the state of the art.

Hongyang Xue;Zhou Zhao;Deng Cai; "Unifying the Video and Question Attentions for Open-Ended Video Question Answering," vol.26(12), pp.5656-5666, Dec. 2017. Video question answering is an important task toward scene understanding and visual data retrieval. However, current visual question answering works mainly focus on a single static image, which is distinct from the dynamic and sequential visual data in the real world. Their approaches cannot utilize the temporal information in videos. In this paper, we introduce the task of free-form open-ended video question answering. The open-ended answers enable wider applications compared with the common multiple-choice tasks in Visual-QA. We first propose a data set for open-ended Video-QA with the automatic question generation approaches. Then, we propose our sequential video attention and temporal question attention models. These two models apply the attention mechanism on videos and questions, while preserving the sequential and temporal structures of the guides. The two models are integrated into the model of unified attention. After the video and the question are encoded, the answers are generated wordwisely from our models by a decoder. In the end, we evaluate our models on the proposed data set. The experimental results demonstrate the effectiveness of our proposed model.

Min Yang;Yuwei Wu;Yunde Jia; "A Hybrid Data Association Framework for Robust Online Multi-Object Tracking," vol.26(12), pp.5667-5679, Dec. 2017. Global optimization algorithms have shown impressive performance in data-association-based multi-object tracking, but handling online data remains a difficult hurdle to overcome. In this paper, we present a hybrid data association framework with a min-cost multi-commodity network flow for robust online multi-object tracking. We build local target-specific models interleaved with global optimization of the optimal data association over multiple video frames. More specifically, in the min-cost multi-commodity network flow, the target-specific similarities are online learned to enforce the local consistency for reducing the complexity of the global data association. Meanwhile, the global data association taking multiple video frames into account alleviates irrecoverable errors caused by the local data association between adjacent frames. To ensure the efficiency of online tracking, we give an efficient near-optimal solution to the proposed min-cost multi-commodity flow problem, and provide the empirical proof of its sub-optimality. The comprehensive experiments on real data demonstrate the superior tracking performance of our approach in various challenging situations.

Zhen Qin;Christian R. Shelton; "Event Detection in Continuous Video: An Inference in Point Process Approach," vol.26(12), pp.5680-5691, Dec. 2017. We propose a novel approach toward event detection in real-world continuous video sequences. The method: 1) is able to model arbitrary-order non-Markovian dependences in videos to mitigate local visual ambiguities; 2) conducts simultaneous event segmentation and labeling; and 3) is time-window free. The idea is to represent a video as an event stream of both high-level semantic events and low-level video observations. In training, we learn a point process model called a piecewise-constant conditional intensity model (PCIM) that is able to capture complex non-Markovian dependences in the event streams. In testing, event detection can be modeled as the inference of high-level semantic events, given low-level image observations. We develop the first inference algorithm for PCIM and show it samples exactly from the posterior distribution. We then evaluate the video event detection task on real-world video sequences. Our model not only provides competitive results on the video event segmentation and labeling task, but also provides benefits, including being interpretable and efficient.

Qing Guo;Wei Feng;Ce Zhou;Chi-Man Pun;Bin Wu; "Structure-Regularized Compressive Tracking With Online Data-Driven Sampling," vol.26(12), pp.5692-5705, Dec. 2017. Being a powerful appearance model, compressive random projection derives effective Haar-like features from non-rotated 4-D-parameterized rectangles, thus supporting fast and reliable object tracking. In this paper, we show that such successful fast compressive tracking scheme can be further significantly improved by structural regularization and online data-driven sampling. Our major contribution is threefold. First, we find that superpixel-guided compressive projection can generate more discriminative features by sufficiently capturing rich local structural information of images. Second, we propose fast directional integration that enables low-cost extraction of feasible Haar-like features from arbitrarily rotated 5-D-parameterized rectangles to realize more accurate object localization. Third, beyond naive dense uniform sampling, we present two practical online data-driven sampling strategies to produce less yet more effective candidate and training samples for object detection and classifier updating, respectively. Extensive experiments on real-world benchmark data sets validate the superior performance, i.e., much better object localization ability and robustness, of the proposed approach over state-of-the-art trackers.
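Haar-like features owe their speed to the integral image (summed-area table), which the fast directional integration above generalizes to rotated rectangles. A minimal axis-aligned sketch of the underlying mechanics (the rotated 5-D-parameterized case is not shown):

```python
def integral_image(img):
    """Summed-area table: I[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            I[y + 1][x + 1] = I[y][x + 1] + row
    return I

def rect_sum(I, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) from the integral image."""
    return I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0]

def haar_2rect(I, y0, x0, h, w):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return (rect_sum(I, y0, x0, y0 + h, x0 + half)
            - rect_sum(I, y0, x0 + half, y0 + h, x0 + w))

# A tiny image with a vertical edge between columns 1 and 2.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0]]
I = integral_image(img)
```

Any rectangle sum costs four table lookups, which is what makes dense Haar-feature evaluation cheap.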

Peizhong Liu;Jing-Ming Guo;Chi-Yi Wu;Danlin Cai; "Fusion of Deep Learning and Compressed Domain Features for Content-Based Image Retrieval," vol.26(12), pp.5706-5717, Dec. 2017. This paper presents an effective image retrieval method that combines high-level features from a convolutional neural network (CNN) model and low-level features from dot-diffused block truncation coding (DDBTC). The low-level features, e.g., texture and color, are constructed by vector quantization-indexed histograms from the DDBTC bitmap and its maximum and minimum quantizers. Conversely, high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, extended deep learning two-layer codebook features are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various data sets. As documented in the experimental results, the proposed schemes can achieve superior performance compared with the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.

Feiping Nie;Jing Li;Xuelong Li; "Convex Multiview Semi-Supervised Classification," vol.26(12), pp.5718-5729, Dec. 2017. In many practical applications, a great number of unlabeled samples are available, while labeling them is a costly and tedious process. Therefore, how to utilize unlabeled samples to help dig out potential information about the problem is very important. In this paper, we study a multiclass semi-supervised classification task in the context of multiview data. First, an optimization method named parametric multiview semi-supervised classification (PMSSC) is proposed, where the classifier built for each individual view is explicitly combined with a weight factor. By analyzing its weaknesses, a new adaptive weight learning strategy is further formulated, and we arrive at the convex multiview semi-supervised classification (CMSSC) method. Compared with PMSSC, this method has two significant properties. First, without much loss in performance, the new weight learning technique eliminates a hyperparameter, and the method thus becomes more compact in form and more practical to use. Second, as its name implies, CMSSC models a convex problem, which avoids the local-minimum problem. Experimental results on several multiview data sets demonstrate that the proposed methods achieve better performance than recent representative methods, and CMSSC is preferred due to its favorable properties.

Jiao Wu;Feilong Cao;Juncheng Yin; "Nonlocally Multi-Morphological Representation for Image Reconstruction From Compressive Measurements," vol.26(12), pp.5730-5742, Dec. 2017. A novel multi-morphological representation model for solving nonlocal similarity-based image reconstruction from compressive measurements is introduced in this paper. Under a probabilistic framework, the proposed approach provides nonlocal similarity clustering for image patches by using Gaussian mixture models, and endows a multi-morphological representation for the image patches in each cluster by using Gaussians that represent different features to model the morphological components. Using a simple alternating iteration, the developed piecewise morphological diversity estimation (PMDE) algorithm can effectively compute the MAP estimates of the morphological components, thus resulting in a nonlinear estimation for image patches. We extend the PMDE to a piecewise morphological diversity sparse estimation by using constrained Gaussians with low-rank covariance matrices to gain further performance improvements. We report experimental results on image compressed sensing in the case of sensing nonoverlapping patches with Gaussian random matrices. The results demonstrate that our algorithms can suppress undesirable block artifacts efficiently and deliver reconstructed images with higher quality than other state-of-the-art methods.

Lin Ding;Yonghong Tian;Hongfei Fan;Yaowei Wang;Tiejun Huang; "Rate-Performance-Loss Optimization for Inter-Frame Deep Feature Coding From Videos," vol.26(12), pp.5743-5757, Dec. 2017. With the explosion in the use of cameras in mobile phones and video surveillance systems, it is impractical to transmit the large amount of video captured over a wide area into a cloud for big data analysis and retrieval. Instead, a feasible solution is to extract and compress features from videos and then transmit the compact features to the cloud. Meanwhile, many recent studies also indicate that features extracted from deep convolutional neural networks lead to high performance for various analysis and recognition tasks. However, how to compress deep video features while maintaining the analysis or retrieval performance still remains open. To address this problem, we propose a high-efficiency deep feature coding (DFC) framework in this paper. In the DFC framework, we define three types of features in a group-of-features (GOF) according to their coding modes (i.e., I-feature, P-feature, and S-feature). We then design two prediction structures for these features in a GOF, including a sequential prediction structure and an adaptive prediction structure. Similar to video coding, it is important for P-feature residual coding optimization to make a tradeoff between feature bitrate and analysis/retrieval performance when encoding residuals. To do so, we propose a rate-performance-loss optimization model. To evaluate various feature coding methods for large-scale video retrieval, we construct a video feature coding data set, called VFC-1M, which consists of uncompressed videos from different scenarios captured by real-world surveillance cameras, with 1M visual objects in total. Extensive experiments show that the proposed DFC can significantly reduce the bitrate of deep features in the videos while maintaining the retrieval accuracy.
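The I-feature/P-feature distinction mirrors intra/inter coding in video: a P-feature is coded as a quantized residual against a reconstructed reference. A toy closed-loop sketch of that idea (the scalar quantizer, step size, and two-dimensional features are illustrative assumptions, not the DFC codec):

```python
def quantize(v, step):
    return [round(x / step) for x in v]

def dequantize(q, step):
    return [x * step for x in q]

def encode_gof(features, step=0.1):
    """Code the first feature as an I-feature, the rest as P-features:
    quantized residuals against the *reconstructed* predecessor, so the
    encoder and decoder stay in sync (closed-loop prediction)."""
    coded = [quantize(features[0], step)]
    ref = dequantize(coded[0], step)
    for f in features[1:]:
        residual = [a - b for a, b in zip(f, ref)]
        q = quantize(residual, step)
        coded.append(q)
        ref = [r + x * step for r, x in zip(ref, q)]
    return coded

def decode_gof(coded, step=0.1):
    ref = dequantize(coded[0], step)
    frames = [list(ref)]
    for q in coded[1:]:
        ref = [r + x * step for r, x in zip(ref, q)]
        frames.append(list(ref))
    return frames

# A small group-of-features whose frames drift slowly over time.
gof = [[1.00, 2.00], [1.05, 2.10], [1.05, 2.20]]
rec = decode_gof(encode_gof(gof))
err = max(abs(a - b) for f, r in zip(gof, rec) for a, b in zip(f, r))
```

Because residuals are small, their quantized values are cheap to entropy-code, which is where the bitrate saving comes from.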

Hao Sheng;Shuo Zhang;Xiaochun Cao;Yajun Fang;Zhang Xiong; "Geometric Occlusion Analysis in Depth Estimation Using Integral Guided Filter for Light-Field Image," vol.26(12), pp.5758-5771, Dec. 2017. Unlike traditional multi-view images, sampling in the angular domain of light field images is distributed in different directions. Therefore, an angular sampling image (ASI), comprising the possible matching points extracted from each view, is available for each point. In this paper, we analyze the geometric relationship between ASIs and reference sub-aperture images, and then prove the occlusion boundary similarity. Based on the geometric relationship in extreme cases, we show that some points in an ASI have higher reliability than others for depth calculation. An integral guided filter is then built based on the sub-aperture image to predict occlusion probabilities in ASIs. The filter is independent of the ASIs and has no requirement for high angular resolution, so it is easy to apply to the cost volume calculation. We integrate the filter into our depth estimation framework and other state-of-the-art depth estimation frameworks. Experimental results demonstrate that the proposed filter is more effective for occluded-point detection in ASIs than other methods. Results from different data sets show that our method outperforms the existing state-of-the-art depth estimation methods, especially along occlusion boundaries.

Qiegen Liu;Guangpu Shao;Yuhao Wang;Junbin Gao;Henry Leung; "Log-Euclidean Metrics for Contrast Preserving Decolorization," vol.26(12), pp.5772-5783, Dec. 2017. This paper presents a novel color-to-gray conversion model, inspired by the Log-Euclidean metric, for faithfully preserving the contrast details of a color image; it differs from the traditional Euclidean-metric approaches. In the proposed model, motivated by the fact that the Log-Euclidean metric has promising invariance properties such as inversion invariance and similarity invariance, we present a Log-Euclidean-metric-based maximum function to model the decolorization procedure. A Gaussian-like penalty function, consisting of the Log-Euclidean metric between the gradients of the input color image and the transformed grayscale image, is incorporated to better reflect the degree to which feature discriminability and color ordering are preserved in the color-to-gray conversion. A discrete searching algorithm is employed to solve the proposed model under linear parametric and non-negative constraints. Extensive evaluation experiments show that the proposed method outperforms the state-of-the-art methods both quantitatively and qualitatively.
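The discrete search over linear, non-negative conversion weights can be sketched as follows. Note the contrast objective here is a plain Euclidean surrogate (sum of absolute grayscale differences), not the paper's Log-Euclidean penalty, and the pixel pairs are made up:

```python
def candidate_weights(step=0.1):
    """All (wr, wg, wb) on a grid with wr + wg + wb = 1 and each >= 0:
    the discrete, linear-parametric, non-negative search space."""
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            yield (i * step, j * step, (n - i - j) * step)

def best_weights(pixel_pairs):
    """Pick the weights whose grayscale keeps the most pairwise contrast."""
    def score(w):
        return sum(abs(sum(wc * c for wc, c in zip(w, p))
                       - sum(wc * c for wc, c in zip(w, q)))
                   for p, q in pixel_pairs)
    return max(candidate_weights(), key=score)

# Two adjacent-pixel pairs whose contrast lives entirely in red.
pairs = [((1.0, 0.5, 0.5), (0.0, 0.5, 0.5)),
         ((0.9, 0.4, 0.4), (0.1, 0.4, 0.4))]
w = best_weights(pairs)
```

With all contrast in the red channel, the search puts all weight on red.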

Bing Su;Jiahuan Zhou;Xiaoqing Ding;Ying Wu; "Unsupervised Hierarchical Dynamic Parsing and Encoding for Action Recognition," vol.26(12), pp.5784-5799, Dec. 2017. Generally, the evolution of an action is not uniform across the video, but exhibits quite complex rhythms and non-stationary dynamics. To model such non-uniform temporal dynamics, in this paper, we describe a novel hierarchical dynamic parsing and encoding method to capture both the locally smooth dynamics and globally drastic dynamic changes. It parses the dynamics of an action into different layers and encodes such multi-layer temporal information into a joint representation for action recognition. At the first layer, the action sequence is parsed in an unsupervised manner into several smooth-changing stages corresponding to different key poses or temporal structures by temporal clustering. The dynamics within each stage are encoded by mean-pooling or rank-pooling. At the second layer, the temporal information of the ordered dynamics extracted from the previous layer is encoded again by rank-pooling to form the overall representation. Extensive experiments on a gesture action data set (Chalearn Gesture) and three generic action data sets (Olympic Sports, Hollywood2, and UCF101) have demonstrated the effectiveness of the proposed method.
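Rank-pooling encodes a sequence by the parameters of a function fit to its temporal evolution. As a stand-in, a least-squares slope per feature dimension captures the idea (the actual method learns a ranking function over frames, so this is only a simplified sketch with made-up features):

```python
def rank_pool(sequence):
    """Encode a feature sequence by the least-squares slope of each
    dimension against time; the slope vector summarizes the dynamics."""
    T = len(sequence)
    ts = list(range(T))
    t_mean = sum(ts) / T
    t_var = sum((t - t_mean) ** 2 for t in ts)
    pooled = []
    for j in range(len(sequence[0])):
        xs = [f[j] for f in sequence]
        x_mean = sum(xs) / T
        cov = sum((t - t_mean) * (x - x_mean) for t, x in zip(ts, xs))
        pooled.append(cov / t_var)
    return pooled

# Dimension 0 rises steadily over time; dimension 1 is static.
u = rank_pool([[0.0, 5.0], [1.0, 5.0], [2.0, 5.0], [3.0, 5.0]])
```

In the hierarchical scheme above, each stage of a parsed action would be pooled like this, and the ordered stage encodings pooled again for the final representation.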

Lianghua Huang;Bo Ma;Jianbing Shen;Hui He;Ling Shao;Fatih Porikli; "Visual Tracking by Sampling in Part Space," vol.26(12), pp.5800-5810, Dec. 2017. In this paper, we present a novel part-based visual tracking method from the perspective of probability sampling. Specifically, we represent the target by a part space with two online learned probabilities to capture the structure of the target. The proposal distribution memorizes the historical performance of different parts, and it is used for the first round of part selection. The acceptance probability validates the specific tracking stability of each part in a frame, and it determines whether to accept its vote or to reject it. By doing this, we transform the complex online part selection problem into a probability learning one, which is easier to tackle. The observation model of each part is constructed by an improved supervised descent method and is learned in an incremental manner. Experimental results on two benchmarks demonstrate the competitive performance of our tracker against state-of-the-art methods.

## IEEE Transactions on Medical Imaging - new TOC (2017 September 21) [Website]

"IEEE Transactions on Medical Imaging publication information," vol.36(9), pp.C2-C2, Sept. 2017. Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Wolf-Dieter Vogl;Sebastian M. Waldstein;Bianca S. Gerendas;Ursula Schmidt-Erfurth;Georg Langs; "Predicting Macular Edema Recurrence from Spatio-Temporal Signatures in Optical Coherence Tomography Images," vol.36(9), pp.1773-1783, Sept. 2017. Prediction of treatment responses from available data is key to optimizing personalized treatment. Retinal diseases are treated over long periods, and patients' response patterns differ substantially, ranging from a complete response to a recurrence of the disease and need for re-treatment at different intervals. Linking observable variables in high-dimensional observations to outcome is challenging. In this paper, we present and evaluate two different data-driven machine learning approaches operating in a high-dimensional feature space: sparse logistic regression and random forests-based extra trees (ET). Both identify spatio-temporal signatures based on retinal thickness features measured in longitudinal spectral-domain optical coherence tomography (SD-OCT) imaging data and predict individual patient outcome using these quantitative characteristics. We demonstrate on a data set of monthly SD-OCT scans of 155 patients with central retinal vein occlusion (CRVO) and 92 patients with branch retinal vein occlusion (BRVO), followed over one year, that we can predict from the initial three observations whether the treated disease will recur within the covered interval. ET predicts the outcome on fivefold cross-validation with an area under the receiver operating characteristic curve (AuC) of 0.83 for BRVO and 0.76 for CRVO. Logistic regression achieved an AuC of 0.78 and 0.79, respectively. At the same time, the methods identified stable predictive signatures in the longitudinal imaging data that are the basis for accurate prediction. Furthermore, our results show that taking spatio-temporal features into account improves accuracy compared with features extracted at a single time-point. Our results demonstrate the feasibility of mining longitudinal data for predictive signatures and of building predictive models based on observed data.

Silvia Pani;Sarene C. Saifuddin;Filipa I.M. Ferreira;Nicholas Henthorn;Paul Seller;Paul J. Sellin;Philipp Stratmann;Matthew C. Veale;Matthew D. Wilson;Robert J. Cernik; "High Energy Resolution Hyperspectral X-Ray Imaging for Low-Dose Contrast-Enhanced Digital Mammography," vol.36(9), pp.1784-1795, Sept. 2017. Contrast-enhanced digital mammography (CEDM) is an alternative to conventional X-ray mammography for imaging dense breasts. However, conventional approaches to CEDM require a double exposure of the patient, implying higher dose, and risk of incorrect image registration due to motion artifacts. A novel approach is presented, based on hyperspectral imaging, where a detector combining positional and high-resolution spectral information (in this case based on Cadmium Telluride) is used. This allows simultaneous acquisition of the two images required for CEDM. The approach was tested on a custom breast-equivalent phantom containing iodinated contrast agent (Niopam 150®). Two algorithms were used to obtain images of the contrast agent distribution: K-edge subtraction (KES), providing images of the distribution of the contrast agent with the background structures removed, and a dual-energy (DE) algorithm, providing an iodine-equivalent image and a water-equivalent image. The high energy resolution of the detector allowed the selection of two close-by energies, maximising the signal in KES images, and enhancing the visibility of details with the low surface concentration of contrast agent. DE performed consistently better than KES in terms of contrast-to-noise ratio of the details; moreover, it allowed a correct reconstruction of the surface concentration of the contrast agent in the iodine image. Comparison with CEDM with a conventional detector proved the superior performance of hyperspectral CEDM in terms of the image quality/dose tradeoff.
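With two energy windows, the dual-energy (DE) decomposition described above reduces to a 2x2 linear solve for the iodine- and water-equivalent thicknesses. A sketch with hypothetical attenuation coefficients around the iodine K-edge (the numbers are illustrative, not the paper's calibration values):

```python
def dual_energy_decompose(logs, mu):
    """Solve -ln(I/I0)_e = mu[e][0]*t_iodine + mu[e][1]*t_water for the
    two equivalent thicknesses, given measurements in two energy windows."""
    (a, b), (c, d) = mu  # rows: energy windows; columns: (iodine, water)
    L1, L2 = logs
    det = a * d - b * c
    t_iodine = (L1 * d - b * L2) / det
    t_water = (a * L2 - L1 * c) / det
    return t_iodine, t_water

# Hypothetical linear attenuation coefficients (1/cm) just below and
# just above the iodine K-edge; iodine jumps sharply across the edge,
# which is what makes the system well-conditioned.
mu = [(10.0, 0.30),   # window below the K-edge
      (30.0, 0.28)]   # window above the K-edge

# Forward-simulate log-attenuations for known thicknesses, then invert.
t_i_true, t_w_true = 0.01, 4.0
logs = [mu[0][0] * t_i_true + mu[0][1] * t_w_true,
        mu[1][0] * t_i_true + mu[1][1] * t_w_true]
t_i, t_w = dual_energy_decompose(logs, mu)
```

The hyperspectral detector's fine energy resolution is what allows the two windows to sit close to the K-edge, where the iodine rows differ most.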

Abd-Krim Seghouane;Asif Iqbal; "Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis," vol.36(9), pp.1796-1807, Sept. 2017. Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the singular value decomposition (SVD). The proposed algorithms generate regularized dictionary atoms via the solution of a left-regularized rank-one matrix approximation problem, where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Experiments on synthetic data and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

Taly Gilat Schmidt;Rina Foygel Barber;Emil Y. Sidky; "A Spectral CT Method to Directly Estimate Basis Material Maps From Experimental Photon-Counting Data," vol.36(9), pp.1808-1819, Sept. 2017. The proposed spectral CT method solves the constrained one-step spectral CT reconstruction (cOSSCIR) optimization problem to estimate basis material maps while modeling the nonlinear X-ray detection process and enforcing convex constraints on the basis map images. In order to apply the optimization-based reconstruction approach to experimental data, the presented method empirically estimates the effective energy-window spectra using a calibration procedure. The amplitudes of the estimated spectra were further optimized as part of the reconstruction process to reduce ring artifacts. A validation approach was developed to select constraint parameters. The proposed spectral CT method was evaluated through simulations and experiments with a photon-counting detector. Basis material map images were successfully reconstructed using the presented empirical spectral modeling and cOSSCIR optimization approach. In simulations, the cOSSCIR approach accurately reconstructed the basis map images (<1% error). In experiments, the proposed method estimated the low-density polyethylene region of the basis maps with 0.5% error in the PMMA image and 4% error in the aluminum image. For the Teflon region, the experimental results demonstrated 8% and 31% error in the PMMA and aluminum basis material maps, respectively, compared with -24% and 126% error without estimation of the effective energy window spectra, with residual errors likely due to insufficient modeling of detector effects. The cOSSCIR algorithm estimated the material decomposition angle to within 1.3 degrees of error, where, for reference, the difference in angle between PMMA and muscle tissue is 2.1 degrees. The joint estimation of spectral-response scaling coefficients and basis material maps was found to reduce ring artifacts in both a phantom and tissue specimen. The presented validation procedure demonstrated feasibility for the automated determination of algorithm constraint parameters.

Chumin Zhao;Jerzy Kanicki; "Task-Based Modeling of a 5k Ultra-High-Resolution Medical Imaging System for Digital Breast Tomosynthesis," vol.36(9), pp.1820-1831, Sept. 2017. High-resolution, low-noise X-ray detectors based on CMOS active pixel sensor (APS) technology have demonstrated superior imaging performance for digital breast tomosynthesis (DBT). This paper presents a task-based model for a high-resolution medical imaging system to evaluate its ability to detect simulated microcalcifications and masses as lesions for breast cancer. A 3-D cascaded system analysis for a 50-μm pixel pitch CMOS APS X-ray detector was integrated with an object task function, a medical imaging display model, and the human eye contrast sensitivity function to calculate the detectability index and area under the ROC curve (AUC). It was demonstrated that the display pixel pitch and zoom factor should be optimized to improve the AUC for detecting small microcalcifications. In addition, detector electronic noise below 300 e- and a high display maximum luminance (>1000 cd/m2) are desirable to distinguish microcalcifications of 150 μm in size. For low contrast mass detection, a medical imaging display with a minimum of 12-bit gray levels is recommended to realize accurate luminance levels. A wide projection angle range of greater than ±30° in combination with the image gray level magnification could improve the mass detectability especially when the anatomical background noise is high. On the other hand, a narrower projection angle range below ±20° can improve the small, high contrast object detection. Due to the low mass contrast and luminance, the ambient luminance should be controlled below 5 cd/m2. Task-based modeling provides important firsthand imaging performance of the high-resolution CMOS-based medical imaging system that is still at an early stage of development for DBT. The modeling results could guide the prototype design and clinical studies in the future.

Bo Gong;Benjamin Schullcke;Sabine Krueger-Ziolek;Marko Vauhkonen;Gerhard Wolf;Ullrich Mueller-Lisse;Knut Moeller; "EIT Imaging Regularization Based on Spectral Graph Wavelets," vol.36(9), pp.1832-1844, Sept. 2017. The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
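
The core operation, applying a spectral graph wavelet transform to a signal living on a mesh graph, can be sketched as follows. The band-pass kernel `g` below is an illustrative Hammond-style choice with g(0)=0, not necessarily the kernel used in the paper, and the four-node cycle stands in for a finite-element mesh:

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    return np.diag(adj.sum(axis=1)) - adj

def sgw_transform(adj, signal, scales, g=lambda x: x * np.exp(-x)):
    """Spectral graph wavelet transform: for each scale s,
    W_s = U g(s*Lambda) U^T signal, where (Lambda, U) is the
    eigendecomposition of the graph Laplacian."""
    lam, u = np.linalg.eigh(graph_laplacian(adj))
    return [u @ (g(s * lam) * (u.T @ signal)) for s in scales]

# Toy mesh graph: a 4-cycle, with a delta signal on vertex 0
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
coeffs = sgw_transform(adj, np.array([1.0, 0.0, 0.0, 0.0]), scales=[0.5, 2.0])
```

Because g vanishes at the zero eigenvalue, each set of wavelet coefficients sums to zero: the transform suppresses the constant (DC) component, which is what lets a sparsity penalty on these coefficients preserve smooth conductivity backgrounds.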

Daniele Ravì;Himar Fabelo;Gustavo Marrero Callic;Guang-Zhong Yang; "Manifold Embedding and Semantic Segmentation for Intraoperative Guidance With Hyperspectral Brain Imaging," vol.36(9), pp.1845-1857, Sept. 2017. Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. However, existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the T-distributed stochastic neighbor approach is first performed and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.

Lu Ding;Xosé Luís Deán-Ben;Daniel Razansky; "Efficient 3-D Model-Based Reconstruction Scheme for Arbitrary Optoacoustic Acquisition Geometries," vol.36(9), pp.1858-1867, Sept. 2017. Optimal optoacoustic tomographic sampling is often hindered by the frequency-dependent directivity of ultrasound sensors, which can only be accounted for with an accurate 3-D model. Herein, we introduce a 3-D model-based reconstruction method applicable to optoacoustic imaging systems employing detection elements with arbitrary size and shape. The computational complexity and memory requirements are mitigated by introducing an efficient graphic processing unit (GPU)-based implementation of the iterative inversion. On-the-fly calculation of the entries of the model-matrix via a small look-up table avoids otherwise unfeasible storage of matrices typically occupying more than 300 GB of memory. Superior imaging performance of the suggested method with respect to standard optoacoustic image reconstruction methods is first validated quantitatively using tissue-mimicking phantoms. Significant improvements in the spatial resolution, contrast-to-noise ratio and overall 3-D image quality are also reported in real tissues by imaging the finger of a healthy volunteer with a hand-held volumetric optoacoustic imaging system.

Peter A. Muller;Jennifer L. Mueller;Michelle M. Mellenthin; "Real-Time Implementation of Calderón’s Method on Subject-Specific Domains," vol.36(9), pp.1868-1875, Sept. 2017. A real-time implementation of Calderón's method for the reconstruction of a 2-D conductivity from electrical impedance tomography data is presented, in which domain-specific modeling is taken into account. This is the first implementation of Calderón's method that accounts for correct modeling of non-symmetric domain boundaries in image reconstruction. The domain-specific Calderón's method is derived and reconstructions from experimental tank data are presented, quantifying the distortion when correct modeling is not included in the reconstruction algorithm. Reconstructions from human subject volunteers are presented, demonstrating the method's effectiveness for imaging changes due to ventilation and perfusion in the human thorax.

Yading Yuan;Ming Chao;Yeh-Chi Lo; "Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance," vol.36(9), pp.1876-1886, Sept. 2017. Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging a 19-layer deep convolutional neural network that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and needs only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
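
The Jaccard-distance loss described above has a simple closed form. A minimal, framework-agnostic numpy sketch (the paper trains a deep network end-to-end; the division-by-union normalisation is what removes the need for sample re-weighting):

```python
import numpy as np

def jaccard_loss(y_true, y_pred, eps=1e-7):
    """Soft Jaccard (IoU) distance between a binary ground-truth mask
    and a probabilistic prediction. Normalising by the union makes the
    loss insensitive to foreground/background class imbalance."""
    y_true = y_true.ravel().astype(float)
    y_pred = y_pred.ravel().astype(float)
    inter = (y_true * y_pred).sum()
    union = y_true.sum() + y_pred.sum() - inter
    return 1.0 - inter / (union + eps)

mask = np.zeros((8, 8)); mask[2:4, 2:4] = 1           # tiny lesion, mostly background
perfect = jaccard_loss(mask, mask)                    # near 0: exact overlap
empty = jaccard_loss(mask, np.zeros((8, 8)))          # 1: no overlap at all
```

Note that predicting all-background on this heavily imbalanced mask yields the maximal loss of 1, whereas under unweighted cross entropy it would score deceptively well.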

Zixu Yan;Feng Chen;Dexing Kong; "Liver Venous Tree Separation via Twin-Line RANSAC and Murray’s Law," vol.36(9), pp.1887-1900, Sept. 2017. It is essential for physicians to obtain the accurate venous tree from abdominal CT angiography (CTA) series in order to carry out the preoperative planning and intraoperative navigation for hepatic surgery. In this process, one of the important tasks is to separate the given liver venous mask into its hepatic and portal parts. In this paper, we present a novel method for liver venous tree separation. The proposed method first concentrates on extracting potential vessel intersection points between hepatic and portal venous systems. Then, the proposed method focuses on modeling the vessel intersection neighborhoods with a robust twin-line random sample consensus (RANSAC) shape detector. Finally, the proposed method conducts the venous tree separation based on the results of the twin-line RANSAC as well as physical constraints posed by Murray's Law. We test our method on 22 clinical CTA series and demonstrate its effectiveness.

Charles Tremblay-Darveau;Avinoam Bar-Zion;Ross Williams;Paul S. Sheeran;Laurent Milot;Thanasis Loupas;Dan Adam;Matthew Bruce;Peter N. Burns; "Improved Contrast-Enhanced Power Doppler Using a Coherence-Based Estimator," vol.36(9), pp.1901-1911, Sept. 2017. While plane-wave imaging can improve the performance of power Doppler by enabling much longer ensembles than systems using focused beams, the long-ensemble averaging of the zero-lag autocorrelation R(0) estimates does not directly decrease the mean noise level, but only decreases its variance. Spatial variation of the noise due to the time-gain compensation and the received beamforming aperture ultimately limits sensitivity. In this paper, we demonstrate that the performance of power Doppler imaging can be improved by leveraging the higher lags of the autocorrelation [e.g., R(1), R(2),...] instead of the signal power (R(0)). As noise is completely uncorrelated from pulse-to-pulse while the flow signal remains correlated significantly longer, weak signals just above the noise floor can be made visible through the reduction of the noise floor. Finally, as coherence decreases proportionally with respect to velocity, we demonstrate how signal coherence can be targeted to separate flows of different velocities. For instance, we show how long-time-range coherence of microbubble contrast-enhanced flow specifically isolates slow capillary perfusion (as opposed to conduit flow).
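
The key observation, that pulse-to-pulse-uncorrelated noise contributes its full power to R(0) but has zero expected contribution to higher lags, can be illustrated with a toy slow-time ensemble. The signal model below is a deliberate simplification of real contrast-enhanced Doppler data:

```python
import numpy as np

def power_doppler(iq, lag=0):
    """Ensemble autocorrelation estimate R(lag) along slow time (axis 0).
    lag=0 is conventional power Doppler (signal power + noise power);
    lag>=1 rejects noise that is uncorrelated from pulse to pulse."""
    n = iq.shape[0]
    r = (iq[lag:] * np.conj(iq[:n - lag])).mean(axis=0)
    return np.abs(r)

rng = np.random.default_rng(0)
n_ens = 4000                                               # long plane-wave ensemble
flow = np.exp(1j * 2 * np.pi * 0.05 * np.arange(n_ens))    # slow, coherent flow
noise = rng.standard_normal(n_ens) + 1j * rng.standard_normal(n_ens)
sig = 0.3 * flow + noise                                   # weak flow under strong noise
r0 = power_doppler(sig, 0)    # dominated by the noise floor
r1 = power_doppler(sig, 1)    # close to the true flow power of 0.09
```

Averaging R(0) over a long ensemble only shrinks the variance of the noise term, not its mean, whereas in R(1) the noise term itself averages toward zero, which is why the weak flow signal survives at the higher lag.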

Sara Park;Jongseong Jang;Jeesu Kim;Young Soo Kim;Chulhong Kim; "Real-time Triple-modal Photoacoustic, Ultrasound, and Magnetic Resonance Fusion Imaging of Humans," vol.36(9), pp.1912-1921, Sept. 2017. Imaging that fuses multiple modes has become a useful tool for diagnosis and therapeutic monitoring. As a next step, real-time fusion imaging has attracted interest as a tool to guide surgery. One widespread fusion imaging technique in surgery combines real-time ultrasound (US) imaging and pre-acquired magnetic resonance (MR) imaging. However, US imaging visualizes only structural information with relatively low contrast. Here, we present a photoacoustic (PA), US, and MR fusion imaging system which integrates a clinical PA and US imaging system with an optical tracking-based navigation sub-system. Through co-registration of pre-acquired MR and real-time PA/US images, overlaid PA, US, and MR images can be concurrently displayed in real time. We successfully acquired fusion images from a phantom and a blood vessel in a human forearm. This fusion imaging can complementarily delineate the morphological and vascular structure of tissues with good contrast and sensitivity, has a well-established user interface, and can be flexibly integrated with clinical environments. As a novel fusion imaging technique, the proposed triple-mode imaging can provide comprehensive image guidance in real time, and can potentially assist various surgeries.

Xingying Wang;Vipin Seetohul;Ruimin Chen;Zhiqiang Zhang;Ming Qian;Zhehao Shi;Ge Yang;Peitian Mu;Congzhi Wang;Zhihong Huang;Qifa Zhou;Hairong Zheng;Sandy Cochran;Weibao Qiu; "Development of a Mechanical Scanning Device With High-Frequency Ultrasound Transducer for Ultrasonic Capsule Endoscopy," vol.36(9), pp.1922-1929, Sept. 2017. Wireless capsule endoscopy has opened a new era by enabling remote diagnostic assessment of the gastrointestinal tract in a painless procedure. Video capsule endoscopy is currently commercially available worldwide. However, it is limited to visualization of superficial tissue. Ultrasound (US) imaging is a complementary solution as it is capable of acquiring transmural information from the tissue wall. This paper presents a mechanical scanning device incorporating a high-frequency transducer specifically as a proof of concept for US capsule endoscopy (USCE), providing information that may usefully assist future research. A rotary solenoid-coil-based motor was employed to rotate the US transducer with sectional electronic control. A set of gears was used to convert the sectional rotation to circular rotation. A single-element focused US transducer with 39-MHz center frequency was used for high-resolution US imaging, connected to an imaging platform for pulse generation and image processing. Key parameters of US imaging for USCE applications were evaluated. Wire phantom imaging and tissue phantom imaging have been conducted to evaluate the performance of the proposed method. A porcine small intestine specimen was also used for imaging evaluation in vitro. Test results demonstrate that the proposed device and rotation mechanism are able to offer good image resolution (~60 μm) of the lumen wall, and they, therefore, offer a viable basis for the fabrication of a USCE device.

Huazhu Fu;Yanwu Xu;Stephen Lin;Xiaoqin Zhang;Damon Wing Kee Wong;Jiang Liu;Alejandro F. Frangi;Mani Baskaran;Tin Aung; "Segmentation and Quantification for Angle-Closure Glaucoma Assessment in Anterior Segment OCT," vol.36(9), pp.1930-1938, Sept. 2017. Angle-closure glaucoma is a major cause of irreversible visual impairment and can be identified by measuring the anterior chamber angle (ACA) of the eye. The ACA can be viewed clearly through anterior segment optical coherence tomography (AS-OCT), but the imaging characteristics and the shapes and locations of major ocular structures can vary significantly among different AS-OCT modalities, thus complicating image analysis. To address this problem, we propose a data-driven approach for automatic AS-OCT structure segmentation, measurement, and screening. Our technique first estimates initial markers in the eye through label transfer from a hand-labeled exemplar data set, whose images are collected over different patients and AS-OCT modalities. These initial markers are then refined by using a graph-based smoothing method that is guided by AS-OCT structural information. These markers facilitate segmentation of major clinical structures, which are used to recover standard clinical parameters. These parameters can be used not only to support clinicians in making anatomical assessments, but also to serve as features for detecting anterior angle closure in automatic glaucoma screening algorithms. Experiments on Visante AS-OCT and Cirrus high-definition-OCT data sets demonstrate the effectiveness of our approach.

Jian Wang;Roman Schaffert;Anja Borsdorf;Benno Heigl;Xiaolin Huang;Joachim Hornegger;Andreas Maier; "Dynamic 2-D/3-D Rigid Registration Framework Using Point-To-Plane Correspondence Model," vol.36(9), pp.1939-1954, Sept. 2017. In image-guided interventional procedures, live 2-D X-ray images can be augmented with preoperative 3-D computed tomography or MRI images to provide planning landmarks and enhanced spatial perception. An accurate alignment between the 3-D and 2-D images is a prerequisite for fusion applications. This paper presents a dynamic rigid 2-D/3-D registration framework, which measures the local 3-D-to-2-D misalignment and efficiently constrains the update of both planar and non-planar 3-D rigid transformations using a novel point-to-plane correspondence model. In the simulation evaluation, the proposed method achieved a mean 3-D accuracy of 0.07 mm for the head phantom and 0.05 mm for the thorax phantom using single-view X-ray images. In the evaluation on dynamic motion compensation, our method significantly increases the accuracy comparing with the baseline method. The proposed method is also evaluated on a publicly-available clinical angiogram data set with “gold-standard” registrations. The proposed method achieved a mean 3-D accuracy below 0.8 mm and a mean 2-D accuracy below 0.3 mm using single-view X-ray images. It outperformed the state-of-the-art methods in both accuracy and robustness in single-view registration. The proposed method is intuitive, generic, and suitable for both initial and dynamic registration scenarios.

Ali Behrooz;Peet Kask;Jeff Meganck;Joshua Kempner; "Automated Quantitative Bone Analysis in In Vivo X-ray Micro-Computed Tomography," vol.36(9), pp.1955-1965, Sept. 2017. Measurement and analysis of bone morphometry in 3D micro-computed tomography volumes using automated image processing and analysis improve the accuracy, consistency, reproducibility, and speed of preclinical osteological research studies. Automating segmentation and separation of individual bones in 3D microcomputed tomography volumes of murine models presents significant challenges considering partial volume effects and joints with thin spacing, i.e., 50 to 100 μm. In this paper, novel hybrid splitting filters are presented to overcome the challenge of automated bone separation. This is achieved by enhancing joint contrast using rotationally invariant second-derivative operators. These filters generate split components that seed marker-controlled watershed segmentation. In addition, these filters can be used to separate metaphysis and epiphysis in long bones, e.g., femur, and remove the metaphyseal growth plate from the detected bone mask in morphometric measurements. Moreover, for slice-by-slice stereological measurements of long bones, particularly curved bones, such as tibia, the accuracy of the analysis can be improved if the planar measurements are guided to follow the longitudinal direction of the bone. In this paper, an approach is presented for characterizing the bone medial axis using morphological thinning and centerline operations. Building upon the medial axis, a novel framework is presented to automatically guide stereological measurements of long bones and enhance measurement accuracy and consistency. These image processing and analysis approaches are combined in an automated streamlined software workflow and applied to a range of in vivo micro-computed tomography studies for validation.

Jwala Dhamala;Hermenegild J. Arevalo;John Sapp;Milan Horacek;Katherine C. Wu;Natalia A. Trayanova;Linwei Wang; "Spatially Adaptive Multi-Scale Optimization for Local Parameter Estimation in Cardiac Electrophysiology," vol.36(9), pp.1966-1978, Sept. 2017. To obtain a patient-specific cardiac electro-physiological (EP) model, it is important to estimate the 3-D distributed tissue properties of the myocardium. Ideally, the tissue property should be estimated at the resolution of the cardiac mesh. However, such high-dimensional estimation faces major challenges in identifiability and computation. Most existing works reduce this dimension by partitioning the cardiac mesh into a pre-defined set of segments. The resulting low-resolution solutions have a limited ability to represent the underlying heterogeneous tissue properties of varying sizes, locations, and distributions. In this paper, we present a novel framework that, going beyond a uniform low-resolution approach, is able to obtain a higher resolution estimation of tissue properties represented by spatially non-uniform resolution. This is achieved by two central elements: 1) a multi-scale coarse-to-fine optimization that facilitates higher resolution optimization using the lower resolution solution and 2) a spatially adaptive decision criterion that retains lower resolution in homogeneous tissue regions and allows higher resolution in heterogeneous tissue regions. The presented framework is evaluated in estimating the local tissue excitability properties of a cardiac EP model on both synthetic and real data experiments. Its performance is compared with optimization using pre-defined segments. Results demonstrate the feasibility of the presented framework to estimate local parameters and to reveal heterogeneous tissue properties at a higher resolution without using a high number of unknowns.
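
The coarse-to-fine idea with a spatially adaptive split criterion can be illustrated on a 1-D toy parameter field. This is a sketch of the general principle only, not the authors' cardiac EP optimizer, and the residual-spread threshold used as the decision criterion is an assumption for illustration:

```python
import numpy as np

def coarse_to_fine(obs, max_depth=5, tol=0.05):
    """Multi-scale coarse-to-fine estimation of a 1-D parameter field:
    fit each segment with its mean (the coarse solution), then refine
    only segments whose residual spread suggests heterogeneity. The
    adaptive criterion keeps homogeneous regions at low resolution."""
    est = np.empty_like(obs, dtype=float)

    def fit(lo, hi, depth):
        seg = obs[lo:hi]
        if depth >= max_depth or hi - lo < 2 or seg.std() <= tol:
            est[lo:hi] = seg.mean()     # homogeneous: keep low resolution
        else:                           # heterogeneous: split and recurse
            mid = (lo + hi) // 2
            fit(lo, mid, depth + 1)
            fit(mid, hi, depth + 1)

    fit(0, len(obs), 0)
    return est

truth = np.r_[np.zeros(64), np.ones(64)]      # one sharp heterogeneity
obs = truth + 0.01 * np.sin(np.arange(128))   # small deterministic perturbation
est = coarse_to_fine(obs)
```

The estimator refines only around the step and represents each homogeneous half with a single unknown, which mirrors how the framework reveals heterogeneous tissue at higher resolution without inflating the total number of parameters.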

Jaime Tierney;Crystal Coolbaugh;Theodore Towse;Brett Byram; "Adaptive Clutter Demodulation for Non-Contrast Ultrasound Perfusion Imaging," vol.36(9), pp.1979-1991, Sept. 2017. Conventional Doppler ultrasound is useful for visualizing fast blood flow in large resolvable vessels. However, frame rate and tissue clutter caused by movement of the patient or sonographer make visualizing slow flow with ultrasound difficult. Patient and sonographer motion causes spectral broadening of the clutter signal, which limits ultrasound's sensitivity to velocities greater than 5-10 mm/s for typical clinical imaging frequencies. To address this, we propose a clutter filtering technique that may increase the sensitivity of Doppler measurements to at least as low as 0.52 mm/s. The proposed technique uses plane wave imaging and an adaptive frequency and amplitude demodulation scheme to decrease the bandwidth of tissue clutter. To test the performance of the adaptive demodulation method at suppressing tissue clutter bandwidths due to sonographer hand motion alone, six volunteer subjects acquired data from a stationary phantom. Additionally, to test in vivo feasibility, arterial occlusion and muscle contraction studies were performed to assess the efficiency of the proposed filter at preserving signals from blood velocities 2 mm/s or greater at a 7.8 MHz center frequency. The hand motion study resulted in initial average bandwidths of 175 Hz (8.60 mm/s), which were decreased to 10.5 Hz (0.52 mm/s) at -60 dB using our approach. The in vivo power Doppler studies resulted in 4.73 dB and 4.80 dB dynamic ranges of the blood flow with the proposed filter and 0.15 dB and 0.16 dB dynamic ranges of the blood flow with a conventional 50 Hz high-pass filter for the occlusion and contraction studies, respectively.
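
The demodulation step can be sketched as follows: estimate the clutter's instantaneous phase across slow time and remove it, collapsing the clutter bandwidth toward DC so that a narrow high-pass filter can then remove it without touching slow flow. This lag-one phase-tracking scheme is a simplified stand-in for the paper's adaptive frequency and amplitude demodulation:

```python
import numpy as np

def adaptive_demodulate(iq):
    """Centre the (strong) tissue-clutter component at DC by removing its
    instantaneous phase, estimated from lag-one phase steps along slow
    time -- a simplified sketch of adaptive frequency demodulation."""
    phase_step = np.angle(iq[1:] * np.conj(iq[:-1]))      # per-sample phase advance
    inst_phase = np.concatenate(([0.0], np.cumsum(phase_step)))
    return iq * np.exp(-1j * inst_phase)

# Toy clutter: strong echo with drifting Doppler shift (hand motion)
t = np.arange(256)
clutter = 10 * np.exp(1j * (2 * np.pi * 0.1 * t
                            + 0.5 * np.sin(2 * np.pi * 0.01 * t)))
demod = adaptive_demodulate(clutter)   # broadband clutter collapses to DC
```

On this pure-clutter toy signal the demodulated output is a constant, i.e. the clutter bandwidth has collapsed to zero; in practice residual bandwidth remains and sets the 0.52 mm/s floor reported above.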

A. H. Abdi;C. Luong;T. Tsang;G. Allan;S. Nouranian;J. Jue;D. Hawley;S. Fleming;K. Gin;J. Swift;R. Rohling;P. Abolmaesumi; "Correction to “Automatic Quality Assessment of Echocardiograms Using Convolutional Neural Networks: Feasibility on the Apical Four-Chamber View” [Jun 17 1221-1230]," vol.36(9), pp.1992-1992, Sept. 2017. In the above paper [1], the first footnote should have indicated the following information: A. H. Abdi and C. Luong are joint first authors.

"BSN 2018 Body Sensor Networks Conference," vol.36(9), pp.1993-1993, Sept. 2017. Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

"BHI-2018 IEEE International Conference on Biomedical and Health Informatics," vol.36(9), pp.1994-1994, Sept. 2017. Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

"IEEE Life Sciences Conference," vol.36(9), pp.1995-1995, Sept. 2017. Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

"NIH-IEEE 2017 Special Topics Conference on Healthcare Innovations and Point of Care Technologies," vol.36(9), pp.1996-1996, Sept. 2017. Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

"IEEE Transactions on Medical Imaging information for authors," vol.36(9), pp.C3-C3, Sept. 2017. These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

## IET Image Processing - new TOC (2017 September 21) [Website]

Murat Alparslan Gungor;Irfan Karagoz; "Modified ultrasound despeckling assessment index for the Field II simulated cyst image," vol.11(9), pp.667-671, 9 2017. Various methods have been proposed to reduce speckle noise, which decreases image quality in ultrasound images. The Field II simulated cyst image consists of three classes and is used to compare a proposed despeckle filter with other well-known filters. The ultrasound despeckling assessment index (USDSAI) is a metric used to evaluate the proposed despeckling filters for the cyst image. This metric should be used when different regions are properly defined. In this study, the authors first analysed the performance of USDSAI for the cyst image. Then, the authors modified the USDSAI by proposing a new metric for the background class of the cyst image and evaluated its performance. The results show that the authors' proposed metric has better performance than USDSAI.

Michael Osadebey;Marius Pedersen;Douglas Arnold;Katrina Wendel-Mitoraj; "No-reference quality measure in brain MRI images using binary operations, texture and set analysis," vol.11(9), pp.672-684, 9 2017. The authors propose a new application-specific, post-acquisition quality evaluation method for brain magnetic resonance imaging (MRI) images. The domain of a MRI slice is regarded as the universal set. Four feature images (greyscale, local entropy, local contrast and local standard deviation) are extracted from the slice and transformed into the binary domain. Each feature image is regarded as a set enclosed by the universal set. Four quality attributes (lightness, contrast, sharpness and texture detail) are described by four different combinations of feature sets. In an ideal MRI slice, the four feature sets are identically equal. The degree of distortion in a real MRI slice is quantified by the fidelity between the sets that describe a quality attribute. Noise is the fifth quality attribute and is described by the slice Euler number region property. The total quality score is the weighted sum of the five quality scores. The authors' proposed method addresses current challenges in image quality evaluation. It is simple, easy-to-use and easy-to-understand. Incorporation of binary transformation in the proposed method reduces the computational and operational complexity of the algorithm. They provide experimental results that demonstrate the efficacy of their proposed method on good quality images and on common distortions in MRI images of the brain.

Leibo Liu;Yingjie Chen;Chenchen Deng;Shouyi Yin;Shaojun Wei; "Implementation of in-loop filter for HEVC decoder on reconfigurable processor," vol.11(9), pp.685-692, 9 2017. The in-loop filter comprises a deblocking filter and a sample adaptive offset filter, and is an important module for improving image quality in a high-efficiency video coding (HEVC) decoder. The in-loop filter has a high computational complexity that accounts for ~20% of the HEVC decoding computing load. Furthermore, it is difficult to implement a high-performing in-loop filter due to its large conditional processing requirement. First, this study presents a novel reconfigurable HEVC in-loop filter implementation on a coarse-grained dynamically reconfigurable processing unit. Next, a repartition scheme is presented that allows the in-loop filter implementation at a coding tree unit along with the other decoding modules in the HEVC decoder, which satisfies the requirements of low latency applications. Finally, a hierarchised-pipeline and synchronised-parallel technique is used to improve performance by eliminating data hazards in pipeline techniques and synchronisation problems in parallel techniques. Implementation results show that the presented HEVC in-loop filter performs up to 1920 × 1080@52 frames per second at 250 MHz. The throughput is 67.5× and 9× higher than solutions based on a digital signal processor and a general-purpose processor, respectively.

Tehmina Khalil;Muhammad Usman Akram;Samina Khalid;Amina Jameel; "Improved automated detection of glaucoma from fundus image using hybrid structural and textural features," vol.11(9), pp.693-700, 9 2017. Glaucoma is a group of eye disorders that damage the optic nerve. Considering a single eye condition for the diagnosis of glaucoma has failed to detect all glaucoma cases accurately. A reliable computer-aided diagnosis system is proposed based on a novel combination of hybrid structural and textural features. The system improves the decision-making process after analysing a variety of glaucoma conditions. It consists of two main modules: a hybrid structural feature-set (HSF) and a hybrid texture feature-set (HTF). The HSF module classifies a sample among different structural glaucoma conditions using a support vector machine (SVM), and the HTF module analyses the sample based on various texture and intensity-based features and again makes a decision using an SVM. In the case of any conflict in the results of both modules, a suspected class is introduced. A novel algorithm to compute the super-pixels has also been proposed to detect the damaged cup. This feature alone outperformed the current state-of-the-art methods with 94% sensitivity. A cup-to-disc ratio calculation method involving two different channels for cup and disc segmentation has been introduced, increasing the overall accuracy. The proposed system has given exceptional results with 100% accuracy for glaucoma referral.
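
One of the structural features involved, the cup-to-disc ratio, is straightforward to compute once binary cup and disc masks are available. A hypothetical sketch using the common vertical-diameter definition follows; the paper's own calculation method differs in using two different channels for segmentation, and the 0.6 referral threshold in the comment is a widely quoted clinical rule of thumb, not a value from the paper:

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    the ratio of the vertical extents (rows spanned) of cup and disc.
    A CDR above roughly 0.6 is a common glaucoma-referral flag."""
    def v_extent(mask):
        rows = np.flatnonzero(mask.any(axis=1))
        return rows[-1] - rows[0] + 1 if rows.size else 0
    return v_extent(cup_mask) / v_extent(disc_mask)

# Toy masks on a 100x100 fundus crop
disc = np.zeros((100, 100), dtype=bool); disc[20:80, 30:70] = True  # spans 60 rows
cup = np.zeros((100, 100), dtype=bool);  cup[35:65, 40:60] = True   # spans 30 rows
cdr = vertical_cdr(cup, disc)
```

Here the cup spans half the disc's vertical extent, so the ratio comes out at 0.5.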

D.M. Bappy;Insu Jeon; "Modified simultaneous iterative reconstruction technique for fast, high-quality CT reconstruction," vol.11(9), pp.701-708, Sep. 2017. Recent advances in computational power have made it possible to use iterative reconstruction (IR) algorithms in clinics for computed tomographic (CT) imaging. Many researchers prefer IR methods to analytical methods because they reduce radiation dose, image noise, and artefacts. The simultaneous iterative reconstruction technique (SIRT) reduces the number of views needed for CT reconstruction. However, the reconstructed images include ray artefacts that can make diagnosis difficult. This study proposes a modified IR algorithm for fast, high-quality CT reconstruction. The modified method incorporates geometric non-linear diffusion into the reconstruction estimate to minimise ray artefacts. The method also converges to the global minimum much faster than other methods, using a minimal number of iterations. To meet the high computational demand of improved IR algorithms, a graphics processing unit was used in this study. The authors expect that the proposed technique can be used to reconstruct high-quality CT images faster and with fewer iterations.
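
The SIRT-style update the abstract builds on can be sketched in a few lines. This is a plain SIRT/SART iteration for illustration (relaxation by inverse row/column sums of a nonnegative system matrix `A`), not the authors' modified method with geometric non-linear diffusion:

```python
import numpy as np

def sirt(A, b, iters=100):
    """Basic SIRT/SART sketch: x <- x + C A^T R (b - A x), where R and C
    hold the inverse row and column sums of the (nonnegative) matrix A."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + C * (A.T @ (R * (b - A @ x)))    # relaxed back-projection
    return x
```

For a consistent system the iteration converges to the exact solution; for noisy projections it converges to a weighted least-squares solution.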

Qing-Qiang Chen;Mao-Hsiung Hung;Fumin Zou; "Effective and adaptive algorithm for pepper-and-salt noise removal," vol.11(9), pp.709-716, Sep. 2017. According to the characteristics of pepper-and-salt noise, the authors first classify the pixels of a polluted image into two classes: suspected noisy and noise-free pixels. For a suspected noisy pixel, by counting the number of noise-free pixels with close grey levels in a neighbourhood, one can correctly decide whether it is a noisy or a noise-free pixel. Noise filtering leaves noise-free pixels untouched. For the noisy pixels, an adaptive filtering algorithm with a weighted mean based on Euclidean distance achieves excellent noise removal and good detail preservation. The algorithm can handle different noise levels, and the parameters and thresholds need not be adjusted manually. The experimental results indicate that the proposed method effectively filters pepper-and-salt noise. The authors note that when noise-free and noisy pixels with the same grey level appear in the polluted images, the noise-removal performance of the proposed method is substantially better than that of other existing methods.
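
The two-stage idea (flag extreme-valued pixels as suspected noise, then replace only those with a distance-weighted mean of clean neighbours) can be sketched as follows. The 3×3 window and inverse-distance weights are illustrative assumptions, not the paper's exact detector or filter:

```python
import numpy as np

def remove_salt_pepper(img, win=3):
    """Two-stage salt-and-pepper removal (illustrative sketch):
    1) flag extreme-valued pixels (0 or 255) as suspected noise;
    2) replace each flagged pixel by an inverse-distance-weighted mean
       of the noise-free pixels in its neighbourhood."""
    img = img.astype(float)
    noisy = (img == 0) | (img == 255)          # suspected noise mask
    out = img.copy()
    r = win // 2
    H, W = img.shape
    for i, j in zip(*np.nonzero(noisy)):
        vals, wts = [], []
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and not noisy[ni, nj]:
                    d = np.hypot(di, dj) or 1.0   # Euclidean distance weight
                    vals.append(img[ni, nj])
                    wts.append(1.0 / d)
        if wts:                                   # enough clean neighbours
            out[i, j] = np.average(vals, weights=wts)
    return out.astype(np.uint8)
```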

Xuebin Sun;Xiaodong Chen;Yong Xu;Yi Wang;Daoyin Yu; "Fast CU partition strategy for HEVC based on Haar wavelet," vol.11(9), pp.717-723, Sep. 2017. As the latest video coding standard, high-efficiency video coding (HEVC) achieves better performance and supports higher resolutions than its predecessor, H.264/advanced video coding (AVC). Intra-coding is an important feature of the HEVC standard that significantly reduces spatial redundancy, owing to the flexible coding structure and the high density of angular prediction modes. However, the improvement in coding efficiency comes at the expense of extraordinary computational complexity. This study presents a novel coding unit (CU) partitioning technique for HEVC. Using a fast texture complexity detection method based on the two-dimensional Haar wavelet transform, the texture complexity of each CU can be extracted. According to the Haar wavelet coefficients obtained, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four smaller CUs. Experimental results demonstrate that the fast CU partition strategy achieves a better trade-off between rate-distortion performance and complexity reduction than previous algorithms. Compared with the reference software HM16.7, the proposed algorithm reduces encoding time by up to 46.22% on average, with a negligible bit-rate increase of 0.45% and quality losses below 0.04 dB.
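
The texture-complexity test can be illustrated with a one-level 2-D Haar transform: high-frequency sub-band energy serves as the complexity measure that drives the split decision. The simple averaging normalisation and the threshold value below are assumptions for illustration, not the paper's tuned decision rule:

```python
import numpy as np

def haar2d(block):
    """One-level 2-D Haar transform (averaging variant):
    returns the (LL, LH, HL, HH) sub-bands."""
    a = block.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # 1-D Haar along rows
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LL = (lo[0::2] + lo[1::2]) / 2.0       # then along columns
    HL = (lo[0::2] - lo[1::2]) / 2.0
    LH = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def should_split(cu, thresh=4.0):
    """Early CU-split decision: split only if the high-frequency Haar
    energy (a texture-complexity proxy) exceeds a threshold."""
    _, LH, HL, HH = haar2d(cu)
    energy = np.mean(LH**2 + HL**2 + HH**2)
    return energy > thresh
```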

Wangming Xu;Shiqian Wu;Meng Joo Er;Chaobing Zheng;Yimin Qiu; "New non-negative sparse feature learning approach for content-based image retrieval," vol.11(9), pp.724-733, Sep. 2017. One key issue in content-based image retrieval is to extract effective features so as to represent the visual content of an image. In this study, a new non-negative sparse feature learning approach to produce a holistic image representation based on low-level local features is presented. Specifically, a modified spectral clustering method is introduced to learn a non-negative visual dictionary from local features of training images. A non-negative sparse feature encoding method termed non-negative locality-constrained linear coding (NNLLC) is proposed to improve the popular locality-constrained linear coding method so as to obtain more meaningful and interpretable sparse codes for feature representation. Moreover, a new feature pooling strategy named kMaxSum pooling is proposed to alleviate the information loss of the sum pooling or max pooling strategy, which produces a more effective holistic image representation and can be viewed as a generalisation of the sum and max pooling strategies. The retrieval results carried out on two public image databases demonstrate the effectiveness of the proposed approach.
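
The kMaxSum pooling strategy described above has a direct realisation: per dimension, sum the k largest code responses, so that k=1 recovers max pooling and k=n recovers sum pooling. A minimal sketch, assuming the codes are stored as a (local features × dimensions) matrix:

```python
import numpy as np

def kmaxsum_pool(codes, k):
    """kMaxSum pooling: per dimension, sum the k largest sparse-code
    responses. k=1 gives max pooling; k=len(codes) gives sum pooling."""
    srt = np.sort(codes, axis=0)   # ascending sort within each dimension
    return srt[-k:].sum(axis=0)    # sum of the k largest values
```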

Xiuhong Yang;Baolong Guo; "Fractional-order tensor regularisation for image inpainting," vol.11(9), pp.734-745, Sep. 2017. Compared with classic integer-order calculus, fractional calculus is a more powerful mathematical method that non-linearly preserves and enhances image features in different frequency bands. In order to extend the fractional-in-space diffusion scheme with matrix-valued diffusivity to perform superior image inpainting, the authors build the new fractional-order tensor regularisation (FTR) model by utilising the newly defined fractional-order structure tensor (FST) to control the regularisation process. The proposed model is derived as a process that minimises a functional proportional to the FST composed of the inner product of the fractional derivative vector and its transposition; hence, the new model not only inherits the genuine anisotropism of tensor regularisation, but is also better equipped to handle subtle details and complex structures because of the characteristics of fractional calculus. To minimise the proposed functional, the corresponding Euler-Lagrange equation is deduced, and the anisotropism of the proposed model is analysed accordingly. Fractional-order derivative masks in the positive x and y directions and negative x and y directions are implemented according to the shifted Grünwald–Letnikov definition, and a proper iterative numerical scheme is analysed. According to experimental results on various test images, the proposed FTR inpainting model demonstrates superior inpainting performance in both noiseless and noisy scenarios.
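
The shifted Grünwald–Letnikov masks mentioned above are built from signed binomial coefficients, which a stable recursion generates directly. A minimal sketch for a derivative along the +x direction; the mask length n is an illustrative choice:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    via the stable recursion w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def frac_deriv_x(img, alpha, n=4):
    """Fractional-order derivative along +x via an n-tap GL mask."""
    w = gl_coeffs(alpha, n)
    out = np.zeros_like(img, dtype=float)
    for k in range(n):
        # accumulate w_k * img shifted by k pixels in x
        out[:, k:] += w[k] * img[:, :img.shape[1] - k]
    return out
```

For alpha = 1 and n = 2 the mask reduces to the ordinary backward difference [1, -1].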

Rahul Dixit;Ruchira Naskar; "Review, analysis and parameterisation of techniques for copy–move forgery detection in digital images," vol.11(9), pp.746-759, Sep. 2017. Copy-move forgery is one of the most elementary and prevalent forms of modification attack on digital images. In this form of forgery, region(s) of an image is(are) copied and pasted onto itself, and the forged image is subsequently processed to hide the effects of the forgery. State-of-the-art copy-move forgery detection techniques for digital images are primarily aimed at finding duplicate regions in an image. The last decade has seen a lot of research advancement in the area of digital image forensics, whereby the investigation of possible forgeries is based solely on post-processing of images. In this study, the authors present a three-way classification of state-of-the-art digital forensic techniques, along with a complete survey of their operating principles. In addition, they analyse the schemes and evaluate and compare their performance in terms of a proposed set of parameters, which may be used as a standard benchmark for evaluating the efficiency of any general copy-move forgery detection technique for digital images. The comparison results would help a user select the optimal forgery detection technique, depending on the user's requirements.

Cecilia Di Ruberto; "Histogram of Radon transform and texton matrix for texture analysis and classification," vol.11(9), pp.760-766, Sep. 2017. In this study, the author introduces a new and efficient method to classify texture images. From the histogram of the Radon transform, a texture orientation matrix is obtained and combined with a texton matrix to generate a new type of co-occurrence matrix. From the co-occurrence matrix, 20 statistical features for texture image classification are extracted: seven first-order statistics and 13 second-order ones. K-nearest neighbour and support vector machine models are used for classification. The proposed approach has been tested on widely used texture datasets (Brodatz and the KTH Royal Institute of Technology Textures under varying Illumination, Pose and Scale dataset) and compared with several alternative methods. The experimental results show a very high accuracy level, confirming the strength of the developed method, which outperforms the state-of-the-art methods for texture classification.

Azam Karami;Laleh Tafakori; "Image denoising using generalised Cauchy filter," vol.11(9), pp.767-776, Sep. 2017. In many image processing analyses, it is important to significantly reduce the noise level. This study introduces an efficient method for this purpose based on the generalised Cauchy (GC) distribution. To this end, some characteristics of the GC distribution are considered. In particular, the characteristic function of a GC distribution is derived by using the theory of positive definite densities and utilising the density of a GC random variable as the characteristic function of a convolution of two generalised non-symmetric Linnik variables. Further, the GC distribution is used as a filter, and in the proposed image noise reduction method the optimal parameters of the GC filter are determined using particle swarm optimisation. The proposed method is applied to different types of noisy images, and the obtained results are compared with four state-of-the-art denoising algorithms. Experimental results confirm that the method significantly reduces the noise effect.

Anupama Namburu;Srinivas Kumar Samayamantula;Srinivasa Reddy Edara; "Generalised rough intuitionistic fuzzy c-means for magnetic resonance brain image segmentation," vol.11(9), pp.777-785, Sep. 2017. Intuitionistic fuzzy sets (IFSs) and rough sets are efficient tools for handling the uncertainty and vagueness present in images, and have recently been combined to segment medical images in the presence of noise and intensity non-homogeneity (INU). These hybrid algorithms are sensitive to the initial centroids and parameter tuning, and depend on the fuzzy membership function to define the IFS. In this study, a novel clustering algorithm, generalised rough intuitionistic fuzzy c-means (GRIFCM), is proposed for brain magnetic resonance (MR) image segmentation, avoiding the dependency on the fuzzy membership function. In this algorithm, each pixel is categorised into three rough regions based on thresholds obtained from the image data by minimising the noise. These regions are used to create the IFS. The distance measure based on the IFS eliminates the influence of noise and INU present in the image, producing accurate brain tissue segmentation. The proposed algorithm is evaluated through simulation and compared with the existing k-means (KM), fuzzy c-means (FCM), rough fuzzy c-means (RFCM), generalised rough fuzzy c-means (GRFCM), soft rough fuzzy c-means (SRFCM) and rough intuitionistic fuzzy c-means (RIFCM) algorithms. Experimental results prove the superiority of the proposed algorithm over the considered algorithms in all analysed scenarios.

Haoting Liu;Hanqing Lu;Yu Zhang; "Image enhancement for outdoor long-range surveillance using IQ-learning multiscale Retinex," vol.11(9), pp.786-795, Sep. 2017. Visible light camera-based long-range surveillance always suffers from the complex atmosphere. Traditional image enhancement methods perform poorly in this setting because of their limited environmental adaptability. To overcome this problem, a blind image quality (IQ) learning-based multiscale Retinex, i.e. the IQ-learning multiscale Retinex, is proposed. First, a series of typical degraded images is collected. Second, several blind IQ evaluation metrics are computed for this dataset: the image brightness degree, the image region contrast degree, the image edge blur degree, the image colour quality degree, and the image noise degree. Third, a wavelet transform multiscale Retinex (WT_MSR) is used to carry out the basic image enhancement; an optimal enhancement is obtained by subjective evaluation and tuning of multiple optimal control parameters (MOCPs) of the WT_MSR for the degraded dataset. Fourth, a back-propagation neural network (BPNN) is used to learn the mapping between the IQ metrics and the MOCPs. Finally, when a new image is captured, the system computes its IQ metrics, estimates the MOCPs for the WT_MSR using the BPNN, and thereby realises an optimal enhancement. Many outdoor applications have shown the effectiveness of the proposed method.

Sui Gong;Timothy S. Newman; "Fine feature sensitive marching squares," vol.11(9), pp.796-802, Sep. 2017. A new contouring method for producing region boundaries in two-dimensional (2D) scalar-value image datasets (such as grey-scale intensity images from a digital camera or X-ray device) with sub-pixel precision is introduced here. The method, fine feature sensitive marching squares (FFS-MS), extends MS isocontouring to produce an isocontour that preserves fine-scale features (which are often incorrectly recovered by standard MS). This extension is the 2D analogue of Kaneko and Yamamoto's volume preserving marching cubes algorithm. It has several phases. First, it recovers an isocontour using standard MS. Then, it produces a new dataset with data values estimated by treating the recovered contour as the actual boundary. Using this new dataset, it next compares that dataset's estimated data values with the data values at corresponding locations in the original dataset. Finally, the method adjusts the original dataset's pixel values at every pixel location, where there is a high discrepancy between the original data value and the estimated data value. It iteratively repeats its phases until an optimality criterion is satisfied. Experimental analyses of FFS-MS are also presented. The analyses focus on FF recovery in comparison with the standard MS.

## IEEE Transactions on Signal Processing - new TOC (2017 September 21) [Website]

Ganesh Venkatraman;Antti Tölli;Markku Juntti;Le-Nam Tran; "Multigroup Multicast Beamformer Design for MISO-OFDM With Antenna Selection," vol.65(22), pp.5832-5847, Nov. 15, 2017. We study the problem of designing transmit beamformers for multigroup multicasting in a multiple-input single-output orthogonal frequency-division multiplexing framework. The design objective involves either minimizing the total transmit power for a guaranteed quality of service or maximizing the minimum achievable rate among the users for a given transmit power budget. The problem of interest can be formulated as a nonconvex quadratically constrained quadratic program (QCQP), for which the prevailing semidefinite relaxation (SDR) technique is inefficient for at least two reasons. First, the relaxed problem cannot be reformulated as a semidefinite program. Second, even if the relaxed problem is solved, the so-called randomization procedure must be used to generate a feasible solution to the original QCQP, and such a procedure is difficult to derive for the considered problem. To overcome these shortcomings, we adopt the successive convex approximation framework to find multicast beamformers directly. The proposed method not only avoids the randomization search, but also incurs lower computational complexity than an SDR approach. In addition, we extend the multicast beamformer design problem with an additional constraint on the number of active antenna elements, which is particularly relevant when the number of antennas is larger than the number of radio frequency chains. Numerical results demonstrate the superior performance of our proposed methods over existing solutions.

Pascal Vallet;Philippe Loubaton; "On the Performance of MUSIC With Toeplitz Rectification in the Context of Large Arrays," vol.65(22), pp.5848-5859, Nov. 15, 2017. When using subspace methods for DoA estimation such as MUSIC, it is well known that a performance loss occurs when the number of available samples <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula> is not large compared to the number of sensors <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>. This degradation is mainly due to the use of the sample correlation matrix (SCM), which is a poor estimator of the true correlation matrix of the observations in this situation. When the latter exhibits a Toeplitz structure, a standard trick consists in correcting the structure of the SCM by averaging its entries along the subdiagonals. This procedure, known as Toeplitz rectification, is widely known to improve the estimation of the true correlation matrix, hence the performance of the corresponding subspace methods. In this paper, we propose a statistical analysis of the MUSIC method using Toeplitz rectified SCM (referred to as R-MUSIC), in the context where <inline-formula> <tex-math notation="LaTeX">$M,N$</tex-math></inline-formula> are of the same order of magnitude. More precisely, considering the asymptotic regime in which <inline-formula><tex-math notation="LaTeX">$M,N$</tex-math></inline-formula> converge to infinity at the same rate, we prove the consistency and asymptotic normality of the R-MUSIC DoA estimates. Numerical simulations show the accurate prediction provided by the proposed theoretical analysis.
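
The Toeplitz rectification trick itself is compact: average the SCM entries along each subdiagonal and rebuild a Hermitian Toeplitz matrix from the averaged values. A minimal sketch of one common variant (averaging the lower-triangular diagonals):

```python
import numpy as np

def toeplitz_rectify(R):
    """Toeplitz rectification of a sample correlation matrix: average the
    entries along each subdiagonal and rebuild a Hermitian Toeplitz matrix."""
    M = R.shape[0]
    # mean of the k-th lower diagonal, k = 0..M-1
    r = np.array([np.mean(np.diag(R, -k)) for k in range(M)])
    idx = np.arange(M)
    T = r[np.abs(idx[:, None] - idx[None, :])]
    # restore Hermitian symmetry on the upper triangle
    T = np.where(idx[:, None] >= idx[None, :], T, np.conj(T))
    return T
```

Applied to a matrix that is already Toeplitz, the operation is the identity; otherwise it projects the diagonal averages back onto the Toeplitz structure.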

Satish Mulleti;Chandra Sekhar Seelamantula; "Paley–Wiener Characterization of Kernels for Finite-Rate-of-Innovation Sampling," vol.65(22), pp.5860-5872, Nov. 15, 2017. Exact reconstruction of finite-rate-of-innovation signals can be achieved by employing customized sampling kernels that satisfy certain frequency-domain properties. We impose compact support in time as an additional constraint. Considering frequency-domain reconstruction, we derive conditions for admissible sampling kernels and corresponding sampling rates. Our constructive kernel design methodology is based on the Paley–Wiener theorem for compactly supported functions. The new kernels satisfy generalized Strang-Fix conditions and have specific polynomial-modulated-exponential-reproducing properties. Unlike exponential splines, which have a support that is directly proportional to the number of exponentials they can generate, the proposed kernels have a support that is independent of that number. To analyze noise robustness, we consider a special member of the class that has a sum-of-modulated splines (SMS) form in the time domain and optimize its parameters to minimize the noise variance. The sum-of-sincs (SoS) kernel reported in the literature is an instance of this construction. In noise robustness analysis, SMS kernels show improvement in mean-squared error (MSE) compared with the state-of-the-art alternatives. In continuous-time noise, the improvement in MSE is about 2 dB for low signal-to-noise ratio (SNR) and 7 dB for high SNR. In the case of discrete white Gaussian noise, the MSE is lower by as much as 25 dB by using a higher-order SMS kernel compared with the SoS kernel for SNRs in the range of 10–15 dB.

Bing Gao;Zhiqiang Xu; "Phaseless Recovery Using the Gauss–Newton Method," vol.65(22), pp.5885-5896, Nov. 15, 2017. In this paper, we propose a Gauss–Newton algorithm to recover an <inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula>-dimensional signal from its phaseless measurements. The algorithm has two stages. In the first stage, the algorithm obtains a good initialization by calculating the eigenvector corresponding to the largest eigenvalue of a Hermitian matrix. In the second stage, the algorithm solves an optimization problem iteratively using the Gauss–Newton method. Our initialization method makes full use of all measurements and provides a good initial guess, as long as the number of random measurements is <inline-formula> <tex-math notation="LaTeX">$O(n)$</tex-math></inline-formula>. For real-valued signals, we prove that a resampled version of Gauss–Newton iterations converges to the global optimal solution quadratically with <inline-formula> <tex-math notation="LaTeX">$O(n\log n)$</tex-math></inline-formula> random measurements. Numerical experiments show that the Gauss–Newton method has better empirical performance than other algorithms, such as the Wirtinger flow algorithm and Altmin phase algorithm.
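
The first-stage spectral initialization can be sketched directly: form the intensity-weighted Hermitian matrix from the measurements and take its leading eigenvector. The norm-matching scale factor below is an illustrative choice, not the paper's exact normalization:

```python
import numpy as np

def spectral_init(A, y):
    """Spectral initialization for phase retrieval: leading eigenvector of
    Y = (1/m) * sum_i y_i a_i a_i^H, scaled to match the measured energy.
    A: m x n matrix with rows a_i^H; y: intensities |a_i^H x|^2."""
    m, n = A.shape
    Y = (A.conj().T * y) @ A / m            # weighted Hermitian matrix
    w, V = np.linalg.eigh(Y)
    x0 = V[:, -1]                           # eigenvector of largest eigenvalue
    # scale so that the predicted intensities match the measured ones on average
    scale = np.sqrt(np.mean(y) / np.mean(np.abs(A @ x0) ** 2))
    return scale * x0
```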

Souleymen Sahnoun;Konstantin Usevich;Pierre Comon; "Multidimensional ESPRIT for Damped and Undamped Signals: Algorithm, Computations, and Perturbation Analysis," vol.65(22), pp.5897-5910, Nov. 15, 2017. In this paper, we present and analyze the performance of the multidimensional ESPRIT (<inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D ESPRIT) method for estimating parameters of <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D superimposed damped and/or undamped exponentials. The <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D ESPRIT algorithm is based on low-rank decomposition of multilevel Hankel matrices formed by the <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D data. In order to reduce the computational complexity for large signals, we propose a fast <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D ESPRIT using truncated singular value decomposition (SVD). Then, through a first-order perturbation analysis, we derive simple expressions of the variance of the estimates in the <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D multiple-tones case. These expressions do not involve the factors of the SVD. We also derive closed-form expressions of the variances of the complex modes, frequencies, and damping factors estimates in the <inline-formula> <tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D single-tone case. Computer results are presented to show the effectiveness of the fast version of <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-D ESPRIT and verify the theoretical expressions.
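
For intuition, the 1-D special case of ESPRIT fits in a few lines: build a Hankel matrix from the data, take the signal subspace from its SVD, and exploit its shift invariance to read off the (possibly damped) modes. The window length L below is an illustrative choice:

```python
import numpy as np

def esprit_1d(x, p, L=None):
    """1-D ESPRIT sketch: estimate p complex modes z_k of
    x[n] = sum_k c_k z_k^n from the shift invariance of the
    signal subspace of a Hankel matrix built from x."""
    N = len(x)
    L = L or N // 2
    H = np.array([x[i:i + N - L + 1] for i in range(L)])  # L x (N-L+1) Hankel
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :p]                                 # signal subspace
    # shift invariance: Us[:-1] @ Phi = Us[1:]; eig(Phi) gives the modes
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    return np.linalg.eigvals(Phi)
```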

Antonio G. Marques;Santiago Segarra;Geert Leus;Alejandro Ribeiro; "Stationary Graph Processes and Spectral Estimation," vol.65(22), pp.5911-5926, Nov. 15, 2017. Stationarity is a cornerstone property that facilitates the analysis and processing of random signals in the time domain. Although time-varying signals are abundant in nature, in many practical scenarios, the information of interest resides in more irregular graph domains. This lack of regularity hampers the generalization of the classical notion of stationarity to graph signals. This paper proposes a definition of weak stationarity for random graph signals that takes into account the structure of the graph where the random process takes place, while inheriting many of the meaningful properties of the classical time domain definition. Provided that the topology of the graph can be described by a normal matrix, stationary graph processes can be modeled as the output of a linear graph filter applied to a white input. This is shown equivalent to requiring the correlation matrix to be diagonalized by the graph Fourier transform; a fact that is leveraged to define a notion of power spectral density (PSD). Properties of the graph PSD are analyzed and a number of methods for its estimation are proposed. This includes generalizations of nonparametric approaches such as periodograms, window-based average periodograms, and filter banks, as well as parametric approaches, using moving-average, autoregressive, and ARMA processes. Graph stationarity and graph PSD estimation are investigated numerically for synthetic and real-world graph signals.
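
The central modeling claim — a stationary graph process is a graph filter applied to white noise, and its covariance is diagonalized by the graph Fourier transform (GFT), with the diagonal playing the role of the PSD — can be checked numerically. The 5-node cycle graph and filter taps below are arbitrary illustrative choices:

```python
import numpy as np

# Undirected cycle graph on 5 nodes: adjacency as the (normal) shift operator S
n = 5
S = np.zeros((n, n))
for i in range(n):
    S[i, (i + 1) % n] = S[(i + 1) % n, i] = 1

# Graph filter H = h0*I + h1*S + h2*S^2 (polynomial in the shift)
h = [1.0, 0.5, 0.2]
H = h[0] * np.eye(n) + h[1] * S + h[2] * (S @ S)

# Stationary process x = H w with w white => covariance C = H H^T
C = H @ H.T

# GFT basis: eigenvectors of the shift operator
lam, V = np.linalg.eigh(S)

# The GFT diagonalizes C; the diagonal is the graph PSD h(lambda)^2
C_hat = V.T @ C @ V
psd = np.diag(C_hat)
offdiag = C_hat - np.diag(psd)
```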

Aritra Konar;Nicholas D. Sidiropoulos; "First-Order Methods for Fast Feasibility Pursuit of Non-convex QCQPs," vol.65(22), pp.5927-5941, Nov. 15, 2017. Quadratically constrained quadratic programming (QCQP) is NP-hard in its general non-convex form, yet it frequently arises in various engineering applications. Several polynomial-time approximation algorithms exist for non-convex QCQP problems (QCQPs), but their success hinges upon the ability to find at least one feasible point—which is also hard for a general problem instance. In this paper, we present a heuristic framework for computing feasible points of general non-convex QCQPs using simple first-order methods. Our approach features low computational and memory requirements, which makes it well suited for application to large-scale problems. While a priori it may appear that these benefits come at the expense of technical sophistication, rendering our approach too simple to even merit consideration for a non-convex and NP-hard problem, we provide compelling empirical evidence to the contrary. Experiments on synthetic as well as real-world instances of non-convex QCQPs reveal the surprising effectiveness of first-order methods compared to more established and sophisticated alternatives.

Mohammad Alaee Kerahroodi;Augusto Aubry;Antonio De Maio;Mohammad Mahdi Naghsh;Mahmoud Modarres-Hashemi; "A Coordinate-Descent Framework to Design Low PSL/ISL Sequences," vol.65(22), pp.5942-5956, Nov. 15, 2017. This paper is focused on the design of phase sequences with good (aperiodic) autocorrelation properties in terms of peak sidelobe level and integrated sidelobe level. The problem is formulated as a biobjective Pareto optimization forcing either a continuous or a discrete phase constraint at the design stage. An iterative procedure based on the coordinate descent method is introduced to deal with the resulting optimization problems, which are nonconvex and NP-hard in general. Each iteration of the devised method requires the solution of a nonconvex min–max problem. It is handled through a novel bisection method or an FFT-based method for the continuous and the discrete phase constraint, respectively. Additionally, a heuristic approach to initialize the procedures employing the <inline-formula> <tex-math notation="LaTeX">$l_p$</tex-math></inline-formula>-norm minimization technique is proposed. Simulation results illustrate that the proposed methodologies can outperform some counterparts, providing sequences with good autocorrelation features especially in the discrete phase/binary case.
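
The two design targets can be stated concretely: for a sequence s, the peak sidelobe level (PSL) and integrated sidelobe level (ISL) are computed from the aperiodic autocorrelation with the zero-lag peak removed. A short sketch, verified on the Barker-13 code:

```python
import numpy as np

def psl_isl(s):
    """Peak and integrated sidelobe level of the aperiodic
    autocorrelation of a sequence s."""
    r = np.convolve(s, s[::-1].conj())      # full aperiodic autocorrelation
    zero_lag = len(s) - 1                   # index of the mainlobe peak
    side = np.abs(np.delete(r, zero_lag))   # sidelobe magnitudes only
    return side.max(), np.sum(side ** 2)
```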

Tianyu Qiu;Daniel P. Palomar; "Undersampled Sparse Phase Retrieval via Majorization–Minimization," vol.65(22), pp.5957-5969, Nov. 15, 2017. In the undersampled phase retrieval problem, the goal is to recover an <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula>-dimensional complex-valued signal from only <inline-formula><tex-math notation="LaTeX">$M<N$</tex-math></inline-formula> intensity measurements without phase information. This inverse system is not only nonconvex, but also underdetermined. In this paper, we propose to exploit the sparsity in the original signal and develop two low-complexity algorithms with superior performance based on the majorization–minimization framework. The proposed algorithms are preferred to existing benchmark methods, since at each iteration a simple convex surrogate problem is solved with a closed-form solution that monotonically decreases the objective function value. When the unknown signal is sparse in the standard basis, the first algorithm C-PRIME can produce a stationary point of the corresponding nonconvex phase retrieval problem. When the unknown signal is not sparse in the standard basis, the second algorithm SC-PRIME can find a coordinatewise stationary point of the more challenging phase retrieval problem through sparse coding. Experimental results validate that the proposed algorithms have higher successful recovery rate and less normalized mean square error than existing up-to-date methods under the same setting.

Shuichi Ohno;Teruyuki Shiraki;M. Rizwan Tariq;Masaaki Nagahara; "Mean Squared Error Analysis of Quantizers With Error Feedback," vol.65(22), pp.5970-5981, Nov. 15, 2017. Quantization is a fundamental process in digital signal processing. <inline-formula><tex-math notation="LaTeX">$\Delta \Sigma$</tex-math></inline-formula> modulators are often utilized for quantization, which can be easily implemented with static uniform quantizers and error feedback filters. In this paper, we analyze the mean squared quantization error of the quantizer with error feedback including the <inline-formula><tex-math notation="LaTeX">$\Delta \Sigma$</tex-math></inline-formula> modulators. First, we study the quantizer with an ideal optimal error feedback filter that minimizes the mean squared error (MSE) of quantization. We show that the amplitude response of the optimal error feedback filter can be parameterized by one parameter. This parameterization enables us to find the optimal error feedback filter numerically. Second, the relationship between the number of bits used for the quantizer and the achievable MSE is clarified by using the optimal error feedback filter. This makes it possible to investigate the efficiency of the quantizer with the optimal error feedback filter in terms of MSE. Then, ideal optimal error feedback filters are approximated by practically implementable filters using the Yule-Walker method and the linear matrix inequality-based method. Numerical examples are provided for demonstrating our analysis and synthesis.
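
The simplest instance of a quantizer with error feedback is the first-order ΔΣ modulator: a static uniform quantizer whose previous quantization error is fed back and added to the next input, shaping the noise by 1 − z⁻¹. A minimal sketch; the step size is an illustrative choice:

```python
import numpy as np

def delta_sigma(x, step=0.1):
    """First-order Delta-Sigma quantization: a static uniform mid-tread
    quantizer with error feedback H(z) = z^{-1} (noise shaping 1 - z^{-1})."""
    y = np.empty_like(x)
    e = 0.0                                # fed-back quantization error
    for n, xn in enumerate(x):
        u = xn + e                         # add previous error (feedback z^-1)
        y[n] = step * np.round(u / step)   # static uniform quantizer
        e = u - y[n]                       # new quantization error
    return y
```

Because the per-sample error telescopes, the running average of the output tracks the input far more tightly than the step size would suggest.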

Axel Finke;Sumeetpal S. Singh; "Approximate Smoothing and Parameter Estimation in High-Dimensional State-Space Models," vol.65(22), pp.5982-5994, Nov. 15, 2017. We present approximate algorithms for performing smoothing in a class of high-dimensional state-space models via sequential Monte Carlo methods (particle filters). In high dimensions, a prohibitively large number of Monte Carlo samples (particles), growing exponentially in the dimension of the state space, are usually required to obtain a useful smoother. Employing blocking approximations, we exploit the spatial ergodicity properties of the model to circumvent this curse of dimensionality. We thus obtain approximate smoothers that can be computed recursively in time and parallel in space. First, we show that the bias of our blocked smoother is bounded uniformly in the time horizon and in the model dimension. We then approximate the blocked smoother with particles and derive the asymptotic variance of idealized versions of our blocked particle smoother to show that variance is no longer adversely affected by the dimension of the model. Finally, we employ our method to successfully perform maximum-likelihood estimation via stochastic gradient-ascent and stochastic expectation–maximization algorithms in a 100-dimensional state-space model.

Qingjiang Shi;Haoran Sun;Songtao Lu;Mingyi Hong;Meisam Razaviyayn; "Inexact Block Coordinate Descent Methods for Symmetric Nonnegative Matrix Factorization," vol.65(22), pp.5995-6008, Nov. 15, 2017. Symmetric nonnegative matrix factorization (SNMF) is equivalent to computing a symmetric nonnegative low rank approximation of a data similarity matrix. It inherits the good data interpretability of the well-known nonnegative matrix factorization technique and has better ability of clustering nonlinearly separable data. In this paper, we focus on the algorithmic aspect of the SNMF problem and propose simple inexact block coordinate descent methods to address the problem, leading to both serial and parallel algorithms. The proposed algorithms have guaranteed convergence to stationary solutions and can efficiently handle large-scale and/or sparse SNMF problems. Extensive simulations verify the effectiveness of the proposed algorithms compared to recent state-of-the-art algorithms.
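
For illustration only, the SNMF objective ||A − HHᵀ||²_F with H ≥ 0 can be attacked with plain projected gradient descent. This stand-in is not the paper's inexact block coordinate descent scheme, and the step size and iteration count are ad hoc choices:

```python
import numpy as np

def snmf(A, r, iters=2000, lr=0.01, seed=0):
    """SNMF sketch: minimize ||A - H H^T||_F^2 subject to H >= 0
    by projected gradient descent (an illustrative stand-in for
    inexact block-coordinate updates)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.uniform(0, 1, (n, r))          # positive random initialization
    for _ in range(iters):
        G = 4 * (H @ H.T - A) @ H          # gradient of the objective
        H = np.maximum(H - lr * G, 0.0)    # projected (nonnegative) step
    return H
```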

Michael Tschannen;Helmut Bölcskei; "Robust Nonparametric Nearest Neighbor Random Process Clustering," vol.65(22), pp.6009-6023, Nov. 15, 2017. We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their generative models without prior knowledge of the model statistics and the number of generative models. Two algorithms, both using the L1-distance between estimated power spectral densities (PSDs) as a measure of dissimilarity, are analyzed. The first one, termed nearest neighbor process clustering (NNPC), relies on partitioning the nearest neighbor graph of the observations via spectral clustering. The second algorithm, simply referred to as k-means, consists of a single k-means iteration with farthest point initialization and was considered before in the literature, albeit with a different dissimilarity measure. We prove that both algorithms succeed with high probability in the presence of noise and missing entries, and even when the generative process PSDs overlap significantly, all provided that the observation length is sufficiently large. Our results quantify the tradeoff between the overlap of the generative process PSDs, the observation length, the fraction of missing entries, and the noise variance. Finally, we provide extensive numerical results for synthetic and real data and find that NNPC outperforms state-of-the-art algorithms in human motion sequence clustering.
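
The dissimilarity measure both algorithms rely on, the L1-distance between estimated PSDs, can be sketched as follows; the AR(1) generative models, segment-averaged periodogram, and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def psd(x, nseg=16):
    """Averaged-periodogram PSD estimate, normalized to unit L1 mass."""
    segs = np.array_split(x, nseg)
    p = np.mean([np.abs(np.fft.rfft(s - s.mean())) ** 2 for s in segs], axis=0)
    return p / p.sum()

def l1_dist(x, y):
    """L1 distance between PSD estimates of two observations."""
    return np.abs(psd(x) - psd(y)).sum()

def ar1(a, n=4096):
    """One observation of an AR(1) process x[t] = a x[t-1] + noise."""
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + e[t]
    return x

# Three observations from each of two generative models: the pairwise
# L1-PSD distances separate the two classes cleanly.
obs = [ar1(0.9) for _ in range(3)] + [ar1(-0.5) for _ in range(3)]
D = np.array([[l1_dist(u, v) for v in obs] for u in obs])
```

NNPC would build a nearest neighbor graph from D and spectrally cluster it; the toy distances already show within-class distances well below between-class ones.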

Shiqian Chen;Xingjian Dong;Zhike Peng;Wenming Zhang;Guang Meng; "Nonlinear Chirp Mode Decomposition: A Variational Method," vol.65(22), pp.6024-6037, Nov. 15, 2017. Variational mode decomposition (VMD), a recently introduced method for adaptive data analysis, has attracted much attention in various fields. However, VMD is formulated on the assumption that the modes of the signal model are narrow-band. To analyze wide-band nonlinear chirp signals (NCSs), we present an alternative method called variational nonlinear chirp mode decomposition (VNCMD). The VNCMD is developed from the fact that a wide-band NCS can be transformed into a narrow-band signal by using demodulation techniques. Our decomposition problem is thus formulated as an optimal demodulation problem, which is efficiently solved by the alternating direction method of multipliers. Our method can be viewed as a time–frequency filter bank that concurrently extracts all the signal modes. Simulated and real data examples are provided showing the effectiveness of the VNCMD in analyzing NCSs containing close or even crossed modes.
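
The demodulation idea behind VNCMD can be illustrated with a known-phase toy example (the chirp parameters are assumptions, and VNCMD estimates the demodulation phases jointly rather than assuming them known):

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)

# Wide-band linear chirp: instantaneous frequency 50 + 100 t (Hz).
phase = 2 * np.pi * (50 * t + 50 * t ** 2)
s = np.cos(phase)

# Demodulating with the matching phase collapses the wide-band chirp
# into a narrow-band component concentrated at DC.
z = s * np.exp(-1j * phase)
spec = np.abs(np.fft.fft(z))
```

After demodulation the dominant spectral mass sits at DC, so narrow-band tools (as in VMD) become applicable to the originally wide-band signal.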

## IEEE Signal Processing Letters - new TOC (2017 September 21) [Website]

Cheng-Yaw Low;Andrew Beng-Jin Teoh;Kar-Ann Toh; "Stacking PCANet +: An Overly Simplified ConvNets Baseline for Face Recognition," vol.24(11), pp.1581-1585, Nov. 2017. The principal component analysis network (PCANet) is asserted as a parsimonious stacking-based convolutional neural networks (CNNs) instance for generic object recognition including face. However, to be regarded a CNN resemblance, PCANet lacks a nonlinearity in between two successive convolutional layers. The multilayer PCANet (by neglecting the nonlinearity pre-requisite) is also deemed far-fetched for the network depth beyond two, due to feature dimensionality explosion. We thus devise a PCANet alternative, dubbed PCANet+ in this letter, to untangle these constraints. To be more precise, conforming to the CNN essentials, PCANet+ conveys a mean-pooling unit manipulating each feature map. On top of that, we streamline the PCANet topology to permit a deep construction with an expanded PCA filter ensemble. We scrutinize the PCANet+ performance using face recognition technology and other two faces in the wild datasets, namely, labeled faces in the wild and YouTube faces. The experimental results reveal that the PCANet+ descriptor prevails over its predecessor and other stacking-based descriptors in face identification and verification, serving a baseline for ConvNets.

## IEEE Journal of Selected Topics in Signal Processing - new TOC (2017 September 21) [Website]

* "Frontcover," vol.11(6), pp.C1-C1, Sept. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Journal of Selected Topics in Signal Processing publication information," vol.11(6), pp.C2-C2, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Pascal Frossard;Pier Luigi Dragotti;Antonio Ortega;Michael G. Rabbat;Alejandro Ribeiro; "Introduction to the IEEE Journal on Selected Topics in Signal Processing and IEEE Transactions on Signal and Information Processing Over Networks Joint Special Issue on Graph Signal Processing," vol.11(6), pp.771-773, Sept. 2017. The papers in this special issue are intended to address some of the main research challenges in Graph Signal Processing by presenting a collection of the latest advances in the domain. These papers examine key representation, learning and processing aspects for signals living on graphs and networks, as well as new methods and applications in graph signal processing. Numerous applications rely on the processing of high dimensional data that reside on irregular or otherwise unordered structures that are naturally modeled as networks. The need for new tools to process such data has led to the emergence of the field of graph signal processing, which merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process signals on structures such as graphs. This important new paradigm in signal processing research, coupled with its numerous applications in very different domains, has fueled the rapid development of an inter-disciplinary research community that has been working on theoretical aspects of graph signal processing and applications to diverse problems such as big data analysis, coding and compression of 3D point clouds, biological data processing, and brain network analysis.

Xiaoran Yan;Brian M. Sadler;Robert J. Drost;Paul L. Yu;Kristina Lerman; "Graph Filters and the Z-Laplacian," vol.11(6), pp.774-784, Sept. 2017. In network science, the interplay between dynamical processes and the underlying topologies of complex systems has led to a diverse family of models with different interpretations. In graph signal processing, this is manifested in the form of different graph shifts and their induced algebraic systems. In this paper, we propose the unifying Z-Laplacian framework, whose instances can act as graph shift operators. As a generalization of the traditional graph Laplacian, the Z-Laplacian spans the space of all possible Z-matrices, i.e., real square matrices with nonpositive off-diagonal entries. We show that the Z-Laplacian can model general continuous-time dynamical processes, including information flows and epidemic spreading on a given graph. It is also closely related to general nonnegative graph filters in the discrete time domain. We showcase its flexibility by considering two applications. First, we consider a wireless communications networking problem modeled with a graph, where the framework can be applied to model the effects of the underlying communications protocol and traffic. Second, we examine a structural brain network from the perspective of low- to high-frequency connectivity.
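
As a minimal illustration of a graph shift and a polynomial graph filter (the 4-cycle graph and filter coefficients are assumptions; the combinatorial Laplacian used here is one instance of the Z-matrix family the paper generalizes):

```python
import numpy as np

# 4-cycle graph; its combinatorial Laplacian is a Z-matrix
# (nonpositive off-diagonal entries) and serves as the shift operator.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A

def graph_filter(L, x, coeffs):
    """Apply the polynomial filter h(L) = sum_k coeffs[k] * L^k to x."""
    y = np.zeros_like(x)
    P = np.eye(len(x))
    for c in coeffs:
        y = y + c * (P @ x)
        P = P @ L
    return y

x = np.array([1.0, 0.0, 1.0, 0.0])    # maximally oscillating graph signal
y = graph_filter(L, x, [1.0, -0.25])  # h(lambda) = 1 - lambda/4: low-pass
```

On the 4-cycle the alternating component is a Laplacian eigenvector with eigenvalue 4, so h(4) = 0 removes it, leaving the constant component.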

Joya A. Deri;José M. F. Moura; "Spectral Projector-Based Graph Fourier Transforms," vol.11(6), pp.785-795, Sept. 2017. This paper considers the definition of the graph Fourier transform (GFT) and of the spectral decomposition of graph signals. Current literature does not address the lack of unicity of the GFT. The GFT is the mapping from the signal set into its representation by a direct sum of irreducible shift invariant subspaces: 1) this decomposition may not be unique; and 2) there is freedom in the choice of basis for each component subspace. These issues are particularly relevant when the graph shift has repeated eigenvalues as is the case in many real-world applications; by ignoring them, there is no way of knowing if different researchers are using the same definition of the GFT and whether their results are comparable or not. The paper presents how to resolve the above degrees of freedom. We develop a quasi-coordinate free definition of the GFT and graph spectral decomposition of graph signals that we implement through oblique spectral projectors. We present properties of the GFT and of the spectral projectors and discuss a generalized Parseval's inequality. An illustrative example for a large real-world urban traffic dataset is provided.
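
The baseline construction this paper refines can be sketched as the textbook GFT for an undirected graph, i.e., projection onto an orthonormal Laplacian eigenbasis (the small graph and signal are assumptions; the paper's oblique spectral projectors address the ambiguities this construction leaves open when eigenvalues repeat):

```python
import numpy as np

# Graph Fourier transform via the eigendecomposition of a symmetric
# graph Laplacian.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)    # columns of U form the graph Fourier basis

x = np.array([3.0, -1.0, 2.0, 0.5])
xhat = U.T @ x                # forward GFT
xrec = U @ xhat               # inverse GFT
```

Because U is orthonormal, the transform is invertible and energy-preserving (Parseval), which is the property the paper's generalized Parseval inequality extends.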

Stefania Sardellitti;Sergio Barbarossa;Paolo Di Lorenzo; "On the Graph Fourier Transform for Directed Graphs," vol.11(6), pp.796-811, Sept. 2017. The analysis of signals defined over a graph is relevant in many applications, such as social and economic networks, big data or biological networks, and so on. A key tool for analyzing these signals is the so-called graph Fourier transform (GFT). Alternative definitions of GFT have been suggested in the literature, based on the eigen-decomposition of either the graph Laplacian or adjacency matrix. In this paper, we address the general case of directed graphs and we propose an alternative approach that builds the graph Fourier basis as the set of orthonormal vectors that minimize a continuous extension of the graph cut size, known as the Lovász extension. To cope with the nonconvexity of the problem, we propose two alternative iterative optimization methods, properly devised for handling orthogonality constraints. Finally, we extend the method to minimize a continuous relaxation of the balanced cut size. The formulated problem is again nonconvex, and we propose an efficient solution method based on an explicit-implicit gradient algorithm.

David B. H. Tay;Yuichi Tanaka;Akie Sakiyama; "Almost Tight Spectral Graph Wavelets With Polynomial Filters," vol.11(6), pp.812-824, Sept. 2017. The construction of spectral filters for graph wavelet transforms is addressed in this paper. Both the undecimated and decimated cases will be considered. The filter functions are polynomials and can be implemented efficiently without the need for any eigendecomposition, which is computationally expensive for large graphs. Polynomial filters also have the advantage of the vertex localization property. The construction is achieved by designing suitable transformations that are used on traditional multirate filter banks. It will be shown how the classical quadrature-mirror-filters and linear phase, critically/over-sampled filter banks can be used to construct spectral graph wavelets that are almost tight. A variety of design examples will be given to show the versatility of the design technique.
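
A standard way to apply a spectral filter with polynomials only, avoiding eigendecomposition, is Chebyshev approximation; the sketch below (the kernel g(lambda) = exp(-lambda), the graph, and the truncation order are assumptions, not the paper's filter designs) checks the polynomial implementation against exact spectral filtering:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], float)
L = np.diag(A.sum(1)) - A
lmax = 6.0                       # crude spectral bound: 2 * max degree

# Chebyshev coefficients of g(lambda) = exp(-lambda) mapped to [-1, 1].
K, N = 20, 64
theta = np.pi * (np.arange(N) + 0.5) / N
h = np.exp(-(lmax / 2) * (np.cos(theta) + 1))
c = np.array([2.0 / N * np.sum(h * np.cos(k * theta)) for k in range(K)])

# Apply g(L) x through the three-term Chebyshev recursion: only
# matrix-vector products, no eigendecomposition.
Lt = (2.0 / lmax) * L - np.eye(4)
x = np.array([1.0, -2.0, 0.5, 3.0])
T_prev, T_curr = x, Lt @ x
y = 0.5 * c[0] * T_prev + c[1] * T_curr
for k in range(2, K):
    T_prev, T_curr = T_curr, 2 * (Lt @ T_curr) - T_prev
    y = y + c[k] * T_curr

# Exact filtering via eigendecomposition, for comparison only.
lam, U = np.linalg.eigh(L)
y_exact = U @ (np.exp(-lam) * (U.T @ x))
```

The recursion also preserves vertex localization: a degree-K polynomial of L only mixes values within K hops.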

Hilmi E. Egilmez;Eduardo Pavez;Antonio Ortega; "Graph Learning From Data Under Laplacian and Structural Constraints," vol.11(6), pp.825-841, Sept. 2017. Graphs are fundamental mathematical structures used in various fields to represent data, signals, and processes. In this paper, we propose a novel framework for learning/estimating graphs from data. The proposed framework includes (i) formulation of various graph learning problems, (ii) their probabilistic interpretations, and (iii) associated algorithms. Specifically, graph learning problems are posed as the estimation of graph Laplacian matrices from some observed data under given structural constraints (e.g., graph connectivity and sparsity level). From a probabilistic perspective, the problems of interest correspond to maximum a posteriori parameter estimation of Gaussian-Markov random field models, whose precision (inverse covariance) is a graph Laplacian matrix. For the proposed graph learning problems, specialized algorithms are developed by incorporating the graph Laplacian and structural constraints. The experimental results demonstrate that the proposed algorithms outperform the current state-of-the-art methods in terms of accuracy and computational efficiency.
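
The probabilistic interpretation above can be illustrated naively: sample from a GMRF whose precision is a (shifted) graph Laplacian, then read edges off the negative off-diagonal entries of the empirical precision matrix (this is not the paper's constrained estimation algorithm; the graph, shift, threshold, and sample size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: path graph 0-1-2-3 and its Laplacian.
A_true = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], float)
L_true = np.diag(A_true.sum(1)) - A_true

# Sample smooth signals from a GMRF whose precision is L + eps*I.
eps = 0.5
cov = np.linalg.inv(L_true + eps * np.eye(4))
X = rng.multivariate_normal(np.zeros(4), cov, size=20000)

# Naive estimator: edges show up as strongly negative off-diagonal
# entries of the empirical precision (inverse covariance) matrix.
prec = np.linalg.inv(np.cov(X.T))
A_est = (prec < -0.5).astype(float)
np.fill_diagonal(A_est, 0)
```

The paper's algorithms replace this unconstrained inversion with maximum a posteriori estimation under explicit Laplacian and connectivity/sparsity constraints.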

Peter Berger;Gabor Hannak;Gerald Matz; "Graph Signal Recovery via Primal-Dual Algorithms for Total Variation Minimization," vol.11(6), pp.842-855, Sept. 2017. We consider the problem of recovering a smooth graph signal from noisy samples taken on a subset of graph nodes. The smoothness of the graph signal is quantified in terms of total variation. We formulate the signal recovery task as a convex optimization problem that minimizes the total variation of the graph signal while controlling its global or node-wise empirical error. We propose a first-order primal-dual algorithm to solve these total variation minimization problems. A distributed implementation of the algorithm is devised to handle large-dimensional applications efficiently. We use synthetic and real-world data to extensively compare the performance of our approach with state-of-the-art methods.
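
A generic first-order primal-dual scheme for graph TV denoising can be sketched with the Chambolle-Pock algorithm (used here as a stand-in for the paper's algorithm; the path graph, regularization weight, and step sizes are assumptions):

```python
import numpy as np

B = np.array([[-1, 1, 0, 0],          # edge-incidence matrix of path 0-1-2-3
              [0, -1, 1, 0],
              [0, 0, -1, 1]], float)

lam = 0.5                             # TV regularization weight

def tv_denoise(y, tau=0.4, sigma=0.4, iters=1000):
    """Solve min_x 0.5 ||x - y||^2 + lam * ||B x||_1 (graph TV denoising)."""
    x, xbar, p = y.copy(), y.copy(), np.zeros(B.shape[0])
    for _ in range(iters):
        p = np.clip(p + sigma * (B @ xbar), -lam, lam)       # dual prox step
        x_new = (x - tau * (B.T @ p) + tau * y) / (1 + tau)  # primal prox step
        xbar = 2 * x_new - x                                 # extrapolation
        x = x_new
    return x

y = np.array([0.0, 0.1, 0.9, 1.0])    # noisy step signal
x_hat = tv_denoise(y)
```

The step sizes satisfy tau * sigma * ||B||^2 < 1, the standard convergence condition; at the solution the total variation can never exceed that of the noisy input.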

Daniel Romero;Vassilis N. Ioannidis;Georgios B. Giannakis; "Kernel-Based Reconstruction of Space-Time Functions on Dynamic Graphs," vol.11(6), pp.856-869, Sept. 2017. Graph-based methods pervade the inference toolkits of numerous disciplines including sociology, biology, neuroscience, physics, chemistry, and engineering. A challenging problem encountered in this context pertains to determining the attributes of a set of vertices given those of another subset at possibly different time instants. Leveraging spatiotemporal dynamics can drastically reduce the number of observed vertices, and hence the sampling cost. Alleviating the limited flexibility of the existing approaches, the present paper broadens the kernel-based graph function estimation framework to reconstruct time-evolving functions over possibly time-evolving topologies. This approach inherits the versatility and generality of kernel-based methods, for which no knowledge on distributions or second-order statistics is required. Systematic guidelines are provided to construct two families of space-time kernels with complementary strengths: the first facilitates judicious control of regularization on a space-time frequency plane, whereas the second accommodates time-varying topologies. Batch and online estimators are also put forth. The latter comprise a novel kernel Kalman filter, developed to reconstruct space-time functions at affordable computational cost. Numerical tests with real datasets corroborate the merits of the proposed methods relative to competing alternatives.
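
The kernel-based estimation framework can be illustrated in its simplest static form: kernel ridge regression on a graph with a Laplacian-based kernel (the graph, kernel shift, and regularization weight are assumptions; the paper's space-time kernels extend this idea to dynamic topologies):

```python
import numpy as np

# Reconstruct a function on all vertices from samples on a subset,
# using a graph kernel that favors smooth functions.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)    # path graph 0-1-2-3
L = np.diag(A.sum(1)) - A
K = np.linalg.inv(L + 0.1 * np.eye(4))  # Laplacian-based graph kernel

S = [0, 3]                          # observed vertices
y_S = np.array([1.0, 1.6])          # observed values
mu = 1e-3                           # ridge regularization weight

alpha = np.linalg.solve(K[np.ix_(S, S)] + mu * np.eye(len(S)), y_S)
f_hat = K[:, S] @ alpha             # estimate on every vertex
```

No distributional or second-order knowledge is needed, only the kernel, which is the versatility the abstract highlights.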

Kai Qiu;Xianghui Mao;Xinyue Shen;Xiaohan Wang;Tiejian Li;Yuantao Gu; "Time-Varying Graph Signal Reconstruction," vol.11(6), pp.870-883, Sept. 2017. Signal processing on graphs is an emerging research field dealing with signals living on an irregular domain that is captured by a graph, and has been applied to sensor networks, machine learning, climate analysis, etc. Existing works on sampling and reconstruction of graph signals mainly studied static bandlimited signals. However, many real-world graph signals are time-varying, and they evolve smoothly, so instead of the signals themselves being bandlimited or smooth on graph, it is more reasonable that their temporal differences are smooth on graph. In this paper, a new batch reconstruction method of time-varying graph signals is proposed by exploiting the smoothness of the temporal difference signals, and the uniqueness of the solution to the corresponding optimization problem is theoretically analyzed. Furthermore, driven by practical applications faced with real-time requirements, huge size of data, lack of computing center, or communication difficulties between two nonneighboring vertices, an online distributed method is proposed by applying local properties of the temporal difference operator and the graph Laplacian matrix. Experiments on a variety of synthetic and real-world datasets demonstrate the excellent performance of the proposed methods.
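
The core idea, that the temporal difference rather than the signal itself is smooth on the graph, admits a small closed-form sketch: fix the sampled differences and minimize d^T L d over the unobserved ones (the graph and sample set are assumptions; the paper's batch and online methods are more general):

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)   # 4-cycle graph
L = np.diag(A.sum(1)) - A

x_prev = np.array([1.0, 2.0, 3.0, 4.0])
x_true = x_prev + 0.5           # a perfectly smooth temporal difference
S = [0, 2]                       # vertices sampled at time t
U = [1, 3]                       # unobserved vertices at time t

d_S = x_true[S] - x_prev[S]
# minimize d^T L d over d_U with d_S fixed: solve L_UU d_U = -L_US d_S
d_U = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, S)] @ d_S)
x_rec = x_prev.copy()
x_rec[S] = x_true[S]
x_rec[U] = x_prev[U] + d_U
```

Because L_UU and L_US involve only local neighborhoods, this update also hints at why the distributed online variant in the paper is possible.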

Felipe Petroski Such;Shagan Sah;Miguel Alexander Dominguez;Suhas Pillai;Chao Zhang;Andrew Michael;Nathan D. Cahill;Raymond Ptucha; "Robust Spatial Filtering With Graph Convolutional Neural Networks," vol.11(6), pp.884-896, Sept. 2017. Convolutional neural networks (CNNs) have recently led to incredible breakthroughs on a variety of pattern recognition problems. Banks of finite-impulse response filters are learned on a hierarchy of layers, each contributing more abstract information than the previous layer. The simplicity and elegance of the convolutional filtering process makes them perfect for structured problems, such as image, video, or voice, where vertices are homogeneous in the sense of number, location, and strength of neighbors. The vast majority of classification problems, for example, in the pharmaceutical, homeland security, and financial domains, are unstructured. As these problems are formulated into unstructured graphs, the heterogeneity of these problems, such as number of vertices, number of connections per vertex, and edge strength, cannot be tackled with standard convolutional techniques. We propose a novel neural learning framework that is capable of handling both homogeneous and heterogeneous data while retaining the benefits of traditional CNN successes. Recently, researchers have proposed variations of CNNs that can handle graph data. In an effort to create learnable filter banks of graphs, these methods either induce constraints on the data or require preprocessing. As opposed to spectral methods, our framework, which we term Graph-CNNs, defines filters as polynomials of functions of the graph adjacency matrix. Graph-CNNs can handle both heterogeneous and homogeneous graph data, including graphs having entirely different vertex or edge sets. We perform experiments to validate the applicability of Graph-CNNs to a variety of structured and unstructured classification problems and demonstrate state-of-the-art results on document and molecule classification problems.

Xavier Desquesnes;Abderrahim Elmoataz; "Nonmonotonic Front Propagation on Weighted Graphs With Applications in Image Processing and High-Dimensional Data Classification," vol.11(6), pp.897-907, Sept. 2017. In this paper, we propose an adaptation of the partial difference equation (PDE) level set method for nonmonotonic front propagation on weighted graphs. This adaptation leads to a PDE whose coefficients depend on the data geometry. Our motivation is to extend its application to any discrete data that can be represented by a weighted graph. This paper builds on several of our preliminary works and introduces significant improvements: a simplified and explicit representation of a front on a weighted graph; a new formulation of the level set equation on weighted graphs, considering both time-dependent and stationary versions of this equation in the case of signed velocities; and an efficient algorithm that generalizes fast marching to graphs with signed velocities. We propose to use this method for image processing and for high-dimensional data classification.

Muyuan Fang;Yu-Jin Zhang; "Query Adaptive Fusion for Graph-Based Visual Reranking," vol.11(6), pp.908-917, Sept. 2017. Developing effective fusion schemes for multiple feature types has always been a hot issue in content-based image retrieval. In this paper, we propose a novel method for graph-based visual reranking, which addresses two major limitations in existing methods. First, in the phase of graph construction, our method introduces fine-grained measurements of image relations by assigning edge weights using normalized similarity. Furthermore, in the phase of graph fusion, rather than summing up all the graphs for different single features indiscriminately, we propose to estimate the reliability of each feature through a statistical model and selectively fuse the single graphs via query-adaptive fusion weights. Fusion methods for both labeled and unlabeled data are proposed, and their performance is evaluated and compared experimentally. Our method is evaluated on five public datasets, fusing three complementary features: scale-invariant feature transform (SIFT), CNN, and hue, saturation, value (HSV) features. Experimental results demonstrate the effectiveness of the proposed method, which yields superior results to the competing methods.

* "IEEE Journal of Selected Topics in Signal Processing information for authors," vol.11(6), pp.918-919, Sept. 2017.* These instructions give guidelines for preparing papers for this publication.

* "Become a published author in 4 to 6 weeks," vol.11(6), pp.920-920, Sept. 2017.* Advertisement, IEEE.

* "IEEE Signal Processing Society Information," vol.11(6), pp.C3-C3, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

## IEEE Signal Processing Magazine - new TOC (2017 September 21) [Website]

* "Front Cover," vol.34(5), pp.C1-C1, Sept. 2017.* Presents the front cover for this issue of the publication.

* "ICASSP," vol.34(5), pp.C2-C2, Sept. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "Masthead," vol.34(5), pp.2-2, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Min Wu; "Cameras, Music, and Synergy in Signal Processing [From the Editor]," vol.34(5), pp.3-4, Sept. 2017. Presents the editorial for this issue of the publication.

* "IEEE Signal Processing Cup 2018," vol.34(5), pp.4-4, Sept. 2017.* Presents information on the IEEE Signal Processing Cup 2018.

Rabab Ward; "Careers in Signal Processing: A Diverse Field Impacting the Future [President's Message]," vol.34(5), pp.5-11, Sept. 2017. Presents the President's message for this issue of the publication.

* "Election of Regional Directors-at-Large and Members-at-Large [Society News]," vol.34(5), pp.6-7, Sept. 2017.* Presents SPS society regional election results.

John Edwards; "Building the Foundation for "Generation Robot": Signal Processing is Helping to Make Robots a Part of Everyday Life [Special Reports]," vol.34(5), pp.8-11, Sept. 2017. Discusses how signal processing is an integral part of robot design and use. As robots transition out of the fictional world and into everyday life, signal processing is playing a major role in their development. In key areas, such as control, sensing, and modeling, signal processing is helping robots become our trusted new home, road, and workplace partners. Signal processing, coupled with the interface itself, is what makes our interface more effective than other commonly used robot teleoperation interfaces. By analyzing and incorporating information from a depth image, or a point cloud, our interface can focus the operator’s interaction around relevant commands for the robot. The point-and-click control interface’s key element is an algorithm that calculates and ranks grasps based on their effectiveness for general manipulation.

Pau Closas;Marco Luise;Jose-Angel Avila-Rodriguez;Christopher Hegarty;Jiyun Lee; "Advances in Signal Processing for GNSSs [From the Guest Editors]," vol.34(5), pp.12-15, Sept. 2017. Examines global navigation satellite systems (GNSS). The papers in this special issue address the design of special GNSS signals, a topic of particular interest in the past and still of great relevance today. It continues with the discussion of effective techniques for receiver performance enhancement and finishes with the analysis of some vulnerabilities. GNSS technology today is ubiquitous in many transversal infrastructures and has become the backbone of all applications where precise position, navigation, and timing (PNT) of user equipment is required. Moreover, GNSS is the pervasive PNT technology in outdoor environments, where its performance, coverage, and reliability exceeds that of other technical solutions.

Zheng Yao;Mingquan Lu; "Signal Multiplexing Techniques for GNSS: The Principle, Progress, and Challenges Within a Uniform Framework," vol.34(5), pp.16-26, Sept. 2017. Signal multiplexing techniques are those that enable the efficient transmission of multiple signals through a single modulator, up converter, power amplifier chain, and antenna aperture, without mutual interference. In the construction of new-generation global navigation satellite systems (GNSSs), signal multiplexing has encountered many challenges as the number of signals to be multiplexed increases and signal elements become more diverse and complex. In the past 15 years, many novel and advanced constant envelope multiplexing (CEM) techniques have been proposed. However, increased requirements for adaptability and flexibility of signal multiplexing techniques for future evolutional GNSSs have resulted in the need for further improvement in generalized CEM design theory. The uniform CEM mathematical framework and more general CEM design approaches can offer a good starting point to meet these challenges. In this article, we provide a comprehensive tutorial on the important concepts, recent advances, representative applications, and the remaining challenges of GNSS signal multiplexing techniques. To provide readers with a global overview of multiplexing techniques in GNSS and to foster new research ideas, a mathematical cornerstone and intrinsic conceptual relationship of assorted multiplexing techniques along with the high adaptability and high flexibility challenges of CEM design will be two key discussions.

Davide Margaria;Beatrice Motella;Marco Anghileri;Jean-Jacques Floch;Ignacio Fernandez-Hernandez;Matteo Paonni; "Signal Structure-Based Authentication for Civil GNSSs: Recent Solutions and Perspectives," vol.34(5), pp.27-37, Sept. 2017. A key aspect to be considered in the design of new generations of global navigation satellite system (GNSS) signals and receivers is a proper partition between system and receiver contribution to the robustness against spoofing attacks. This article maps the current threats and vulnerabilities of GNSS receivers and presents a survey of recent defenses, focusing on cryptographic solutions suitable to authenticate civil signals. Future perspectives and trends are analyzed, with particular emphasis on spreading code authentication techniques, considered as a key innovation for the next generation of civil GNSS signals. An assessment of the robustness and feasibility of the various presented solutions is also provided, analyzing in particular the impact on both current and future receivers.

Sana U. Qaisar;Craig R. Benson; "Processing Cost of Doppler Search in GNSS Signal Acquisition: Measuring Doppler shift in navigation satellite signals," vol.34(5), pp.53-58, Sept. 2017. To acquire a global navigation satellite system (GNSS) signal, the receiver must determine the Doppler offset experienced by the signal. The time and resources consumed in the Doppler search contribute to the cost and efficiency of the receiver, which is fundamentally dependent on signal processing techniques adapted for the search.
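
A serial Doppler search can be sketched as follows: wipe off each candidate frequency and correlate against the code replica, keeping the strongest bin (the complex-baseband model, stand-in code, bin spacing, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

fs, n = 8192.0, 8192               # 1 s of complex baseband samples
t = np.arange(n) / fs
code = rng.choice([-1.0, 1.0], n)  # stand-in for a PRN spreading code

f_dopp = 300.0                     # true Doppler offset (Hz)
rx = code * np.exp(2j * np.pi * f_dopp * t) \
     + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Serial Doppler search: for each candidate bin, remove the candidate
# frequency and correlate against the code replica.
candidates = np.arange(-500.0, 501.0, 100.0)
power = [np.abs(np.sum(rx * code * np.exp(-2j * np.pi * f * t)))
         for f in candidates]
f_est = candidates[int(np.argmax(power))]
```

The cost the article analyzes is exactly this loop over Doppler bins (times the code-phase search); FFT-based methods evaluate many bins at once to reduce it.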

Seung-Hyun Kong; "High Sensitivity and Fast Acquisition Signal Processing Techniques for GNSS Receivers: From fundamentals to state-of-the-art GNSS acquisition technologies," vol.34(5), pp.59-71, Sept. 2017. Higher sensitivity and faster acquisition can be two conflicting goals for a global navigation satellite system (GNSS) acquisition function, and both of the goals must be considered in the development of GNSS signal processing techniques to meet the demands for location-based services (LBSs) in GNSS-challenged environments. This article introduces the fundamentals of GNSS acquisition functions and various GNSS acquisition techniques for new GNSS signals and investigates recent acquisition techniques achieving high sensitivity and fast acquisition. It provides useful information for engineers who study state-of-the-art GNSS signal acquisition techniques and want to understand the challenges involved in improving GNSS acquisition sensitivity and acquisition time.

Pau Closas;Adria Gusi-Amigo; "Direct Position Estimation of GNSS Receivers: Analyzing main results, architectures, enhancements, and challenges," vol.34(5), pp.72-84, Sept. 2017. This article discusses theoretical and practical aspects of direct position estimation (DPE) of global navigation satellite system (GNSS) receivers. DPE is an alternative to the well-established two-step positioning method, in which time delays and Doppler shifts to the visible satellites are estimated and then used to solve a geometric problem for the receiver's position. In contrast, DPE infers position directly from the sampled data without intermediate steps. This approach has been seen, both theoretically and experimentally, to substantially increase the receiver's accuracy and reliability.

Moeness G. Amin;Daniele Borio;Yimin D. Zhang;Lorenzo Galleani; "Time-Frequency Analysis for GNSSs: From interference mitigation to system monitoring," vol.34(5), pp.85-95, Sept. 2017. In this article, we discuss the important role of time-frequency (TF) signal representations in enhancing global navigation satellite system (GNSS) receiver performance. Both linear transforms and quadratic TF distributions (QTFDs) are considered. We review recent advances of antijam techniques that exploit the distinction in the TF signatures between the desired and undesired signals, enabling effective jammer excision with minimum distortion to the navigation signals. The characterization of jammers by their instantaneous frequencies (IFs) lends itself to sparse structures in the TF domain and, as such, invites compressive sensing (CS) and sparse reconstruction to aid in joint-variable domain jamming localization and suppression. Furthermore, the integration of the spatial domain with the TFDs, through the use of multiantenna receivers, permits the applications of space-time processing for effective jamming mitigation. The article also describes the fundamental role of TFDs in monitoring the performance of new GNSSs, including satellite clocks and ionospheric scintillation data. Real GNSS data collected in the presence of jamming are used to demonstrate the effectiveness of TF-based antijam approaches.

Jiyun Lee;Y.T. Jade Morton;Jinsil Lee;Hee-Seung Moon;Jiwon Seo; "Monitoring and Mitigation of Ionospheric Anomalies for GNSS-Based Safety Critical Systems: A review of up-to-date signal processing techniques," vol.34(5), pp.96-110, Sept. 2017. The ionosphere has been the most challenging source of error to mitigate within the community of global navigation satellite system (GNSS)-based safety-critical systems. Users of those systems should be assured that the difference between an unknown true position and a system-derived position estimate is bounded with an extremely high degree of confidence. One of the major concerns for meeting this requirement, known as integrity, is ionosphere-induced error or discontinuity of GNSS signals significant enough to threaten the safety of users. The potentially hazardous ionospheric anomalies of interest in this article are ionospheric spatial decorrelation and ionospheric scintillation under disturbed conditions. As the demand of safety-critical navigation applications increases with the rapid growth of the autonomous vehicle sector, ionospheric monitoring and mitigation techniques become more important to support such systems.

Zaher Zak M. Kassas;Joe Khalife;Kimia Shamaei;Joshua Morales; "I Hear, Therefore I Know Where I Am: Compensating for GNSS Limitations with Cellular Signals," vol.34(5), pp.111-124, Sept. 2017. Global navigation satellite systems (GNSSs) have been the prevalent positioning, navigation, and timing technology over the past few decades. However, GNSS signals suffer from four main limitations.

Dimitrios Lymberopoulos;Jie Liu; "The Microsoft Indoor Localization Competition: Experiences and Lessons Learned," vol.34(5), pp.125-140, Sept. 2017. We present the results, experiences, and lessons learned from comparing a diverse set of indoor location technologies during the Microsoft Indoor Localization Competition. Over the last four years (2014-2017), more than 100 teams from academia and industry deployed their indoor location solutions in realistic, unfamiliar environments, allowing us to directly compare their accuracies and overhead. In this article, we provide an analysis of this four-year-long evaluation study's results and discuss the current state of the art in indoor localization.

S.M. Zafaruddin;Itsik Bergel;Amir Leshem; "Signal Processing for Gigabit-Rate Wireline Communications: An Overview of the State of the Art and Research Challenges," vol.34(5), pp.141-164, Sept. 2017. Signal processing played an important role in improving the quality of communications over copper cables in earlier digital subscriber line (DSL) technologies. Even more powerful signal processing techniques are required to enable a gigabit per second data rate in the upcoming fast access to subscriber terminals (G.fast) standard. This new standard differs from its predecessors in many respects. In particular, G.fast will use a significantly higher bandwidth. At such a high bandwidth, crosstalk among different lines in a binder will reach unprecedented levels, beyond the capabilities of the most efficient techniques for interference mitigation.

Alessandro Artusi;Thomas Richter;Touradj Ebrahimi;Rafal K. Mantiuk; "High Dynamic Range Imaging Technology [Lecture Notes]," vol.34(5), pp.165-172, Sept. 2017. In this lecture note, we describe high dynamic range (HDR) imaging systems. Such systems can represent a much larger range of luminance and, typically, a larger range of colors than conventional standard dynamic range (SDR) imaging systems. The larger luminance range greatly improves the overall quality of visual content, making it appear much more realistic and appealing to observers. HDR is one of the key technologies of the future imaging pipeline, which will change the way digital visual content is represented and manipulated.

Lei Zhang;Wangmeng Zuo; "Image Restoration: From Sparse and Low-Rank Priors to Deep Priors [Lecture Notes]," vol.34(5), pp.172-179, Sept. 2017. The use of digital imaging devices, ranging from professional digital cinema cameras to consumer grade smartphone cameras, has become ubiquitous. The acquired image is a degraded observation of the unknown latent image, while the degradation comes from various factors such as noise corruption, camera shake, object motion, resolution limit, hazing, rain streaks, or a combination of them. Image restoration (IR), as a fundamental problem in image processing and low-level vision, aims to reconstruct the latent high-quality image from its degraded observation. Image degradation is, in general, irreversible, and IR is a typical ill-posed inverse problem. Due to the large space of natural image contents, prior information on image structures is crucial to regularize the solution space and produce a good estimation of the latent image. Image prior modeling and learning then are key issues in IR research. This lecture note describes the development of image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models.
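The sparse-representation prior mentioned in this lecture note has a particularly simple instance worth sketching: under an orthogonal transform with an l1 (sparsity) prior, the MAP estimate of each transform coefficient reduces to soft-thresholding. The sketch below is our own illustration of that idea, not code from the article; `coeffs` stands in for, e.g., wavelet coefficients of a degraded image.

```python
def soft_threshold(c, lam):
    """Soft-thresholding: the closed-form minimiser of 0.5*(z - c)**2 + lam*|z|."""
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

def sparse_denoise(coeffs, lam):
    """Shrink each transform coefficient toward zero under a sparse prior.

    Small coefficients (mostly noise) are zeroed; large ones (structure)
    are kept, slightly shrunk. `lam` trades data fidelity against sparsity.
    """
    return [soft_threshold(c, lam) for c in coeffs]
```

Deep-prior methods discussed in the note replace this hand-crafted shrinkage with a learned mapping, but the regularisation role is the same.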

Peter S. Apostolov;Borislav P. Yurukov;Alexey K. Stefanov; "An Easy and Efficient Method for Synthesizing Two-Dimensional Finite Impulse Response Filters with Improved Selectivity [Tips & Tricks]," vol.34(5), pp.180-183, Sept. 2017. It is hard to imagine what the world would look like without the modern technologies using digital signal processing. The developments in this technical field provide an opportunity for building technical devices that implement mathematical methods unattainable by analog technology. Many modern technical devices work with two-dimensional (2-D) signals in a process called digital image processing, and 2-D digital finite impulse response (FIR) filters are basic technical tools in image processing. FIR filters are extensively used in digital television, radio astronomy, radio location, biomedicine, and so on.
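For readers unfamiliar with the mechanics, a 2-D FIR filter is a sliding sum of products between a coefficient mask and the image; the article's contribution is how to synthesize selective coefficients, which the sketch below (ours, direct form, valid region only) simply takes as given.

```python
def fir2d(image, kernel):
    """Direct-form 2-D FIR filtering of `image` by coefficient mask `kernel`.

    Returns only the 'valid' region, i.e. positions where the kernel fits
    entirely inside the image. Inputs are lists of lists of floats.
    """
    H, W = len(image), len(image[0])
    Kh, Kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - Kh + 1):
        row = []
        for j in range(W - Kw + 1):
            # Sum of products between the mask and the image patch under it.
            acc = 0.0
            for u in range(Kh):
                for v in range(Kw):
                    acc += kernel[u][v] * image[i + u][j + v]
            row.append(acc)
        out.append(row)
    return out
```

A 2x2 averaging mask `[[0.25, 0.25], [0.25, 0.25]]`, for example, is a crude low-pass filter; the synthesis method in the article aims at masks with much sharper selectivity.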

Logan L. Grado;Matthew D. Johnson;Theoden I. Netoff; "The Sliding Windowed Infinite Fourier Transform [Tips & Tricks]," vol.34(5), pp.183-188, Sept. 2017. The discrete Fourier transform (DFT) is the standard tool for spectral analysis in digital signal processing, typically computed using the fast Fourier transform (FFT). However, for real-time applications that require recalculating the DFT at each sample or over only a subset of the N center frequencies of the DFT, the FFT is far from optimal.

* "Errata," vol.34(5), pp.188-188, Sept. 2017.* In the July 2017 issue of IEEE Signal Processing Magazine, an error was introduced in the title of a feature article during the production process. The title of the article by Z. Zhang, N. Cummins, and B.W. Schuller printed incorrectly [IBID., vol. 34, no. 4, pp. 107-129, July 2017]. The correct title is "Advanced Data Exploitation in Speech Analysis."

Daniel Bone;Chi-Chun Lee;Theodora Chaspari;James Gibson;Shrikanth Narayanan; "Signal Processing and Machine Learning for Mental Health Research and Clinical Applications [Perspectives]," vol.34(5), pp.196-195, Sept. 2017. Human behavior offers a window into the mind. When we observe someone's actions, we are constantly inferring his or her mental states (beliefs, intents, and knowledge), a concept known as theory of mind. For example.

* "Calendar [Dates Ahead]," vol.34(5), pp.C3-C3, Sept. 2017.* Presents information on upcoming meetings, conferences, and events.

## IET Signal Processing - new TOC (2017 September 21) [Website]

Sayanti Chatterjee;Smita Sadhu;Tapan Kumar Ghoshal; "Improved estimation and fault detection scheme for a class of non-linear hybrid systems using time delayed adaptive CD state estimator," vol.11(7), pp.771-779, 9 2017. An improved fault detection scheme for a non-linear hybrid system with delayed measurements, using a modified non-linear adaptive state estimator, is proposed. The proposed estimator performs acceptably even when (i) the covariance of the measurement noise is unknown and (ii) the measurements are delayed. The algorithm for the proposed estimator, called the time-delayed R-adaptive central difference Kalman filter (TD-RACDKF), uses a modified R-adaptive second-order central difference estimator, also called the central difference Kalman filter (CDKF) in some literature. The TD-RACDKF algorithm is presented, and its performance is evaluated and characterised with Monte Carlo simulation on two standard non-linear systems with delayed measurements; the characterisation includes comparison with existing estimators. Having demonstrated the improved performance of the proposed state estimator, its use for fault detection of non-linear hybrid systems is investigated. The performance of the fault detection scheme is illustrated on a benchmark non-linear hybrid system, namely the `three tank system', and by comparison with a previously available extended KF-based estimator.

Renu Jose;K.V.S. Hari; "Joint statistical framework for the estimation of channel and SFO in OFDM systems," vol.11(7), pp.780-787, 9 2017. Joint estimation of the channel and sampling frequency offset (SFO) in orthogonal frequency division multiplexing (OFDM) systems, using a Bayesian framework, is presented in this study. Hybrid Cramér-Rao lower bounds (HCRLBs) for the joint estimation of SFO and channel are obtained. The significance of the Bayesian approach in the formulation of the joint estimator is shown by comparing the HCRLB with the corresponding standard CRLB. The authors propose a joint maximum a posteriori (JMAP) algorithm for the estimation of channel and SFO in OFDM, utilising prior statistical knowledge of the channel. To reduce the complexity of the JMAP estimator, a modified MAP algorithm requiring no grid searches is also proposed. They also analyse the effect of inaccurate knowledge of channel statistics and signal-to-noise ratio on the estimation accuracy. The estimation methods are analysed by numerical simulations, and the results validate the better performance of the proposed algorithms compared with previous algorithms.

Anil Kumar Vaghmare;Chevula Venkata Rama Rao; "Unsupervised noise removal technique based on constrained NMF," vol.11(7), pp.788-795, 9 2017. This paper presents a novel approach to mitigating background interference in noisy audio environments based on unsupervised speech enhancement, without prior knowledge of either the speech or the noise signal. In this scheme, a noisy speech signal is represented by non-negative matrix factorisation (NMF) with sparseness constraints. Using the constrained NMF, the noise density spectrum is estimated to design a spectral subtraction method. Earlier NMF methods did not consider the correlation between the noise and the audio signals; the proposed method exploits this correlation. A sparse-constraint NMF-based spectral subtraction method is designed to eliminate the noise. The performance is evaluated for various noise environments in terms of quality and intelligibility measures. Simulation results highlight the improvement in the enhanced speech with the suggested approach.
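The spectral-subtraction step itself has a simple generic form. The sketch below is a textbook magnitude-domain version, not the paper's NMF-driven variant: here `noise_mag` stands in for the noise spectrum that the constrained NMF would estimate, and the over-subtraction factor `alpha` and spectral floor `beta` are illustrative parameters of our own choosing.

```python
def spectral_subtract(noisy_mag, noise_mag, alpha=2.0, beta=0.01):
    """Magnitude-domain spectral subtraction with over-subtraction and flooring.

    Per frequency bin: subtract alpha times the estimated noise magnitude,
    then floor at beta times the noisy magnitude to avoid negative values
    (the cause of 'musical noise' artifacts when set to zero).
    """
    out = []
    for y, n in zip(noisy_mag, noise_mag):
        s = y - alpha * n
        out.append(max(s, beta * y))
    return out
```

The enhanced magnitudes would then be recombined with the noisy phase and inverse-transformed to obtain the enhanced waveform.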

Minzhe Li;Zhongliang Jing;Peng Dong;Han Pan; "Joint DTC based on FISST and generalised Bayesian risk," vol.11(7), pp.796-804, 9 2017. This study proposes a recursive solution to target joint detection, tracking, and classification (DTC) based on finite set statistics (FISST) and generalised Bayesian risk. A new Bayesian risk is defined involving the costs of target existence probability estimation (detection), state estimation (tracking), and classification. The estimates and costs are calculated within the FISST framework for different hypotheses and decisions of the target class, and the optimal solution is then derived to minimise the new Bayesian risk. As different costs are unified, the inter-dependence of DTC is considered, and these three sub-problems are solved jointly. The effects of the parameters of the new Bayesian risk are also analysed. Simulations show that the authors' method has a better overall performance compared with traditional methods.

Dheeren Ku Mahapatra;Lakshi Prosad Roy; "Empirical model for SAR ground clutter data," vol.11(7), pp.805-813, 9 2017. In this study, the authors present an effective and flexible statistical model, called Burr Type-XII distribution (BXIID), for empirical modelling of synthetic aperture radar (SAR) clutter amplitude. Also, an estimator based on method of logcumulants (MoLC) is proposed for the parameters of BXIID. Cramer-Rao bounds of the BXIID parameters are computed to measure the effectiveness of the proposed estimator. They then derive the analytical conditions for applicability of MoLC to BXIID. Furthermore, it is shown here that the BXIID covers a larger space in the k3 k2 (third-order second-kind cumulant versus second-order second-kind cumulant) diagram than the one occupied with classical distributions and is flexible to model the SAR clutter with a variety of scenes. The BXII model is validated through single-look real SAR clutter data and multilook synthetic clutter data. Experimental results demonstrate that the BXIID achieves better goodness-of-fit than the state-of-the-art probability density functions and that clearly illustrates the validity and applicability of the model.

Zhi-Chao Zhang;Tao Yu;Mao-Kang Luo;Ke Deng; "Multichannel sampling expansions in the linear canonical transform domain associated with explicit system functions and finite samples," vol.11(7), pp.814-824, 9 2017. The existing multichannel sampling expansions (MSEs) of the linear canonical transform (LCT) raise two issues: the implicit expression of the system functions can make reconstructing signals inconvenient in practice, and the original signal must be reconstructed from finite samples. Addressing these, the authors first propose a novel MSE in the Fourier transform domain that provides an explicit expression for the response function of the reconstruction filter. On this basis, they formulate two kinds of LCT-type MSEs related, respectively, to the modified convolution structure and the generalised convolution structure of the LCT. For these MSEs, although the system functions have an explicit expression, the number of signal samples is infinite. They then obtain multichannel interpolation formulae that interpolate a finite set of uniform samples by applying the derived MSEs to LCT-band-limited, chirp-periodic signals. They further present some possible applications of their proposals to show the advantage of the theory. Finally, simulations are performed to verify the correctness of the derived results.

Parth Raj Singh;Yide Wang;Pascal Chargé; "Performance enhancement of approximated model based near-field sources localisation techniques," vol.11(7), pp.825-829, 9 2017. Most of the existing near-field sources localisation methods are based on an approximated model. Making use of such an approximation brings degradation in the estimation accuracy. In this study, the authors propose a correction method to mitigate this problem and improve the estimation performance of the approximated model based methods. Simulation results show that the proposed technique can significantly improve the performance of the classical approximated model based near-field sources localisation methods.

Yinghui Quan;Yachao Li;Wen Hu;Yadi Zhai;Mengdao Xing; "FM sequence optimisation of chaotic-based random stepped frequency signal in through-the-wall radar," vol.11(7), pp.830-837, 9 2017. Chaotic-based random stepped frequency signals have recently been applied in multiple-input-multiple-output through-the-wall detection radar (MIMO-TWDR). When the frequency modulation (FM) sequence of the transmission signal is controlled by the chaotic signal, single-frequency interference such as power harmonics sneaking into the phase detector becomes periodic and can therefore be filtered in the frequency domain. However, the target echo signal becomes random after chaotic modulation, so the matched filter usually cannot be realised by Fourier transform; consequently, the envelope of the direct wave after the phase detector varies stochastically and is difficult to eliminate with an analogue filter. A randomly disordered FM sequence cannot filter out the single-frequency interference and the direct wave simultaneously. Therefore, a new method of FM sequence optimisation for chaotic-based random stepped frequency signals, based on a genetic algorithm, is proposed in this study to solve these problems. Simulations show that the optimised stepped frequency signal possesses the advantages of both chaotic-based random and linear stepped frequency signals. The proposed scheme achieves excellent performance in direct wave and single-frequency interference suppression and in target detection. Moreover, it can avoid interference between the transmission antennas of MIMO radar.

Tianhang Yu;Minjian Zhao;Jie Zhong;Jian Zhang;Pei Xiao; "Low-complexity graph-based turbo equalisation for single-carrier and multi-carrier FTN signalling," vol.11(7), pp.838-845, 9 2017. The authors propose a novel turbo detection scheme based on the factor graph (FG) serial-schedule belief propagation equalisation algorithm with low complexity for single-carrier faster-than-Nyquist (SC-FTN) and multi-carrier FTN (MC-FTN) signalling. In this work, the additive white Gaussian noise channel and multi-path fading channels are both considered. The iterative FG-based equalisation algorithm can deal with severe intersymbol interference and intercarrier interference introduced by the generation of SC and MC-FTN signals, as well as the effect of multi-path fading. With the application of Gaussian approximation, the complexity of the proposed equalisation algorithm is significantly reduced. In the turbo detection, low-density parity check code is employed. The simulation results demonstrate that the FG-based turbo detection method can achieve satisfactory performance with low complexity.

Mahmoud Boudiaf;Moncef Benkherrat;Khaled Mansouri; "Denoising of single-trial event-related potentials using adaptive modelling," vol.11(7), pp.846-853, 9 2017. In this study, the authors present a modelling method based on the adaptive linear combiner to denoise single-trial event-related potentials. The orthonormal Hermite basis functions act as inputs of the adaptive linear combiner. To estimate and adjust the parameters of the adaptive filter, the authors use the variable step-size least mean square algorithm, which is well suited to tracking rapid changes in non-stationary signals. The performance of the method is tested with simulated evoked potentials and with real visual event-related potentials. For simulated data, the adaptive Hermite model gave significant enhancement in latency and amplitude estimation, as well as in the observation of single-trial event-related potentials, in comparison with wavelet techniques and with other models of adaptive filters. For the real data, the proposed method filters the ongoing electroencephalogram activity, thus allowing better identification of single-trial visual event-related potentials. The results confirm that the Hermite adaptive linear combiner model provides a simple and fast tool for studying single-trial event-related potential responses.
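As context for the adaptive linear combiner, the plain fixed-step LMS update is worth seeing; the sketch below is that basic form only (the paper uses a variable step size and Hermite basis-function inputs, both of which this plain FIR version omits).

```python
def lms_filter(x, d, n_taps, mu=0.01):
    """Fixed-step LMS adaptive filter.

    x: input samples; d: desired samples; returns the final weights,
    the filter outputs, and the error sequence. Each step computes the
    FIR output, measures the error e = d - y, and nudges the weights
    along the instantaneous gradient estimate mu * e * x.
    """
    w = [0.0] * n_taps
    buf = [0.0] * n_taps             # most recent sample first
    y_out, e_out = [], []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - y
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        y_out.append(y)
        e_out.append(e)
    return w, y_out, e_out
```

A variable step size, as used in the paper, shrinks `mu` when the error is small and grows it when the signal changes rapidly, which is what makes the method track non-stationary potentials well.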

Zainalabedin Samadi;Vahid Tabatabavakili;Farzan Haddadi; "Channel aided interference alignment," vol.11(7), pp.854-860, 9 2017. Interference alignment (IA) techniques mostly attain their degrees-of-freedom benefits as the number of channel extensions tends to infinity. Intuitively, the more interfering signals that need to be aligned, the larger the number of dimensions needed to align them. This requirement poses a major challenge for IA in practical systems. This study derives the necessary and sufficient conditions on the channel structure of a fully connected interference network with time-varying fading that make perfect IA feasible within a limited number of channel extensions. The authors propose a method based on the obtained channel-structure conditions to achieve perfect IA. For the case of the 3-user interference channel, it is shown that only one condition on the channel coefficients is required to make perfect IA feasible at all receivers. The IA feasibility literature has so far focused mainly on network topology; in contrast, the channel-aiding conditions derived in this study can be regarded as perfect-IA feasibility conditions on the channel structure.

Rong Qian;Defu Jiang;Wei Fu; "Digital constant-envelope modulation scheme for radar using multicarrier OFDM signals," vol.11(7), pp.861-868, 9 2017. Multicarrier signals using orthogonal frequency division multiplexing (OFDM) bring many advantages to radar such as low probability of intercept, improved target detection performance, optimisation of both radar transmitter and receiver etc. A major drawback of multicarrier OFDM signals is their high peak-to-mean envelope power ratio, which reduces the working efficiency of the used power amplifiers. To solve this problem, a digital constant-envelope modulation scheme is proposed for radar using multicarrier OFDM signals. The scheme can transmit multicarrier OFDM signals through power amplifiers being operated near saturation levels (and thus maximising power efficiency) with little spectral distortion, even in the presence of channel mismatch. Given the speed limitations of field programmable gate arrays (FPGAs), the proposed scheme is implemented in an FPGA at intermediate frequency level using a polyphase processing structure.

Lidong Liu;Ya Nan Wang;Lin Hou;Xiao Ran Feng; "Easy encoding and low bit-error-rate chaos communication system based on reverse-time chaotic oscillator," vol.11(7), pp.869-876, 9 2017. A new chaos communication system based on a reverse-time chaotic oscillator (RTCO) is proposed in this study. In the system, driven by a bipolar sequence, the RTCO can directly generate chaotic wave signals that encode arbitrary binary information, which is much simpler than existing chaotic communication schemes that require initial-condition estimation. The analytical expression of the matched filter for the basis function of the RTCO is then derived; the proposed matched filter decreases the effect of channel noise. Next, the binary information can be recovered over the additive white Gaussian noise (AWGN) channel by comparing the sum of multiple samples taken over the symbol period against a set threshold, which further decreases the influence of noise in the decoding procedure. In addition, the binary information can also be recovered over the Rayleigh channel, and its bit error rate (BER) expression is derived. Finally, the feasibility and validity of the proposed system are demonstrated with numerical simulations. It is shown that the proposed system generates the encoded chaotic signal by a much simpler method, and that its BER over the AWGN channel is lower, and its performance over the Rayleigh channel better, than that of differential chaos shift keying.

Weihong Fu;Jiehu Chen;Bo Yang; "Source recovery of underdetermined blind source separation based on SCMP algorithm," vol.11(7), pp.877-883, 9 2017. In this study, a new algorithm, subspace complementary matching pursuit (SCMP), is developed for source recovery in underdetermined blind source separation. The proposed SCMP is simpler than the conventional complementary matching pursuit (CMP) algorithm. The SCMP algorithm selects more than one atom in each iteration to reduce computational complexity, and replaces the l2-norm minimisation involved in CMP with approximate l0-norm minimisation to ensure higher recovery accuracy. Numerical results show that, compared with the existing algorithms for source recovery, such as CMP, orthogonal CMP (OCMP), optimised OCMP, and sparsity adaptive CMP, the proposed SCMP algorithm significantly reduces the computational time with improved recovery accuracy.

Yu-Hao Chin;Yi-Zeng Hsieh;Mu-Chun Su;Shu-Fang Lee;Miao-Wen Chen;Jia-Ching Wang; "Music emotion recognition using PSO-based fuzzy hyper-rectangular composite neural networks," vol.11(7), pp.884-891, 9 2017. This study proposes a novel system for recognising emotional content in music, based on particle swarm optimisation (PSO)-based fuzzy hyper-rectangular composite neural networks (PFHRCNNs), which integrate three computational intelligence tools: hyper-rectangular composite neural networks (HRCNNs), fuzzy systems, and PSO. A PFHRCNN handles complex data flexibly owing to fuzzy membership estimation, and PSO optimises its parameters. First, raw features are extracted from each music clip. After feature extraction, an HRCNN is constructed separately for each class. Each trained HRCNN yields a set of crisp rules. A problem with these generated crisp rules is that some are ineffective; therefore, each crisp rule is transformed into a fuzzy rule incorporating a confidence factor. Next, PSO is adopted to simultaneously trim the rules, search for a good set of confidence factors, and fine-tune the locations of the selected hyper-rectangles to increase their effectiveness. Finally, a PFHRCNN consisting of a set of fuzzy rules is generated to recognise the emotional state of the music. Experimental results show that the proposed system performs well.

## IEEE Transactions on Geoscience and Remote Sensing - new TOC (2017 September 21) [Website]

* "Front Cover," vol.55(9), pp.C1-C1, Sept. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Transactions on Geoscience and Remote Sensing publication information," vol.55(9), pp.C2-C2, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Yangbin Lin;Cheng Wang;Bili Chen;Dawei Zai;Jonathan Li; "Facet Segmentation-Based Line Segment Extraction for Large-Scale Point Clouds," vol.55(9), pp.4839-4854, Sept. 2017. As one of the most common features in man-made environments, straight lines play an important role in many applications. In this paper, we present a new framework to extract line segments from large-scale point clouds. The proposed method produces results quickly, is easy to implement and understand, and is suitable for various point cloud data. The key idea is to efficiently segment the input point cloud into a collection of facets. These facets provide sufficient information for determining linear features in the local planar region and make line segment extraction relatively convenient. Moreover, we introduce the concept of "number of false alarms" into the 3-D point cloud context to filter false positive line segment detections. We test our approach on various types of point clouds acquired in different ways. We also compare the proposed method with several other methods and provide both quantitative and visual comparison results. The experimental results show that our algorithm is efficient and effective, and produces more accurate and complete line segments than the comparative methods. To further verify the accuracy of the line segments extracted by the proposed method, we also present a line-based registration framework that employs these line segments for point cloud registration.

Jingbin Liu;Juha Hyyppä;Xiaowei Yu;Anttoni Jaakkola;Antero Kukko;Harri Kaartinen;Lingli Zhu;Xinlian Liang;Yunsheng Wang;Hannu Hyyppä; "A Novel GNSS Technique for Predicting Boreal Forest Attributes at Low Cost," vol.55(9), pp.4855-4867, Sept. 2017. One of the biggest challenges in forestry research is the effective and accurate measuring and monitoring of forest variables, as the exploitation potential of forest inventory products largely depends on the accuracy of estimates and on the cost of data collection. This paper presents a novel computational method of low-cost forest inventory using global navigation satellite system (GNSS) signals in a crowdsourcing approach. Statistical features of GNSS signals were extracted from widely available GNSS devices and were used for predicting forest attributes, including tree height, diameter at breast height, basal area, stem volume, and above-ground biomass, in boreal forest conditions. The basic evidence for the predictions is the physical correlation between forest variables and the responses of GNSS signals penetrating the forest. The random forest algorithm was applied to the predictions. GNSS-derived prediction accuracies were comparable with those of the most accurate 2-D remote sensing techniques, and the predictions can be improved further by integration with other publicly available data sources without additional cost. This type of crowdsourcing technique enables the collection of up-to-date forest data at low cost, and it significantly contributes to the development of new reference data collection techniques for forest inventory. Currently, field reference can account for half of the total costs of forest inventory.

Pooja Mishra;Akanksha Garg;Dharmendra Singh; "Critical Analysis of Model-Based Incoherent Polarimetric Decomposition Methods and Investigation of Deorientation Effect," vol.55(9), pp.4868-4877, Sept. 2017. This paper critically analyzes several incoherent model-based decomposition methods for assessing the effect of deorientation in characterization of various land covers. It has been found that even after performing decomposition, ambiguity still occurs in the scattering response from various land covers, such as urban and vegetation. Researchers introduced the concept of deorientation to remove this ambiguity. Therefore, in this paper, a critical analysis has been carried out using seven different three- and four-component decomposition methods with and without deorientation, and two Eigen decomposition-based methods, to investigate the scattering response of various land covers, such as urban, vegetation, bare soil, and water. The comprehensive evaluation of the decomposition and deorientation effect has been performed by both visual and quantitative analyses. Two types of quantitative analysis have been performed: first, by observing the percentage of scattering power and, second, by analyzing the variation in the number of pixels in different land covers for each scattering contribution. The analysis shows that deorientation increases not only the power but also the number of pixels for surface and double bounce scattering. The number of pixels representing volume scattering remains almost the same for all the methods with or without deorientation, whereas volume scattering power reduces after deorientation. Eigen decomposition-based methods are observed to solve the problem of overestimation of volume scattering power.

Elias F. Berra;Rachel Gaulton;Stuart Barr; "Commercial Off-the-Shelf Digital Cameras on Unmanned Aerial Vehicles for Multitemporal Monitoring of Vegetation Reflectance and NDVI," vol.55(9), pp.4878-4886, Sept. 2017. This paper demonstrates the ability to generate quantitative remote sensing products by means of an unmanned aerial vehicle (UAV) equipped with one unaltered and one near infrared-modified commercial off-the-shelf (COTS) camera. Radiometrically calibrated orthomosaics were generated for 17 dates, from which digital numbers were corrected to surface reflectance and to normalized difference vegetation index (NDVI). Validation against ground measurements showed that 84%-90% of the variation in the ground reflectance and 95%-96% of the variation in the ground NDVI could be explained by the UAV-retrieved reflectance and NDVI, respectively. Comparisons against Landsat 8 data showed relationships of $0.73 \leq R^{2} \leq 0.84$ for reflectance and $0.86 \leq R^{2} \leq 0.89$ for NDVI. It was not possible to generate a fully consistent time series of reflectance, due to variable illumination conditions during acquisition on some dates. However, the calculation of NDVI resulted in a more stable UAV time series, which was consistent with a Landsat series of NDVI extracted over a deciduous and evergreen woodland. The results confirm that COTS cameras, following calibration, can yield accurate reflectance estimates (under stable within-flight illumination conditions), and that consistent NDVI time series can be acquired in very variable illumination conditions. Such methods have significant potential in providing flexible, low-cost approaches to vegetation monitoring at fine spatial resolution and for user-controlled revisit periods.
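The NDVI used throughout this study is the standard normalized band ratio of near-infrared and red reflectance; for reference (the zero-denominator guard is our own convention, not from the paper):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance.

    Inputs are surface reflectances in [0, 1]; output is in [-1, 1].
    Dense green vegetation gives values near 1, bare soil near 0.
    """
    if nir + red == 0:
        return 0.0  # guard against an empty (zero-reflectance) pixel
    return (nir - red) / (nir + red)
```

Because it is a ratio of bands acquired simultaneously, NDVI partially cancels illumination changes, which is why the paper's NDVI time series is more stable than its reflectance time series.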

Lin Wang;Chein-I Chang;Li-Chien Lee;Yulei Wang;Bai Xue;Meiping Song;Chuanyan Yu;Sen Li; "Band Subset Selection for Anomaly Detection in Hyperspectral Imagery," vol.55(9), pp.4887-4898, Sept. 2017. This paper presents a new approach, called band subset selection (BSS)-based hyperspectral anomaly detection (AD), which selects multiple bands simultaneously as a band subset rather than selecting multiple bands one at a time, as traditional band selection (BS), referred to as sequential multiple BS (SQMBS), does. The idea is to first use virtual dimensionality (VD) to determine the number of bands, nBS, needed in a band subset, and then develop two iterative processes, a sequential BSS (SQ-BSS) algorithm and a successive BSS (SC-BSS) algorithm, to find an optimal band subset numerically among all possible nBS combinations out of the full band set. To terminate the search process, the averaged least-squares error (ALSE) and 3-D receiver operating characteristic (3-D ROC) curves are used as stopping criteria to evaluate performance relative to AD using the full band set. Experimental results demonstrate that BSS generally performs better background suppression while maintaining target detection capability compared with target detection using full band information.
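The combinatorial problem BSS addresses can be stated in a few lines: among all nBS-element subsets of the full band set, find the one optimizing a criterion. The exhaustive sketch below is ours for illustration only; `score` stands in for a criterion such as the ALSE (lower taken as better), and the paper's SQ-BSS and SC-BSS algorithms exist precisely because this brute-force search is infeasible for hundreds of bands.

```python
import itertools

def best_band_subset(bands, n_bs, score):
    """Exhaustively search for the n_bs-band subset minimizing `score`.

    Returns the index tuple of the best subset and its score. The search
    space has C(len(bands), n_bs) candidates, which explodes quickly:
    hence iterative/greedy strategies in practice.
    """
    best, best_val = None, float("inf")
    for subset in itertools.combinations(range(len(bands)), n_bs):
        val = score([bands[i] for i in subset])
        if val < best_val:
            best, best_val = subset, val
    return best, best_val
```

For example, with 200 bands and nBS = 10 there are about 2.2e16 candidate subsets, so numerical search with a stopping criterion, as in the paper, is the only practical route.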

Jelena Stamenkovic;Leila Guerriero;Paolo Ferrazzoli;Claudia Notarnicola;Felix Greifeneder;Jean-Philippe Thiran; "Soil Moisture Estimation by SAR in Alpine Fields Using Gaussian Process Regressor Trained by Model Simulations," vol.55(9), pp.4899-4912, Sept. 2017. In this paper, we address the problem of retrieving soil moisture over a grassland alpine area from Synthetic Aperture Radar (SAR) data using a statistical algorithm trained by simulations of a physical model. A time series of C-band VV-polarized Wide Swath images acquired by Envisat Advanced SAR (ASAR) in the snow-free periods of 2010 and 2011 was simulated using a discrete radiative transfer model (RTM). The test area was located in the Mazia valley, South Tyrol (Italy), where the main land types are meadows and pastures. Soil moisture was collected from five meteorological stations, two of which were situated in meadows and the rest in pastures. The smallest and largest RMSEs of the RTM simulations were 0.78 dB and 1.91 dB, respectively. After backscattering simulation, the top soil moisture was estimated using Gaussian Process Regression (GPR). GPR was trained with the backscatter model simulations (including terrain features) for 2010, and then used to predict moisture from radar observations acquired in 2011. The relative importance of different input features was also assessed. The RMSE of the predicted soil moisture for the largest training data set (including aspect as a terrain feature) was 5.6% Vol. and the corresponding correlation coefficient was 0.84.
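The train-on-simulations, predict-on-observations pattern can be sketched with a bare-bones Gaussian process posterior mean; the backscatter/moisture relation below is a made-up toy, not the paper's RTM:

```python
import numpy as np

def rbf(a, b, ell=3.0, sf=1.0):
    # Squared-exponential kernel between the row vectors of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gpr_predict(Xtr, ytr, Xte, noise=1e-2):
    # Posterior mean of a zero-mean GP: k(X*,X) [k(X,X) + sigma^2 I]^-1 y
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)

# Hypothetical simulated backscatter [dB] -> volumetric moisture pairs.
sigma0 = np.linspace(-20.0, -8.0, 40)[:, None]
m_v = 0.02 * (sigma0[:, 0] + 20.0) + 0.05
pred = gpr_predict(sigma0, m_v, np.array([[-15.0], [-10.0]]))
```

A library implementation (e.g. scikit-learn's `GaussianProcessRegressor`) would additionally optimize the kernel hyperparameters, which are fixed here for brevity.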

Linlin Xu;Alexander Wong;David A. Clausi; "A Novel Bayesian Spatial–Temporal Random Field Model Applied to Cloud Detection From Remotely Sensed Imagery," vol.55(9), pp.4913-4924, Sept. 2017. With the fast advancement of remote sensing platforms and sensors, remotely sensed imagery (RSI) is increasingly being characterized by both high spatial resolution and high temporal resolution. How to efficiently use the rich spatial and temporal information in RSI for highly accurate object detection and classification is an important research question. Nevertheless, there is still a lack of a probabilistic framework that is capable of fully accounting for the spatial-temporal information in RSI for improved applications. In this paper, we present a Bayesian spatial-temporal random field model that constitutes a complete probabilistic framework for fully explaining the spatial-temporal correlation in RSI, leading to an enhanced object detection approach that is used for cloud detection from RSI. Under Bayes' theorem, the posterior distribution of a label field is decomposed into the label prior, the data likelihood, the temporal label likelihood, and the temporal data likelihood. To address the difficulties in modeling the complex spatial-temporal correlation effect in the temporal data likelihood, a stochastic sampling approach is presented. Based on the maximum a posteriori approach, the posterior distribution is seamlessly integrated into the graph-cut optimization framework, and, therefore, the model optimization can be efficiently solved. The proposed algorithm is tested for cloud detection on both simulated and real RSIs and the results demonstrate that the proposed algorithm can effectively exploit the spatial-temporal information for achieving higher detection accuracy.

Fadi Kizel;Maxim Shoshany;Nathan S. Netanyahu;Gilad Even-Tzur;Jón Atli Benediktsson; "A Stepwise Analytical Projected Gradient Descent Search for Hyperspectral Unmixing and Its Code Vectorization," vol.55(9), pp.4925-4943, Sept. 2017. We present, in this paper, a new methodology for spectral unmixing, where a vector of fractions, corresponding to a set of endmembers (EMs), is estimated for each pixel in the image. The process first provides an initial estimate of the fraction vector, followed by an iterative procedure that converges to an optimal solution. Specifically, projected gradient descent (PGD) optimization is applied to (a variant of) the spectral angle mapper objective function, so as to significantly reduce the estimation error due to amplitude (i.e., magnitude) variations in EM spectra, caused by the illumination change effect. To improve the computational efficiency of our method over a commonly used gradient descent technique, we have analytically derived the objective function's gradient and the optimal step size (used in each iteration). To gain further improvement, we have implemented our unmixing module via code vectorization, where the entire process is “folded” into a single loop, and the fractions for all of the pixels are solved simultaneously. We call this new parallel scheme vectorized code PGD unmixing (VPGDU). VPGDU has the advantage of solving (simultaneously) an independent optimization problem per image pixel, exactly as other pixelwise algorithms, but significantly faster. Its performance was compared with the commonly used fully constrained least squares unmixing (FCLSU), the generalized bilinear model (GBM) method for hyperspectral unmixing, and the fast state-of-the-art methods, sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) and collaborative SUnSAL (CLSUnSAL) based on the alternating direction method of multipliers. Considering all of the prospective EMs of a scene at each pixel (i.e., without a priori knowledge which/how many EMs are actually present in a given pixel), we demonstrate that the accuracy due to VPGDU is considerably higher than that obtained by FCLSU, GBM, SUnSAL, and CLSUnSAL under varying illumination, and is, otherwise, comparable with respect to these methods. However, while our method is significantly faster than FCLSU and GBM, it is slower than SUnSAL and CLSUnSAL by roughly an order of magnitude.
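The core PGD loop, a gradient step followed by projection back onto the abundance constraints (nonnegative, sum-to-one), can be illustrated with a least-squares stand-in for the paper's SAM-based objective; the endmember matrix below is a toy, not VPGDU itself:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {a : a >= 0, sum(a) = 1} (sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def pgd_unmix(E, x, iters=500):
    # Projected gradient descent on ||E a - x||^2 over the probability simplex.
    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # 1/L with L the Lipschitz constant
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(iters):
        a = project_simplex(a - step * (E.T @ (E @ a - x)))
    return a

E = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # toy endmember spectra
x = E @ np.array([0.3, 0.7])                        # synthetic mixed pixel
a = pgd_unmix(E, x)
```

The paper's analytically derived optimal step size replaces the fixed 1/L step used in this sketch.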

Fei Zhu;Abderrahim Halimi;Paul Honeine;Badong Chen;Nanning Zheng; "Correntropy Maximization via ADMM: Application to Robust Hyperspectral Unmixing," vol.55(9), pp.4944-4955, Sept. 2017. In hyperspectral images, some spectral bands suffer from low signal-to-noise ratio due to noisy acquisition and atmospheric effects, thus requiring robust techniques for the unmixing problem. This paper presents a robust supervised spectral unmixing approach for hyperspectral images. The robustness is achieved by writing the unmixing problem as the maximization of the correntropy criterion subject to the most commonly used constraints. Two unmixing problems are derived: the first problem considers the fully constrained unmixing, with both the nonnegativity and sum-to-one constraints, while the second one deals with the nonnegativity and the sparsity promoting of the abundances. The corresponding optimization problems are solved using an alternating direction method of multipliers (ADMM) approach. Experiments on synthetic and real hyperspectral images validate the performance of the proposed algorithms for different scenarios, demonstrating that the correntropy-based unmixing with ADMM is particularly robust against highly noisy outlier bands.
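The ADMM splitting pattern the authors build on can be shown for the simpler nonnegativity-constrained least-squares case; the correntropy criterion would replace the quadratic data term, so everything below is a generic sketch rather than the paper's algorithm:

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=500):
    # ADMM for min (1/2)||A x - b||^2 s.t. x >= 0, via the splitting x = z.
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))   # small n: direct inverse is fine
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))          # least-squares subproblem
        z = np.maximum(x + u, 0.0)             # projection onto x >= 0
        u = u + x - z                          # scaled dual update
    return z

A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 1.0]])  # toy mixing matrix
x_true = np.array([0.4, 1.2])
x_hat = admm_nnls(A, A @ x_true)
```

Adding the sum-to-one constraint, as in the paper's fully constrained variant, would only change the z-update to a projection onto the simplex.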

Behnaz Papari;Chris S. Edrington;Farzaneh Kavousi-Fard; "An Effective Fuzzy Feature Selection and Prediction Method for Modeling Tidal Current: A Case of Persian Gulf," vol.55(9), pp.4956-4961, Sept. 2017. This paper develops a new two-stage approach for accurate modeling and prediction of tidal current. The proposed method makes use of a novel fuzzy feature selection to extract the most preferable features from the tidal current speed and direction data set. The selected features are further used to train a support vector regression for accurate prediction. The setting parameters of the proposed model are trained by a new optimization algorithm based on the harmony search algorithm to reach the optimal training targets. The proposed optimization algorithm makes use of the crossover and mutation operators from the genetic algorithm to escape from local optima and find global solutions. Experimental tidal data from the Persian Gulf, Iran, are used to assess the accuracy and performance of the proposed model. The results show the appropriate performance and high precision of the proposed model in comparison with other well-known methods.

Emmanuel Maggiori;Guillaume Charpiat;Yuliya Tarabalka;Pierre Alliez; "Recurrent Neural Networks to Correct Satellite Image Classification Maps," vol.55(9), pp.4962-4971, Sept. 2017. While initially devised for image categorization, convolutional neural networks (CNNs) are being increasingly used for the pixelwise semantic labeling of images. However, the proper nature of the most common CNN architectures makes them good at recognizing but poor at localizing objects precisely. This problem is magnified in the context of aerial and satellite image labeling, where a spatially fine object outlining is of paramount importance. Different iterative enhancement algorithms have been presented in the literature to progressively improve the coarse CNN outputs, seeking to sharpen object boundaries around real image edges. However, one must carefully design, choose, and tune such algorithms. Instead, our goal is to directly learn the iterative process itself. For this, we formulate a generic iterative enhancement process inspired from partial differential equations, and observe that it can be expressed as a recurrent neural network (RNN). Consequently, we train such a network from manually labeled data for our enhancement task. In a series of experiments, we show that our RNN effectively learns an iterative process that significantly improves the quality of satellite image classification maps.

Jakov V. Toporkov; "A Theoretical Study of Velocity SAR Imaging of a Moving, Nonstationary Scene," vol.55(9), pp.4972-4988, Sept. 2017. The concept of the “velocity synthetic aperture radar” (VSAR), a multiaperture sensor capable of measuring radial velocities in the scene and utilizing this information to correct motion-induced imaging distortions inherent to SAR, was proposed two decades ago. Lately, with the emergence of truly multichannel systems featuring antenna arrays with dozens of elements, the approach has been enjoying a renewed interest. The viability and effectiveness of the algorithm were successfully demonstrated in a series of airborne field campaigns that involved imaging both man-made targets and natural maritime features. These experiments and the wealth of resulting data also underscored the need for comprehensive mathematical descriptions of expected target signatures in the collected “image stacks” and for further refinements of the VSAR imaging theory. This paper addresses both tasks by building upon the available mathematical results developed for the along-track interferometric SAR imagery of distributed evolving targets. The approach allows simultaneous accounting for all essential effects known to impact SAR imagery of a target or an extended feature: its azimuth velocity, radial velocity and acceleration, as well as finite coherence time. The emphasis is on obtaining closed-form expressions that could readily illustrate the structure and behavior of the VSAR stack spectrum of such a target and help gauge anticipated focusing improvement stemming from the VSAR image correction. In particular, it is rigorously shown that the VSAR algorithm is successful in situations when SAR defocusing arises predominantly from radial motion and short coherence times; the resulting resolution is generally no worse than that of the corresponding real-aperture radar. On the other hand, strong defocusing due to azimuth translation may be problematic to compensate for within the VSAR approach framework.

Alberto Alonso-Arroyo;Valery U. Zavorotny;Adriano Camps; "Sea Ice Detection Using U.K. TDS-1 GNSS-R Data," vol.55(9), pp.4989-5001, Sept. 2017. A sea ice detection algorithm developed using the U.K. TechDemoSat-1 (U.K. TDS-1) global navigation satellite systems (GNSSs)-reflectometry data over the Arctic and Antarctic regions is presented. It is based on measuring the similarity of the received GNSS reflected waveform or delay Doppler map (DDM) to the coherent reflection model waveform. Over the open ocean, the scattered signal has a diffusive, incoherent nature; it is described by the rough surface scattering model based on the geometric optics and the Gaussian statistics for the ocean surface slopes. Over sea ice and, in particular, newly formed sea ice, the scattered signal acquires a coherence, which is characteristic for a surface with large flat areas. In order to measure the similarity of the received waveform or DDM, to the coherent reflection model, three different estimators are presented: the normalized DDM average, the trailing edge slope (TES), and the matched filter approach. Here, a probabilistic study is presented based on a Bayesian approach using two different and independent ground-truth data sets. This approach allows one to thoroughly assess the performance of the estimators. The best results are achieved for both the TES and the matched filter approach with a probability of detection of 98.5%, a probability of false alarm of ~ 3.6%, and a probability of error of 2.5%. However, the matched filter approach is preferred due to its simplicity. Data from AMSR2 processed using the Arctic Radiation and Turbulence Interaction STudy Sea Ice algorithm and from a Special Sensor Microwave Imager/Sounder radiometer processed by Ocean and Sea Ice SAF have been used as ground truth. A pixel has been classified as a sea ice pixel if the sea ice concentration (SIC) in it was larger than 15%. The measurement of the SIC is also assessed in this paper, but the nature of the U.K. TDS-1 data (lack of calibrated data) does not allow any specific conclusions to be drawn about the SIC.

Cooper McCann;Kevin S. Repasky;Mikindra Morin;Rick L. Lawrence;Scott Powell; "Using Landsat Surface Reflectance Data as a Reference Target for Multiswath Hyperspectral Data Collected Over Mixed Agricultural Rangeland Areas," vol.55(9), pp.5002-5014, Sept. 2017. Low-cost flight-based hyperspectral imaging systems have the potential to provide important information for ecosystem and environmental studies as well as aid in land management. To realize this potential, methods must be developed to provide large-area surface reflectance data allowing for temporal data sets at the mesoscale. This paper describes a bootstrap method of producing a large-area, radiometrically referenced hyperspectral data set using the Landsat surface reflectance (LaSRC) data product as a reference target. The bootstrap method uses standard hyperspectral processing techniques that are extended to remove uneven illumination conditions between flight passes, allowing for radiometrically self-consistent data after mosaicking. Through selective spectral and spatial resampling, LaSRC data are used as a radiometric reference target. Advantages of the bootstrap method include the need for minimal site access, no ancillary instrumentation, and automated data processing. Data from two hyperspectral flights over the same managed agricultural and unmanaged rangeland covering approximately 5.8 km2 acquired on June 21, 2014 and June 24, 2015 are presented. Data from a flight over agricultural land collected on June 6, 2016 are compared with concurrently collected ground-based reflectance spectra as a means of validation.
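At its simplest, using a reference product such as LaSRC as a radiometric target amounts to fitting a per-band linear mapping from mosaic values to reference reflectance; a hypothetical least-squares sketch (the values and the linear form are illustrative assumptions, not the paper's full workflow):

```python
import numpy as np

def reference_fit(mosaic_vals, ref_reflectance):
    # Per-band linear mapping (gain, offset) from mosaic values to the
    # reference surface reflectance, fitted by least squares.
    A = np.column_stack([mosaic_vals, np.ones_like(mosaic_vals)])
    (gain, offset), *_ = np.linalg.lstsq(A, ref_reflectance, rcond=None)
    return gain, offset

dn = np.array([1200.0, 2500.0, 4100.0, 5200.0])   # made-up mosaic values
ref = 5e-5 * dn + 0.01                            # made-up reference reflectance
gain, offset = reference_fit(dn, ref)
```

In practice the fit would use spectrally and spatially resampled pixels so that the two data sets are comparable, as the abstract describes.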

Kangyu Zhang;Xiazhen Xu;Bing Han;Lamin R. Mansaray;Qiaoying Guo;Jingfeng Huang; "The Influence of Different Spatial Resolutions on the Retrieval Accuracy of Sea Surface Wind Speed With C-2PO Models Using Full Polarization C-Band SAR," vol.55(9), pp.5015-5025, Sept. 2017. This paper presents a comparison strategy for investigating the influence of spatial resolutions on sea surface wind speed retrieval accuracy with cross-polarized synthetic aperture radar images. First, for wind speeds retrieved from vertical transmitting-vertical receiving (VV)-polarized images, the optimal geophysical C-band model (CMOD) function was selected among four CMOD functions. Second, the most suitable C-band cross-polarized ocean (C-2PO) model was selected between two C-2POs for the VH-polarized image data set. Then, the VH-wind speeds retrieved by the selected C-2PO were compared with the VV-polarized sea surface wind speeds retrieved using the optimal CMOD, which served as a reference, at different spatial resolutions. Results show that the VH-polarized wind speed retrieval accuracy increases rapidly as the spatial resolution decreases from 100 to 1000 m, with a drop in root-mean-square error of 42%. However, the improvement in wind speed retrieval accuracy levels off as the spatial resolution decreases further from 1000 to 5000 m. This demonstrates that a pixel spacing of 1 km may be an appropriate compromise between spatial resolution and wind speed retrieval accuracy for cross-polarized images obtained from the RADARSAT-2 fine quad-polarization mode.

Feng Xu;Qian Song;Ya-Qiu Jin; "Polarimetric SAR Image Factorization," vol.55(9), pp.5026-5041, Sept. 2017. This paper reformulates the problem of polarimetric incoherent target decomposition as a general image factorization which aims to simultaneously estimate a dictionary of meaningful atom scatterers and their corresponding spatial distribution maps. Both model-based and eigenanalysis-based decompositions can be seen as special cases of image factorization under specific constraints. The inverse problem of image factorization can be converted to an equivalent nonnegative matrix factorization (NMF) problem via redundant coding. It enables a wide range of NMF algorithms with various regularizations to be directly applicable to polarimetric image analysis. The advantage of the proposed image factorization is demonstrated on both synthesized and real data. It also shows that extended applications such as speckle reduction and classification can benefit from the proposed image factorization.
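The NMF equivalence means off-the-shelf update rules become applicable to the factorization; the classic Lee-Seung multiplicative updates are shown below as a generic sketch (a toy nonnegative matrix, not the paper's redundant-coding formulation for polarimetric data):

```python
import numpy as np

def nmf(V, r, iters=500, seed=0, eps=1e-9):
    # Lee-Seung multiplicative updates for V ~= W H with nonnegative factors.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update distribution maps
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update dictionary atoms
    return W, H

# Rank-2 nonnegative test matrix (third row = sum of the first two).
V = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 3.0]])
W, H = nmf(V, 2)
```

In the paper's setting, the columns of W would play the role of atom scatterers and the rows of H their spatial distribution maps, with additional regularizations as needed.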

Wan Wu;Xu Liu;Daniel K. Zhou;Allen M. Larar;Qiguang Yang;Susan H. Kizer;Quanhua Liu; "The Application of PCRTM Physical Retrieval Methodology for IASI Cloudy Scene Analysis," vol.55(9), pp.5042-5056, Sept. 2017. This paper applies a physical inversion approach to retrieve geophysical properties from the single instrumental field-of-view (FOV) spectral radiances measured by the Infrared Atmospheric Sounding Interferometer (IASI) under all-sky conditions. We demonstrate the use of a principal-component-based radiative transfer model (PCRTM) and a physical inversion methodology to simultaneously retrieve cloud radiative and microphysical properties along with atmospheric thermodynamic parameters. By using a fast parameterization scheme, the PCRTM can include the cloud scattering properties simulation in radiative transfer calculations without incurring much more computational cost. The computational speed achieved for a single FOV forward simulation under cloudy skies is similar to that normally achieved for clear skies. The retrieval algorithm introduced herein adopts a novel cloud phase determination scheme, to stabilize and/or constrain retrieval iterations, based on characteristics of the reflectance and transmittance of ice and water clouds. A modified Gaussian-Newton minimization technique is employed in the iterative inversion process in order to overcome a highly nonlinear cost function introduced by the cloud parameters. We carry out a rigorous error analysis for the retrieval of temperature, moisture, ozone (O3), and carbon monoxide (CO) from IASI measurements under cloudy-sky conditions. Our algorithm is applied to real IASI observations. Retrieval results are validated using European Center for Medium-Range Weather Forecasting data and collocated Lidar/Radar measurements, and the dependence of retrieval accuracy on cloud optical depth is illustrated.

Hua Zhang;Qunming Wang;Wenzhong Shi;Ming Hao; "A Novel Adaptive Fuzzy Local Information $C$ -Means Clustering Algorithm for Remotely Sensed Imagery Classification," vol.55(9), pp.5057-5068, Sept. 2017. This paper presents a novel adaptive fuzzy local information c-means (ADFLICM) clustering approach for remotely sensed imagery classification by incorporating the local spatial and gray level information constraints. The ADFLICM approach can enhance the conventional fuzzy c-means algorithm by producing homogeneous segmentation and reducing the edge blurring artifact simultaneously. The major contribution of ADFLICM is the use of a new fuzzy local similarity measure based on a pixel spatial attraction model, which adaptively determines the weighting factors for neighboring pixel effects without any experimentally set parameters. The weighting factor for each neighborhood is fully adaptive to the image content, and the balance between insensitiveness to noise and reduction of the edge blurring artifact to preserve image details is automatically achieved by using the new fuzzy local similarity measure. Four different types of images were used in the experiments to examine the performance of ADFLICM. The experimental results indicate that ADFLICM produces greater accuracy than the other four methods and hence provides an effective clustering algorithm for classification of remotely sensed imagery.
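The baseline that ADFLICM extends is plain fuzzy c-means, whose two alternating updates can be sketched compactly; no spatial term is included here, since the adaptive local similarity measure is the paper's contribution and is not reproduced:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    # Standard fuzzy c-means: alternate membership and center updates.
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(0)                               # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))             # inverse-distance memberships
        U /= U.sum(0)
    return centers, U

# Two well-separated synthetic clusters of "pixels".
X = np.vstack([np.zeros((20, 2)), np.full((20, 2), 10.0)])
centers, U = fcm(X, 2)
```

ADFLICM modifies the membership update by adding an adaptively weighted local-neighborhood term, which this sketch omits.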

Simon Zwieback;Scott Hensley;Irena Hajnsek; "Soil Moisture Estimation Using Differential Radar Interferometry: Toward Separating Soil Moisture and Displacements," vol.55(9), pp.5069-5083, Sept. 2017. Differential interferometric synthetic aperture radar (DInSAR) measurements are sensitive to displacements, but also to soil moisture m_v changes. Here, we analyze whether soil moisture can be estimated from three DInSAR observables without making any assumptions about its complex spatiotemporal dynamics, with the goal of removing its contribution from the displacement estimates. We find that the referenced DInSAR phase can be a suitable means to estimate m_v time series up to an overall offset, as indicated by correlations with in situ measurements of 0.75-0.90 in two campaigns. However, the phase can only be referenced when no displacements (and atmospheric delays) occur or when they can be estimated reliably. We study the separability of displacements and m_v using two additional DInSAR observables (closure phase and coherence magnitude) that are sensitive to m_v but insensitive to displacements. However, our analyses show that neither contains enough information for this purpose, i.e., it is not possible to estimate m_v uniquely. The soil moisture correction of the displacement estimates is hence ambiguous too. Their applicability is furthermore limited by their proneness to model misspecifications and decorrelation. Consequently, the separation of soil moisture changes and displacements using DInSAR observations alone is difficult in practice, and, as with mitigating tropospheric errors, additional data (e.g., external m_v estimates) or assumptions (e.g., spatiotemporal patterns) are required when the m_v effects on the displacement estimates are comparable to the magnitude of the movements. This will be critical when soil moisture changes are correlated with the actual displacements.

Zhongbiao Chen;Yijun He;Biao Zhang; "An Automatic Algorithm to Retrieve Wave Height From X-Band Marine Radar Image Sequence," vol.55(9), pp.5084-5092, Sept. 2017. A new method is proposed to retrieve wave height from an X-band marine radar image sequence, without external measurements for reference. The X-band marine radar image sequence is first decomposed by empirical orthogonal function (EOF), and then the sea surface height profile is reconstructed and scaled from the first EOF mode. The radial profiles that are close to the peak wave direction are used to extract the zero-crossing wave periods and relative wave heights. The spectral width parameter is deduced from the histogram of a dimensionless wave period. Based on a joint probability distribution function (pdf) of a dimensionless wave period and wave height, the theoretical pdf of the wave height is derived. A shape parameter is defined for the theoretical pdf and the histogram of the relative wave heights, and then the calibration coefficient is estimated. The method is validated by comparing the significant wave heights retrieved from two different X-band marine radar systems with those measured by buoy; the correlation coefficient, the root-mean-square error, and the bias between them are 0.78, 0.51 m, and -0.19 m for HH polarization, while they are 0.77, 0.51 m, and 0.19 m for VV polarization, respectively. The sources of error of the method are discussed.
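EOF decomposition of an image sequence is, computationally, an SVD of the time-by-pixels data matrix; a minimal sketch of extracting the modes from a synthetic separable wave (not radar data):

```python
import numpy as np

def eof_modes(frames):
    # frames: (T, H, W) image sequence -> EOF spatial modes, temporal PCs,
    # and singular values, via SVD of the centered time-by-pixels matrix.
    T = frames.shape[0]
    X = frames.reshape(T, -1)
    X = X - X.mean(0)                      # remove the temporal mean per pixel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U * s                            # temporal expansion coefficients
    modes = Vt.reshape(-1, *frames.shape[1:])
    return modes, pcs, s

# Rank-1 synthetic sequence: one spatial pattern oscillating in time.
t = np.linspace(0, 4 * np.pi, 32)
y = np.linspace(0, 2 * np.pi, 16)
frames = np.sin(t)[:, None, None] * np.sin(y)[None, :, None] * np.ones((1, 1, 16))
modes, pcs, s = eof_modes(frames)
```

For this rank-1 input the first singular value carries essentially all the energy, which mirrors the paper's use of the first EOF mode to reconstruct the sea surface height profile.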

Bai Xue;Chunyan Yu;Yulei Wang;Meiping Song;Sen Li;Lin Wang;Hsian-Min Chen;Chein-I Chang; "A Subpixel Target Detection Approach to Hyperspectral Image Classification," vol.55(9), pp.5093-5114, Sept. 2017. Hyperspectral image classification faces various levels of difficulty due to the use of different types of hyperspectral image data. Recently, spectral-spatial approaches have been developed by jointly taking care of spectral and spatial information. This paper presents a completely different approach from a subpixel target detection viewpoint. It implements a four-stage process: a preprocessing stage, which uses band selection (BS) and nonlinear band expansion, referred to as BS-then-nonlinear expansion (BSNE); a detection stage, which implements constrained energy minimization (CEM) to produce subpixel target maps; an iterative stage, which develops an iterative CEM (ICEM) by applying Gaussian filters to capture spatial information and then feeding the Gaussian-filtered CEM-detection maps back to the BSNE band images to reprocess CEM in an iterative manner; and a final stage, in which Otsu's method is applied to convert the ICEM-detected real-valued maps to discrete values for classification. The entire process is called BSNE-ICEM. Experimental results demonstrate that BSNE-ICEM has advantages over support vector machine-based approaches in many aspects, such as easy implementation, fewer parameters to be used, and better false classification and precision rates.
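The CEM filter at the heart of the detection stage has a closed form: w = R^-1 d / (d^T R^-1 d), where R is the sample correlation matrix and d the target signature. A self-contained sketch on synthetic pixels (Gaussian background plus one embedded target; all values are made up):

```python
import numpy as np

def cem(X, d):
    # Constrained energy minimization: minimize average output energy
    # subject to d^T w = 1, then apply the filter to every pixel.
    R = (X.T @ X) / X.shape[0]            # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)
    return X @ w                          # one detection score per pixel

rng = np.random.default_rng(1)
bg = rng.normal(0.5, 0.05, (200, 6))      # synthetic background pixels
d = np.array([1.0, 0.2, 0.2, 0.2, 0.2, 1.0])  # assumed target signature
X = np.vstack([bg, d[None, :]])           # embed one pure target pixel
scores = cem(X, d)
```

By construction the pure target pixel scores exactly 1, while the constraint-plus-energy-minimization suppresses the background; ICEM would now Gaussian-filter this score map and feed it back for another CEM pass.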

Maryam Rahnemoonfar;Geoffrey Charles Fox;Masoud Yari;John Paden; "Automatic Ice Surface and Bottom Boundaries Estimation in Radar Imagery Based on Level-Set Approach," vol.55(9), pp.5115-5122, Sept. 2017. Accelerated loss of ice from Greenland and Antarctica has been observed in recent decades. The melting of polar ice sheets and mountain glaciers has considerable influence on sea level rise in a changing climate. Ice thickness is a key factor in making predictions about the future of massive ice reservoirs. The ice thickness can be estimated by calculating the exact location of the ice surface and subglacial topography beneath the ice in radar imagery. Identifying the locations of ice surface and bottom is typically performed manually, which is a very time-consuming procedure. Here, we propose an approach that automatically detects ice surface and bottom boundaries using distance-regularized level-set evolution. In this approach, the complex topology of ice surface and bottom boundary layers can be detected simultaneously by evolving an initial curve in the radar imagery. Using a distance-regularized term, the regularity of the level-set function is intrinsically maintained, which solves the reinitialization issues arising from conventional level-set approaches. The results are evaluated on a large data set of airborne radar imagery collected during a NASA IceBridge mission over Antarctica and show promising results with respect to manually picked data.

Robert M. Beauchamp;V. Chandrasekar; "Characterization and Modeling of the Wind Turbine Radar Signature Using Turbine State Telemetry," vol.55(9), pp.5134-5147, Sept. 2017. Wind turbine observations and characterization efforts have treated the wind turbine as a noncooperative target. Similarly, suppression of the turbine's radar signature has been considered without the aid of state information from the wind turbine under observation. In this paper, X-band radar observations of a utility-scale wind turbine, with detailed turbine state telemetry, are investigated. From scattering theory, the wind turbine's physical structure has a deterministic radar cross section for a given observation geometry. Using the telemetry, the variation in the turbine's signature is considered over a range of operating states. The deterministic nature of a turbine's signature is demonstrated from radar observations, and a model is developed to isolate it. The turbine's radar signature, as it relates to changes in the operating state, is discussed with the intent of enabling future suppression techniques.

Xiaoqiang Lu;Xiangtao Zheng;Yuan Yuan; "Remote Sensing Scene Classification by Unsupervised Representation Learning," vol.55(9), pp.5148-5157, Sept. 2017. With the rapid development of the satellite sensor technology, high spatial resolution remote sensing (HSR) data have attracted extensive attention in military and civilian applications. In order to make full use of these data, remote sensing scene classification becomes an important and necessary precedent task. In this paper, an unsupervised representation learning method is proposed to investigate deconvolution networks for remote sensing scene classification. First, a shallow weighted deconvolution network is utilized to learn a set of feature maps and filters for each image by minimizing the reconstruction error between the input image and the convolution result. The learned feature maps can capture the abundant edge and texture information of high spatial resolution images, which is definitely important for remote sensing images. After that, the spatial pyramid model (SPM) is used to aggregate features at different scales to maintain the spatial layout of HSR image scene. A discriminative representation for HSR image is obtained by combining the proposed weighted deconvolution model and SPM. Finally, the representation vector is input into a support vector machine to finish classification. We apply our method on two challenging HSR image data sets: the UCMerced data set with 21 scene categories and the Sydney data set with seven land-use categories. The experimental results show that the proposed method outperforms most state-of-the-art methods, which demonstrates its effectiveness.

Hanyu Shi;Zhiqiang Xiao;Shunlin Liang;Han Ma; "A Method for Consistent Estimation of Multiple Land Surface Parameters From MODIS Top-of-Atmosphere Time Series Data," vol.55(9), pp.5158-5173, Sept. 2017. Most methods for generating global land surface products from satellite data are parameter specific and do not use multiple temporal observations, which often results in spatial and temporal discontinuity and physical inconsistency among different products. This paper proposes a data assimilation (DA) scheme to simultaneously estimate five land surface parameters from Moderate Resolution Imaging Spectroradiometer (MODIS) top-of-atmosphere (TOA) time series reflectance data under clear and cloudy conditions. A coupled land surface-atmosphere radiative transfer model is developed to simulate TOA reflectance, and an ensemble Kalman filter technique is used to retrieve the most influential surface parameters of the coupled model, such as leaf area index, by combining predictions from dynamic models and the MODIS TOA reflectance data whether under clear or cloudy conditions. Then, the retrieved surface parameters are input to the coupled model to calculate four other parameters: 1) land surface reflectance; 2) incident photosynthetically active radiation (PAR); 3) land surface albedo; and 4) the fraction of absorbed PAR (FAPAR). The estimated parameters are compared with those of the corresponding MODIS, the Global LAnd Surface Satellite, and the Geoland2/BioPar version 1 (GEOV1) products. Validation of the estimated parameters against ground measurements from several sites with different vegetation types demonstrates that this method can estimate temporally complete land surface parameter profiles from MODIS TOA time series reflectance data, with accuracy comparable to that of existing satellite products over the selected sites. 
The retrieved leaf area index profiles are smoother than those of the existing satellite products, and unlike the MOD09GA product, the retrieved surface reflectance values do not have the high peak values influenced by clouds. The use of the coupled land surface-atmosphere model and the DA technique ensures physical connections between the land surface parameters and makes it possible to calculate radiation-related parameters for clear and cloudy atmospheric conditions, which is an improvement for FAPAR retrieval compared with the MODIS and GEOV1 products. The retrieved FAPAR and PAR values reveal significant differences between clear and cloudy atmospheric conditions.
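The ensemble Kalman filter update at the core of the DA scheme can be illustrated generically; the toy one-state example below (the state, observation operator, and variances are invented for illustration) shows how a prior ensemble is pulled toward an observation:

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """One stochastic EnKF analysis step for a linear observation
    operator H; each member assimilates a perturbed observation."""
    n_ens = ensemble.shape[1]
    Hx = H @ ensemble
    X = ensemble - ensemble.mean(1, keepdims=True)
    Y = Hx - Hx.mean(1, keepdims=True)
    Pxy = X @ Y.T / (n_ens - 1)                       # state-obs covariance
    Pyy = Y @ Y.T / (n_ens - 1) + obs_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var),
                                         (len(obs), n_ens))
    return ensemble + K @ (obs_pert - Hx)

rng = np.random.default_rng(0)
prior = rng.normal(2.0, 1.0, (1, 50))                 # LAI-like prior ensemble
post = enkf_update(prior, np.array([3.0]), np.array([[1.0]]), 0.1, rng)
print(post.mean())   # posterior mean sits between the prior mean and the obs
```

In the paper's setting the state would be the coupled-model parameters (e.g., leaf area index) and H the coupled radiative transfer model linearized around them.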

Denis Demchev;Vladimir Volkov;Eduard Kazakov;Pablo F. Alcantarilla;Stein Sandven;Viktoriya Khmeleva; "Sea Ice Drift Tracking From Sequential SAR Images Using Accelerated-KAZE Features," vol.55(9), pp.5174-5184, Sept. 2017. In this paper, we propose a feature-tracking algorithm for sea ice drift retrieval from a pair of sequential satellite synthetic aperture radar (SAR) images. The method is based on feature tracking comprising feature detection, description, and matching steps. The approach exploits the benefits of nonlinear multiscale image representations using accelerated-KAZE (A-KAZE) features, a method that detects and describes image features in an anisotropic scale space. We evaluated several state-of-the-art feature-based algorithms, including A-KAZE, Scale Invariant Feature Transform (SIFT), and a very fast feature extractor that computes binary descriptors known as Oriented FAST and Rotated BRIEF (ORB), on dual-polarized Sentinel-1A C-SAR extra wide swath mode data over the Arctic. The A-KAZE approach outperforms both ORB and SIFT by up to an order of magnitude in ice drift retrieval. The experimental results showed the high relevance of the proposed algorithm for retrieval of ice drift at subkilometre resolution from a pair of SAR images with 100-m pixel size. From this study, we found that feature tracking using nonlinear scale spaces is preferable due to its high robustness to noise with respect to image features compared with other existing feature-tracking alternatives that make use of Gaussian or linear scale spaces.
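Once features are matched between the two acquisitions, converting pixel displacements into drift vectors is straightforward; a minimal sketch, assuming a fixed pixel size and time separation (the matched points here are synthetic, not real A-KAZE output):

```python
import numpy as np

def drift_vectors(pts_t0, pts_t1, pixel_size_m, dt_hours):
    """Turn matched feature positions (pixels) in two scenes into
    drift speed (km/h) and direction (deg, image coordinates)."""
    disp = (pts_t1 - pts_t0) * pixel_size_m            # metres
    speed = np.hypot(disp[:, 0], disp[:, 1]) / 1000.0 / dt_hours
    heading = np.degrees(np.arctan2(disp[:, 1], disp[:, 0]))
    return speed, heading

p0 = np.array([[100.0, 200.0], [150.0, 250.0]])
p1 = np.array([[110.0, 200.0], [160.0, 250.0]])        # 10-pixel shift in x
speed, heading = drift_vectors(p0, p1, pixel_size_m=100.0, dt_hours=24.0)
print(speed[0])   # 1 km over 24 h ≈ 0.0417 km/h
```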

Xiangtao Zheng;Yuan Yuan;Xiaoqiang Lu; "Dimensionality Reduction by Spatial–Spectral Preservation in Selected Bands," vol.55(9), pp.5185-5197, Sept. 2017. Dimensionality reduction (DR) has attracted extensive attention since it provides discriminative information of hyperspectral images (HSI) and reduces the computational burden. Though DR has gained rapid development in recent years, it is difficult to achieve higher classification accuracy while preserving the relevant original information of the spectral bands. To relieve this limitation, in this paper, a different DR framework is proposed to perform feature extraction on the selected bands. The proposed method uses determinantal point process to select the representative bands and to preserve the relevant original information of the spectral bands. The performance of classification is further improved by performing multiple Laplacian eigenmaps (LEs) on the selected bands. Different from the traditional LEs, multiple Laplacian matrices in this paper are defined by encoding spatial-spectral proximity on each band. A common low-dimensional representation is generated to capture the joint manifold structure from multiple Laplacian matrices. Experimental results on three real-world HSIs demonstrate that the proposed framework can lead to a significant advancement in HSI classification compared with the state-of-the-art methods.
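A bare-bones Laplacian eigenmap on a Gaussian affinity graph, the building block the multiple-Laplacian scheme above generalizes (the two-cluster toy data and kernel width are assumptions for illustration):

```python
import numpy as np

def laplacian_eigenmap(X, n_dims=1, sigma=5.0):
    """Embed samples X (n, d) with the eigenvectors of the graph
    Laplacian of a Gaussian affinity matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                 # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:1 + n_dims]              # skip the constant eigenvector

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (5, 3)),   # two well-separated clusters
               rng.normal(5.0, 0.1, (5, 3))])
emb = laplacian_eigenmap(X)
print(emb.shape)   # (10, 1); the sign of the coordinate splits the clusters
```

The framework in the paper builds one such Laplacian per selected band, with spatial-spectral affinities, and seeks a common embedding across them.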

Bahareh Kalantar;Shattri Bin Mansor;Alfian Abdul Halin;Helmi Zulhaidi Mohd Shafri;Mohsen Zand; "Multiple Moving Object Detection From UAV Videos Using Trajectories of Matched Regional Adjacency Graphs," vol.55(9), pp.5198-5213, Sept. 2017. Image registration has been long used as a basis for the detection of moving objects. Registration techniques attempt to discover correspondences between consecutive frame pairs based on image appearances under rigid and affine transformations. However, spatial information is often ignored, and different motions from multiple moving objects cannot be efficiently modeled. Moreover, image registration is not well suited to handle occlusion that can result in potential object misses. This paper proposes a novel approach to address these problems. First, segmented video frames from unmanned aerial vehicle captured video sequences are represented using region adjacency graphs of visual appearance and geometric properties. Correspondence matching (for visible and occluded regions) is then performed between graph sequences by using multigraph matching. After matching, region labeling is achieved by a proposed graph coloring algorithm which assigns a background or foreground label to the respective region. The intuition of the algorithm is that background scene and foreground moving objects exhibit different motion characteristics in a sequence, and hence, their spatial distances are expected to be varying with time. Experiments conducted on several DARPA VIVID video sequences as well as self-captured videos show that the proposed method is robust to unknown transformations, with significant improvements in overall precision and recall compared to existing works.

Shihyan Lee;Gerhard Meister; "MODIS Aqua Optical Throughput Degradation Impact on Relative Spectral Response and Calibration of Ocean Color Products," vol.55(9), pp.5214-5219, Sept. 2017. Since Moderate Resolution Imaging Spectroradiometer Aqua's launch in 2002, the radiometric system gains of the reflective solar bands have been degrading, indicating changes in the system's optical throughput. To estimate the optical throughput degradation, the electronic gain changes were estimated and removed from the measured system gain. The derived optical throughput degradation shows a rate that is much faster in the shorter wavelengths than the longer wavelengths. The wavelength-dependent optical throughput degradation modulated the relative spectral response (RSR) of the bands. In addition, the optical degradation is also scan angle-dependent due to large changes in response versus the scan angle over time. We estimated the modulated RSR as a function of time and scan angles and its impacts on sensor radiometric calibration for the ocean science. Our results show that the calibration bias could be up to 1.8% for band 8 (412 nm) due to its larger out-of-band response. For the other ocean bands, the calibration biases are much smaller with magnitudes at least one order smaller.
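The mechanism can be illustrated with a toy band: a wavelength-dependent throughput loss reweights an RSR that has an out-of-band tail, shifting the band-averaged radiance (all spectra below are invented for illustration and are not MODIS band 8's actual RSR):

```python
import numpy as np

wl = np.linspace(380.0, 900.0, 521)               # nm, 1-nm grid
rsr = np.exp(-0.5 * ((wl - 412.0) / 8.0) ** 2)    # in-band response
rsr = rsr + 0.002 * (wl > 600.0)                  # small out-of-band tail
deg = 1.0 - 0.3 * (900.0 - wl) / (900.0 - 380.0)  # blue degrades fastest
radiance = 2.0 - 0.001 * (wl - 380.0)             # illustrative scene spectrum

def band_avg(response, L):
    # band-averaged radiance on a uniform wavelength grid
    return (response * L).sum() / response.sum()

bias_pct = 100.0 * (band_avg(rsr * deg, radiance)
                    / band_avg(rsr, radiance) - 1.0)
print(bias_pct)   # small negative bias from the reweighted red tail
```

Because the degradation is weaker at long wavelengths, the out-of-band tail gains relative weight, which is the effect the paper quantifies for the ocean bands.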

Davide Comite;Fauzia Ahmad;DaHan Liao;Traian Dogaru;Moeness G. Amin; "Multiview Imaging for Low-Signature Target Detection in Rough-Surface Clutter Environment," vol.55(9), pp.5220-5229, Sept. 2017. Forward-looking ground-penetrating radar (FL-GPR) permits standoff sensing of shallow in-road threats. A major challenge facing this radar technology is the high rate of false alarms stemming from the vulnerability of the target responses to interference scattering arising from interface roughness and subsurface clutter. In this paper, we present a multiview approach for target detection in FL-GPR. Various images corresponding to the different views are generated using a tomographic algorithm, which considers the near-field nature of the sensing problem. Furthermore, for reducing clutter and maintaining high cross-range resolution over the imaged area, each image is computed in a segmentwise fashion using coherent integration over a suitable set of measurements from multiple platform positions. We employ two fusion approaches based on likelihood ratio test detectors to combine the multiview images for enhanced target detection. The superior performance of the multiview approach over single-view imaging is demonstrated using electromagnetic modeling data.

Jun Geng;Jing M. Chen;Weiliang Fan;Lili Tu;Qingjiu Tian;Ranran Yang;Yanjun Yang;Lei Wang;Chunguang Lv;Shengbiao Wu; "GOFP: A Geometric-Optical Model for Forest Plantations," vol.55(9), pp.5230-5241, Sept. 2017. The geometric-optical model for forest plantations (GOFP) is a stand-level geometric-optical (GO) model developed in this study based on a four-scale GO model, the Geometric-Optical Model for Sloping Terrains-II (GOST2), which simulates the bidirectional reflectance distribution function (BRDF) for natural forest canopies. In most previous GO models, tree distributions are often assumed to follow the Poisson or Neyman model in a forest; therefore, these models are suitable for simulating the BRDF of natural forest canopies. However, in forest plantations, tree distributions have been shown to follow the hypergeometric model rather than the Poisson or Neyman model at the stand level. GOFP, in which the tree distributions are described using the hypergeometric model, is proposed to simulate the bidirectional reflectance factor (BRF) of forest plantations at the stand level. The area ratios of the four scene components (sunlit foliage, sunlit ground, shaded foliage, and shaded ground) of GOFP compare well with those simulated by a 3-D canopy visualization technique. A comparison is also made against discrete anisotropic radiative transfer, showing that GOFP has the ability to simulate the BRF of forest plantations. Another comparison is made against operational land imager and Moderate Resolution Imaging Spectroradiometer surface.

Yongkang Li;Tong Wang;Baochang Liu;Lei Yang;Guoan Bi; "Ground Moving Target Imaging and Motion Parameter Estimation With Airborne Dual-Channel CSSAR," vol.55(9), pp.5242-5253, Sept. 2017. This paper deals with the issue of ground moving target imaging and motion parameter estimation with an airborne dual-channel circular stripmap synthetic aperture radar (CSSAR) system. Although several methods of ground moving target motion parameter estimation have been proposed for the conventional airborne linear stripmap SAR, they cannot be applied to airborne CSSAR because the range history of a ground moving target for airborne CSSAR is different than that for airborne linear stripmap SAR. In this paper, the moving target's range history for airborne dual-channel CSSAR and the target signal model after the displaced phase center antenna processing are derived, and a new ground moving target imaging and motion parameter estimation algorithm is developed. In this algorithm, the estimation of baseband Doppler centroid and its compensation are first performed. Then focusing is implemented in the 2-D frequency domain via phase multiplication, and the target is focused in the SAR image without azimuth displacement due to the compensation of the Doppler shift caused by its motion. Finally, the target's motion parameters are estimated with its Doppler parameters and its position in the SAR image. Numerical simulations are conducted to validate the derived range history and the performance of the proposed algorithm.

John Brevard Sigman;Benjamin E. Barrowes;Kevin O’Neill;Yinlin Wang;Janet E. Simms;Hollis H. Bennett;Donald E. Yule;Fridon Shubitidze; "High-Frequency Electromagnetic Induction Sensing of Nonmetallic Materials," vol.55(9), pp.5254-5263, Sept. 2017. We introduce a frequency-domain electromagnetic induction (EMI) instrument for detection and classification of objects with either high (σ > 10⁵ S/m) or intermediate (1 < σ < 10⁵ S/m) electrical conductivity. While high-conductivity metallic targets exhibit a quadrature peak response for frequencies in the traditional EMI regime under 100 kHz, the response of intermediate conductivity objects manifests at higher frequencies, between 100 kHz and 15 MHz. Materials such as carbon fiber or conducting salt solutions exhibit conductivities in this intermediate range and are undetectable by traditional low-frequency EMI instruments. To detect these intermediate conductivity targets, we developed a high-frequency EMI (HFEMI) instrument with a frequency range extended to 15 MHz. The HFEMI instrument requires novel hardware considerations, coil design, and data processing schemes. Most importantly, the wire lengths of the transmit and receive coils are shorter than those of traditional low-frequency EMI sensors, so that the phase on the transmit and receive coils is nearly constant. We present the hardware and software aspects of the HFEMI instrument along with preliminary data, demonstrating its ability to detect intermediate conductivity objects.
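The frequency scaling can be seen from a single-pole induction model (an assumption for illustration, not the paper's full physics): the quadrature response of R = jωτ/(1 + jωτ) peaks at ωτ = 1, so lower-conductivity targets with smaller time constants peak at higher frequencies:

```python
import numpy as np

def quadrature_peak_hz(tau):
    """Frequency (Hz) of the quadrature maximum of the single-pole
    response R = j*w*tau / (1 + j*w*tau), which peaks at w*tau = 1."""
    f = np.logspace(2, 8, 2000)                   # 100 Hz .. 100 MHz
    w = 2.0 * np.pi * f
    quad = np.imag(1j * w * tau / (1.0 + 1j * w * tau))
    return f[np.argmax(quad)]

print(quadrature_peak_hz(1e-3))   # metallic-like: well below 100 kHz
print(quadrature_peak_hz(1e-7))   # intermediate-conductivity-like: MHz range
```

Analytically the peak sits at f = 1/(2πτ), which is why extending the instrument to 15 MHz opens up the intermediate-conductivity regime.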

Tao Chen;Shijian Lu; "Subcategory-Aware Feature Selection and SVM Optimization for Automatic Aerial Image-Based Oil Spill Inspection," vol.55(9), pp.5264-5273, Sept. 2017. Oil spill inspection is critical to the marine and coastal ecosystems, and has been widely studied by various remote sensing technologies, such as synthetic aperture radar and hyperspectral imaging. To improve the temporal resolution and the inspection flexibility, we propose a novel aerial image-based system that can detect oil spills in a timely manner from images captured using onboard optical cameras installed in unmanned aerial vehicles or airplanes. In particular, a subcategory-aware feature selection (FS) and support vector machine (SVM) joint optimization technique is proposed to learn a discriminative model that can tell the existence of oil spills within an optical image of the marine surface. A set of color-based features is first extracted and concatenated together to characterize the oil spill incidence in an image, where a new color autocorrelogram is designed, which can better describe each color's spatial distribution in an image. Furthermore, a subcategory-aware joint FS and SVM optimization technique is designed, which is capable of generating the optimal feature subset and SVM component models. Experiments on a set of real-world marine surface images show that the proposed technique outperforms the state-of-the-art techniques and achieves promising results for aerial image-based oil spill inspection.

Vanika Singhal;Hemant K. Aggarwal;Snigdha Tariyal;Angshul Majumdar; "Discriminative Robust Deep Dictionary Learning for Hyperspectral Image Classification," vol.55(9), pp.5274-5283, Sept. 2017. This paper proposes a new framework for deep learning that has been particularly tailored for hyperspectral image classification. We learn multiple levels of dictionaries in a robust fashion. The last layer is discriminative and learns a linear classifier. The training proceeds greedily; at a time, a single level of dictionary is learned and the coefficients are used to train the next level. The coefficients from the final level are used for classification. Robustness is incorporated by minimizing the absolute deviations instead of the more popular Euclidean norm. The inbuilt robustness helps combat mixed noise (Gaussian and sparse) present in hyperspectral images. Results show that our proposed techniques outperform all other deep learning methods: deep belief network, stacked autoencoder, and convolutional neural network. The experiments have been carried out on both benchmark deep learning data sets (MNIST, CIFAR-10, and Street View House Numbers) as well as on real hyperspectral imaging data sets.

Xiaohua Xu;David T. Sandwell;Ekaterina Tymofyeyeva;Alejandro González-Ortega;Xiaopeng Tong; "Tectonic and Anthropogenic Deformation at the Cerro Prieto Geothermal Step-Over Revealed by Sentinel-1A InSAR," vol.55(9), pp.5284-5292, Sept. 2017. The Cerro Prieto geothermal field (CPGF) lies at the step-over between the imperial and the Cerro Prieto faults in northern Baja California, Mexico. While tectonically this is the most active section of the southern San Andreas Fault system, the spatial and temporal deformation in the area is poorly resolved by the sparse global positioning system (GPS) network coverage. Moreover, interferograms from satellite observations spanning more than a few months are decorrelated due to the extensive agricultural activity in this region. Here we investigate the use of frequent, short temporal baseline interferograms offered by the new Sentinel-1A satellite to recover two components of deformation time series across these faults. Following previous studies, we developed a purely geometric approach for image alignment that achieves better than 1/200 pixel alignment needed for accurate phase recovery. We construct interferometric synthetic aperture radar time series using a coherence-based small baseline subset method with atmospheric corrections by means of common-point stacking. We did not apply enhanced spectral diversity because the burst discontinuities are generally small (<;1.4 mm) and can be effectively captured during the atmospheric corrections. With these algorithms, the subsidence at CPGF is clearly resolved. The maximum subsidence rate of 160 mm/yr, due to extraction of geothermal fluids and heat, dominates the ~40 mm/yr deformation across the proximal ends of the imperial, the Cerro Prieto, and the indiviso faults.

Anirban Santara;Kaustubh Mani;Pranoot Hatwar;Ankit Singh;Ankur Garg;Kirti Padia;Pabitra Mitra; "BASS Net: Band-Adaptive Spectral-Spatial Feature Learning Neural Network for Hyperspectral Image Classification," vol.55(9), pp.5293-5301, Sept. 2017. Deep learning based land cover classification algorithms have recently been proposed in the literature. In hyperspectral images (HSIs), they face the challenges of large dimensionality, spatial variability of spectral signatures, and scarcity of labeled data. In this paper, we propose an end-to-end deep learning architecture that extracts band specific spectral-spatial features and performs land cover classification. The architecture has fewer independent connection weights and thus requires fewer training samples. The method is found to outperform the highest reported accuracies on popular HSI data sets.

Hong Li;Yalong Song;C. L. Philip Chen; "Hyperspectral Image Classification Based on Multiscale Spatial Information Fusion," vol.55(9), pp.5302-5312, Sept. 2017. In hyperspectral image (HSI) classification, the combination of spectral information and spatial information can be applied to enhance the classification performance. In order to better characterize the variability of spatial features at different scales, we propose a new framework called multiscale spatial information fusion (MSIF). The MSIF consists of three parts: multiscale spatial information extraction, local 1-D embedding (L1-DE), and information fusion. First, spatial filter with different scales is used to extract multiscale spatial information. Then, L1-DE is utilized to map the spectral information and spatial information at different scales into 1-D space, respectively. Finally, the obtained 1-D coordinates are used to label the unlabeled spatial neighbors of the labeled samples. The proposed MSIF captures intrinsic spatial information contained in homogeneous regions of different sizes by multiscale strategy. Since the spatial information at different scales is processed separately in MSIF, the variance of spatial information at different scales can be reflected. The use of L1-DE reduces computational cost by mapping high-dimensional samples into 1-D space. In MSIF, the L1-DE and information fusion are used iteratively, and the iterative process terminates in a finite number of steps. The algorithm analysis demonstrates the effectiveness of the proposed method. The experimental results on four widely used HSI data sets show that the proposed method achieved higher classification accuracies compared with other state-of-the-art spectral-spatial classification methods.

Jinshan Cao;Xiuxiao Yuan;Jianhong Fu;Jianya Gong; "Precise Sensor Orientation of High-Resolution Satellite Imagery With the Strip Constraint," vol.55(9), pp.5313-5323, Sept. 2017. To achieve precise sensor orientation of high-resolution satellite imagery (HRSI), ground control points (GCPs) or height models are necessary to remove biases in orientation parameters. However, measuring GCPs is costly, laborious, and time consuming. We cannot even acquire well-defined GCPs in some areas. In this paper, a strip constraint model is established according to the geometric invariance that the biases of image points remain the same in dividing a strip image into standard images. Based on the rational function model and the strip constraint model, a feasible sensor orientation approach for HRSI with the strip constraint is presented. Through the use of the strip constraint, the bias compensation parameters of each standard image in the strip can be solved simultaneously with sparse GCPs. This approach remains effective even when the intermediate standard images in the strip are unavailable. Experimental results of the three ZiYuan-3 data sets show that two GCPs in the first image and two GCPs in the last image are sufficient for the sensor orientation of all the standard images in the strip. An orientation accuracy that is better than 1.1 pixels can be achieved in each standard image. Moreover, the inconsistent errors of tie points between adjacent standard images can also be reduced to less than 0.1 pixel. This result can guarantee that the generated complete digital orthophoto map of the whole strip is geometrically seamless.
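The image-space bias compensation commonly used with the rational function model can be sketched as an affine fit to GCP residuals (the affine form and the synthetic strip below are illustrative assumptions, not the paper's exact strip-constrained formulation):

```python
import numpy as np

def fit_bias(image_xy, true_xy):
    """Least-squares affine bias model [dx, dy] = [x, y, 1] @ A
    fitted to GCP residuals."""
    G = np.hstack([image_xy, np.ones((len(image_xy), 1))])
    A, *_ = np.linalg.lstsq(G, true_xy - image_xy, rcond=None)
    return A                                       # shape (3, 2)

def apply_bias(image_xy, A):
    G = np.hstack([image_xy, np.ones((len(image_xy), 1))])
    return image_xy + G @ A

# Synthetic strip: constant shift plus a small drift along x.
xy = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
truth = xy + np.array([2.0, -1.5]) + np.outer(xy[:, 0], [1e-4, 0.0])
A = fit_bias(xy, truth)
residual = np.abs(apply_bias(xy, A) - truth).max()
print(residual)   # essentially zero: the affine model absorbs the bias
```

The strip constraint in the paper ties the bias parameters of all standard images in one strip together, which is why four GCPs at the strip ends suffice.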

Donghai Zheng;Xin Wang;Rogier van der Velde;Yijian Zeng;Jun Wen;Zuoliang Wang;Mike Schwank;Paolo Ferrazzoli;Zhongbo Su; "L-Band Microwave Emission of Soil Freeze–Thaw Process in the Third Pole Environment," vol.55(9), pp.5324-5338, Sept. 2017. Soil freeze-thaw transition monitoring is essential for quantifying climate change and hydrologic dynamics over cold regions, for instance, the Third Pole. We investigate the L-band (1.4 GHz) microwave emission characteristics of the soil freeze-thaw cycle via analysis of tower-based brightness temperature (TBp) measurements in combination with simulations performed by a model of soil microwave emission considering vertical variations of permittivity and temperature. Vegetation effects are modeled using the Tor Vergata discrete emission model. The ELBARA-III radiometer is installed in a seasonally frozen Tibetan grassland site to measure diurnal cycles of L-band TBp every 30 min, and supporting micrometeorological as well as volumetric soil moisture (θ) and temperature profile measurements are also conducted. Soil freezing/thawing phases are clearly distinguished by using TBp measurements at two polarizations, and further analyses show that: 1) the four-phase dielectric mixing model is appropriate for estimating permittivity of frozen soil; 2) the soil effective temperature is well comparable with the temperature at 25 cm depth when soil liquid water is freezing, while it is closer to the one measured at 5 cm when soil ice is thawing; and 3) the impact on TBp caused by diurnal changes of ground permittivity dominates the impact of changing ground temperature. Moreover, the simulations performed with the integrated Tor Vergata emission model and Noah land surface model indicate that the TBp signatures of the diurnal soil freeze-thaw cycle are more sensitive to the liquid water content of the soil surface layer than the in situ measurements taken at 5 cm depth.

Jinzheng Peng;Sidharth Misra;Jeffrey R. Piepmeier;Emmanuel P. Dinnat;Derek Hudson;David M. Le Vine;Giovanni De Amici;Priscilla N. Mohammed;Rajat Bindlish;Simon H. Yueh;Thomas Meissner;Thomas J. Jackson; "Soil Moisture Active/Passive L-Band Microwave Radiometer Postlaunch Calibration," vol.55(9), pp.5339-5354, Sept. 2017. The Soil Moisture Active/Passive (SMAP) microwave radiometer is a fully polarimetric L-band radiometer flown on the SMAP satellite in a 6 a.m./6 p.m. sun-synchronous orbit at 685-km altitude. Since April 2015, the radiometer has been under calibration and validation to assess the quality of the radiometer L1B data product. Calibration methods, including the SMAP L1B TA2TB [from antenna temperature (TA) to the Earth's surface brightness temperature (TB)] algorithm and TA forward models, are outlined, and validation approaches for calibration stability/quality are described in this paper, including future work. Results show that the current radiometer L1B data product (version 3) satisfies its requirements (uncertainty <1.3 K, calibration drift <0.4 K/month, and geolocation uncertainty <4 km), although there are biases in TA over cold sky and in TB compared with the Soil Moisture and Ocean Salinity TB v620 data products.

Ruben Carrasco;Jochen Horstmann;Jörg Seemann; "Significant Wave Height Measured by Coherent X-Band Radar," vol.55(9), pp.5355-5365, Sept. 2017. Significant wave height is one of the most important parameters for characterizing ocean waves and essential for coastal protection, shipping, as well as offshore industry operations. Within this paper, a robust method is introduced for retrieving significant wave heights from Doppler speed measurements acquired with a coherent-on-receive marine radar. The Doppler velocity is caused by the surface scattering in the line of sight of the radar. To a large extent, its periodic component is induced by the orbital motions associated with surface waves. The proposed methodology is based on linear wave theory, accounts for projection effects caused by the fixed antenna look direction, and was applied to a coherent-on-receive radar operating at X-band with vertical polarization in transmit and receive. To show the overall performance of the method, a data set consisting of approximately 100 days of radar measurements was analyzed and used to retrieve significant wave heights. Comparisons to wave measurements collected by a wave rider buoy resulted in a root-mean-square (rms) error of 0.21 m and a bias of 0 m without any calibration parameters needed. To further improve the accuracy of significant wave height, a calibration factor needs to be accounted for, which improves the rms error to 0.15 m with a negligible bias of -0.01 m.
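The linear-wave-theory link between Doppler velocity and wave height can be sketched for the deep-water, single-component case: the orbital speed amplitude is u = aω, so the elevation spectrum is the velocity spectrum divided by ω², and Hs = 4√m0 (a synthetic signal, not the paper's full projection-corrected method):

```python
import numpy as np

t = np.arange(0.0, 600.0, 0.5)              # 10 min sampled at 2 Hz
a, w = 0.5, 2.0 * np.pi / 8.0               # 0.5-m amplitude, 8-s period
u = a * w * np.cos(w * t)                   # orbital velocity signal

# One-sided velocity power spectrum, then S_eta = S_u / w^2 and
# Hs = 4 * sqrt(m0); the w -> 0 bins are excluded to avoid division blow-up.
spec = 2.0 * np.abs(np.fft.rfft(u)) ** 2 / len(t) ** 2
omega = 2.0 * np.pi * np.fft.rfftfreq(len(t), d=0.5)
keep = omega > 0.1
m0 = (spec[keep] / omega[keep] ** 2).sum()
hs = 4.0 * np.sqrt(m0)
print(hs)   # ≈ 4 * a / sqrt(2) ≈ 1.414 m for this single component
```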

Yongyong Chen;Yanwen Guo;Yongli Wang;Dong Wang;Chong Peng;Guoping He; "Denoising of Hyperspectral Images Using Nonconvex Low Rank Matrix Approximation," vol.55(9), pp.5366-5380, Sept. 2017. Hyperspectral image (HSI) denoising is challenging not only because of the difficulty in preserving both spectral and spatial structures simultaneously, but also due to the requirement of removing various noises, which are often mixed together. In this paper, we present a nonconvex low rank matrix approximation (NonLRMA) model and the corresponding HSI denoising method by reformulating the approximation problem using nonconvex regularizer instead of the traditional nuclear norm, resulting in a tighter approximation of the original sparsity-regularised rank function. NonLRMA aims to decompose the degraded HSI, represented in the form of a matrix, into a low rank component and a sparse term with a more robust and less biased formulation. In addition, we develop an iterative algorithm based on the augmented Lagrangian multipliers method and derive the closed-form solution of the resulting subproblems benefiting from the special property of the nonconvex surrogate function. We prove that our iterative optimization converges easily. Extensive experiments on both simulated and real HSIs indicate that our approach can not only suppress noise in both severely and slightly noised bands but also preserve large-scale image structures and small-scale details well. Comparisons against state-of-the-art LRMA-based HSI denoising approaches show our superior performance.
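The key ingredient, shrinking singular values with a nonconvex rule instead of the uniform soft threshold of the nuclear norm, can be sketched with a firm-threshold surrogate (my choice of surrogate for illustration; the paper's regularizer may differ):

```python
import numpy as np

def nonconvex_svt(M, lam, gamma=2.0):
    """Shrink singular values with a firm-threshold rule: values below
    lam vanish, values above gamma*lam pass unchanged, so large
    singular values are far less biased than under the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    shrunk = np.where(s <= lam, 0.0,
                      np.where(s >= gamma * lam, s,
                               gamma * (s - lam) / (gamma - 1.0)))
    return (U * shrunk) @ Vt

rng = np.random.default_rng(4)
low_rank = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))
noisy = low_rank + 0.1 * rng.normal(size=(40, 40))
denoised = nonconvex_svt(noisy, lam=2.0)
err_before = np.linalg.norm(noisy - low_rank)
err_after = np.linalg.norm(denoised - low_rank)
print(err_after < err_before)   # noise singular values are suppressed
```

In the NonLRMA model this shrinkage would appear as the low-rank subproblem inside the augmented Lagrangian iterations, with an additional sparse term handling non-Gaussian noise.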

Neng Zhong;Wen Yang;Anoop Cherian;Xiangli Yang;Gui-Song Xia;Mingsheng Liao; "Unsupervised Classification of Polarimetric SAR Images via Riemannian Sparse Coding," vol.55(9), pp.5381-5390, Sept. 2017. Unsupervised classification plays an important role in understanding polarimetric synthetic aperture radar (PolSAR) images. One of the typical representations of PolSAR data is in the form of Hermitian positive definite (HPD) covariance matrices. Most algorithms for unsupervised classification using this representation either use statistical distribution models or adopt polarimetric target decompositions. In this paper, we propose an unsupervised classification method by introducing a sparsity-based similarity measure on HPD matrices. Specifically, we first use a novel Riemannian sparse coding scheme for representing each HPD covariance matrix as sparse linear combinations of other HPD matrices, where the sparse reconstruction loss is defined by the Riemannian geodesic distance between HPD matrices. The coefficient vectors generated by this step reflect the neighborhood structure of HPD matrices embedded in the Euclidean space and hence can be used to define a similarity measure. We apply the scheme for PolSAR data, in which we first oversegment the images into superpixels, followed by representing each superpixel by an HPD matrix. These HPD matrices are then sparse coded, and the resulting sparse coefficient vectors are then clustered by spectral clustering using the neighborhood matrix generated by our similarity measure. The experimental results on different fully PolSAR images demonstrate the superior performance of the proposed classification approach against the state-of-the-art approaches.

Vincent Lecours;Rodolphe Devillers;Vanessa L. Lucieer;Craig J. Brown; "Artefacts in Marine Digital Terrain Models: A Multiscale Analysis of Their Impact on the Derivation of Terrain Attributes," vol.55(9), pp.5391-5406, Sept. 2017. Data acquisition artefacts are commonly found in multibeam bathymetric data, but their effects on mapping methodologies using geographic information system techniques have not been widely explored. Artefacts have been extensively studied in terrestrial settings, but their study in a marine context has currently been limited to engineering and surveying technology development in order to reduce their amplitude during data collection and postprocessing. Knowledge on how they propagate to further analyses like environmental characterization or terrain analysis is scant. The goal of this paper is to describe the contribution of different types of artefacts to marine terrain attributes at multiple scales. Using multibeam bathymetric data from German Bank, off Nova Scotia (Canada), digital bathymetric models (DBMs) were computed at five different spatial resolutions. Ten different amplitudes of heave, pitch, roll, and time artefacts were artificially introduced to generate altered DBMs. Then, six terrain attributes were derived from each of the reference and altered DBMs. Relationships between the amplitude of artefacts and the statistical and spatial distributions of: 1) altered bathymetry and terrain attributes surfaces and 2) errors caused by the artefacts were modeled. Spatial similarity between altered and reference surfaces was also assessed. Results indicate that most artefacts impact spatial similarity and that pitch and roll significantly impact the statistical distribution of DBMs and terrain attributes while time and heave artefacts have a more subtle impact. Results also confirm the relationship between spatial data quality and spatial scale, as finer-scale data were impacted by artefacts to a greater degree than broader-scale data.
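The core of the experiment, injecting a known artefact into a DBM and measuring its effect on a terrain attribute, can be sketched as follows (the flat seabed, ripple amplitude, and wavelength are invented for illustration):

```python
import numpy as np

def slope_deg(dem, cell=1.0):
    """Maximum-gradient slope (degrees) of a gridded surface."""
    gy, gx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

x = np.arange(200)
flat = np.zeros((200, 200))                                    # reference DBM
rolled = flat + 0.2 * np.sin(2.0 * np.pi * x[None, :] / 20.0)  # roll-like ripple

print(slope_deg(flat).max())     # 0.0: the reference surface has no slope
print(slope_deg(rolled).max())   # > 3 degrees from the artefact alone
```

Even a 0.2-m periodic ripple creates several degrees of spurious slope, illustrating why roll artefacts propagate so strongly into derived terrain attributes.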

Salman H. Khan;Xuming He;Fatih Porikli;Mohammed Bennamoun; "Forest Change Detection in Incomplete Satellite Images With Deep Neural Networks," vol.55(9), pp.5407-5423, Sept. 2017. Land cover change monitoring is an important task from the perspective of regional resource monitoring, disaster management, land development, and environmental planning. In this paper, we analyze imagery data from remote sensing satellites to detect forest cover changes over a period of 29 years (1987-2015). Since the original data are severely incomplete and contaminated with artifacts, we first devise a spatiotemporal inpainting mechanism to recover the missing surface reflectance information. The spatial filling process makes use of the available data of the nearby temporal instances followed by a sparse encoding-based reconstruction. We formulate the change detection task as a region classification problem. We build a multiresolution profile (MRP) of the target area and generate a candidate set of bounding-box proposals that enclose potential change regions. In contrast to existing methods that use handcrafted features, we automatically learn region representations using a deep neural network in a data-driven fashion. Based on these highly discriminative representations, we determine forest changes and predict their onset and offset timings by labeling the candidate set of proposals. Our approach achieves the state-of-the-art average patch classification rate of 91.6% (an improvement of ~16%) and the mean onset/offset prediction error of 4.9 months (an error reduction of five months) compared with a strong baseline. We also qualitatively analyze the detected changes in the unlabeled image regions, which demonstrate that the proposed forest change detection approach is scalable to new regions.

* "IEEE Transactions on Geoscience and Remote Sensing information for authors," vol.55(9), pp.C3-C3, Sept. 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Transactions on Geoscience and Remote Sensing institutional listings," vol.55(9), pp.C4-C4, Sept. 2017.* Presents a listing of institutions relevant to this issue of the publication.

## IEEE Geoscience and Remote Sensing Letters - new TOC (2017 September 21) [Website]

* "Front Cover," vol.14(9), pp.C1-C1, Sept. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Letters publication information," vol.14(9), pp.C2-C2, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Yuemei Ren;Liang Liao;Stephen John Maybank;Yanning Zhang;Xin Liu; "Hyperspectral Image Spectral-Spatial Feature Extraction via Tensor Principal Component Analysis," vol.14(9), pp.1431-1435, Sept. 2017. We consider the tensor-based spectral-spatial feature extraction problem for hyperspectral image classification. First, a tensor framework based on circular convolution is proposed. Based on this framework, we extend the traditional principal component analysis (PCA) to its tensorial version tensor PCA (TPCA), which is applied to the spectral-spatial features of hyperspectral image data. The experiments show that the classification accuracy obtained using TPCA features is significantly higher than the accuracies obtained by its rivals.
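The circular-convolution tensor framework underlying TPCA is not detailed in the abstract; a minimal sketch of the standard FFT-domain formulation of such a tensor-tensor product (the t-product), on synthetic arrays, might look like this (shapes and names are illustrative, not the authors' implementation):

```python
import numpy as np

def t_product(A, B):
    # Tensor-tensor product under circular convolution along the 3rd axis,
    # computed slice-wise in the Fourier domain (the DFT diagonalizes
    # circular convolution), which is the standard trick for this framework.
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], A.shape[2]), dtype=complex)
    for k in range(A.shape[2]):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((3, 2, 5))
C = t_product(A, B)
print(C.shape)  # (4, 2, 5)
```

A tensorial PCA then replaces the matrix products of ordinary PCA with this t-product, so the spatial neighborhood of each pixel travels through the decomposition intact.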

Bei Zhao;Bo Huang;Yanfei Zhong; "Transfer Learning With Fully Pretrained Deep Convolution Networks for Land-Use Classification," vol.14(9), pp.1436-1440, Sept. 2017. In recent years, transfer learning with pretrained convolutional networks (CNets) has been successfully applied to land-use classification with high spatial resolution (HSR) imagery. The commonly used transfer CNets partially use the feature descriptor part of the pretrained CNets, and replace the classifier part of the pretrained CNets in the old task with a new one. This causes separation and asynchrony between the feature descriptor part and the classifier part of the transferred CNets during the learning process, which reduces the effectiveness of the training process. To overcome this weakness, a transfer learning method with fully pretrained CNets is proposed in this letter for the land-use classification of HSR images. In the proposed method, a multilayer perceptron (MLP) classifier is quickly pretrained using the high-level features extracted by the feature descriptor of the pretrained CNets. Fully pretrained CNets can be generated by concatenating the feature descriptor of the pretrained CNets and the pretrained MLP. Because both the feature descriptor and the classifier are pretrained, the separation and asynchrony between the two parts can be avoided during the training process. The final transferred CNets are then obtained by fine-tuning the fully pretrained CNets with the random cropping and mirroring strategy. The experiments show that the proposed method can accelerate the convergence of the training process with no loss of accuracy in land-use classification, and its performance is comparable to that of other recent methods.

Gaetano Mollo;Rosario Di Napoli;Giuseppe Naviglio;Carmine Di Chiara;Egidio Capasso;Giovanni Alli; "Multifrequency Experimental Analysis (10 to 77 GHz) on the Asphalt Reflectivity and RCS of FOD Targets," vol.14(9), pp.1441-1443, Sept. 2017. In this letter, a multifrequency experimental analysis is conducted for the estimation of the asphalt reflectivity and for the measurement of the radar cross section of some typical foreign object debris (FOD). The analysis is made with experimental data at frequencies between 10 and 77 GHz, acquired with a vector network analyzer and with an Ingegneria Dei Sistemi (IDS) 77-GHz radar prototype for FOD detection. Experimental data acquired in a real operative scenario (the runway of Taranto/Grottaglie airport) are also presented. The results show the possibility of detecting an FOD target on an airport runway.

Niutao Liu;Hongxia Ye;Ya-Qiu Jin; "Dielectric Inversion of Lunar PSR Media with Topographic Mapping and Comment on “Quantification of Water Ice in the Hermite-A Crater of the Lunar North Pole”," vol.14(9), pp.1444-1448, Sept. 2017. Dielectric inversion of the lunar permanently shadowed regions (PSRs) at the Moon's poles has been studied for the estimation of possible water-ice content. The Campbell model was directly applied to mini-SAR data for inversion over the Hermite-A crater region. However, this letter presents a quantitative analysis showing that the lunar surface topography, i.e., surface roughness and slopes, the underlying dielectric media, and so on, can significantly affect this inversion. Without accounting for topography, the model actually degenerates into a half-space model. This letter presents a two-layer model with a Kirchhoff-approximation surface and a small-perturbation-approximation subsurface to take all these topographic factors into account for PSR dielectric inversion.

Hicham Ezzine;Ahmed Bouziane;Driss Ouazar;Moulay Driss Hasnaoui; "Downscaling of TRMM3B43 Product Through Spatial and Statistical Analysis Based on Normalized Difference Water Index, Elevation, and Distance From Sea," vol.14(9), pp.1449-1453, Sept. 2017. This letter aims to explore the potential of the normalized difference water index (NDWI) and distance from sea to downscale coarse precipitation data (the TRMM3B43 product), as their contribution to downscaling precipitation remains unstudied. For this purpose, based on an open data set of 14 years, including TRMM3B43 and three predictors (NDWI, elevation, and distance from sea), stepwise regression and the Akaike information criterion were applied in order to identify the best-fit models. The models that gave rise to the best approximations and best fits were used to downscale the TRMM3B43 product to a spatial resolution of 1 km. The resulting downscaled calibrated precipitations were validated against independent rain gauge stations (RGSs). The analysis showed that there are good and statistically significant correlations between TRMM3B43 and NDWI, and strong agreement between the downscaled precipitations and RGS measurements.
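The model-selection step described above (regression candidates scored by the Akaike information criterion) can be illustrated with an exhaustive-subset stand-in for stepwise regression on synthetic data; the predictor names, the toy data, and the AIC form n·ln(RSS/n) + 2k are assumptions for illustration:

```python
import itertools
import numpy as np

def aic_best_model(X, y, names):
    # Exhaustive search over predictor subsets (a simple stand-in for
    # stepwise selection); AIC = n*ln(RSS/n) + 2k rewards fit while
    # penalising model size.
    n = len(y)
    best = None
    for r in range(1, len(names) + 1):
        for idx in itertools.combinations(range(len(names)), r):
            Xs = np.column_stack([np.ones(n)] + [X[:, i] for i in idx])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            aic = n * np.log(rss / n) + 2 * (len(idx) + 1)
            if best is None or aic < best[0]:
                best = (aic, [names[i] for i in idx])
    return best

rng = np.random.default_rng(1)
ndwi = rng.standard_normal(200)
elev = rng.standard_normal(200)
dist = rng.standard_normal(200)
precip = 2.0 * ndwi - 0.5 * dist + 0.1 * rng.standard_normal(200)
aic, chosen = aic_best_model(np.column_stack([ndwi, elev, dist]),
                             precip, ["NDWI", "elevation", "distance"])
print(chosen)
```

On this synthetic example the informative predictors are retained; the selected model would then drive the fine-resolution prediction step.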

Yue Huang;Jacques Levy-Vehel;Laurent Ferro-Famil;Andreas Reigber; "Three-Dimensional Imaging of Objects Concealed Below a Forest Canopy Using SAR Tomography at L-Band and Wavelet-Based Sparse Estimation," vol.14(9), pp.1454-1458, Sept. 2017. Despite its ability to characterize 3-D environments, synthetic aperture radar (SAR) tomographic imaging, when applied to the characterization of targets concealed beneath forest canopies, may appear as an ill-conditioned estimation problem, with a complex mixture of numerous scattering mechanisms measured from a few different positions. Among the set of tomographic estimators that may be used to characterize such complex scattering environments, nonparametric tomographic techniques are more robust to focusing artifacts but limited in resolution and, hence, may fail to discriminate objects, whereas parametric ones provide better vertical resolution but cannot adequately handle continuously distributed volumetric scattering densities, characteristic of forest canopies. This letter presents a new wavelet-based sparse tomographic estimation method for the 3-D imaging and discrimination of underfoliage objects that overcomes these limitations. The effectiveness of this new approach is demonstrated using L-band airborne tomographic SAR data acquired by the German Aerospace Center over Dornstetten, Germany.

Hongmeng Chen;Ming Li;Zeyu Wang;Yunlong Lu;Runqing Cao;Peng Zhang;Lei Zuo;Yan Wu; "Cross-Range Resolution Enhancement for DBS Imaging in a Scan Mode Using Aperture-Extrapolated Sparse Representation," vol.14(9), pp.1459-1463, Sept. 2017. This letter addresses the problem of cross-range superresolution in Doppler beam sharpening (DBS). The coherence of echoes in the azimuth direction and the sparsity of the DBS image in the Doppler domain are fully exploited; thus, a superresolution DBS imaging framework using aperture-extrapolated sparse representation (SR) is proposed. In this framework, aperture extrapolation based on the autoregressive model is utilized to predict the forward and backward information in the azimuth direction, and SR is exploited to extract the Doppler spectrum information. In addition, the resolution ability with different coherent processing intervals is analyzed. The sharpening ratio in this proposed algorithm can be improved by a factor of two or four theoretically in comparison with the conventional DBS imaging method. Experimental results demonstrate that the proposed framework can lead to noticeable performance improvement.

Chein-I Chang;Yao Li;Yulei Wang; "Progressive Band Processing of Fast Iterative Pixel Purity Index for Finding Endmembers," vol.14(9), pp.1464-1468, Sept. 2017. This letter develops a progressive band processing (PBP) of fast iterative pixel purity index (FIPPI) according to a band sequential acquisition format in such a way that FIPPI can be processed band by band, while band acquisition is ongoing. As a result, PBP-FIPPI can generate progressive profiles of interband changes among PPI counts which allow users to observe significant bands that capture PPI counts. The idea to implement PBP-FIPPI is to use an inner loop specified by skewers and an outer loop specified by bands to process FIPPI. Interestingly, these two loops can also be interchanged with an inner loop specified by bands and an outer loop iterated by growing skewers. The resulting FIPPI is called progressive skewer processing of FIPPI. It turns out that both versions provide different insights into the design of FIPPI.
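The skewer projections at the core of PPI/FIPPI (the inner loop that PBP-FIPPI interleaves with a loop over bands) can be sketched as follows; the synthetic endmember setup is invented for illustration:

```python
import numpy as np

def ppi_counts(pixels, skewers):
    # Core PPI operation: project every pixel onto each random unit
    # vector ("skewer") and increment the count of the two extreme
    # pixels; pixels with high counts are endmember candidates.
    counts = np.zeros(len(pixels), dtype=int)
    for sk in skewers:
        proj = pixels @ sk
        counts[np.argmax(proj)] += 1
        counts[np.argmin(proj)] += 1
    return counts

rng = np.random.default_rng(3)
E = rng.uniform(0.0, 1.0, size=(3, 5))           # 3 endmember spectra, 5 bands
mixed = rng.dirichlet(np.ones(3), size=100) @ E  # strictly mixed pixels
pixels = np.vstack([mixed, E])                   # endmembers are the last 3 rows
skewers = rng.standard_normal((200, 5))
skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
counts = ppi_counts(pixels, skewers)
print(np.flatnonzero(counts))  # only the endmember rows score
```

Because each projection is linear, its extremes over a convex mixture must lie at the pure pixels, which is why only the endmember rows accumulate counts; PBP-FIPPI updates such counts progressively as each new band arrives.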

Zhifeng Xiao;Yiping Gong;Yang Long;Deren Li;Xiaoying Wang;Hua Liu; "Airport Detection Based on a Multiscale Fusion Feature for Optical Remote Sensing Images," vol.14(9), pp.1469-1473, Sept. 2017. Automatically detecting airports from remote sensing images has attracted significant attention due to its importance in both military and civilian fields. However, the diversity of illumination intensities and contextual information makes this task difficult. Moreover, auxiliary features both within and surrounding the regions of interest are usually ignored. To address these problems, we propose a novel method that uses a multiscale fusion feature to represent the complementary information of each region proposal, which is extracted by constructing a GoogLeNet model with a light feature module that has an additional light fully connected layer. Then, the fusion feature is input to a support vector machine whose performance is enhanced using a hard negative mining method. Finally, a simplified localization method is applied to tackle the problem of box redundancy and to optimize the locations of airports. An experiment demonstrates that the fusion feature outperforms other features on airport detection tasks from remote sensing images containing complicated contextual information.

Jing Bai;Wenhao Zhang;Zhenzhen Gou;Licheng Jiao; "Nonlocal-Similarity-Based Sparse Coding for Hyperspectral Imagery Classification," vol.14(9), pp.1474-1478, Sept. 2017. For hyperspectral imagery (HSI) classification, many works have shown the effectiveness of the spectral-spatial method. However, some previous works using neighboring information assumed that all neighboring pixels make an equal contribution to the central pixel, which is unreasonable for heterogeneous pixels, especially near the boundary of a region. In this letter, a nonlocal-similarity-based sparse coding method, followed by a support vector machine classifier, is proposed to improve classification performance. Inspired by the success of nonlocal means, a new nonlocal weighted method is developed to determine the relationship between a test pixel and its neighboring ones. The nonlocal weights are determined by using the spectral angle mapper algorithm, which can exploit the spectral information of surface features. The experiments validate the superiority of our proposed method over existing approaches for HSI classification.
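The nonlocal weighting idea (neighbors weighted by their spectral-angle similarity to the central pixel) can be sketched as follows; the Gaussian weighting kernel and its bandwidth h are illustrative assumptions, not the letter's exact formula:

```python
import numpy as np

def sam(a, b):
    # Spectral angle between two spectra (radians); insensitive to an
    # overall illumination scaling of either spectrum.
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def nonlocal_weights(center, neighbors, h=0.1):
    # Smaller spectral angle -> larger weight, normalised to sum to 1.
    angles = np.array([sam(center, nb) for nb in neighbors])
    w = np.exp(-(angles / h) ** 2)
    return w / w.sum()

center = np.array([0.2, 0.4, 0.6, 0.8])
neighbors = [center * 1.5,                    # same material, brighter
             np.array([0.8, 0.6, 0.4, 0.2])]  # different material
w = nonlocal_weights(center, neighbors)
print(w)  # nearly all weight on the spectrally similar neighbor
```

A heterogeneous neighbor near a region boundary thus contributes almost nothing to the central pixel's representation, which is the behaviour the letter argues equal weighting cannot provide.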

Fan Wang;Yan Wu;Peng Zhang;Qingjun Zhang;Ming Li; "Unsupervised SAR Image Segmentation Using Ambiguity Label Information Fusion in Triplet Markov Fields Model," vol.14(9), pp.1479-1483, Sept. 2017. The recently proposed triplet Markov fields (TMF) model enhances the nonstationary image prior modeling ability by introducing an auxiliary field. Motivated by the TMF model, we propose a generalized TMF model based on ambiguity label information fusion (ALF-TMF) for synthetic aperture radar (SAR) image segmentation. The redefined auxiliary field in ALF-TMF indicates the dominant direction of local image contents and gives explicit nonstationary divisions of SAR images. To reduce the influence of unreliable observations caused by speckle noise, the original label field is adaptively generalized by introducing ambiguity class based on image observation and local nonstationary contextual information. Given the extended label field, prior and likelihood terms are constructed and merged to provide the posterior segmentation decision via the Bayesian fusion rule. Real SAR images are utilized in the experimental analysis, and the effectiveness of the proposed method is validated accordingly.

David Malmgren-Hansen;Anders Kusk;Jørgen Dall;Allan Aasbjerg Nielsen;Rasmus Engholm;Henning Skriver; "Improving SAR Automatic Target Recognition Models With Transfer Learning From Simulated Data," vol.14(9), pp.1484-1488, Sept. 2017. Data-driven classification algorithms have proved to do well for automatic target recognition (ATR) in synthetic aperture radar (SAR) data. Collecting data sets suitable for these algorithms is a challenge in itself, as it is difficult and expensive. Due to the lack of labeled data sets with real SAR images of sufficient size, simulated data play a big role in SAR ATR development, but the transferability of knowledge learned on simulated data to real data remains to be studied further. In this letter, we present the first study of transfer learning between a simulated data set and a set of real SAR images. The simulated data set is obtained by adding simulated object radar reflectivity to a terrain model of individual point scatterers, prior to focusing. Our results show that a Convolutional Neural Network (Convnet) pretrained on simulated data has a great advantage over a Convnet trained only on real data, especially when real data are sparse. The advantages of pretraining the models on simulated data are evident both in faster convergence during the training phase and in the final accuracy when benchmarked on the Moving and Stationary Target Acquisition and Recognition data set. These results encourage SAR ATR development to continue the improvement of simulated data sets of greater size and more complex scenarios in order to build robust algorithms for real-life SAR ATR applications.

Mostafa Esmaeili;Mahdi Motagh;Andy Hooper; "Application of Dual-Polarimetry SAR Images in Multitemporal InSAR Processing," vol.14(9), pp.1489-1493, Sept. 2017. Multitemporal polarimetric synthetic aperture radar (SAR) data can be used to estimate the dominant scattering mechanism of targets in a stack of SAR data and to improve the performance of SAR interferometric methods for deformation studies. In this letter, we developed a polarimetric form of the amplitude difference dispersion (ADD) criterion for time-series analysis of pixels in which interferometric noise shows negligible decorrelation in time and space in the small baseline algorithm. The polarimetric form of ADD is then optimized in order to find the optimum scattering mechanism of the pixels, which in turn is used to produce new interferograms with better quality than single-pol SAR interferograms. The selected candidates are then combined with the temporal coherency criterion for final phase stability analysis in full-resolution interferograms. Our experimental results, derived from a data set of 17 dual-polarization X-band SAR images (HH/VV) acquired by TerraSAR-X, show that using the optimum scattering mechanism in the small baseline method improves the number of pixel candidates for deformation analysis by about 2.5 times in comparison with the results obtained from single-channel SAR data. The number of final pixels increases by about 1.5 times in comparison with HH and VV in small baseline analysis. Comparison between the persistent scatterer (PS) and small baseline methods shows that, with regard to the number of pixels with optimum scattering mechanism, the small baseline algorithm detects 10% more pixels than PS in agricultural regions. In urban regions, however, the PS method identifies nearly 8% more coherent pixels than the small baseline approach.

Massimiliano Pieraccini;Lapo Miccinesi;Neda Rojhani; "A GBSAR Operating in Monostatic and Bistatic Modalities for Retrieving the Displacement Vector," vol.14(9), pp.1494-1498, Sept. 2017. Ground-based synthetic aperture radar (GBSAR) systems are popular remote sensing instruments for detecting ground changes of slopes and small displacements of large structures such as bridges, dams, and construction works. These radars are able to provide maps of displacement along the range direction only. In this letter, we propose to use a transponder to operate a conventional linear GBSAR as a bistatic radar, with the aim of acquiring two different components of the displacement of the targets in the field of view.

Alexandre E. Almeida;Ricardo da S. Torres; "Remote Sensing Image Classification Using Genetic-Programming-Based Time Series Similarity Functions," vol.14(9), pp.1499-1503, Sept. 2017. In several applications, the automatic identification of regions of interest in remote sensing images is based on the assessment of the similarity of associated time series, i.e., two regions are considered as belonging to the same class if the patterns found in their spectral information observed over time are somewhat similar. In this letter, we investigate the use of a genetic programming (GP) framework to discover an effective combination of time series similarity functions to be used in remote sensing classification tasks. Experiments performed in a Forest-Savanna classification scenario demonstrated that the GP framework yields effective results when compared with the use of traditional, widely used similarity functions in isolation.

Noel Cressie;Rui Wang;Ben Maloney; "The Atmospheric Infrared Sounder Retrieval, Revisited," vol.14(9), pp.1504-1507, Sept. 2017. The algorithm used in the retrieval of geophysical quantities from the Atmospheric Infrared Sounder (AIRS) instrument depends on two fundamental components. The first is a cost function that is the sum of squares of the differences between cloud-cleared radiances and their corresponding forward-model terms. The second is the minimization of this cost function. For the retrieval of carbon dioxide, the minimization is further improved using the method of vanishing partial derivatives (VPDs). In this letter, we show that this VPD component is identical to a coordinate descent method with Newton-Raphson updates, which allows it to be put in context with other optimization algorithms. We also show that the AIRS cost function is a limiting case of the cost function used in optimal estimation, which demonstrates how uncertainty quantification in the AIRS retrieval can be implemented.
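The equivalence noted above can be made concrete: coordinate descent with Newton-Raphson updates on a toy sum-of-squares cost (standing in for the AIRS radiance-mismatch cost, which is not reproduced here) looks like this:

```python
def coordinate_descent_newton(cost_grad_hess, x, sweeps=50):
    # Cycle through the coordinates; update each with one Newton-Raphson
    # step on the 1-D restriction of the cost (g and h are the first and
    # second partial derivatives in that coordinate).
    for _ in range(sweeps):
        for i in range(len(x)):
            g, h = cost_grad_hess(x, i)
            x[i] -= g / h
    return x

# Toy sum-of-squares cost J(x) = (x0 - 1)^2 + (x0 + x1 - 3)^2,
# minimised at x0 = 1, x1 = 2.
def grad_hess(x, i):
    if i == 0:
        return 2 * (x[0] - 1) + 2 * (x[0] + x[1] - 3), 4.0
    return 2 * (x[0] + x[1] - 3), 2.0

x = coordinate_descent_newton(grad_hess, [0.0, 0.0])
print([round(v, 6) for v in x])  # → [1.0, 2.0]
```

Setting each 1-D partial derivative to zero in turn is exactly the "vanishing partial derivatives" step, which is why the two descriptions coincide.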

Sofia Siachalou;Giorgos Mallinis;Maria Tsakiri-Strati; "Analysis of Time-Series Spectral Index Data to Enhance Crop Identification Over a Mediterranean Rural Landscape," vol.14(9), pp.1508-1512, Sept. 2017. Spectral index time series can provide valuable phenological information to the classification process for precise crop mapping, in order to reduce misclassification rates associated with low interclass and high intraclass spectral variability. Stochastic hidden Markov models (HMMs) are an efficient yet computationally demanding classification approach that can simulate crop dynamics, exploiting the spectral information of their phenological states and the relations between these states. This letter aims to present a methodology that achieves accurate classification results while maintaining a low computational cost. A classification framework based on HMMs was developed, and different spectral indices were generated from the time series of Landsat ETM+ and RapidEye imagery, for modeling crop vegetation dynamics over a Mediterranean rural area with high spatiotemporal crop heterogeneity. To further improve the HMM indices classification, separability analysis and two different decision fusion strategies were tested. The assessment of the classification accuracy, along with an evaluation of the computational cost, indicated that the green-red vegetation index produced the most favorable results among the individual spectral indices. Although the decision fusion based on the integration of a reliability factor increased the overall accuracy by 3.1%, this came at the cost of computational time, compared to the separability analysis model, which required less processing time.
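The HMM decoding at the heart of such a crop classifier can be illustrated with a toy two-state Viterbi pass; the states, transition, and emission probabilities below are invented for illustration, not taken from the letter:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    # Most likely hidden-state sequence given a sequence of observed
    # spectral-index levels (log domain for numerical stability).
    n, S = len(obs), len(start)
    lp = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        scores = lp[:, None] + np.log(trans) + np.log(emit[:, obs[t]])[None, :]
        back[t] = np.argmax(scores, axis=0)
        lp = np.max(scores, axis=0)
    path = [int(np.argmax(lp))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two hidden phenological states (0 = growing, 1 = senescent) and two
# observed index levels (0 = high, 1 = low).
start = np.array([0.9, 0.1])
trans = np.array([[0.8, 0.2], [0.05, 0.95]])  # senescence is persistent
emit = np.array([[0.9, 0.1], [0.2, 0.8]])     # state -> P(high/low index)
print(viterbi([0, 0, 1, 1, 1], start, trans, emit))  # → [0, 0, 1, 1, 1]
```

The transition matrix is what encodes the phenological ordering of states, which is the mechanism the letter exploits to separate spectrally similar crops.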

Rodrigo F. Berriel;André Teixeira Lopes;Alberto F. de Souza;Thiago Oliveira-Santos; "Deep Learning-Based Large-Scale Automatic Satellite Crosswalk Classification," vol.14(9), pp.1513-1517, Sept. 2017. High-resolution satellite imagery has been increasingly used in remote sensing classification problems. One of the main factors is the availability of this kind of data. Despite the high availability, very little effort has been placed on the zebra crossing classification problem. In this letter, crowdsourcing systems are exploited in order to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalk-related tasks. Then, this data set is used to train deep-learning-based models to accurately classify satellite images that do or do not contain zebra crossings. A novel data set with more than 240000 images from 3 continents, 9 countries, and more than 20 cities was used in the experiments. The experimental results showed that freely available crowdsourcing data can be used to accurately (97.11%) train robust models to perform crosswalk classification on a global scale.

Hicham Ezzine;Ahmed Bouziane;Driss Ouazar;Moulay Driss Hasnaoui; "Sensitivity of NDVI-Based Spatial Downscaling Technique of Coarse Precipitation to Some Mediterranean Bioclimatic Stages," vol.14(9), pp.1518-1521, Sept. 2017. This letter attempts to explore the potential sensitivity of the well-known spatial downscaling technique of coarse precipitation data to some bioclimatic stages of the Mediterranean area. For this purpose, first, an open data set covering a period of 15 years, including TRMM3B43, normalized difference vegetation index (NDVI), DEM, and rain gauge station measurements, was prepared. Then the NDVI-based spatial downscaling technique was applied over Morocco without taking bioclimatic stages into account. Second, based on the same data set, the key step of the downscaling approach (the regression between TRMM3B43 and NDVI) was analyzed in five bioclimatic stages in order to assess the approach's sensitivity. This letter demonstrates that the spatial downscaling approach performs well in the subhumid and semiarid bioclimatic stages and, to a lesser extent, in the arid stage. However, the approach seems to be sensitive and not adapted to the Saharan and humid stages.

Yuming Xiang;Feng Wang;Ling Wan;Hongjian You; "An Advanced Multiscale Edge Detector Based on Gabor Filters for SAR Imagery," vol.14(9), pp.1522-1526, Sept. 2017. The ratio of averages is a robust edge detector which provides the property of constant false alarm rate for synthetic aperture radar (SAR) imagery. However, the rectangular window used in the calculation of the local mean may cause numerous false maxima. The size of the processing window also has a significant effect on the detection performance, but it is difficult to determine the optimum window size. In this letter, we first propose a new ratio-based detector that is constructed from the Gabor odd filter. The scale of the proposed detector is related to the size of the processing window. Then, edge strength maps extracted by multiscale detectors are combined using an edge tracking algorithm to form a final response. We used receiver operating characteristic curves to evaluate the performance of the proposed detector. The experimental results on simulated and real-world SAR images show that the proposed multiscale edge detector yields an accurate and continuous edge response.

Zegang Ding;Zhen Wang;Sheng Lin;Tiandong Liu;Qi Zhang;Teng Long; "Local Fringe Frequency Estimation Based on Multifrequency InSAR for Phase-Noise Reduction in Highly Sloped Terrain," vol.14(9), pp.1527-1531, Sept. 2017. The interferometric phases in highly sloped terrain have the characteristics of large fringe density, narrow width, low correlation, and under-sampling. The local fringe frequency (LFF) is a criterion to evaluate the trend and magnitude of the local terrain gradient and can be employed to improve the quality of interferograms. The results of the traditional LFF estimation method can be affected by phase noise, and sometimes the phase unwrapping (PU) operation is also required for some local regions. When it comes to highly sloped terrain, the phenomenon of phase under-sampling may cause errors in the absolute interferometric phase during the PU operation and may then influence the accuracy of the whole estimation. In order to solve this problem, this letter proposes an extended maximum-likelihood method for LFF estimation based on multifrequency interferometric synthetic aperture radar (InSAR) data. Through the differences in the LFF between the different frequency InSAR data, the estimation quality map is introduced to modify the large errors in certain regions by local 2-D fitting and thus achieves an accurate estimation of the LFF in highly sloped terrain. Finally, the estimated results of the LFF are used to guide the process of phase filtering. Simulated data and real airborne dual-frequency InSAR data are both employed to validate the proposed method.

Wan Li;Liangpei Zhang;Lefei Zhang;Bo Du; "GPU Parallel Implementation of Isometric Mapping for Hyperspectral Classification," vol.14(9), pp.1532-1536, Sept. 2017. Manifold learning algorithms such as the isometric mapping (ISOMAP) algorithm have been widely used in the analysis of hyperspectral images (HSIs), for both visualization and dimension reduction. As advanced versions of the traditional linear projection techniques, the manifold learning algorithms find the low-dimensional feature representation by nonlinear mapping, which can better preserve the local structure of the original data and thus benefit the data analysis. However, the high computational complexity of the manifold learning algorithms hinders their application in HSI processing. Although there are a few parallel implementations of manifold learning approaches that are available in the remote sensing community, they have not been designed to accelerate the eigen-decomposition process, which is actually the most time-consuming part of the manifold learning algorithms. In this letter, as a case study, we discuss the graphics processing unit parallel implementation of the ISOMAP algorithm. In particular, we focus on the eigen-decomposition process and verify the applicability of the proposed method by validating the embedding vectors and the subsequent classification accuracies. The experimental results obtained on different HSI data sets show an excellent speedup performance and consistent classification accuracy compared with the serial implementation.
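For reference, the full ISOMAP pipeline whose eigen-decomposition step the letter accelerates can be sketched serially on a toy data set; the neighborhood size k, the Floyd-Warshall geodesic step, and the sample curve are illustrative choices:

```python
import numpy as np

def isomap(X, k=2, d=1):
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    # k-NN graph: keep the k shortest edges per point, rest = +inf.
    G = np.full((n, n), np.inf)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:
            G[i, j] = G[j, i] = D[i, j]
        G[i, i] = 0.0
    # Geodesic distances via Floyd-Warshall.
    for m in range(n):
        G = np.minimum(G, G[:, [m]] + G[[m], :])
    # Classical MDS: eigen-decomposition of the double-centred matrix
    # (the step the letter moves to the GPU).
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]
    return vecs[:, idx] * np.sqrt(vals[idx])

# Points along a bent curve: the 1-D embedding should preserve arc order.
t = np.linspace(0, np.pi, 8)
X = np.column_stack([np.cos(t), np.sin(t)])
Y = isomap(X, k=2, d=1).ravel()
print(np.all(np.diff(Y) > 0) or np.all(np.diff(Y) < 0))  # monotone along the arc
```

In a hyperspectral setting each row of `X` would be a pixel spectrum, and it is the `eigh` call on the n-by-n centred matrix that dominates the runtime for large n.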

Lisa Recchia;Michele Scagliola;Davide Giudici;Mieke Kuschnerus; "An Accurate Semianalytical Waveform Model for Mispointed SAR Interferometric Altimeters," vol.14(9), pp.1537-1541, Sept. 2017. Synthetic aperture radar (SAR) altimeters reduce the along-track footprint size by exploiting the coherence of the transmitted pulses and achieve a noise reduction at the same time. Consequently, a large effort has been aimed at the formulation of theoretical models that apply to SAR altimeters, in order to fully exploit the improvement in spatial resolution obtained from the along-track aperture synthesis. This letter presents a novel semianalytical waveform model for SAR interferometric altimeters that preserves high accuracy even in the presence of mispointing. Starting from the waveform model proposed by Wingham et al., which provides a unified formulation for pulse-limited and SAR interferometric altimeters but can only be computed numerically, we describe here a semianalytical approximation for small variations of the mispointing angles around an arbitrary combination of pitch and roll angles (μ̈, θ̈). The proposed semianalytical waveform model makes it possible to reduce the high dimensionality of the model proposed by Wingham et al., and it has been proven accurate for variations of the mispointing angles of up to 0.4 deg around (μ̈, θ̈). The performance of the proposed formulation has been evaluated on simulated data for the Sentinel-6 configuration and on real data from CryoSat-2 SARin acquisitions over the ocean.

Hongqing Liu;Dong Li;Yi Zhou;Trieu-Kien Truong; "Joint Wideband Interference Suppression and SAR Signal Recovery Based on Sparse Representations," vol.14(9), pp.1542-1546, Sept. 2017. The problem of synthetic aperture radar image recovery in the presence of wideband interference (WBI) is investigated. Delayed versions of a transmitted signal are utilized to construct a dictionary in which a signal of interest (SOI) has a sparse representation. In this letter, the WBI is sparsely represented in the time-frequency domain. By utilizing the transform domains, a joint estimation approach is devised to simultaneously perform WBI suppression and SOI recovery within an optimization framework. Based on the separability property of the optimization, an alternating direction method of multipliers-based approach is developed to efficiently obtain a solution. Finally, simulation results are presented to demonstrate the superior performance of the joint estimation algorithm.
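The letter's joint WBI-suppression/SOI-recovery optimization is not reproduced here, but the ADMM building block it relies on can be illustrated on a plain sparse-recovery (lasso) problem; the dictionary, sizes, and regularization weight are arbitrary:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    # ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1: the x-update is a
    # linear solve, the z-update is elementwise soft-thresholding, and
    # u accumulates the running (scaled) dual variable.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    L = np.linalg.inv(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        x = L @ (A.T @ b + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)
        u = u + x - z
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 30))   # stand-in for the delayed-signal dictionary
x_true = np.zeros(30)
x_true[[3, 11]] = [2.0, -1.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_lasso(A, b)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # recovers the sparse support
```

In the joint formulation, two such subproblems (one per transform domain) alternate, which is exactly the separability property the letter exploits.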

Ying Li;Yonghu Yang;Xueyuan Zhu; "Target Detection in Sea Clutter Based on Multifractal Characteristics After Empirical Mode Decomposition," vol.14(9), pp.1547-1551, Sept. 2017. Characteristic analysis of sea clutter is important in utilizing radar observations and detecting sea-surface targets. Real data signals are analyzed to determine the multifractal characteristics of sea clutter signals. Sea clutter is a nonlinear, nonstationary radar echo signal. A novel method for detecting targets in sea clutter is proposed that fully utilizes the strengths of empirical mode decomposition (EMD) combined with multifractal characteristics. The EMD method is applied to decompose sea clutter signals into several intrinsic mode functions (IMFs). Multifractal detrended fluctuation analysis is utilized to calculate the generalized Hurst exponent for the main IMFs, after which real sea clutter data are used for training and testing. Results show that targets in sea clutter can be effectively observed and detected through the proposed method, whose performance is better than that of target detection based on the generalized Hurst exponent in the time, fractional Fourier transform, and wavelet transform domains.
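The multifractal detrended fluctuation analysis used above reduces, for q = 2, to ordinary DFA; a minimal sketch of that special case (the scales and the white-noise test signal are illustrative) is:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64)):
    # Detrended fluctuation analysis, the q = 2 special case of MFDFA:
    # integrate the signal, detrend it in windows of each scale s, and
    # estimate the Hurst exponent as the slope of log F(s) vs log s.
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        segs = len(y) // s
        rms = []
        for v in range(segs):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(2)
h = dfa_hurst(rng.standard_normal(4096))
print(round(h, 2))  # white noise: H ≈ 0.5
```

MFDFA generalizes this by raising the windowed fluctuations to a range of powers q, so that a target-bearing IMF shows a different generalized Hurst spectrum than clutter alone.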

Joakim Strandberg;Thomas Hobiger;Rüdiger Haas; "Coastal Sea Ice Detection Using Ground-Based GNSS-R," vol.14(9), pp.1552-1556, Sept. 2017. Determination of sea ice extent is important both for climate modeling and transportation planning. Detection and monitoring of ice are often done by synthetic aperture radar imagery, but mostly without any ground truth. For the latter purpose, robust and continuously operating sensors are required. We demonstrate that signals recorded by ground-based Global Navigation Satellite System (GNSS) receivers can detect coastal ice coverage on nearby water surfaces. Besides a description of the retrieval approach, we discuss why GNSS reflectometry is sensitive to the presence of sea ice. It is shown that during winter seasons with freezing periods, the GNSS-R analysis of data recorded with a coastal GNSS installation clearly shows the occurrence of ice in the bay where this installation is located. Thus, coastal GNSS installations could be promising sources of ground truth for sea ice extent measurements.

Jingtian Tang;Shuanggui Hu;Zhengyong Ren;Chaojian Chen;Xiao Xiao;Cong Zhou; "Analytical Formulas for Underwater and Aerial Object Localization by Gravitational Field and Gravitational Gradient Tensor," vol.14(9), pp.1557-1560, Sept. 2017. Object localization techniques have significant applications in civil and safety-related fields. A novel analytical formula is developed for accurate real-time localization of underwater and aerial objects by combining gravitational field and horizontal gravitational gradient anomalies. The proposed method enhances the accuracy of object localization and excess-mass estimation; it also effectively avoids the numerical instability and singularities of previous approaches. Finally, a synthetic underwater object navigation model is used to verify its performance. The results show that the newly developed method is more practical than existing methods.

Sampelli Anoop;Devesh Kumar Maurya;P. V. N. Rao;M. Sekhar; "Validation and Comparison of LPRM Retrieved Soil Moisture Using AMSR2 Brightness Temperature at Two Spatial Resolutions in the Indian Region," vol.14(9), pp.1561-1564, Sept. 2017. The Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the Global Change Observation Mission-Water 1, launched in May 2012, provides brightness temperature at two different spatial resolutions globally with an average temporal resolution of two days. Surface soil moisture is retrieved using the land parameter retrieval model (LPRM) and level-3 brightness temperature data (10.65- and 36.5-GHz channels). The present LPRM implementation uses only AMSR2 brightness temperatures, with dielectric and vegetation parameters derived from the 10.65- and 36.5-GHz channels, respectively. Retrieved soil moisture in the Indian region for 0.25° and 0.1° grid cells is validated and compared over several sites with in situ measurements. When comparing the LPRM retrieved soil moisture with field measurements over various sites, the 0.1° grid performs better than the 0.25° grid.

Zhibiao Jiang;Jian Wang;Qian Song;Zhimin Zhou; "A Refined Cluster-Analysis-Based Multibaseline Phase-Unwrapping Algorithm," vol.14(9), pp.1565-1569, Sept. 2017. Multibaseline phase unwrapping (PU) was introduced to overcome the limitations of single-baseline PU in discontinuous-terrain-height estimation. This letter presents a refined algorithm based on the cluster analysis (CA)-based noise-robust efficient multibaseline PU algorithm proposed by H. Yu. The basic idea is to combine multiple interferometric synthetic aperture radar interferograms with different baseline lengths through a linear combination. The interferograms obtained after the linear combination have increased ambiguity heights, so the number of resulting groups on the envelope of the intercept histogram is decreased and the distance between different intercept groups is widened. Compared with the conventional CA method, the significant advantage of the refined CA (RCA) algorithm is its improved noise robustness when the intercept groups are densely distributed. The proposed RCA algorithm is validated using simulated interferometric data; the results demonstrate that its noise robustness is better than that of the CA method.
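The linear-combination idea can be illustrated with wrapped phases: if two interferograms have ambiguity heights h1 and h2, an integer combination a·φ1 + b·φ2 behaves like a single interferogram with ambiguity height 1/|a/h1 + b/h2|, which suitable integer coefficients can make large. The numbers below are hypothetical, and this is a sketch of the general multibaseline relation, not the authors' refined CA algorithm.

```python
import math

def wrap(phase):
    # wrap an angle into [-pi, pi)
    return (phase + math.pi) % (2 * math.pi) - math.pi

h1, h2 = 20.0, 30.0        # hypothetical ambiguity heights of two interferograms (m)
a, b = -1, 1               # integer combination coefficients
h_eff = 1.0 / abs(a / h1 + b / h2)   # combined ambiguity height: 60 m

z = 45.0                   # a terrain height (m)
phi1 = wrap(2 * math.pi * z / h1)
phi2 = wrap(2 * math.pi * z / h2)
phi_c = wrap(a * phi1 + b * phi2)    # combined wrapped phase

# phi_c equals the wrapped phase of a single interferogram whose
# ambiguity height is h_eff, i.e. wrap(2*pi*z*(a/h1 + b/h2))
```

Because h_eff (60 m here) exceeds both h1 and h2, fewer phase wraps occur over the same height range, which is what thins out and separates the intercept groups in the histogram.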

Hannah Joerg;Matteo Pardini;Irena Hajnsek;Konstantinos P. Papathanassiou; "On the Separation of Ground and Volume Scattering Using Multibaseline SAR Data," vol.14(9), pp.1570-1574, Sept. 2017. In forest and agricultural scattering scenarios, the backscattered synthetic aperture radar (SAR) signature consists, depending on the frequency, of the superposition of ground and volume scattering contributions. Using multibaseline SAR data, SAR tomography techniques allow resolving contributions occurring at different heights. Two algorithms for the separation of ground and volume scattering are compared with respect to their ability to provide a coherent volume component that can be further used for parameter inversion, both requiring only the a priori known ground topography. Once the volume-only coherences are available, the total ground and volume scattering powers are estimated by means of a least-squares fit. The objective of this letter is to quantitatively evaluate the performance of this estimation through a Monte Carlo analysis with simulated data, focusing on the impact of vertical resolution, errors in the knowledge of the ground topography, and phase calibration residuals.

Youzheng Qi;Ling Huang;Xucun Wang;Guangyou Fang;Gang Yu; "Airborne Transient Electromagnetic Modeling and Inversion Under Full Attitude Change," vol.14(9), pp.1575-1579, Sept. 2017. During airborne transient electromagnetic (EM) surveys, transmitting and receiving antennas change their attitudes as the inevitable result of pilot maneuvers and natural forces, which makes the EM response differ from that of the nominal attitude in which the antennas are straight and level. Attitude changes were usually neglected or only partially considered in the past, which is not adequate for a quantitative interpretation. In this letter, we first scrutinize the mechanism by which the attitude change affects the EM response and divide these effects into two parts: the pure attitude effect and the resultant translation effect. Then, we introduce a novel method to incorporate the full attitude change in both modeling and inversion. Our results demonstrate that the attitude change affects the early-time response much more than the late-time response, and that incorporating the full change in inversion produces a better estimate of shallow geoelectric parameters.

Liangbing Chen;Zhaomin Rao;Yuhao Wang;Huilin Zhou; "The Maximum Rank of the Transfer Matrix in 1-D Mirrored Interferometric Aperture Synthesis," vol.14(9), pp.1580-1583, Sept. 2017. Because the principle of mirrored interferometric aperture synthesis (MIAS) differs from that of conventional interferometric aperture synthesis, the antenna array for 1-D MIAS must be redesigned. Moreover, to obtain a precise estimation of the cosine visibilities, the maximum rank of the transfer matrix should be the key constraint in the array optimization model, but this has not been clearly presented before. In this letter, the maximum rank of the transfer matrix is derived and proved by mathematical induction. When the position of each antenna in the array is an even multiple of 0.5, the maximum rank is proven to be M - 2; when the position of each antenna is an odd multiple of 0.5, the maximum rank is proven to be M - 1, where M is the number of spatial frequencies provided by the array. This conclusion is significant for the array design of 1-D MIAS.

Stefano Cavallaro; "Statistical Properties of Polarimetric Weather Radar Returns for Nonuniformly Filled Beams," vol.14(9), pp.1584-1588, Sept. 2017. An explicit expression for the stochastic processes describing dual-polarization weather radar echoes is presented. The probability distributions of the proposed model are defined in terms of the point values assumed by polarimetric and Doppler variables in the relevant radar sampling volume. The statistical properties of the model are discussed in order to verify its faithfulness as a representation of real radar signals. The discussion considers the most general situation, i.e., it is not restricted to specific hydrometeor distributions or beam-filling conditions.

Hongying Liu;Shuyuan Yang;Shuiping Gou;Puhua Chen;Yikai Wang;Licheng Jiao; "Fast Classification for Large Polarimetric SAR Data Based on Refined Spatial-Anchor Graph," vol.14(9), pp.1589-1593, Sept. 2017. Graph-based semisupervised machine learning is well established. However, its computational complexity remains high, particularly for large data sets. In this letter, we propose a fast semisupervised classification algorithm for large polarimetric synthetic aperture radar (PolSAR) data using the recently presented spatial-anchor graph, named the fast spatial-anchor graph (FSAG) algorithm. Based on an initial superpixel segmentation of the PolSAR image, homogeneous regions are obtained. Border pixels are reassigned to the most similar superpixel according to majority voting and a distance measure. Feature vectors are then weighted within local homogeneous regions. The refined spatial-anchor graph is constructed from these regions, and the semisupervised classification is conducted. Experimental results on synthesized and real PolSAR data indicate that the proposed FSAG greatly reduces time consumption while maintaining accuracy for terrain classification compared with state-of-the-art graph-based approaches.

Amir Hosein Oveis;Mohammad Ali Sebt; "Dictionary-Based Principal Component Analysis for Ground Moving Target Indication by Synthetic Aperture Radar," vol.14(9), pp.1594-1598, Sept. 2017. This letter presents an efficient algorithm for ground moving target detection and motion-parameter estimation by synthetic aperture radar (SAR). The proposed method outperforms the conventional robust principal component analysis (RPCA)-based ground moving target indication (GMTI) methods in the literature, and it can estimate the radial and along-track velocities of ground moving targets. The rank constraint in the conventional RPCA problem is automatically relaxed by employing a dictionary matrix for clutter representation, so the new optimization problem is solved more easily, with fewer degrees of freedom. Furthermore, this dictionary helps suppress clutter at higher Doppler frequencies in wind-blowing scenarios and in intrinsic clutter motion modeling. By employing another dictionary matrix for all possible moving targets with different locations and velocities, each solution of the optimization problem is physically reasonable, as it corresponds to a moving target. Although the two dictionary matrices impose an extra computational burden, this load can be prepared prior to other GMTI processing using knowledge of the SAR system and scenario parameters. Moreover, the algorithm is designed for single-channel SAR configurations and has a lower computational load than conventional RPCA-GMTI methods, which must process the recorded data of multichannel systems. Numerical and experimental results are used to evaluate the performance of the proposed method and validate the theoretical discussion.

Yanfeng Zhang;Yongjun Zhang;Zhang Yunjun;Zongze Zhao; "A Two-Step Semiglobal Filtering Approach to Extract DTM From Middle Resolution DSM," vol.14(9), pp.1599-1603, Sept. 2017. Many filtering algorithms have been developed to extract the digital terrain model (DTM) from dense urban light detection and ranging data or high-resolution digital surface models (DSMs), assuming a smooth variation of topographic relief. However, this assumption breaks down for a middle-resolution DSM because of the diminished distinction between steep terrain and nonground points. This letter introduces a two-step semiglobal filtering (TSGF) workflow to separate those two components. The first SGF step uses the Shuttle Radar Topography Mission digital elevation model to obtain a flat-terrain mask for the input DSM; then, a segmentation-constrained SGF removes the nonground points within the flat-terrain mask while maintaining the shape of the terrain. Experiments conducted using DSMs generated from Chinese ZY3 satellite imagery verify the effectiveness of the proposed method. Compared with the conventional progressive morphological filter, the use of the flat-terrain mask reduces the average root-mean-square error of the DTM from 9.76 to 4.03 m, which the full TSGF method further reduces to 2.42 m.
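The first step relies on a flat-terrain mask derived from a coarse elevation model. Below is a minimal sketch of one plausible way to build such a mask, by thresholding the local slope of a DEM grid; the cell size, slope threshold, and function name are assumptions for illustration, not the published TSGF implementation.

```python
import math

def flat_terrain_mask(dem, cell=30.0, max_slope_deg=5.0):
    # threshold the local slope of a DEM grid (central differences,
    # clamped at the borders, so edge slopes are slightly underestimated)
    rows, cols = len(dem), len(dem[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dzdx = (dem[r][min(c + 1, cols - 1)] - dem[r][max(c - 1, 0)]) / (2 * cell)
            dzdy = (dem[min(r + 1, rows - 1)][c] - dem[max(r - 1, 0)][c]) / (2 * cell)
            slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
            mask[r][c] = slope <= max_slope_deg
    return mask

flat = flat_terrain_mask([[100.0] * 3 for _ in range(3)])   # level terrain: all True
steep = flat_terrain_mask([[0.0, 100.0, 200.0]])            # strong gradient: masked out
```

Nonground removal would then be restricted to cells where the mask is True, leaving steep terrain untouched.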

Seokhyeon Kim;Keerthana Balakrishnan;Yi Liu;Fiona Johnson;Ashish Sharma; "Spatial Disaggregation of Coarse Soil Moisture Data by Using High-Resolution Remotely Sensed Vegetation Products," vol.14(9), pp.1604-1608, Sept. 2017. A novel approach is presented to spatially disaggregate coarse soil moisture (SM) by only using remotely sensed vegetation index. The approach is based on the conditional relationship of vegetation with time-aggregated SM, allowing the coarse-scale SM to be disaggregated to the spatial resolution of the vegetation product. The method was applied to satellite-derived SM over January 2010-December 2011, using the high-resolution normalized difference vegetation index (NDVI). The results were evaluated against ground measurements during the two-year period over the contiguous United States and Spain, and also compared with an existing disaggregation method that also requires land surface temperature observations. It is shown that the proposed approach can provide fine-resolution SM with reasonable spatial variability.

Mian Pan;Jie Jiang;Qingpeng Kong;Jianguang Shi;Qinghua Sheng;Tao Zhou; "Radar HRRP Target Recognition Based on t-SNE Segmentation and Discriminant Deep Belief Network," vol.14(9), pp.1609-1613, Sept. 2017. In radar high-resolution range profile (HRRP)-based target recognition, one of the most challenging tasks is the noncooperative target recognition with imbalanced training data set. This letter presents a novel recognition framework to deal with this problem. The framework is composed of two steps: first, the t-distributed stochastic neighbor embedding (t-SNE) and synthetic sampling are utilized for data preprocessing to provide a well segmented and balanced HRRP data set; second, a discriminant deep belief network (DDBN) is proposed to recognize HRRP data. Compared with the conventional recognition models, the proposed framework not only makes better use of data set inherent structure among HRRP samples for segmentation, but also utilizes high-level features for recognition. Moreover, the DDBN shares latent information of HRRP data globally, which can enhance the ability of modeling the aspect sectors with few HRRP data. The experiments illustrate the meaning of the t-SNE, and validate the effectiveness of the proposed recognition framework with imbalanced HRRP data.

Leonardo B. L. Santos;Tiago Carvalho;Liana O. Anderson;Conrado M. Rudorff;Victor Marchezini;Luciana R. Londe;Silvia M. Saito; "An RS-GIS-Based Comprehensive Impact Assessment of Floods—A Case Study in Madeira River, Western Brazilian Amazon," vol.14(9), pp.1614-1617, Sept. 2017. Geographical information system (GIS)-based methods are powerful tools for assessing and quantifying impacts and, thus, for supporting strategies for disaster risk reduction (DRR). This is particularly relevant in scenarios of global climate change and intensified human intervention on riverine systems. The Madeira River in the city of Porto Velho (Brazilian Amazon) is a good example of an area susceptible to both factors. We take advantage of the 2014 flood, the largest recorded for this region, to combine remote sensing and GIS with social, health, and infrastructure data and spatially quantify the flood impacts. Using high-resolution airborne images, we applied a machine learning classification algorithm to detect urban areas. Our results show that at the flood extent corresponding to the highest river level, at least 0.65 km2 of urban area, 87 km of urban streets, four public schools, and two public health units were affected. More than 16 800 people suffered the impacts directly, 29.7% of them children. Based on registered data, the city recorded more than 20 cases of leptospirosis, and truck flow in the region decreased by up to 92%. The spatially explicit results of this letter have the potential to guide strategies supporting decision-making for DRR.

Suhui Xu;Xiaodong Mu;Dong Chai;Shuyang Wang; "Adapting Remote Sensing to New Domain With ELM Parameter Transfer," vol.14(9), pp.1618-1622, Sept. 2017. Annotating unlabeled remote sensing images is time consuming. One strategy is to take labeled remote sensing images from another domain as training samples and predict the target labels by supervised classification. However, this may lead to negative transfer because of the distribution difference between the two domains. To address this issue, we propose a novel domain adaptation method that transfers the parameters of an extreme learning machine (ELM). The core of this method is learning a transformation that maps the target ELM parameters to the source, making the classifier parameters of the target domain maximally aligned with those of the source. Our method combines several advantages previously unavailable in a single method: multiclass adaptation through parameter transfer, learning the final classifier and the transformation simultaneously, and avoiding negative transfer. Experiments on three data sets indicate improved accuracy and computational advantages compared with baseline approaches.

Dong-Xiao Yue;Feng Xu;Ya-Qiu Jin; "Wishart–Bayesian Reconstruction of Quad-Pol From Compact-Pol SAR Image," vol.14(9), pp.1623-1627, Sept. 2017. Compact polarimetry (compact-pol), as an effective polarization system, can reduce system complexity and data volume in comparison with quad polarimetry (quad-pol). Reconstruction of quad-pol data from compact-pol has mostly been addressed with iterative algorithms that use an empirically parameterized model under the assumption of reflection symmetry of the scatterer. In this letter, a linear relationship between compact-pol and quad-pol is first derived, and a Wishart-Bayesian regularized inverse algorithm is then developed to reconstruct pseudo quad-pol data from compact-pol. The problem is solved with the efficient alternating direction method of multipliers to recover the pseudo quad-pol covariance matrix. The reconstruction performance is evaluated by a coherence index, in comparison with existing methods.

Chenchen Lin;Puming Huang;Weiwei Wang;Yu Li;Jingwei Xu; "Unambiguous Signal Reconstruction Approach for SAR Imaging Using Frequency Diverse Array," vol.14(9), pp.1628-1632, Sept. 2017. The conflict between range and azimuth ambiguities is a challenging problem for spaceborne high-resolution and wide-swath (HRWS) synthetic aperture radar (SAR) imaging. In this letter, a novel ambiguity resolution approach based on frequency diverse array (FDA) is proposed to retrieve unambiguous signal from that with ambiguities in both range and azimuth domains. By exploiting the range-angle-dependent property of transmit steering vector in FDA and applying the second range dependence compensation approach, echoes from different ambiguous regions are separable in transmit spatial-Doppler frequency domains. Then a corresponding transmit beamformer is designed to extract spectrum component of each ambiguous region, which can be rearranged to comprise the desired unambiguous signal. Compatible with most existing azimuth ambiguity suppression algorithms which employ the receive degrees of freedom, the proposed approach can further enhance the capability of resolving ambiguity for HRWS SAR systems, and its effectiveness is verified by simulation experiments.

Yatong Zhou;Chaojun Shi;Hanming Chen;Jianyong Xie;Guoning Wu;Yangkang Chen; "Spike-Like Blending Noise Attenuation Using Structural Low-Rank Decomposition," vol.14(9), pp.1633-1637, Sept. 2017. Spike-like noise is a common type of random noise in many geoscience and remote sensing data sets. Its attenuation has recently become extremely important because it is the main bottleneck when processing the simultaneous-source data generated by modern seismic acquisition. In this letter, we propose a novel low-rank decomposition algorithm that effectively rejects spike-like noise in seismic data sets. The specialty of the algorithm is that it is applied along the morphological direction of the seismic data, given prior knowledge of the data morphology, which we call the local slope; the seismic data are of much lower rank along the morphological direction than along the space direction. The local slope is obtained via a robust plane-wave destruction method. We use two simulated field data examples to illustrate the algorithm workflow and its effective performance.

Grant J. Scott;Richard A. Marcum;Curt H. Davis;Tyler W. Nivin; "Fusion of Deep Convolutional Neural Networks for Land Cover Classification of High-Resolution Imagery," vol.14(9), pp.1638-1642, Sept. 2017. Deep convolutional neural networks (DCNNs) have recently emerged as the highest performing approach for a number of image classification applications, including automated land cover classification of high-resolution remote-sensing imagery. In this letter, we investigate a variety of fusion techniques to blend multiple DCNN land cover classifiers into a single aggregate classifier. While feature-level fusion is widely used with deep neural networks, our approach instead focuses on fusion at the classification/information level. Herein, we train three different DCNNs: CaffeNet, GoogLeNet, and ResNet50. The effectiveness of various information fusion methods, including voting, weighted averages, and fuzzy integrals, is then evaluated. In particular, we use DCNN cross-validation results for the input densities of the fuzzy integrals, followed by evolutionary optimization. This novel approach produces state-of-the-art classification results of up to 99.3% for the UC Merced data set and 99.2% for the RSD data set.
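Two of the simpler fusion schemes mentioned above, majority voting and weighted averaging of class probabilities, can be sketched as follows. The probability vectors and weights are hypothetical stand-ins for the three networks' softmax outputs, and the fuzzy-integral fusion used for the best results is not reproduced here.

```python
def weighted_average_fusion(prob_vectors, weights):
    # blend class-probability vectors; the weights could come from the
    # cross-validation accuracy of each network
    total = sum(weights)
    k = len(prob_vectors[0])
    fused = [sum(w * p[c] for w, p in zip(weights, prob_vectors)) / total
             for c in range(k)]
    return max(range(k), key=lambda c: fused[c])

def majority_vote_fusion(prob_vectors):
    # each classifier votes for its argmax class; the most common vote wins
    votes = [max(range(len(p)), key=lambda c: p[c]) for p in prob_vectors]
    return max(set(votes), key=votes.count)

# hypothetical per-class softmax outputs for one image from the three DCNNs
caffenet  = [0.60, 0.30, 0.10]
googlenet = [0.20, 0.50, 0.30]
resnet50  = [0.55, 0.35, 0.10]
preds = [caffenet, googlenet, resnet50]

label_soft = weighted_average_fusion(preds, weights=[1.0, 1.2, 1.5])
label_hard = majority_vote_fusion(preds)
```

Soft fusion lets a confident minority outvote two lukewarm classifiers, which hard voting cannot; the paper's fuzzy-integral fusion generalizes both by learning the aggregation itself.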

Pompilio Araújo;Rodolfo Miranda;Diedre Carmo;Raul Alves;Luciano Oliveira; "Air-SSLAM: A Visual Stereo Indoor SLAM for Aerial Quadrotors," vol.14(9), pp.1643-1647, Sept. 2017. In this letter, we introduce a novel method for visual simultaneous localization and mapping (SLAM), called Air-SSLAM, which exploits a stereo camera configuration. In contrast to monocular SLAM, scale definition and 3-D information are issues that can be dealt with more easily using stereo cameras. Air-SSLAM starts by computing keypoints and the corresponding descriptors over the pair of images, using good features to track and rotated binary robust independent elementary features, respectively. A map is then created by matching each pair of right and left frames. Long-term map maintenance is continuously performed by analyzing the quality of each match, as well as by inserting new keypoints into uncharted areas of the environment. Three main contributions can be highlighted in our method: (1) a novel method to match keypoints efficiently; (2) three quality indicators aimed at speeding up the mapping process; and (3) map maintenance with uniform distribution performed by image zones. Using a drone equipped with a stereo camera, flying indoors, the average translational error with respect to a marked ground truth was computed, demonstrating promising results.

* "IEEE Geoscience and Remote Sensing Letters information for authors," vol.14(9), pp.C3-C3, Sept. 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Geoscience and Remote Sensing Letters Institutional Listings," vol.14(9), pp.C4-C4, Sept. 2017.* Presents a listing of institutional institutions relevant for this issue of the publication.

## IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing - new TOC (2017 September 21) [Website]

* "Frontcover," vol.10(8), pp.C1-C1, Aug. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Society," vol.10(8), pp.C2-C2, Aug. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

L. Mou;X. Zhu;M. Vakalopoulou;K. Karantzalos;N. Paragios;B. Le Saux;G. Moser;D. Tuia; "Multitemporal Very High Resolution From Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest," vol.10(8), pp.3435-3447, Aug. 2017. In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The key methodological ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.

Muhammad Bilal;Janet E. Nichol; "Evaluation of the NDVI-Based Pixel Selection Criteria of the MODIS C6 Dark Target and Deep Blue Combined Aerosol Product," vol.10(8), pp.3448-3453, Aug. 2017. The Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 (C6) level 2 operational aerosol product (MOD04) contains the Dark Target (DT) and Deep Blue (DB) combined aerosol optical depth (AOD) observations (DTB) at 10 km resolution, which are generated using selection criteria based on the static normalized difference vegetation index (NDVI) as follows: 1) DT AOD data are used for NDVI > 0.3; 2) DB AOD data are used for NDVI < 0.2; and 3) the average of both algorithms, or the AOD data with the highest quality flag, is used for 0.2 ≤ NDVI ≤ 0.3. The objective of this study is to evaluate the NDVI pixel selection criteria used in the DTB AOD product. For this, the DT, DB, and DTB AOD retrievals are evaluated against the Aerosol Robotic Network (AERONET) level 2.0 cloud-screened and quality-controlled AOD data over Beijing from 2002 to 2014, Lahore from 2007 to 2013, and Paris from 2005 to 2014. The DT and DB AOD retrievals considered by the DTB product are tabulated. For comparison, the MODIS level 3 monthly NDVI product (MOD13A3) at 1 km resolution is also tabulated, indicating how the NDVI-based pixel selection criteria operate for the DT and DB AOD retrievals used in the DTB product. Results show that DT AOD retrievals for NDVI ≤ 0.3 are used in the DTB product, which increases the mean bias and the percentage of retrievals above the expected error. These results suggest that the DTB AOD product should adopt dynamic NDVI values in its pixel selection criteria.
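The static NDVI selection rule described above can be written down directly. This sketch implements only the averaging variant of the transition-zone branch (the highest-quality-flag variant is omitted), and the function name is illustrative, not part of the MODIS product code.

```python
def select_aod(ndvi, aod_dt, aod_db):
    """Combined (DTB) AOD for one pixel under the static NDVI rule."""
    if ndvi > 0.3:
        return aod_dt                       # Dark Target over dense vegetation
    if ndvi < 0.2:
        return aod_db                       # Deep Blue over bright surfaces
    # transition zone 0.2 <= NDVI <= 0.3: average the available retrievals
    available = [a for a in (aod_dt, aod_db) if a is not None]
    return sum(available) / len(available) if available else None

aod = select_aod(0.25, aod_dt=0.40, aod_db=0.30)   # transition zone
```

The study's point is that this fixed rule misroutes pixels when actual vegetation cover departs from the static NDVI map, motivating dynamic NDVI thresholds instead.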

Yunping Chen;Shudong Wang;Weihong Han;Yajv Xiong;Wenhuan Wang;Ling Tong; "A New Air Pollution Source Identification Method Based on Remotely Sensed Aerosol and Improved Glowworm Swarm Optimization," vol.10(8), pp.3454-3464, Aug. 2017. Air pollution sources generally cannot be identified as specific factories, only as certain industries. Focusing on this issue, a new method based on an improved glowworm swarm optimization and remotely sensed imagery is proposed to precisely locate and quantify air pollution sources. In addition, meteorological data and GIS information are used to backtrack the pollution source. To quantify the pollution of each factory in the study areas, three pollution indices are proposed: pollution gross (PG), pollution intensity, and area-normalized pollution (ANP). As a result, the polluting contribution of each factory is listed, and the most polluting factories, which were bulletined as key monitoring factories by the local authority, are accurately extracted. Among the pollution indices, ANP is the most robust and reliable and is the recommended choice. Furthermore, the results show that factory pollution background information derived from historical remote sensing data can be used to improve the precision of identification. To our knowledge, this study provides the first attempt to identify a pollution source as originating from an individual factory based on remote sensing data. The proposed method provides a useful tool for air quality management, and the results are meaningful for environmental and economic issues.

Modurodoluwa Adeyinka Okeowo;Hyongki Lee;Faisal Hossain;Augusto Getirana; "Automated Generation of Lakes and Reservoirs Water Elevation Changes From Satellite Radar Altimetry," vol.10(8), pp.3465-3481, Aug. 2017. Limited access to in situ water level data for lakes and reservoirs has been a major setback for regional and global studies of reservoirs, surface water storage changes, and monitoring of the hydrologic cycle. Processing satellite radar altimetry data over inland water bodies on a large scale has been a cumbersome task, primarily because of the need to remove measurements contaminated by the surrounding land. In this study, we propose a new algorithm to automatically generate time series from raw satellite radar altimetry data without user intervention. With this method, users with little knowledge of the field can independently process radar altimetry for diverse applications. The method is based on K-means clustering, the interquartile range, and statistical analysis of the dataset for outlier detection. Jason-2 and Envisat radar altimetry data were used to demonstrate the capability of this algorithm. A total of 37 satellite crossings over 30 lakes and reservoirs located in the U.S., Brazil, and Nigeria were used based on the availability of in situ data. We compared the results against in situ data, and root-mean-square error values ranged from 0.09 to 1.20 m. We also confirmed the potential of this algorithm over rivers and wetlands using the southern Congo River and the Everglades wetlands in Florida, respectively. Finally, the different Envisat retracking algorithms (Ice-1, Ice-2, Ocean, and Sea-Ice) were compared using the proposed algorithm. Ice-1 performed best for generating water level time series for inland water bodies, consistent with previous studies.
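A minimal sketch of the outlier-rejection idea, assuming a two-cluster 1-D K-means to isolate the dominant (water) cluster of heights in a pass, followed by an interquartile-range filter on that cluster; the sample heights and function names are hypothetical, not the authors' code.

```python
import statistics

def kmeans_1d(values, iters=50):
    # two-cluster 1-D K-means, seeded with the extreme values
    centers = [min(values), max(values)]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            clusters[0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def iqr_filter(values):
    # keep values within 1.5 * IQR of the quartiles
    q = statistics.quantiles(values, n=4)   # q[0] = Q1, q[2] = Q3
    iqr = q[2] - q[0]
    lo, hi = q[0] - 1.5 * iqr, q[2] + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

# hypothetical heights (m) from one pass: water surface near 10 m
# plus a few land-contaminated returns near 25 m
pass_heights = [10.02, 10.05, 9.98, 10.01, 10.03, 9.99, 10.04, 25.3, 24.8, 26.1]
water = max(kmeans_1d(pass_heights), key=len)   # dominant cluster
clean = iqr_filter(water)
height = statistics.median(clean)               # one time-series sample
```

Repeating this per pass yields the automated water level time series; the paper adds further statistical checks before a sample is accepted.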

Donato Amitrano;Gerardo Di Martino;Antonio Iodice;Daniele Riccio;Giuseppe Ruello; "Small Reservoirs Extraction in Semiarid Regions Using Multitemporal Synthetic Aperture Radar Images," vol.10(8), pp.3482-3492, Aug. 2017. In this paper, we introduce a novel framework for small reservoirs extraction in semiarid environments. The task is accomplished through the introduction of a pseudoprobability index derived from multitemporal synthetic aperture radar RGB images. These products are characterized by their ease of interpretation for nonexpert users and the possibility of processing them with simple algorithms, allowing, in this case, the definition of an ad hoc band ratio for feature extraction. The reliability of the proposed approach is demonstrated through a case study in Burkina Faso in which 19 reservoirs with extents up to about 6000 m2 were tested. The accuracy obtained with respect to the available ground truth is higher than 88%.

Loreley Selene Lago;Martin Saraceno;Laura A. Ruiz-Etcheverry;Marcello Passaro;Fernando Ariel Oreiro;Enrique Eduardo D'Onofrio;Raúl A. González; "Improved Sea Surface Height From Satellite Altimetry in Coastal Zones: A Case Study in Southern Patagonia," vol.10(8), pp.3493-3503, Aug. 2017. High-resolution 20-Hz Jason-2 satellite altimetry data obtained from crossing tracks numbered 52 and 189 in San Matias Gulf, Argentina, are compared with a 22-month-long time series of sea level measured by a bottom pressure recorder. It was deployed 1.3 km from the nominal intersection of the two tracks and 0.9 km from the coast. Results show that by improving retracking and tidal modeling, satellite altimetry data become more accurate close to the coast. Indeed, a larger number of reliable data are obtained up to 1.6 km from the coast when satellite data are retracked using adaptive leading edge subwaveform retracker (ALES) rather than using the classic Brown model. The tidal model that showed the lowest root sum square (RSS) of the difference between the in situ and the modeled tidal amplitude and phase is TPXO8 (RSS 4.8 cm). Yet, the lowest difference from in situ tidal constituents is obtained by harmonic analysis of the available 23-year-long 1-Hz altimetry dataset (RSS 4.1 cm), highlighting the potential of altimetry data to compute tides. Considering ALES retracking and TPXO8 tidal correction for the 20-Hz Jason-2 data, we finally show that it is possible to retrieve 70% more data and to improve correlation with in situ measurements from 0.79 to 0.95. The sea level anomaly obtained this way has a root mean square difference from in situ data of only 13 cm as close as 4 km from the coast. Overall, the analysis performed indicates that satellite altimetry data can be greatly improved, even in complex macrotidal coastal regions.

Suman Singha;Rudolf Ressel; "Arctic Sea Ice Characterization Using RISAT-1 Compact-Pol SAR Imagery and Feature Evaluation: A Case Study Over Northeast Greenland," vol.10(8), pp.3504-3514, Aug. 2017. Synthetic Aperture Radar (SAR) polarimetry has become a valuable tool in space-borne SAR-based sea ice analysis. The two major objectives in SAR-based remote sensing of sea ice are, on the one hand, to have a large coverage and, on the other hand, to obtain a radar response that carries as much information as possible in order to characterize sea ice. Single-polarimetric acquisitions of existing sensors offer a wide coverage on the ground, whereas dual-polarimetric or, even better, fully polarimetric data offer a higher information content, which allows for a more reliable automated sea ice analysis at the cost of a smaller swath. In order to reconcile the advantages of fully polarimetric acquisitions with the higher ground coverage of acquisitions with fewer polarimetric channels, hybrid/compact polarimetric acquisitions offer an excellent tradeoff between the mentioned objectives. With the advent of the RISAT-1 satellite platform, we are able to explore the potential of compact dual-pol acquisitions for sea ice analysis and classification. Our algorithmic approach for automated sea ice classification consists of two steps. In the first step, we perform a feature extraction followed by a feature evaluation procedure. The resulting feature vectors are then ingested into a trained artificial neural network classifier to arrive at a pixel-wise supervised classification. We present a comprehensive polarimetric feature analysis and classification results on a dataset acquired off the eastern Greenland coast, along with comparisons of results obtained from near-coincident (spatially and temporally) C-band fully polarimetric imagery acquired by RADARSAT-2.

Junfei Xie;Jianhua Zhou; "Classification of Urban Building Type from High Spatial Resolution Remote Sensing Imagery Using Extended MRS and Soft BP Network," vol.10(8), pp.3515-3528, Aug. 2017. This study presents a new approach for the classification of building types in complex urban scenes. The approach consists of two parts: extended multiresolution segmentation (EMRS) and soft classification using a BP network (SBP); the technology scheme is referred to here as EMRS-SBP. EMRS is used to guide the design of descriptors, where a descriptor is a feature expression or symbolized algorithm that systematically promotes the expressive capability of image features. A classifier can discern complex patterns of combined pixels far better when working in an EMRS-based feature space constructed from a number of such descriptors. SBP serves as a classifier model to generate natural clusters of members, where a member refers here to either a pixel or an image patch. A member with an ensured class mark is denoted a sure member and the rest are unsure (fuzzy) members. The latter can be relabeled through recursive defuzzifying according to the information carried by the gradually increasing set of sure members. Using EMRS-SBP, three building types, i.e., old-fashioned courtyard dwellings, multistorey residential buildings, and high-rise buildings, can be accurately classified from high spatial resolution imagery in a feature space constructed with fifteen descriptors, including nine EMRS-based ones. The evidence shows that the mean overall accuracy using SBP in the EMRS-based feature space is 19.8% higher than that of hard classification with a BP network in a single-resolution segmentation space, while the mean kappa statistic (κ) is 25.1% higher.

Abebe Mohammed Ali;Roshanak Darvishzadeh;Andrew K. Skidmore; "Retrieval of Specific Leaf Area From Landsat-8 Surface Reflectance Data Using Statistical and Physical Models," vol.10(8), pp.3529-3536, Aug. 2017. One of the key traits in the assessment of ecosystem functions is specific leaf area (SLA). The main aim of this study was to examine the potential of new-generation satellite images, such as Landsat-8 imagery, for the retrieval of SLA at regional and global scales. Therefore, both statistical and radiative transfer model (RTM) inversion approaches for estimating SLA from the new Landsat-8 product were evaluated. Field data were collected for 33 sample plots during a field campaign in summer 2013 in the Bavarian Forest National Park, Germany, while Landsat-8 image data concurrent with the time of the field campaign were acquired. Estimates of SLA were examined using different Landsat-8 spectral bands, vegetation indices calculated from these bands, and the inversion of a canopy RTM. The RTM inversion was performed utilizing continuous wavelet analysis and a look-up table (LUT) approach. The results were validated using R2 and the root-mean-square error (RMSE) between the estimated and measured SLA. In general, SLA was estimated accurately by both the statistical and RTM inversion approaches. The relationship between measured and estimated SLA using the enhanced vegetation index was strong (R2 = 0.77 and RMSE = 4.44%). Furthermore, the predictive model developed from the combination of the wavelet features at 654.5 nm (scale 9) and 2200.5 nm (scale 2) correlated strongly with SLA (R2 = 0.79 and RMSE = 7.52%). The inversion of the LUT using a spectral subset consisting of bands 5, 6, and 7 of Landsat-8 (R2 = 0.73 and RMSE = 5.33%) yielded a higher accuracy and precision than any other spectral subset.
The findings of this study provide insights into the potential of the new generation of multispectral medium-resolution satellite imagery, such as Landsat-8 and Sentinel-2, for accurate retrieval and mapping of SLA using either statistical or RTM inversion methods.
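The look-up-table (LUT) inversion step can be illustrated generically: among the pre-simulated spectra in the table, pick the parameter value whose spectrum minimizes the RMSE against the measurement. The toy three-band spectra and SLA values below are invented for illustration and are not from the paper.

```python
import numpy as np

def lut_invert(measured, lut_spectra, lut_params):
    """Look-up-table inversion: return the parameter whose simulated
    spectrum has the smallest RMSE against the measured spectrum."""
    rmse = np.sqrt(np.mean((lut_spectra - measured) ** 2, axis=1))
    return lut_params[np.argmin(rmse)]

# Toy LUT: reflectance in three bands simulated for three SLA values.
lut_spectra = np.array([[0.10, 0.30, 0.20],
                        [0.15, 0.40, 0.25],
                        [0.20, 0.50, 0.30]])
lut_params = np.array([8.0, 12.0, 16.0])   # hypothetical SLA values (m^2/kg)
best = lut_invert(np.array([0.16, 0.41, 0.24]), lut_spectra, lut_params)
```

Real LUTs contain thousands of entries sampled over the full RTM parameter space; the nearest-match principle is the same.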

Xuejiao Bai;Pengxin Wang;Hongshuo Wang;Yi Xie; "An Up-Scaled Vegetation Temperature Condition Index Retrieved From Landsat Data with Trend Surface Analysis," vol.10(8), pp.3537-3546, Aug. 2017. Drought causes great losses in regional agricultural production and decreases socioeconomic growth. The vegetation temperature condition index (VTCI) has a distinct advantage in monitoring the onset, duration, and intensity of droughts. With the development of modern remote sensing technologies, remotely sensed data with variable spatial and temporal resolution are used to generate multiscale maps of droughts. Therefore, understanding the scale effect and developing appropriate up-scaling methods to retrieve spatiotemporal drought variables across different scales is valuable. As an alternative to the commonly used window averaging (WA) method, we develop the trend surface analysis (TSA) method based on multiple regression analysis to up-scale Landsat-derived VTCI (Landsat-VTCI) images from a finer to a coarser resolution. The two methods are systematically evaluated in a case study according to various statistical indicators, including the spatial and frequency distributions of features, and the correlation coefficients and root mean square errors between up-scaled Landsat-VTCI images and Moderate Resolution Imaging Spectroradiometer (MODIS)-derived VTCI (MODIS-VTCI) images. The results show that TSA is reliable and more suitable than WA for non-normally distributed Landsat-derived VTCIs, whereas the WA results are similar to the TSA results for normal distributions. The TSA method accommodates any type of Landsat-VTCI distribution within a study area and can be programmed to up-scale spatial drought variables from a finer to a coarser spatial resolution, making it more efficient and flexible than the WA method.
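The window averaging (WA) baseline that TSA is compared against amounts to block-averaging the fine-resolution image down to the coarse grid; a minimal sketch is below (TSA, which replaces the block mean with a fitted regression trend surface, is not reproduced here).

```python
import numpy as np

def window_average(img, factor):
    """Up-scale (coarsen) an image by block averaging with a square window.
    The image dimensions must be divisible by the coarsening factor."""
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

fine = np.arange(16, dtype=float).reshape(4, 4)   # toy "Landsat-VTCI" grid
coarse = window_average(fine, 2)                  # toy "MODIS-scale" grid
```

Each coarse pixel is simply the mean of its 2 x 2 block of fine pixels, which is why WA tracks TSA closely only when the fine-scale values are near-normally distributed within each window.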

Leping Chen;Daoxiang An;Xiaotao Huang; "A Backprojection-Based Imaging for Circular Synthetic Aperture Radar," vol.10(8), pp.3547-3555, Aug. 2017. Circular synthetic aperture radar (CSAR) has attracted much attention in the field of high-resolution SAR imaging. However, CSAR image focusing is affected by the motion deviations of the platform. In experimental CSAR data processing, the main way to deal with motion errors is to use setup calibrators, which restricts widespread application. In this paper, based on the estimation of motion errors, an autofocus CSAR imaging strategy is proposed that does not require any setup calibrator. The first step is to split the entire aperture into several subapertures, the second is to process the subaperture data with an autofocus backprojection method, and the last is to obtain the final CSAR image by merging the subimages from the subaperture processing. The CSAR data processing results prove that the proposed strategy can remove motion errors accurately and acquire well-focused CSAR images.
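As a rough, hedged sketch of the backprojection kernel that such a strategy builds on (without the paper's autofocus or subaperture merging), the following sums range-compressed echoes coherently over an imaging grid. The geometry and signal model are deliberately simplified assumptions: a 2-D flat scene, ideal point echoes, and nearest-bin interpolation.

```python
import numpy as np

def backproject(pulses, positions, grid_x, grid_y, dr, wavelength):
    """Minimal time-domain backprojection onto a flat imaging grid.

    pulses[k]    : complex range-compressed echo at aperture position k
    positions[k] : (x, y) platform coordinates for that pulse
    dr           : range-bin spacing in metres
    """
    image = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for pulse, (px, py) in zip(pulses, positions):
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                r = np.hypot(x - px, y - py)        # slant range to pixel
                rbin = int(round(r / dr))
                if rbin < len(pulse):
                    # undo the two-way propagation phase, accumulate coherently
                    image[iy, ix] += pulse[rbin] * np.exp(4j * np.pi * r / wavelength)
    return image

# Point target at the origin seen from three positions on a partial circle.
wavelength, dr = 0.03, 1.0
positions = [(-10.0, 0.0), (0.0, -10.0), (10.0, 0.0)]
pulses = []
for px, py in positions:
    r = np.hypot(px, py)                            # range to the target (10 m)
    echo = np.zeros(20, dtype=complex)
    echo[int(round(r / dr))] = np.exp(-4j * np.pi * r / wavelength)
    pulses.append(echo)
grid = np.array([-1.0, 0.0, 1.0])
image = backproject(pulses, positions, grid, grid, dr, wavelength)
peak = np.unravel_index(np.argmax(np.abs(image)), image.shape)
```

The echoes add in phase only at the true target pixel, which is what motion errors destroy and the paper's autofocus step restores.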

Wanying Song;Ming Li;Peng Zhang;Yan Wu;Lu Jia;Lin An; "Unsupervised PolSAR Image Classification and Segmentation Using Dirichlet Process Mixture Model and Markov Random Fields With Similarity Measure," vol.10(8), pp.3556-3568, Aug. 2017. The Markov random field (MRF) model is effective at incorporating the spatial-contextual information of images and has been widely applied to remote-sensing image classification and segmentation. However, traditional MRF-based methods are unable to determine the precise number of clusters automatically. The Dirichlet process mixture model (DPMM), in contrast, takes the number of clusters as a model parameter and estimates it during image classification, making it a powerful candidate for classification tasks. In this paper, by fusing the DPMM and a similarity measure scheme into the MRF framework, we propose a novel unsupervised classification and segmentation method for polarimetric synthetic aperture radar (PolSAR) images, abbreviated as DPMM-SMMRF. First, the DPMM, built on the multidimensional Gaussian distribution, is introduced into the MRF framework, which enables the proposed DPMM-SMMRF model to identify the underlying number of clusters automatically. Second, to utilize the polarization information adequately and modulate the spatial correlation, the similarity measure between neighboring polarimetric covariance matrices is used to construct the interaction term, providing strong noise immunity and enhancing the classification of the sample pixels. Then, to update the class labels and the parameters of the proposed DPMM-SMMRF model, we propose a detailed sampling procedure based on Gibbs sampling.
Experiments on real PolSAR images demonstrate that the proposed DPMM-SMMRF model can automatically recognize the number of clusters and simultaneously obtain higher classification accuracy, more accurate edge location, and smoother homogeneous areas compared to several recent MRF models.

Huaitao Fan;Zhimin Zhang;Robert Wang;Ning Li;Wei Xu;Zhen Xu; "Demonstration of Dual-Channel TOPS SAR Imaging With Airborne C-Band Data," vol.10(8), pp.3569-3581, Aug. 2017. Multichannel-in-azimuth synthetic aperture radar (SAR) operating in the terrain observation by progressive scans (TOPS) acquisition mode has attracted much attention recently for its capability to achieve ultrawide-swath imaging with a high spatial resolution. In order to verify the feasibility and operability of this newly developed remote sensing concept, a C-band airborne azimuth dual-channel TOPS SAR has been designed by the Institute of Electronics, Chinese Academy of Sciences, as a test bed for future spaceborne realizations. This paper introduces the experimental SAR system and reports the data processing results of an outfield experiment conducted in late September 2014. The importance of the experiment resides in its potential to validate several important technical aspects of this novel SAR operation with real experimental data, including channel mismatch cancellation and unambiguous signal reconstruction. In addition, two processing methods are proposed to calibrate the influence of the antenna phase-center fluctuation that occurs in the dual-channel TOPS SAR. Finally, the experimental results obtained, including the phase mismatch cancellation and the focused imagery, are presented and analyzed.

Sung-Ho Chae;Won-Jin Lee;Hyung-Sup Jung;Lei Zhang; "Ionospheric Correction of L-Band SAR Offset Measurements for the Precise Observation of Glacier Velocity Variations on Novaya Zemlya," vol.10(8), pp.3591-3603, Aug. 2017. The synthetic aperture radar (SAR) offset tracking method has been widely used for multitemporal analysis of fast glacier movements in the polar region. However, it can be severely distorted, particularly in the case of L-band SAR systems, mainly due to the frequent occurrence of ionospheric effects in the polar region. In this study, we developed an efficient method to extract and correct the ionospheric contribution from SAR offset tracking measurements. The method exploits an iterative directional filtering approach, which is based on the pattern and directionality of ionospheric streaks. The measurement performance of the proposed method was evaluated using three Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) pairs. Our results showed that the proposed correction improved measurement accuracies from 4.68-23.88 to 1.03-1.51 m/yr; that is, the corrected measurements were about 5-16 times more accurate than the original ones. From these results, we concluded that our correction technique is highly suitable for the precise measurement of glacier displacements even in the presence of strong ionospheric effects. Using the proposed method, variations of glacier velocities were measured in the Vylki, Shury, and Kropotnika glaciers on Novaya Zemlya, located in the Russian Arctic Ocean, and grounding zones were detected from the measurements in the Shury and Kropotnika glaciers. This further confirmed that the proposed correction method allows for the precise monitoring of glacier movements. However, in cases of severely ionosphere-distorted measurements, the applicability of the proposed method may be limited.

Wen Xie;Licheng Jiao;Biao Hou;Wenping Ma;Jin Zhao;Shuyin Zhang;Fang Liu; "POLSAR Image Classification via Wishart-AE Model or Wishart-CAE Model," vol.10(8), pp.3604-3615, Aug. 2017. Neural networks such as the autoencoder (AE) and the convolutional autoencoder (CAE) have been successfully applied to image feature extraction. To account for the statistical distribution of polarimetric synthetic aperture radar (POLSAR) data, we incorporate the Wishart distance measurement into the training process of the AE and the CAE. In this paper, a new type of AE and CAE is defined, which we name the Wishart-AE (WAE) and the Wishart-CAE (WCAE). Furthermore, we connect the WAE or WCAE to a softmax classifier to compose a model for POLSAR image classification. Compared with AE and CAE models, the WAE and WCAE models achieve higher classification accuracy because they obtain classification features that are better suited to POLSAR data. Moreover, the WCAE model utilizes the local spatial information of a POLSAR image, unlike the WAE model. A convolutional neural network (CNN), which also makes use of spatial information, has been widely applied to image classification, but our WCAE model is more time-efficient than the CNN model. Given the above, our methods not only improve classification performance but also save experimental time. Experimental results on four POLSAR datasets demonstrate that our proposed methods are significantly effective.
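The abstract does not spell out the Wishart distance used in training; the classic PolSAR Wishart distance of the Lee et al. classifier, d(C, Σ) = ln|Σ| + tr(Σ⁻¹C), is a reasonable reference point and is sketched below. Treating it as the authors' exact term is an assumption.

```python
import numpy as np

def wishart_distance(C, Sigma):
    """Classic Wishart distance d(C, Sigma) = ln|Sigma| + tr(Sigma^{-1} C)
    between a pixel covariance matrix C and a class-center matrix Sigma."""
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(np.linalg.solve(Sigma, C)).real

# A pixel is assigned to the class center that minimizes the distance.
C = np.eye(3, dtype=complex)
centers = [np.eye(3, dtype=complex), 4.0 * np.eye(3, dtype=complex)]
dists = [wishart_distance(C, S) for S in centers]
label = int(np.argmin(dists))
```

Minimizing this distance is equivalent to maximizing the complex Wishart likelihood of the sample covariance under the class center, which is why it suits POLSAR statistics better than a Euclidean loss.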

Pasquale Iervolino;Raffaella Guida; "A Novel Ship Detector Based on the Generalized-Likelihood Ratio Test for SAR Imagery," vol.10(8), pp.3616-3630, Aug. 2017. Ship detection with synthetic aperture radar (SAR) images, acquired at different working frequencies, is presented in this paper, where a novel technique is proposed based on the generalized-likelihood ratio test (GLRT). Suitable electromagnetic models for both the sea clutter and the signal backscattered from the ship are considered in the new technique in order to improve the detector performance. The GLRT is compared to the traditional constant false alarm rate (CFAR) algorithm through Monte Carlo simulations in terms of receiver operating characteristic (ROC) curves and computational load at different bands (S-, C-, and X-). Performances are also compared through simulations with different orbital and scene parameters at fixed values of band and polarization. The GLRT is then applied to real datasets acquired by different sensors (TerraSAR-X, Sentinel-1, and the Airbus airborne demonstrator) operating at different bands (S-, C-, and X-). An analysis of the target-to-clutter ratio (TCR) is then performed, and detection outcomes are compared with automatic identification system data when available. Simulations show that the GLRT presents better ROCs than those obtained through the CFAR algorithm. On the other hand, results on real SAR images demonstrate that the proposed approach greatly improves the TCR (between 22 and 32 dB on average), but its computational time is about 1.5 times that of the CFAR algorithm.
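The CFAR baseline that the GLRT is compared against can be sketched as a standard 1-D cell-averaging CFAR: each cell is flagged when it exceeds a threshold scaled from the mean power of surrounding training cells. The window sizes, the exponential sea-clutter model, and the injected target below are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def ca_cfar(power, guard, train, pfa):
    """1-D cell-averaging CFAR detector for exponentially distributed clutter."""
    n = 2 * train                              # training cells per test cell
    alpha = n * (pfa ** (-1.0 / n) - 1.0)      # CA-CFAR threshold scale factor
    hits = np.zeros_like(power, dtype=bool)
    for i in range(guard + train, len(power) - guard - train):
        lead = power[i - guard - train : i - guard]
        lag = power[i + guard + 1 : i + guard + train + 1]
        noise = np.concatenate([lead, lag]).mean()
        hits[i] = power[i] > alpha * noise
    return hits

rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, 200)            # exponential-power sea clutter
clutter[100] += 50.0                           # bright ship-like target
detections = ca_cfar(clutter, guard=2, train=8, pfa=1e-3)
```

The GLRT replaces this fixed-threshold scheme with a likelihood ratio computed under explicit clutter and ship backscatter models, which is where its ROC advantage (and its extra computation) comes from.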

Fengying Xie;Mengyun Shi;Zhenwei Shi;Jihao Yin;Danpei Zhao; "Multilevel Cloud Detection in Remote Sensing Images Based on Deep Learning," vol.10(8), pp.3631-3640, Aug. 2017. Cloud detection is one of the important tasks in remote sensing image processing. In this paper, a novel multilevel cloud detection method based on deep learning is proposed for remote sensing images. First, the simple linear iterative clustering (SLIC) method is improved to segment the image into good-quality superpixels. Then, a deep convolutional neural network (CNN) with two branches is designed to extract the multiscale features from each superpixel and predict the superpixel as one of three classes: thick cloud, thin cloud, and noncloud. Finally, the predictions of all the superpixels in the image yield the cloud detection result. In the proposed cloud detection framework, the improved SLIC method can obtain accurate cloud boundaries by optimizing initial cluster centers, designing a dynamic distance measure, and expanding the search space. Moreover, unlike traditional cloud detection methods that cannot achieve multilevel detection of cloud, the designed deep CNN model can not only detect cloud but also distinguish thin cloud from thick cloud. Experimental results indicate that the proposed method can detect cloud with higher accuracy and robustness than the compared methods.
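The SLIC assignment step that the paper improves is driven by a combined color-spatial distance. Below is a minimal sketch of the standard SLIC measure only; the paper's dynamic distance measure and expanded search space are not reproduced.

```python
import numpy as np

def slic_distance(pix_color, pix_xy, center_color, center_xy, S, m):
    """Standard combined SLIC distance: color distance plus spatial distance
    scaled by the grid interval S and the compactness parameter m."""
    dc = np.linalg.norm(np.asarray(pix_color, float) - np.asarray(center_color, float))
    ds = np.linalg.norm(np.asarray(pix_xy, float) - np.asarray(center_xy, float))
    return float(np.hypot(dc, ds / S * m))

# Each pixel is assigned to the nearest cluster center under this measure;
# a larger m favors compact superpixels, a smaller m favors color homogeneity.
d = slic_distance([0, 0, 0], (0, 0), [3, 0, 0], (4, 0), S=4, m=4)
```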

Ya'nan Zhou;Jun Li;Li Feng;Xin Zhang;Xiaodong Hu; "Adaptive Scale Selection for Multiscale Segmentation of Satellite Images," vol.10(8), pp.3641-3651, Aug. 2017. With the dramatic increase in the spatial resolution of satellite imaging sensors, object-based image analysis (OBIA) has been gaining prominence in remote sensing applications. Multiscale image segmentation is a prerequisite step that splits an image into hierarchical homogeneous segmented objects for OBIA. However, scale selection remains a challenge in multiscale segmentation. In this study, we present an adaptive approach for defining and estimating the optimal scale in the multiscale segmentation process. Central to our method is the combined use of image features from segmented objects and prior knowledge from historical thematic maps in a top-down segmentation procedure. Specifically, the whole image was first split into segmented objects with the largest scale in a predefined segmentation scale sequence. Second, based on segmented object features and prior knowledge in the local region of thematic maps, we calculated complexity values for each segmented object. Third, if the complexity values of an object were large enough, this object was further split into multiple segmented objects with a smaller scale in the scale sequence. Then, in a similar manner, complex segmented objects were iteratively split into the simplest objects. Finally, the final segmentation result was obtained and evaluated. We applied this method to a GF-1 multispectral satellite image and a ZY-3 multispectral satellite image to produce multiscale segmentation maps and further classification maps, compared with state-of-the-art methods and the traditional mean shift algorithm. The experimental results illustrate that the proposed method is practically helpful and efficient in producing appropriate segmented image objects at optimal scales.

Zhipeng Deng;Hao Sun;Shilin Zhou;Juanping Zhao;Huanxin Zou; "Toward Fast and Accurate Vehicle Detection in Aerial Images Using Coupled Region-Based Convolutional Neural Networks," vol.10(8), pp.3652-3664, Aug. 2017. Vehicle detection in aerial images, an interesting but challenging problem, plays an important role in a wide range of applications. Traditional methods are based on sliding-window search and handcrafted or shallow-learning-based features, with heavy computational costs and limited representation power. Recently, deep learning algorithms, especially region-based convolutional neural networks (R-CNNs), have achieved state-of-the-art detection performance in computer vision. However, several challenges limit the application of R-CNNs to vehicle detection in aerial images: 1) vehicles in large-scale aerial images are relatively small in size, and R-CNNs have poor localization performance on small objects; 2) R-CNNs are designed for detecting the bounding box of a target without extracting attributes; 3) manual annotation is generally expensive, and the available manual annotations of vehicles for training R-CNNs are not sufficient in number. To address these problems, this paper proposes a fast and accurate vehicle detection framework. On one hand, to accurately extract vehicle-like targets, we developed an accurate-vehicle-proposal-network (AVPN) based on a hyper feature map that combines hierarchical feature maps and is more accurate for small object detection. On the other hand, we propose a coupled R-CNN method, which combines an AVPN and a vehicle attribute learning network to extract the vehicle's location and attributes simultaneously. For original large-scale aerial images with limited manual annotations, we use cropped image blocks for training with data augmentation to avoid overfitting.
Comprehensive evaluations on the public Munich vehicle dataset and the collected vehicle dataset demonstrate the accuracy and effectiveness of the proposed method.

Shilpa Suresh;Shyam Lal;Chintala Sudhakar Reddy;Mustafa Servet Kiran; "A Novel Adaptive Cuckoo Search Algorithm for Contrast Enhancement of Satellite Images," vol.10(8), pp.3665-3676, Aug. 2017. Owing to the increased demand for satellite images in various practical applications, the use of proper enhancement methods is essential. Visual enhancement of such images mainly focuses on improving the contrast of the procured scene while conserving its naturalness with minimal image artifacts. The last decade has seen extensive use of metaheuristic approaches for automatic image enhancement. In this paper, a robust and novel adaptive cuckoo-search-based enhancement algorithm is proposed for the enhancement of various satellite images. The proposed algorithm includes a chaotic initialization phase, an adaptive Lévy flight strategy, and a mutative randomization phase. Performance evaluation is carried out by quantitative and qualitative comparison of the proposed algorithm with other state-of-the-art metaheuristic algorithms. Box-and-whisker plots are also included to evaluate the stability and convergence capability of all the algorithms tested. Test results substantiate the efficiency and robustness of the proposed algorithm in enhancing a wide range of satellite images.
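Cuckoo search explores with Lévy-distributed step lengths, and a standard way to draw them is Mantegna's algorithm, sketched below. The Lévy index and the lack of any adaptation rule are assumptions; the paper's adaptive strategy is not reproduced.

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(n, beta=1.5, seed=0):
    """Mantegna's algorithm for heavy-tailed Levy step lengths (index beta)."""
    rng = np.random.default_rng(seed)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n)    # numerator draw with tuned scale
    v = rng.normal(0.0, 1.0, n)        # denominator draw
    return u / np.abs(v) ** (1 / beta)

steps = levy_steps(1000)
```

Most steps are small (local refinement) while occasional very large ones escape local optima, which is the property an adaptive variant would tune over the course of the search.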

Yaser Esmaeili Salehani;Saeed Gazor; "Smooth and Sparse Regularization for NMF Hyperspectral Unmixing," vol.10(8), pp.3677-3692, Aug. 2017. In this paper, we propose a matrix factorization method for hyperspectral unmixing using the linear mixing model. In this method, we add arctan functions of the endmembers to the ℓ2-norm of the error in order to exploit the sparsity of the fractional abundances. Most of the energy of the spectral signatures of materials is concentrated in the first few subbands, resulting in smooth spectral signatures. To exploit this smoothness, we also add a weighted norm of the spectral signatures of the materials to limit their nonsmooth errors. We propose a multiplicative iterative algorithm to solve this minimization problem as a nonnegative matrix factorization (NMF) problem. We apply the proposed Arctan-NMF method to synthetic data generated from a real spectral library and compare its performance with several state-of-the-art unmixing methods. Moreover, we evaluate the efficiency of Arctan-NMF on two different types of real hyperspectral data. Our simulations show that Arctan-NMF is more effective than the state-of-the-art methods in terms of spectral angle distance and abundance angle distance.
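The multiplicative-update machinery underlying Arctan-NMF can be illustrated with the plain Lee-Seung updates for the unregularized objective ||X − WH||²; the paper's arctan sparsity and smoothness terms are omitted, so this is only the core, not the authors' algorithm.

```python
import numpy as np

def nmf_multiplicative(X, k, iters=200, seed=0):
    """Plain Lee-Seung multiplicative updates minimizing ||X - W H||_F^2.
    Nonnegativity of W and H is preserved at every step."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1       # endmember-like factor
    H = rng.random((k, n)) + 0.1       # abundance-like factor
    eps = 1e-12                        # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.random.default_rng(1).random((20, 30))    # toy nonnegative data
W, H = nmf_multiplicative(X, k=5)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

A regularized variant such as the paper's modifies the update denominators with the gradients of the added penalty terms while keeping the same multiplicative structure.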

Bin Yang;Wenfei Luo;Bin Wang; "Constrained Nonnegative Matrix Factorization Based on Particle Swarm Optimization for Hyperspectral Unmixing," vol.10(8), pp.3693-3710, Aug. 2017. Spectral unmixing is an important part of hyperspectral image processing. In recent years, constrained nonnegative matrix factorization (CNMF) has been successfully applied to unmixing without the pure-pixel assumption, and the result is physically meaningful. However, traditional CNMF algorithms have two limitations: 1) most of them are based on gradient methods and usually get trapped in local optima; 2) because they adopt a static penalty function as the constraint-handling method, it is difficult to choose a proper regularization parameter that balances the tradeoff between reconstruction error and constraint, which leads to decreased accuracy. In this paper, we introduce particle swarm optimization (PSO) combined with two types of progressive constraint-handling approaches for spectral unmixing in the framework of CNMF. A basic method called the high-dimensional double-swarm PSO (HDPSO) algorithm is first proposed. It divides the original high-dimensional problem into a series of easier subproblems and adopts two interactive swarms to search endmembers and abundances, respectively. Then, adaptive PSO (APSO) and multiobjective PSO algorithms are proposed by respectively incorporating adaptive penalty function and multiobjective optimization approaches into HDPSO. Experiments with both simulated data and real hyperspectral images are used to compare these methods with traditional algorithms, and the results validate that the proposed methods give better performance for spectral unmixing.

Eleftheria A. Mylona;Olga A. Sykioti;Konstantinos D. Koutroumbas;Athanasios A. Rontogiannis; "Spectral Unmixing-Based Clustering of High-Spatial Resolution Hyperspectral Imagery," vol.10(8), pp.3711-3721, Aug. 2017. This paper introduces a novel unsupervised spectral unmixing-based clustering method for high-spatial resolution hyperspectral images (HSIs). In contrast to most clustering methods reported so far, which are applied on the spectral signature representations of the image pixels, the idea in the proposed method is to apply clustering on the abundance representations of the pixels. Specifically, the proposed method comprises two main processing stages namely: an unmixing stage (consisting of the endmember extraction and abundance estimation (AE) substages) and a clustering stage. In the former stage, suitable endmembers are selected first as the most representative pure pixels. Then, the spectral signature of each pixel is expressed as a linear combination of the endmembers' spectral signatures and the pixel itself is represented by the relative abundance vector, which is estimated via an efficient AE algorithm. The resulting abundance vectors associated with the HSI pixels are next fed to the clustering stage. Eventually, the pixels are grouped into clusters, in terms of their associated abundance vectors and not their spectral signatures. Experiments are performed on a synthetic HSI dataset as well as on three airborne HSI datasets of high-spatial resolution containing vegetation and urban areas. The experimental results corroborate the effectiveness of the proposed method and demonstrate that it outperforms state-of-the-art clustering techniques in terms of overall accuracy, average accuracy, and kappa coefficient.
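The abundance estimation (AE) substage solves, per pixel, a nonnegative least-squares problem under the linear mixing model. Below is a tiny projected-gradient sketch of that generic problem; the paper's specific AE algorithm is not reproduced, and the endmember signatures are invented for illustration.

```python
import numpy as np

def nnls_pg(A, b, iters=500):
    """Tiny projected-gradient nonnegative least squares: min ||A x - b||, x >= 0."""
    lr = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size 1/L (L = largest eigenvalue)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic, then project onto the nonnegative orthant
        x = np.maximum(0.0, x - lr * (A.T @ (A @ x - b)))
    return x

# Two endmember signatures over three bands; the pixel is a 70/30 mixture.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
pix = 0.7 * E[:, 0] + 0.3 * E[:, 1]
a = nnls_pg(E, pix)
```

The recovered abundance vector, not the raw spectral signature, is what the clustering stage then operates on.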

Andrea Marinoni;Harold Clenet; "Higher Order Nonlinear Hyperspectral Unmixing for Mineralogical Analysis Over Extraterrestrial Bodies," vol.10(8), pp.3722-3733, Aug. 2017. Algorithms allowing the deconvolution of hyperspectral data play a key role in remotely sensed data processing for mineralogical investigation. Modified Gaussian model (MGM) based methods are of particular interest because they are able to retrieve accurate estimates of mineral abundances and chemistry in surface rocks. However, MGM-based frameworks suffer from high computational complexity and sensitivity to the initial parameters defining the statistical distributions. In this paper, a new approach for efficient and robust mineralogical investigation over extraterrestrial bodies is introduced. The proposed framework takes advantage of the solid characterization of remote sensing hyperspectral images by unmixing higher order nonlinear combinations of reflectance features associated with mafic minerals. Experimental results achieved over Mars and Moon hyperspectral images show that the proposed scheme is able to retrieve magmatic mineral abundance maps that are highly correlated with those achieved by the MGM-based scheme while overcoming the aforesaid issues. Finally, an empirical study is reported showing that clino- and orthopyroxenes can be distinguished by properly processing the outcomes of the nonlinear hyperspectral unmixing method.

Saeid Niazmardi;Abdolreza Safari;Saeid Homayouni; "A Novel Multiple Kernel Learning Framework for Multiple Feature Classification," vol.10(8), pp.3734-3743, Aug. 2017. Multiple kernel learning (MKL) algorithms have recently demonstrated their effectiveness for classifying data with numerous features. These algorithms aim at learning an optimal composite kernel by combining the basis kernels constructed from different features. Despite their satisfactory results, MKL algorithms assume that the basis kernels are computed a priori. Moreover, they adopt complex optimization methods to train the combination of basis kernels, which are usually hard to solve and can only handle binary classification problems. In this paper, a novel MKL framework is introduced to address all of these issues. This framework optimizes a data-dependent kernel evaluation measure in order to learn both the basis kernels and their combination. The kernel evaluation measure should be able to estimate the goodness of the composite kernel for a multiclass classification problem. In this paper, we defined such a measure based on the similarity between the composite kernel and an ideal kernel. To this end, three different kernel-based similarity measures, namely kernel alignment (KA), centered kernel alignment (CKA), and the Hilbert-Schmidt independence criterion (HSIC), were presented. For solving the optimization problem of the proposed MKL framework, we used metaheuristic optimization algorithms, which, in addition to being accurate, are easy to implement. The performance of the proposed framework was evaluated by classifying the features extracted from two multispectral and hyperspectral datasets. The results showed that this framework outperforms the other state-of-the-art MKL algorithms in terms of both classification accuracy and computational time.
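Of the three measures, kernel alignment (KA) is the simplest: the normalized Frobenius inner product between a candidate Gram matrix and the ideal kernel built from the labels. A minimal sketch (the centered and HSIC variants are not reproduced):

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Kernel alignment: normalized Frobenius inner product of two Gram matrices."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

# The ideal kernel for labels y has Y[i, j] = 1 when y_i == y_j, else 0.
y = np.array([0, 0, 1, 1])
Y = (y[:, None] == y[None, :]).astype(float)
ka_ideal = kernel_alignment(Y, Y)              # a kernel aligned with itself -> 1
ka_identity = kernel_alignment(np.eye(4), Y)   # a poorer match scores lower
```

Because the ideal kernel encodes multiclass label agreement directly, maximizing alignment sidesteps the binary-only limitation of margin-based MKL objectives.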

Xiaoning Song;Yawei Wang;Bohui Tang;Pei Leng;Sun Chuan;Jian Peng;Alexander Loew; "Estimation of Land Surface Temperature Using FengYun-2E (FY-2E) Data: A Case Study of the Source Area of the Yellow River," vol.10(8), pp.3744-3751, Aug. 2017. Land surface temperature (LST) is a key variable used for studies of water cycles and energy budgets of land-atmosphere interfaces. This study addresses the theory of LST retrieval from data acquired by the Chinese operational geostationary meteorological satellite FengYun-2E (FY-2E) in two thermal infrared channels (IR1: 10.29-11.45 μm and IR2: 11.59-12.79 μm) using a generalized split-window algorithm. Specifically, land surface emissivity (LSE) in the two thermal infrared channels is estimated from the LSE in channels 31 and 32 of the moderate-resolution imaging spectroradiometer (MODIS) product. In addition, an eight-day composition MODIS LSE product (MOD11A2) and the daily MODIS LSE product (MOD11A1) are used in the algorithm to estimate FY-2E emissivities. The results indicate that the LST derived from MOD11A1 is more accurate and, therefore, more appropriate for daily cloud-free LST estimation. Finally, the estimated LST was validated using the MODIS LST product for the heterogeneous source area of the Yellow River. The results show a significant correlation between the two datasets, with a correlation coefficient (R) varying from 0.60 to 0.94 and a root mean square error ranging from 1.89 to 3.71 K. Moreover, the estimated LST agrees well with ground-measured soil temperatures, with an R of 0.98.
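The generalized split-window algorithm combines the mean and difference of the two channel brightness temperatures with emissivity-dependent coefficients; the sketch below follows the familiar Wan-Dozier form. All coefficients in the example are hypothetical placeholders, since the real ones are regression-derived and not given in the abstract.

```python
def split_window_lst(t1, t2, emis_mean, emis_diff, coeffs):
    """Generalized split-window form (after Wan and Dozier):
    LST = C + (A1 + A2*(1-e)/e + A3*de/e**2) * (T1+T2)/2
            + (B1 + B2*(1-e)/e + B3*de/e**2) * (T1-T2)/2
    where e is the mean and de the difference of the channel emissivities."""
    A1, A2, A3, B1, B2, B3, C = coeffs
    e, de = emis_mean, emis_diff
    avg, diff = (t1 + t2) / 2.0, (t1 - t2) / 2.0
    return (C + (A1 + A2 * (1 - e) / e + A3 * de / e ** 2) * avg
              + (B1 + B2 * (1 - e) / e + B3 * de / e ** 2) * diff)

# With placeholder coefficients (1, 0, 0, 0, 0, 0, 0) the formula degenerates
# to the mean brightness temperature of the two channels.
lst = split_window_lst(300.0, 298.0, 0.98, 0.005, (1.0, 0, 0, 0, 0, 0, 0))
```

The emissivity terms are what the MOD11A1/MOD11A2-derived LSEs feed in the paper's FY-2E adaptation.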

Yimian Dai;Yiquan Wu; "Reweighted Infrared Patch-Tensor Model With Both Nonlocal and Local Priors for Single-Frame Small Target Detection," vol.10(8), pp.3752-3767, Aug. 2017. Many state-of-the-art methods have been proposed for infrared small target detection. They work well on images with homogeneous backgrounds and high-contrast targets. However, on highly heterogeneous backgrounds their performance degrades, mainly because 1) strong edges and other interfering components are present, and 2) the available priors are not fully utilized. Motivated by this, we propose a novel method that exploits both local and nonlocal priors simultaneously. First, we employ a new infrared patch-tensor (IPT) model to represent the image and preserve its spatial correlations. Exploiting the target sparse prior and the background nonlocal self-correlation prior, target-background separation is modeled as a robust low-rank tensor recovery problem. Moreover, with the help of the structure tensor and the reweighting idea, we design an entrywise, local-structure-adaptive, sparsity-enhancing weight to replace the globally constant weighting parameter. The decomposition is achieved via elementwise reweighted higher-order robust principal component analysis with an additional convergence condition suited to the practical requirements of target detection. Extensive experiments demonstrate that our model outperforms other state-of-the-art methods, in particular on images with very dim targets and heavy clutter.

Changjiang Hu;Craig Benson;Chris Rizos;Li Qiao; "Single-Pass Sub-Meter Space-Based GNSS-R Ice Altimetry: Results From TDS-1," vol.10(8), pp.3782-3788, Aug. 2017. Space-based Global Navigation Satellite System Reflectometry (GNSS-R) altimetry remains an open challenge. This paper reports on space-based GNSS-R altimetry using a 40-s intermediate-frequency recording from the TechDemoSat-1 mission. This recording is unique because one GPS signal is reflected from ice. The waveforms used to determine path delay are generated by 1-ms coherent integration. Pseudoranges are smoothed every 0.5 s by linear models before the path delay is calculated. The altimetric results are compared to DTU10 mean sea surface heights, with good agreement. The RMS difference of 4.4 m is much smaller than reported in the current literature. Very good altimetric precision of better than 1 m (0.96 m) is achieved at a spatial resolution of 3.8 km. This result validates the potential of space-based GNSS-R altimetry.
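The 0.5-s linear smoothing step can be pictured as a piecewise least-squares fit over the high-rate pseudorange samples. A minimal sketch (segment alignment and edge handling are assumptions here, not taken from the paper):

```python
import numpy as np

def smooth_pseudoranges(t, rho, window=0.5):
    """Piecewise-linear smoothing of pseudorange samples.

    A least-squares line is fitted to each `window`-second segment of
    (t, rho) samples, and the raw samples are replaced by the fitted
    values; segments with fewer than two samples are left untouched.
    """
    t = np.asarray(t, float)
    rho = np.asarray(rho, float)
    smoothed = rho.copy()
    start = t[0]
    while start < t[-1]:
        mask = (t >= start) & (t < start + window)
        if mask.sum() >= 2:
            slope, intercept = np.polyfit(t[mask], rho[mask], 1)
            smoothed[mask] = slope * t[mask] + intercept
        start += window
    return smoothed
```

On a perfectly linear pseudorange series the smoother reproduces the input; on noisy 1-ms samples it suppresses high-rate noise while preserving the trend used for the path-delay computation.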

Qingyun Yan;Weimin Huang;Cecilia Moloney; "Neural Networks Based Sea Ice Detection and Concentration Retrieval From GNSS-R Delay-Doppler Maps," vol.10(8), pp.3789-3798, Aug. 2017. In this paper, a neural network (NN)-based scheme is presented for detecting sea ice and retrieving sea ice concentration (SIC) from global navigation satellite system reflectometry delay-Doppler maps (DDMs). A multilayer perceptron neural network with back-propagation learning is adopted. In practice, two NNs were developed separately for the sea ice detection and concentration retrieval tasks. In the training phase, DDM pixels were employed as inputs. The SIC data obtained by the Nimbus-7 SMMR and DMSP SSM/I-SSMIS sensors were used as the target data, and are also regarded as ground truth in this paper. After training on a dataset collected around February 4, 2015, these networks were used to produce detection and concentration estimates for four other sets of DDM data, collected around February 12, 2015, February 20, 2015, March 16, 2015, and April 17, 2015, respectively. The results show high accuracy in sea ice detection and concentration estimation from DDMs using the proposed scheme. On average, the accuracy of sea ice detection is about 98.4%. For the estimated SIC, the mean absolute error is less than 9%, whereas the correlation coefficient with the reference data is as high as 0.93. It was also found that low sea state and wind speed can lead to an overestimation of SIC for partially ice-covered regions.
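The detection network described here is a standard multilayer perceptron trained by back-propagation on flattened DDM pixel vectors. A self-contained toy sketch (the data and labels below follow an artificial rule, not the SMMR/SSM/I-SSMIS reference data used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for flattened DDM pixel vectors:
# class 1 = ice, class 0 = open water (artificial label rule).
n, d = 200, 16
X = rng.normal(0.0, 1.0, (n, d))
y = (X[:, :4].sum(axis=1) > 0).astype(float)

# One-hidden-layer perceptron trained with plain back-propagation
W1 = rng.normal(0.0, 0.5, (d, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted ice probability
    g = ((p - y) / n)[:, None]          # cross-entropy output gradient
    gh = (g @ W2.T) * (1.0 - h ** 2)    # back-propagate through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
```

The concentration-retrieval network would be the same structure with a continuous SIC value as the regression target instead of a binary label.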

Faozi Saïd;Stephen John Katzberg;Seubson Soisuvarn; "Retrieving Hurricane Maximum Winds Using Simulated CYGNSS Power-Versus-Delay Waveforms," vol.10(8), pp.3799-3809, Aug. 2017. A novel approach to retrieving hurricane maximum winds using simulated NASA Cyclone Global Navigation Satellite System (CYGNSS) data is presented. Five hundred fifty-two hurricane wind fields, from the 2010-2011 Atlantic and Eastern Pacific hurricane seasons, were used to test the algorithm. These wind fields were obtained from the Hurricane Weather Research and Forecasting model (HWRF). Power-versus-delay waveforms associated with specular points located along CYGNSS tracks crossing these wind fields were simulated. These “storm” power-versus-delay waveforms were compared to “reference” power-versus-delay waveforms generated over a set of synthetic Willoughby storms with known maximum wind speeds. The retrieved maximum wind speeds are compared against the Hurricane Research Division reanalysis data (Best Track) and HWRF. For Best Track maximum wind speeds less than 40 m/s and greater than 40 m/s, the overall bias against Best Track is 11.3 and 2.1 m/s, respectively. When comparing against HWRF maximum wind speeds less than 40 m/s and greater than 40 m/s, the overall bias is 11.5 and 3.0 m/s, respectively. These results improve when translation effects are applied to the synthetic storms: compared against Best Track for maximum wind speeds less than 40 m/s and greater than 40 m/s, the biases are 9.0 and -1.13 m/s, respectively; compared against HWRF, the biases are 8.6 and 0.4 m/s, respectively.
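The retrieval step amounts to matching an observed "storm" waveform against a bank of reference waveforms with known maximum winds. A minimal sketch; matching by minimum L2 distance and the exponential toy waveforms are assumptions here, since the abstract does not specify the comparison metric:

```python
import numpy as np

def retrieve_vmax(storm_wf, ref_wfs, ref_vmax):
    """Return the known Vmax of the reference power-versus-delay
    waveform closest (in L2 distance) to the observed storm waveform."""
    storm_wf = np.asarray(storm_wf, float)
    dists = [np.linalg.norm(storm_wf - np.asarray(r, float)) for r in ref_wfs]
    return ref_vmax[int(np.argmin(dists))]

# Toy reference bank: waveforms that decay faster for stronger storms
delay = np.linspace(0.0, 1.0, 64)
vmax_grid = [20.0, 40.0, 60.0]
refs = [np.exp(-delay * v / 10.0) for v in vmax_grid]

# Observed waveform decaying at a rate between the 40 and 60 m/s refs,
# but closest to the 40 m/s one
obs = np.exp(-delay * 4.1)
vmax = retrieve_vmax(obs, refs, vmax_grid)
```

A real reference bank would be built from the synthetic Willoughby storm simulations described above, on a much finer Vmax grid.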

Jing-He Li;Yu-Jie Zhang;Rui Qi;Qing Huo Liu; "Wavelet-Based Higher Order Correlative Stacking for Seismic Data Denoising in the Curvelet Domain," vol.10(8), pp.3810-3820, Aug. 2017. To whiten random noise and identify coherent noise while preserving the features of seismic events, a hybrid denoising scheme based on wavelet-based higher order correlative stacking (HOCS) in the curvelet domain is proposed. The proposed algorithm uses HOCS to isolate the coefficients of seismic events in the curvelet domain. It then removes the noise and recovers signals recorded in noisy environments without the need to choose an arbitrary threshold; the HOCS method selects a threshold automatically in the curvelet domain. HOCS therefore makes it possible to capture the features of useful signals with good correlations at all scales and all angles, and then to remove the features of coherent noise with disordered correlations. Using interpretive seismic records of karst cavities and hidden sinkhole detection after artificial backfill, we show that the proposed scheme improves noisy seismic data significantly with respect to both signal-to-noise ratio and fidelity. To demonstrate the advantages of the hybrid scheme, the performance of different individual denoising methods is compared on complex seismic records contaminated with different types of noise. Numerical case studies and three field data examples validate the effectiveness of the proposed hybrid denoising scheme.

Yuqing Wang;Zhenming Peng;Xiaoyang Wang;Yanmin He; "Matching Pursuit-Based Sliced Wigner Higher Order Spectral Analysis for Seismic Signals," vol.10(8), pp.3821-3828, Aug. 2017. The Wigner higher order spectra (WHOS) are multidimensional time-frequency distributions defined by extending the Wigner-Ville distribution (WVD) to higher order spectral domains. As a subset of WHOS, the sliced WHOS (SWHOS) are used to represent time-frequency spectra conveniently. The SWHOS provide better-localized time-frequency support than the WVD but still suffer from cross-term issues. Therefore, we propose a matching pursuit-based sliced Wigner higher order spectra (MP-SWHOS) algorithm, which obtains a sparser, high-resolution time-frequency spectrum without cross terms. The performance of MP-SWHOS is assessed on a simulated model and real data. Its application to seismic spectral decomposition shows that the proposed algorithm provides single-frequency slices with greater precision, which is important in the analysis of hydrocarbon reservoirs.
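The matching-pursuit stage that precedes the sliced-WHOS computation is the standard greedy atom-selection algorithm. A generic sketch (the dictionary and signal below are toy assumptions, not the paper's Gabor atoms or seismic traces):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy matching pursuit: at each step pick the dictionary atom
    most correlated with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    # normalize atoms (columns) so inner products are comparable
    D = dictionary / np.linalg.norm(dictionary, axis=0, keepdims=True)
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Toy demo: recover a 2-atom mixture from an orthogonal cosine dictionary
t = np.linspace(0.0, 1.0, 256, endpoint=False)
atoms = np.stack([np.cos(2 * np.pi * f * t) for f in (5, 12, 20, 33)], axis=1)
sig = 2.0 * atoms[:, 1] + 0.5 * atoms[:, 3]
c, r = matching_pursuit(sig, atoms, n_atoms=4)
```

Because each selected atom is a clean time-frequency component, the WVD (or a WHOS slice) of the atom-wise reconstruction is free of the cross terms that a direct transform of the raw signal would produce.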

* "Call For Papers," vol.10(8), pp.3829-3830, Aug. 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Introducing IEEE collabratec," vol.10(8), pp.3831-3831, Aug. 2017.* Advertisement, IEEE. IEEE Collabratec is a new, integrated online community where IEEE members, researchers, authors, and technology professionals with similar fields of interest can network and collaborate, as well as create and manage content. Featuring a suite of powerful online networking and collaboration tools, IEEE Collabratec allows you to connect according to geographic location, technical interests, or career pursuits. You can also create and share a professional identity that showcases key accomplishments and participate in groups focused around mutual interests, actively learning from and contributing to knowledgeable communities. All in one place! Learn about IEEE Collabratec at ieeecollabratec.org.

* "Information for Authors," vol.10(8), pp.C3-C3, Aug. 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "Institutional Listings," vol.10(8), pp.C4-C4, Aug. 2017.* Presents a listing of institutions relevant to this issue of the publication.

## IEEE Geoscience and Remote Sensing Magazine - new TOC (2017 September 21) [Website]

* "Front Cover," vol.5(3), pp.C1-C1, Sept. 2017.* Presents the front cover for this issue of the publication.

* "GRSM Call for Papers," vol.5(3), pp.C2-C2, Sept. 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Staff List," vol.5(3), pp.2-2, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Lorenzo Bruzzone; "IEEE GRSM Now Included in Thomson Reuters's Journal Citation Report [From the Editor]," vol.5(3), pp.3-4, Sept. 2017. Presents information on IEEE GRSM's inclusion in Thomson Reuters's Journal Citation Report.

Adriano Camps; "IEEE GRSS Accomplishes New Milestones [President's Message]," vol.5(3), pp.5-7, Sept. 2017. Presents the President's message for this issue of the publication.

Arnau Fombuena; "Unmanned Aerial Vehicles and Spatial Thinking: Boarding Education With Geotechnology And Drones," vol.5(3), pp.8-18, Sept. 2017. The recent boom in the number and importance of unmanned aerial vehicles (UAVs), such as drones, unmanned aircraft systems (UASs), and remotely piloted aircraft systems (RPASs), has placed the geosciences and remote sensing (RS) community in a privileged position. But the increasing market demand for a geoenabled workforce contrasts markedly with the number of college-level students enrolling in the related disciplines. This article focuses on current and future opportunities for incorporating UAVs, geosciences, and RS as part of education programs to engage incoming students (and society more broadly) in this set of emerging technologies. Specifically, we will review the current status of geosciences and RS education involving UAVs, including a strengths, weaknesses, opportunities, and threats (SWOT) matrix and a vision toward the future. In short, it is essential that we systematize, disseminate, and universalize topics related to geosciences and RS education in terms of UAVs because the fields are growing exponentially, and the trend is expected to continue.

Hripsime Matevosyan;Ignasi Lluch;Armen Poghosyan;Alessandro Golkar; "A Value-Chain Analysis for the Copernicus Earth Observation Infrastructure Evolution: A Knowledgebase of Users, Needs, Services, and Products," vol.5(3), pp.19-35, Sept. 2017. This article reviews and analyzes the needs of Earth observation (EO) services' users, stakeholders, and beneficiaries. It identifies the key elements of the value chain of the European EO infrastructure and builds a comprehensive knowledgebase of those elements, represented as a relational database. The entities in the database are users, needs, services, and products. The database also includes connections between these entities, such as users to needs and products to services, via mapping tables. Leveraging data from the relevant policy and requirement documents as well as from research project reports, the database contains 63 users, 37 explicit needs, and 95 EO products across six Copernicus services.

Stephen L. Durden;Dragana Perkovic-Martin; "The RapidScat Ocean Winds Scatterometer: A Radar System Engineering Perspective," vol.5(3), pp.36-43, Sept. 2017. The NASA International Space Station (ISS)-RapidScat scatterometer operated on board the ISS from October 2014 to August 2016. It was developed using a combination of new subsystems and spare SeaWinds scatterometer engineering-model subsystems to interface with the ISS. By using commercial (nonflight-qualified) parts in the new assemblies, RapidScat was developed on a relatively small budget and a short schedule (just over two years). This article describes RapidScat's development from the perspective of radar system engineering, particularly in relation to performance requirements and testing.

Lionel Gourdeau;Bughsin Djath;Alexandre Ganachaud;Fernando Nino;Florence Birol;Jacques Verron;Nicolas Fuller; "Altimetry in a Regional Tropical Sea [Space Agencies]," vol.5(3), pp.44-52, Sept. 2017. The Satellite for Argos and AltiKa (SARAL/AltiKa) is the first ocean altimeter mission to operate in the Ka-band frequency. The objective of this article is to investigate the extent to which SARAL/AltiKa sea-level measurements provide valuable information in a complex bathymetric region, i.e., the semienclosed Solomon Sea. The data-editing procedure is revisited, and we propose two new data-editing criteria. The first is based on the detection of erroneous sea-level values after computation, and the second directly analyzes the radar measurements and geophysical corrections. We show that both methods are significantly more efficient than the standard procedure used in operational processing chains.

Diane K. Davies;Molly E. Brown;Kevin J. Murphy;Karen A. Michael;Bradley T. Zavodsky;E. Natasha Stavros;Mark L. Caroll; "Workshop on Using NASA Data for Time-Sensitive Applications [Space Agencies]," vol.5(3), pp.52-58, Sept. 2017. Presents information on the Workshop on Using NASA Data for Time-Sensitive Applications.

Feng Xu;Feng Wang;Qiang Yin; "China Chapter Chairs Meeting in Shanghai [Chapters]," vol.5(3), pp.59-60, Sept. 2017. Presents information on various GRS Society chapters.

Colin Schwegmann; "The Reinvigorated South African GRSS Chapter [Chapters]," vol.5(3), pp.61-62, Sept. 2017. Presents information on various GRS Society chapters.

* "GRSS Chapters and Contact Information [Chapters]," vol.5(3), pp.63-64, Sept. 2017.* Presents information on various GRS Society chapters.

David Le Vine; "Three New GRSS Distinguished Lecturers Announced [Distinguished Lecturer Program]," vol.5(3), pp.65-68, Sept. 2017. Presents information on the GRSS Distinguished Lecturers series.

* "GRSS Members Elevated to IEEE Senior Member in April and June 2017 [GRSS Member Highlights]," vol.5(3), pp.69-69, Sept. 2017.* Presents GRSS members who were elevated to the status of IEEE Senior Member.

Werner Wiesbeck;Martti Hallikainen;Mahta Moghaddam; "IEEE GRSS Awards 2018: Call for Nominations [GRSS Member Highlights]," vol.5(3), pp.70-71, Sept. 2017. Presents a call for nominations for select GRSS awards for 2018.

Yuqi Bai;Clifford A. Jacobs;Mei-Po Kwan;Christoph Waldmann; "Geoscience and the Technological Revolution [Perspectives]," vol.5(3), pp.72-75, Sept. 2017. The imperative for geoscience is to help society understand the Earth system and thus inform decision-making processes. This necessity has never been greater than it is today, nor have the challenges been more complex.

* "Calendar [Calendar]," vol.5(3), pp.76-76, Sept. 2017.* Presents the GRSS upcoming calendar of events.
