Relevant TOCs

IEEE Transactions on Image Processing - new TOC (2017 July 20) [Website]

Mahdi S. Hosseini;Konstantinos N. Plataniotis; "Derivative Kernels: Numerics and Applications," vol.26(10), pp.4596-4611, Oct. 2017. A generalized framework for numerical differentiation (ND) is proposed for constructing a finite impulse response (FIR) filter in closed form. The framework regulates the frequency response of ND filters for arbitrarily selected derivative order and cutoff frequency, relying on interpolating power polynomials and maximally flat design techniques. Compared with state-of-the-art solutions, such as Gaussian kernels, the proposed ND filter is sharply localized in the Fourier domain with ripple-free artifacts. Here, we construct 2D MaxFlat kernels for image directional differentiation to calculate image differentials for arbitrary derivative order, cutoff level, and steering angle. The resulting kernel library provides a new solution capable of delivering discrete approximations of gradients, Hessians, and higher-order tensors in numerous applications. We tested the utility of this library on three different imaging applications, with a main focus on unsharp masking. The reported results highlight the high efficiency of the 2D MaxFlat kernel and its versatility with respect to robustness and parameter control accuracy.

Liping Jing;Chenyang Shen;Liu Yang;Jian Yu;Michael K. Ng; "Multi-Label Classification by Semi-Supervised Singular Value Decomposition," vol.26(10), pp.4612-4625, Oct. 2017. Multi-label problems arise in various domains, including automatic multimedia data categorization, and have generated significant interest in the computer vision and machine learning communities. However, existing methods do not adequately address two key challenges: exploiting correlations between labels and compensating for scarce or missing labelled data. In this paper, we propose using a semi-supervised singular value decomposition (SVD) to handle these two challenges. The proposed model takes advantage of nuclear norm regularization on the SVD to effectively capture the label correlations. Meanwhile, it introduces manifold regularization on the mapping to capture the intrinsic structure among data, which reduces the amount of labelled data required while improving classification performance. Furthermore, we design an efficient algorithm to solve the proposed model based on the alternating direction method of multipliers, so that it can efficiently handle large-scale data sets. Experimental results on synthetic and real-world multimedia data sets demonstrate that the proposed method can exploit the label correlations and obtain better label prediction results than state-of-the-art methods.
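[Editor's note] The nuclear-norm term in models like this one is typically handled, inside ADMM, by a singular value thresholding (SVT) proximal step. The sketch below is a hedged illustration of that standard step, not the authors' exact algorithm; the function name and test matrix are placeholders.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_* at M.

    Shrinks each singular value by tau and clips at zero, which drives the
    result toward low rank -- the effect of nuclear norm regularization."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))
L = svt(M, tau=1.0)
```

Within an ADMM loop, this step alternates with the data-fitting and manifold-regularization updates; SVT itself has this simple closed form.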

Kuo-Liang Chung;Tsu-Chun Hsu;Chi-Chao Huang; "Joint Chroma Subsampling and Distortion-Minimization-Based Luma Modification for RGB Color Images With Application," vol.26(10), pp.4626-4638, Oct. 2017. In this paper, we propose a novel and effective hybrid method, which combines conventional chroma subsampling with distortion-minimization-based luma modification, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each <inline-formula> <tex-math notation="LaTeX">$2\times 2$ </tex-math></inline-formula> UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, <inline-formula> <tex-math notation="LaTeX">$U_{s}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">$V_{s}$ </tex-math></inline-formula>. Based on <inline-formula> <tex-math notation="LaTeX">$U_{s}$ </tex-math></inline-formula>, <inline-formula> <tex-math notation="LaTeX">$V_{s}$ </tex-math></inline-formula>, and the corresponding <inline-formula> <tex-math notation="LaTeX">$2\times 2$ </tex-math></inline-formula> original RGB block, a main theorem is provided to determine the ideally modified <inline-formula> <tex-math notation="LaTeX">$2\times 2$ </tex-math></inline-formula> luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) quality distortion between the original <inline-formula> <tex-math notation="LaTeX">$2\times 2$ </tex-math></inline-formula> RGB block and the reconstructed <inline-formula> <tex-math notation="LaTeX">$2\times 2$ </tex-math></inline-formula> RGB block can be minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras.
Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that in high efficiency video coding, the proposed hybrid method achieves substantial quality improvement of the reconstructed RGB images, in terms of CPSNR quality, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, when compared with existing chroma subsampling schemes.

Michael T. McCann;Michael Unser; "High-Quality Parallel-Ray X-Ray CT Back Projection Using Optimized Interpolation," vol.26(10), pp.4639-4647, Oct. 2017. We propose a new, cost-efficient method for computing back projections in parallel-ray X-ray CT. Forward and back projections are the basis of almost all X-ray CT reconstruction methods, but computing these accurately is costly. In the special case of parallel-ray geometry, it turns out that reconstruction requires back projection only. One approach to accelerate the back projection is through interpolation: fit a continuous representation to samples of the desired signal, then sample it at the required locations. Instead, we propose applying a prefilter that has the effect of orthogonally projecting the underlying signal onto the space spanned by the interpolator, which can significantly improve the quality of the interpolation. We then build on this idea by using oblique projection, which simplifies the computation while giving effectively the same improvement in quality. Our experiments on analytical phantoms show that this refinement can improve the reconstruction quality for both filtered back projection and iterative reconstruction in the high-quality regime, i.e., with low noise and many measurements.
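[Editor's note] The prefilter-then-sample pipeline this paper builds on can be illustrated with SciPy's cubic B-spline resampling, where `prefilter=True` applies the interpolating spline prefilter before sampling. The paper's contribution is a different (orthogonal/oblique projection) prefilter, but the structure of the computation is the same; the signal below is a placeholder.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# A smooth test signal sampled on an integer grid.
x = np.arange(64, dtype=float)
f = np.sin(2 * np.pi * x / 64)

# Resample at half-pixel shifts (interior points only, to avoid edge effects).
xq = np.arange(8, 56) + 0.5
coords = xq[np.newaxis, :]

# order=3 cubic B-spline. Without the prefilter the samples are (wrongly)
# treated as spline coefficients, which low-pass filters the signal; with
# the prefilter the continuous model actually fits the samples.
naive = map_coordinates(f, coords, order=3, prefilter=False)
fitted = map_coordinates(f, coords, order=3, prefilter=True)

truth = np.sin(2 * np.pi * xq / 64)
err_naive = np.max(np.abs(naive - truth))
err_fitted = np.max(np.abs(fitted - truth))
```

The prefiltered variant is markedly more accurate on this smooth signal, which is the qualitative effect the paper exploits for back projection.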

Baochang Zhang;Yun Yang;Chen Chen;Linlin Yang;Jungong Han;Ling Shao; "Action Recognition Using 3D Histograms of Texture and A Multi-Class Boosting Classifier," vol.26(10), pp.4648-4660, Oct. 2017. Human action recognition is an important yet challenging task. This paper presents a low-cost descriptor called 3D histograms of texture (3DHoTs) to extract discriminant features from a sequence of depth maps. 3DHoTs are derived from projecting depth frames onto three orthogonal Cartesian planes, i.e., the frontal, side, and top planes, and thus compactly characterize the salient information of a specific action, on which texture features are calculated to represent the action. Besides this fast feature descriptor, a new multi-class boosting classifier (MBC) is also proposed to efficiently exploit different kinds of features in a unified framework for action classification. Compared with the existing boosting frameworks, we add a new multi-class constraint into the objective function, which helps to maintain a better margin distribution by maximizing the mean of the margin while still minimizing its variance. Experiments on the MSRAction3D, MSRGesture3D, MSRActivity3D, and UTD-MHAD data sets demonstrate that the proposed system combining 3DHoTs and MBC is superior to the state of the art.
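[Editor's note] The geometric step of projecting a depth frame onto the three orthogonal Cartesian planes can be sketched as below. The binning scheme and the `depth_bins` parameter are illustrative assumptions, and the subsequent texture-feature computation of 3DHoTs is not shown.

```python
import numpy as np

def orthogonal_projections(depth, depth_bins=32):
    """Project a depth frame onto the frontal, side, and top planes.

    depth: (H, W) array with values in [0, 1); 0 marks missing measurements.
    Returns boolean occupancy maps: front (H, W), side (H, D), top (D, W),
    where D = depth_bins."""
    H, W = depth.shape
    D = depth_bins
    vol = np.zeros((H, W, D), dtype=bool)   # occupancy volume
    ii, jj = np.nonzero(depth > 0)
    kk = np.clip((depth[ii, jj] * D).astype(int), 0, D - 1)
    vol[ii, jj, kk] = True
    front = vol.any(axis=2)   # collapse depth  -> frontal view
    side = vol.any(axis=1)    # collapse width  -> side view
    top = vol.any(axis=0).T   # collapse height -> top view
    return front, side, top
```

Texture descriptors would then be computed on each of the three projected maps per frame.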

Shoubiao Tan;Xi Sun;Wentao Chan;Lei Qu;Ling Shao; "Robust Face Recognition With Kernelized Locality-Sensitive Group Sparsity Representation," vol.26(10), pp.4661-4668, Oct. 2017. In this paper, a novel joint sparse representation method is proposed for robust face recognition. We embed both group sparsity and kernelized locality-sensitive constraints into the framework of sparse representation. The group sparsity constraint is designed to utilize the grouped structure information in the training data. The local similarity between test and training data is measured in the kernel space instead of the Euclidean space. As a result, the embedded nonlinear information can be effectively captured, leading to a more discriminative representation. We show that, by integrating the kernelized locality-sensitive constraint and the group sparsity constraint, the embedded structure information can be better explored, and significant performance improvement can be achieved. Experiments on the ORL, AR, extended Yale B, and LFW data sets verify the superiority of our method. In addition, experiments on two unconstrained data sets, the LFW and the IJB-A, show that the utilization of sparsity can improve recognition performance, especially on data sets with large pose variation.

Sebastien C. Wong;Victor Stamatescu;Adam Gatt;David Kearney;Ivan Lee;Mark D. McDonnell; "Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition," vol.26(10), pp.4669-4683, Oct. 2017. This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.

Yuming Fang;Chi Zhang;Jing Li;Jianjun Lei;Matthieu Perreira Da Silva;Patrick Le Callet; "Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model," vol.26(10), pp.4684-4696, Oct. 2017. In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).

Stefanos Eleftheriadis;Ognjen Rudovic;Marc Peter Deisenroth;Maja Pantic; "Gaussian Process Domain Experts for Modeling of Facial Affect," vol.26(10), pp.4697-4711, Oct. 2017. Most existing models for facial behavior analysis rely on generic classifiers, which fail to generalize well to previously unseen data. This is because of inherent differences between source (training) and target (test) data, mainly caused by variation in subjects’ facial morphology, camera views, and so on. All of these account for different contexts in which target and source data are recorded, and thus, may adversely affect the performance of the models learned solely from source data. In this paper, we exploit the notion of domain adaptation and propose a data efficient approach to adapt already learned classifiers to new unseen contexts. Specifically, we build upon the probabilistic framework of Gaussian processes (GPs), and introduce domain-specific GP experts (e.g., for each subject). The model adaptation is facilitated in a probabilistic fashion, by conditioning the target expert on the predictions from multiple source experts. We further exploit the predictive variance of each expert to define an optimal weighting during inference. We evaluate the proposed model on three publicly available data sets for multi-class (MultiPIE) and multi-label (DISFA, FERA2015) facial expression analysis by performing adaptation of two contextual factors: “where” (view) and “who” (subject). In our experiments, the proposed approach consistently outperforms: 1) both source and target classifiers, while using a small number of target examples during the adaptation and 2) related state-of-the-art approaches for supervised domain adaptation.

Rameswar Panda;Niluthpol Chowdhury Mithun;Amit K. Roy-Chowdhury; "Diversity-Aware Multi-Video Summarization," vol.26(10), pp.4712-4724, Oct. 2017. Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary, which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark data set, Tour20, that contains 140 videos with multiple manually created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 data set and several other multi-view data sets, we show that the proposed approach clearly outperforms the state-of-the-art methods on two problems: topic-oriented video summarization and multi-view video summarization in a camera network.

Debarati Kundu;Deepti Ghadiyaram;Alan C. Bovik;Brian L. Evans; "Large-Scale Crowdsourced Study for Tone-Mapped HDR Pictures," vol.26(10), pp.4725-4740, Oct. 2017. Measuring digital picture quality, as perceived by human observers, is increasingly important in many applications in which humans are the ultimate consumers of visual information. Standard dynamic range (SDR) images provide 8 b/color/pixel. High dynamic range (HDR) images, usually created from multiple exposures of the same scene, can provide 16 or 32 b/color/pixel, but need to be tonemapped to SDR for display on standard monitors. Multiexposure fusion (MEF) techniques bypass HDR creation by fusing an exposure stack directly to SDR images to achieve aesthetically pleasing luminance and color distributions. Many HDR and MEF databases have a relatively small number of images and human opinion scores, obtained under stringently controlled conditions, thereby limiting realistic viewing. Moreover, many of these databases are intended to compare tone-mapping algorithms, rather than being specialized for developing and comparing image quality assessment models. To overcome these challenges, we conducted a massively crowdsourced online subjective study. The primary contributions described in this paper are: 1) the new ESPL-LIVE HDR Image Database that we created containing diverse images obtained by tone-mapping operators and MEF algorithms, with and without post-processing; 2) a large-scale subjective study that we conducted using a crowdsourced platform to gather more than 300 000 opinion scores on 1811 images from over 5000 unique observers; and 3) a detailed study of the correlation performance of the state-of-the-art no-reference image quality assessment algorithms against human opinion scores of these images. The database is available at http://signal.ece.utexas.edu/%7Edebarati/HDRDatabase.zip.

Heng Zhang;Vishal M. Patel;Rama Chellappa; "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition," vol.26(10), pp.4741-4752, Oct. 2017. We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.
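[Editor's note] In formulations like this, the joint-sparsity term (an ℓ2,1-type norm across modalities) has a closed-form proximal operator used inside ADMM: row-wise soft-thresholding. The sketch below is that standard operator, with an illustrative function name and toy matrix, not the authors' code.

```python
import numpy as np

def prox_l21(X, tau):
    """Prox of tau * sum_i ||X[i, :]||_2 (row-wise soft-thresholding).

    Rows with l2 norm below tau are zeroed entirely, so the same rows
    stay active across all columns -- the joint-sparsity pattern shared
    by all modalities."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3.0, 4.0],    # norm 5   -> shrunk by factor 0.8
              [0.1, 0.0]])   # norm 0.1 -> zeroed at tau = 1
P = prox_l21(X, tau=1.0)
```

The analogous low-rank term uses singular value thresholding; ADMM alternates these prox steps with the data-fitting updates.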

Jun Zhang;Mingxia Liu;Dinggang Shen; "Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks," vol.26(10), pp.4753-4764, Oct. 2017. One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning method to detect large-scale anatomical landmarks simultaneously in real time, using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. To alleviate the problem of limited training data, in the first stage, we propose a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage, we develop another CNN model, which includes: a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage and b) several extra layers to jointly predict coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale sets of landmarks (e.g., thousands) in real time. We have conducted various experiments for detecting 1200 brain landmarks from the 3D T1-weighted magnetic resonance images of 700 subjects, and also 7 prostate landmarks from the 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method regarding both accuracy and efficiency in anatomical landmark detection.

Xinchao Wang;Bin Fan;Shiyu Chang;Zhangyang Wang;Xianming Liu;Dacheng Tao;Thomas S. Huang; "Greedy Batch-Based Minimum-Cost Flows for Tracking Multiple Objects," vol.26(10), pp.4765-4776, Oct. 2017. Minimum-cost flow algorithms have recently achieved state-of-the-art results in multi-object tracking. However, they rely on the whole image sequence as input. When deployed in real-time applications or in distributed settings, these algorithms first operate on short batches of frames and then stitch the results into full trajectories. This decoupled strategy is prone to errors because the batch-based tracking errors may propagate to the final trajectories and cannot be corrected by other batches. In this paper, we propose a greedy batch-based minimum-cost flow approach for tracking multiple objects. Unlike existing approaches that conduct batch-based tracking and stitching sequentially, we optimize consecutive batches jointly so that the tracking results on one batch may benefit the results on the other. Specifically, we apply a generalized minimum-cost flow (MCF) algorithm on each batch and generate a set of conflicting trajectories. These trajectories comprise not only the ones with high probabilities but also those with low probabilities that are potentially missed by detectors and trackers. We then apply the generalized MCF again to obtain the optimal matching between trajectories from consecutive batches. Our proposed approach is simple, effective, and does not require training. We demonstrate the power of our approach on data sets of different scenarios.

Olivier Le Meur;Antoine Coutrot;Zhi Liu;Pia Rämä;Adrien Le Roch;Andrea Helo; "Visual Attention Saccadic Models Learn to Emulate Gaze Patterns From Childhood to Adulthood," vol.26(10), pp.4777-4789, Oct. 2017. How people look at visual information reveals fundamental information about themselves, their interests, and their state of mind. While previous visual attention models output static 2D saliency maps, saccadic models aim to predict not only where observers look but also how they move their eyes to explore the scene. In this paper, we demonstrate that saccadic models are a flexible framework that can be tailored to emulate observers’ viewing tendencies. More specifically, we use fixation data from 101 observers split into five age groups (adults, 8–10 y.o., 6–8 y.o., 4–6 y.o., and 2 y.o.) to train our saccadic model for different stages of the development of the human visual system. We show that the joint distribution of saccade amplitude and orientation is a visual signature specific to each age group, and can be used to generate age-dependent scan paths. Our age-dependent saccadic model not only outputs human-like, age-specific visual scan paths, but also significantly outperforms other state-of-the-art saliency models. We demonstrate that the computational modeling of visual attention, through the use of saccadic models, can be efficiently adapted to emulate the gaze behavior of a specific group of observers.
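[Editor's note] The "generate age-dependent scan paths" step can be sketched as drawing saccades from a group's joint (amplitude, orientation) histogram. Everything below is a synthetic placeholder (the gamma/von Mises draws stand in for real fixation data), not the authors' fitted distributions or full model, which also includes saliency.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in fixation data: saccade amplitudes (pixels) and orientations
# (radians, with a horizontal bias) for one hypothetical age group.
amps = rng.gamma(shape=2.0, scale=40.0, size=5000)
oris = rng.vonmises(mu=0.0, kappa=1.5, size=5000)

# The joint histogram is the group's "visual signature".
hist, a_edges, o_edges = np.histogram2d(amps, oris, bins=(24, 36))
p = hist.ravel() / hist.sum()

def next_fixation(x, y):
    """Draw one saccade from the joint (amplitude, orientation) signature."""
    k = rng.choice(p.size, p=p)
    ai, oi = np.unravel_index(k, hist.shape)
    a = rng.uniform(a_edges[ai], a_edges[ai + 1])
    o = rng.uniform(o_edges[oi], o_edges[oi + 1])
    return x + a * np.cos(o), y + a * np.sin(o)

# A 10-saccade scan path starting at the screen center.
path = [(512.0, 384.0)]
for _ in range(10):
    path.append(next_fixation(*path[-1]))
```

Different histograms per age group would yield visibly different path statistics, which is the signature effect the paper reports.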

Feng Shao;Wenchong Lin;Weisi Lin;Qiuping Jiang;Gangyi Jiang; "QoE-Guided Warping for Stereoscopic Image Retargeting," vol.26(10), pp.4790-4805, Oct. 2017. In the field of stereoscopic 3D (S3D) display, retargeting stereoscopic images to the target resolution is an interesting and meaningful problem, yet existing stereoscopic image retargeting methods do not fully take the user’s Quality of Experience (QoE) into account. In this paper, we present a QoE-guided warping method for stereoscopic image retargeting, which retargets the stereoscopic image and adapts its depth range to the target display while promoting the user’s QoE. Our method takes shape preservation, visual comfort preservation, and depth perception preservation energies into account, and simultaneously optimizes the 2D coordinates and depth information in 3D space. It also considers the specific viewing configuration in the visual comfort and depth perception preservation energy constraints. Experimental results on visually uncomfortable and comfortable stereoscopic images demonstrate that, in comparison with existing stereoscopic image retargeting methods, the proposed method achieves a reasonable performance optimization among the QoE factors of image quality, visual comfort, and depth perception, leading to a promising overall S3D experience.

IEEE Transactions on Medical Imaging - new TOC (2017 July 20) [Website]

* "Table of contents," vol.36(7), pp.C1-C4, July 2017.* Presents the table of contents for this issue of the publication.

* "IEEE Transactions on Medical Imaging publication information," vol.36(7), pp.C2-C2, July 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Yuri Levin-Schwartz;Vince D. Calhoun;Tülay Adalı; "Quantifying the Interaction and Contribution of Multiple Datasets in Fusion: Application to the Detection of Schizophrenia," vol.36(7), pp.1385-1395, July 2017. The extraction of information from multiple sets of data is a problem inherent to many disciplines. This is possible by either analyzing the data sets jointly as in data fusion or separately and then combining as in data integration. However, selecting the optimal method to combine and analyze multiset data is an ever-present challenge. The primary reason for this is the difficulty in determining the optimal contribution of each data set to an analysis as well as the amount of potentially exploitable complementary information among data sets. In this paper, we propose a novel classification rate-based technique to unambiguously quantify the contribution of each data set to a fusion result as well as facilitate direct comparisons of fusion methods on real data and apply a new method, independent vector analysis (IVA), to multiset fusion. This classification rate-based technique is used on functional magnetic resonance imaging data collected from 121 patients with schizophrenia and 150 healthy controls during the performance of three tasks. Through this application, we find that though optimal performance is achieved by exploiting all tasks, each task does not contribute equally to the result and this framework enables effective quantification of the value added by each task. Our results also demonstrate that data fusion methods are more powerful than data integration methods, with the former achieving a classification rate of 73.5 % and the latter achieving one of 70.9 %, a difference which we show is significant when all three tasks are analyzed together. Finally, we show that IVA, due to its flexibility, has equivalent or superior performance compared with the popular data fusion method, joint independent component analysis.

Alessandro Arduino;Luca Zilberti;Mario Chiampi;Oriano Bottauscio; "CSI-EPT in Presence of RF-Shield for MR-Coils," vol.36(7), pp.1396-1404, July 2017. Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution starting from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, which distinguishes itself from other electric properties tomography techniques by its capability to also recover the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to the presence of a shield in the domain, as in the recurring case of shielded radio frequency coils, a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem, one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate the requirement of an accurate modeling of the underlying physical problem.

Gustavo Carneiro;Tingying Peng;Christine Bayer;Nassir Navab; "Automatic Quantification of Tumour Hypoxia From Multi-Modal Microscopy Images Using Weakly-Supervised Learning Methods," vol.36(7), pp.1405-1417, July 2017. In recently published clinical trial results, hypoxia-modified therapies have shown to provide more positive outcomes to cancer patients, compared with standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumor hypoxia, but a standardized measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high resolution (immuno-)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumor. We introduce new machine learning-based methodologies to automate this measurement, where the main challenge is the fact that the clinical annotations available for training the proposed methodologies consist of the total number of normoxic, chronically hypoxic, and acutely hypoxic regions without any indication of their location in the image. Therefore, this represents a weakly-supervised structured output classification problem, where training is based on a high-order loss function formed by the norm of the difference between the manual and estimated annotations mentioned above. 
We propose four methodologies to solve this problem: 1) a naive method that uses a majority classifier applied on the nodes of a fixed grid placed over the input images; 2) a baseline method based on a structured output learning formulation that relies on a fixed grid placed over the input images; 3) an extension to this baseline based on a latent structured output learning formulation that uses a graph that is flexible in terms of the number and positions of nodes; and 4) a pixel-wise labeling based on a fully-convolutional neural network. Using a data set of 89 weakly annotated pairs of IF and HE images from eight tumors, we show that the quantitative results of methods (3) and (4) above are equally competitive and superior to the naive (1) and baseline (2) methods. All proposed methodologies show high correlation values with respect to the clinical annotations.

David L. Freese;David F. C. Hsu;Derek Innes;Craig S. Levin; "Robust Timing Calibration for PET Using L1-Norm Minimization," vol.36(7), pp.1418-1426, July 2017. Positron emission tomography (PET) relies on accurate timing information to pair two 511-keV photons into a coincidence event. Calibration of time delays between detectors becomes increasingly important as the timing resolution of detector technology improves, as a calibration error can quickly become a dominant source of error. Previous work has shown that the maximum likelihood estimate of these delays can be calculated by least squares estimation, but such an approach is not tractable for complex systems and degrades in the presence of randoms. We demonstrate that the original problem can be solved iteratively using the LSMR algorithm. Using LSMR, we solve for 60 030 delay parameters, including energy-dependent delays, in 4.5 s, using 1 000 000 coincidence events for a two-panel system dedicated to clinical locoregional imaging. We then extend the original least squares problem to be robust to random coincidences and low statistics by implementing <inline-formula> <tex-math notation="LaTeX">$\ell _{1}$ </tex-math></inline-formula>-norm minimization using the alternating direction method of multipliers (ADMM) algorithm. The ADMM algorithm converges after six iterations, or 20.6 s, and improves the timing resolution from 64.7 ± 0.1 ns full width at half maximum (FWHM) uncalibrated to 15.63 ± 0.02 ns FWHM. We also demonstrate this algorithm’s applicability to commercial systems using a GE Discovery 690 PET/CT. We scan a rotating transmission source and, after subtracting the 511-keV photon time of flight due to the source position, calculate 13 824 per-crystal delays using 5 000 000 coincidence events in 3.78 s with three iterations, while showing a timing resolution improvement that is significantly better than previous calibration methods in the literature.
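[Editor's note] The per-detector delay model behind such a least-squares calibration can be sketched with SciPy's LSMR solver: each coincidence between detectors i and j measures the delay difference d_i − d_j plus timing noise, giving a sparse ±1 design matrix. The detector count, event count, and noise level below are illustrative, and the paper's ℓ1-robust ADMM extension is omitted.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
n_det, n_evt = 50, 20000

true_delay = rng.normal(0.0, 1.0, n_det)        # per-detector offsets (ns)
i = rng.integers(0, n_det, n_evt)
j = rng.integers(0, n_det, n_evt)
keep = i != j                                   # discard same-detector pairs
i, j = i[keep], j[keep]

# Each coincidence measures d_i - d_j plus timing noise.
dt = true_delay[i] - true_delay[j] + rng.normal(0.0, 0.3, i.size)

# Sparse design matrix: +1 in column i, -1 in column j, one row per event.
rows = np.repeat(np.arange(i.size), 2)
cols = np.concatenate([i[:, None], j[:, None]], axis=1).ravel()
vals = np.tile([1.0, -1.0], i.size)
A = coo_matrix((vals, (rows, cols)), shape=(i.size, n_det)).tocsr()

est = lsmr(A, dt)[0]
# The system is defined only up to a global constant; remove the mean.
est -= est.mean()
ref = true_delay - true_delay.mean()
```

With many events per detector, the estimate concentrates tightly around the true offsets; replacing the ℓ2 objective with an ℓ1 one, as in the paper, protects this fit against random coincidences.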

Satoshi Kondo;Kazuya Takagi;Mutsumi Nishida;Takahito Iwai;Yusuke Kudo;Kouji Ogawa;Toshiya Kamiyama;Hitoshi Shibuya;Kaoru Kahata;Chikara Shimizu; "Computer-Aided Diagnosis of Focal Liver Lesions Using Contrast-Enhanced Ultrasonography With Perflubutane Microbubbles," vol.36(7), pp.1427-1437, July 2017. This paper proposes an automatic classification method based on machine learning in contrast-enhanced ultrasonography (CEUS) of focal liver lesions (FLLs) using the contrast agent Sonazoid. This method yields spatial and temporal features in the arterial phase, portal phase, and post-vascular phase, as well as max-hold images. The lesions are classified as benign or malignant and again as benign, hepatocellular carcinoma (HCC), or metastatic liver tumor using support vector machines (SVM) with a combination of selected optimal features. Experimental results using 98 subjects indicated that the benign and malignant classification has 94.0% sensitivity, 87.1% specificity, and 91.8% accuracy, and the accuracies of the benign, HCC, and metastatic liver tumor classifications are 84.4%, 87.7%, and 85.7%, respectively. The selected features in the SVM indicate that combining features from the three phases is important for classifying FLLs, especially for the benign and malignant classification. The experimental results are consistent with CEUS guidelines for diagnosing FLLs. This research can be considered to be a validation study that confirms the importance of using features from these phases of the examination in a quantitative manner. In addition, the experimental results indicate that for the benign and malignant classifications, the specificity without the post-vascular phase features is significantly lower than the specificity with the post-vascular phase features. We also conducted an experiment on the operator dependency of setting regions of interest and observed that the intra-operator and inter-operator kappa coefficients were 0.45 and 0.77, respectively.

Ali-Reza Mohammadi-Nejad;Gholam-Ali Hossein-Zadeh;Hamid Soltanian-Zadeh; "Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach," vol.36(7), pp.1438-1448, July 2017. Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer’s disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects with a p-value of less than <inline-formula> <tex-math notation="LaTeX">$1\times 10^{\mathrm {\mathbf {-6}}}$ </tex-math></inline-formula>. Furthermore, we have depicted the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
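For readers unfamiliar with the baseline being extended here, a minimal regularized CCA can be written in a few lines of numpy. This is a generic sketch with an illustrative ridge term to handle the multi-collinearity the abstract mentions; it is not the authors' ssCCA:

```python
import numpy as np

def cca(X, Y, reg=1e-3):
    """First canonical correlation between X (n x p) and Y (n x q),
    with a small ridge term for stability under multi-collinearity."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Whiten both blocks and take the leading singular triple of the
    # resulting coherence matrix Lx^{-1} Sxy Ly^{-T}.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    K = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(K)
    a = np.linalg.solve(Lx.T, U[:, 0])   # canonical weights for X
    b = np.linalg.solve(Ly.T, Vt[0])     # canonical weights for Y
    return s[0], a, b
```

The structured-sparsity constraints of ssCCA would replace this closed-form SVD step with an iterative penalized solver.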

Jürgen Rahmer;Daniel Wirtz;Claas Bontus;Jörn Borgert;Bernhard Gleich; "Interactive Magnetic Catheter Steering With 3-D Real-Time Feedback Using Multi-Color Magnetic Particle Imaging," vol.36(7), pp.1449-1456, July 2017. Magnetic particle imaging (MPI) is an emerging tomographic method that enables sensitive and fast imaging. It does not require ionizing radiation and thus may be a safe alternative for tracking of devices in the catheterization laboratory. The 3-D real-time imaging capabilities of MPI have been demonstrated in vivo and recent improvements in fast online image reconstruction enable almost real-time data reconstruction and visualization. Moreover, based on the use of different magnetic particle types for catheter visualization and blood pool imaging, multi-color MPI enables reconstruction of separate images for the catheter and the vessels from simultaneously measured data. While these are important assets for interventional imaging, MPI field generators can furthermore apply strong forces on a magnetic catheter tip. It is the aim of this paper to give a first demonstration of the combination of real-time multi-color MPI with online reconstruction and interactive field control for the application of forces on a magnetic catheter model in a phantom experiment.

Bulat Ibragimov;Robert Korez;Boštjan Likar;Franjo Pernuš;Lei Xing;Tomaž Vrtovec; "Segmentation of Pathological Structures by Landmark-Assisted Deformable Models," vol.36(7), pp.1457-1469, July 2017. Computerized segmentation of pathological structures in medical images is challenging, as, in addition to unclear image boundaries, image artifacts, and traces of surgical activities, the shape of pathological structures may be very different from the shape of normal structures. Even if a sufficient number of pathological training samples are collected, statistical shape modeling cannot always capture shape features of pathological samples as they may be suppressed by shape features of a considerably larger number of healthy samples. At the same time, landmarking can be efficient in analyzing pathological structures but often lacks robustness. In this paper, we combine the advantages of landmark detection and deformable models into a novel supervised multi-energy segmentation framework that can efficiently segment structures with pathological shape. The framework adopts the theory of Laplacian shape editing, which was introduced in the field of computer graphics, so that the limitations of statistical shape modeling are avoided. The performance of the proposed framework was validated by segmenting fractured lumbar vertebrae from 3-D computed tomography images, atrophic corpora callosa from 2-D magnetic resonance (MR) cross-sections, and cancerous prostates from 3-D MR images, resulting in Dice coefficients of 84.7 ± 5.0%, 85.3 ± 4.8%, and 78.3 ± 5.1%, and boundary distances of 1.14 ± 0.49 mm, 1.42 ± 0.45 mm, and 2.27 ± 0.52 mm, respectively. The obtained results were shown to be superior in comparison to existing deformable model-based segmentation algorithms.

Bob D. de Vos;Jelmer M. Wolterink;Pim A. de Jong;Tim Leiner;Max A. Viergever;Ivana Išgum; "ConvNet-Based Localization of Anatomical Structures in 3-D Medical Images," vol.36(7), pp.1470-1481, July 2017. Localization of anatomical structures is a prerequisite for many tasks in medical image analysis. We propose a method for automatic localization of one or more anatomical structures in 3-D medical images through detection of their presence in 2-D image slices using a convolutional neural network (ConvNet). A single ConvNet is trained to detect the presence of the anatomical structure of interest in axial, coronal, and sagittal slices extracted from a 3-D image. To allow the ConvNet to analyze slices of different sizes, spatial pyramid pooling is applied. After detection, 3-D bounding boxes are created by combining the output of the ConvNet in all slices. In the experiments, 200 chest CT, 100 cardiac CT angiography (CTA), and 100 abdomen CT scans were used. The heart, ascending aorta, aortic arch, and descending aorta were localized in chest CT scans, the left cardiac ventricle in cardiac CTA scans, and the liver in abdomen CT scans. Localization was evaluated using the distances between automatically and manually defined reference bounding box centroids and walls. The best results were achieved in the localization of structures with clearly defined boundaries (e.g., aortic arch) and the worst when the structure boundary was not clearly visible (e.g., liver). The method was more robust and accurate in localizing multiple structures.
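The final step of the method above, combining per-slice detections into a 3-D bounding box, can be sketched as follows. The boolean presence flags and the function name are illustrative, not the paper's code:

```python
import numpy as np

def bbox_from_slice_detections(axial, coronal, sagittal):
    """Combine per-slice presence flags (one boolean per slice along each of
    the three scan axes) into per-axis index spans. Returns None for an axis
    along which the structure was never detected."""
    def span(flags):
        idx = np.flatnonzero(flags)
        if idx.size == 0:
            return None
        return int(idx[0]), int(idx[-1])
    return span(axial), span(coronal), span(sagittal)
```

The three spans together define the axis-aligned 3-D bounding box; in practice one would also suppress isolated false-positive slices before taking the spans.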

Gaoming Li;Haijun Li;Xiyu Duan;Quan Zhou;Juan Zhou;Kenn R. Oldham;Thomas D. Wang; "Visualizing Epithelial Expression in Vertical and Horizontal Planes With Dual Axes Confocal Endomicroscope Using Compact Distal Scanner," vol.36(7), pp.1482-1490, July 2017. The epithelium is a thin layer of tissue that lines hollow organs, such as colon. Visualizing in vertical cross sections with sub-cellular resolution is essential to understanding early disease mechanisms that progress naturally in the plane perpendicular to the tissue surface. The dual axes confocal architecture collects optical sections in tissue by directing light at an angle incident to the surface using separate illumination and collection beams to reduce effects of scattering, enhance dynamic range, and increase imaging depth. This configuration allows for images to be collected in the vertical as well as horizontal planes. We designed a fast, compact monolithic scanner based on the principle of parametric resonance. The mirrors were fabricated using microelectromechanical systems (MEMS) technology and were coated with aluminum to maximize near-infrared reflectivity. We achieved large axial displacements <inline-formula> <tex-math notation="LaTeX">$> 400~\mu \text{m}$ </tex-math></inline-formula> and wide lateral deflections >20°. The MEMS chip has a <inline-formula> <tex-math notation="LaTeX">$3.2\times2.9$ </tex-math></inline-formula> mm2 form factor that allows for efficient packaging in the distal end of an endomicroscope. Imaging can be performed in either the vertical or horizontal planes with <inline-formula> <tex-math notation="LaTeX">$430~\mu \text{m}$ </tex-math></inline-formula> depth or <inline-formula> <tex-math notation="LaTeX">$1 \times 1$ </tex-math></inline-formula> mm2 area, respectively, at 5 frames/s. 
We systemically administered a Cy5.5-labeled peptide that is specific for EGFR, and collected near-infrared fluorescence images ex vivo from pre-malignant mouse colonic epithelium to reveal the spatial distribution of this molecular target. Here, we demonstrate a novel scanning mechanism in a dual axes confocal endomicroscope that collects optical sections of near-infrared fluorescence in either vertical or horizontal planes to visualize molecular expression in the epithelium.

Geoffrey Jones;Neil T. Clancy;Yusuf Helo;Simon Arridge;Daniel S. Elson;Danail Stoyanov; "Bayesian Estimation of Intrinsic Tissue Oxygenation and Perfusion From RGB Images," vol.36(7), pp.1491-1501, July 2017. Multispectral imaging (MSI) can potentially assist the intra-operative assessment of tissue structure, function and viability, by providing information about oxygenation. In this paper, we present a novel technique for recovering intrinsic MSI measurements from endoscopic RGB images without custom hardware adaptations. The advantage of this approach is that it requires no modification to existing surgical and diagnostic endoscopic imaging systems. Our method uses a radiometric color calibration of the endoscopic camera’s sensor in conjunction with a Bayesian framework to recover a per-pixel measurement of the total blood volume (THb) and oxygen saturation (SO2) in the observed tissue. The sensor’s pixel measurements are modeled as weighted sums over a mixture of Poisson distributions and we optimize the variables SO2 and THb to maximize the likelihood of the observations. To validate our technique, we use synthetic images generated from Monte Carlo physics simulation of light transport through soft tissue containing sub-surface blood vessels. We also validate our method on in vivo data by comparing it to an MSI dataset acquired with a hardware system that sequentially images multiple spectral bands without overlap. Our results are promising and show that we are able to provide surgeons with additional relevant information by processing endoscopic images with our modeling and inference framework.
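A toy version of the likelihood maximization over (SO2, THb) can be sketched with a plain per-channel Poisson likelihood and a grid search. The extinction coefficients and source intensities below are made up purely for illustration, and the paper's actual model uses weighted sums over mixtures of Poisson distributions rather than this simple Beer-Lambert forward model:

```python
import numpy as np

# Illustrative (made-up) per-channel extinction coefficients for oxy- and
# deoxy-haemoglobin, and a calibrated source intensity per R, G, B channel.
EPS_HBO2 = np.array([0.3, 1.1, 2.0])
EPS_HB   = np.array([0.8, 1.4, 1.6])
I0       = np.array([5000.0, 4000.0, 3000.0])

def expected_counts(so2, thb):
    """Expected photon counts per channel under a toy Beer-Lambert model."""
    mu = thb * (so2 * EPS_HBO2 + (1.0 - so2) * EPS_HB)
    return I0 * np.exp(-mu)

def fit_ml(y, so2_grid, thb_grid):
    """Maximum-Poisson-likelihood estimate of (SO2, THb) by grid search."""
    best, best_ll = None, -np.inf
    for so2 in so2_grid:
        for thb in thb_grid:
            lam = expected_counts(so2, thb)
            ll = np.sum(y * np.log(lam) - lam)  # Poisson log-lik, no constant
            if ll > best_ll:
                best, best_ll = (so2, thb), ll
    return best
```

In a real system the grid search would run per pixel, with spatial priors supplied by the Bayesian framework.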

Jarrod A. Collins;Jared A. Weis;Jon S. Heiselman;Logan W. Clements;Amber L. Simpson;William R. Jarnagin;Michael I. Miga; "Improving Registration Robustness for Image-Guided Liver Surgery in a Novel Human-to-Phantom Data Framework," vol.36(7), pp.1502-1510, July 2017. In open image-guided liver surgery (IGLS), a sparse representation of the intraoperative organ surface can be acquired to drive image-to-physical registration. We hypothesize that uncharacterized error induced by variation in the collection patterns of organ surface data limits the accuracy and robustness of an IGLS registration. Clinical validation of such registration methods is challenged due to the difficulty in obtaining data representative of the true state of organ deformation. We propose a novel human-to-phantom validation framework that transforms surface collection patterns from in vivo IGLS procedures (n = 13) onto a well-characterized hepatic deformation phantom for the purpose of validating surface-driven, volumetric nonrigid registration methods. An important feature of the approach is that it centers on combining workflow-realistic data acquisition and surgical deformations that are appropriate in behavior and magnitude. Using the approach, we investigate volumetric target registration error (TRE) with both current rigid IGLS and our improved nonrigid registration methods. Additionally, we introduce a spatial data resampling approach to mitigate the workflow-sensitive sampling problem. Using our human-to-phantom approach, TRE after routine rigid registration was 10.9 ± 0.6 mm with a signed closest point distance associated with residual surface fit in the range of ±10 mm, highly representative of open liver resections. After applying our novel resampling strategy and improved deformation correction method, TRE was reduced by 51%, i.e., a TRE of 5.3 ± 0.5 mm. 
The work reported herein realizes a novel, tractable approach for the validation of image-to-physical registration methods and demonstrates promising results for our correction method.

N. Gdaniec;M. Schlüter;M. Möddel;M. G. Kaul;K. M. Krishnan;A. Schlaefer;T. Knopp; "Detection and Compensation of Periodic Motion in Magnetic Particle Imaging," vol.36(7), pp.1511-1521, July 2017. The temporal resolution of the tomographic imaging method magnetic particle imaging (MPI) is remarkably high. The spatial resolution is degraded for measured voltage signals with low signal-to-noise ratio, because the regularization in the image reconstruction step needs to be increased for system-matrix approaches and for deconvolution steps in <inline-formula> <tex-math notation="LaTeX">$x$ </tex-math></inline-formula>-space approaches. To improve the signal-to-noise ratio, block-wise averaging of the signal over time can be advantageous. However, since block-wise averaging decreases the temporal resolution, it prevents resolving the motion. In this paper, a framework for averaging motion-corrupted MPI raw data is proposed. The motion is considered to be periodic, as is the case for respiration and/or the heartbeat. The same state of motion is thus reached repeatedly in a time series exceeding the repetition time of the motion and can be used for averaging. As the motion process and the acquisition process are, in general, not synchronized, averaging of the captured MPI raw data corresponding to the same state of motion requires shifting the starting point of the individual frames. For high-frequency motion, a higher frame rate is potentially required. To address this issue, a binning method for using only parts of complete frames from a motion cycle is proposed that further reduces the motion artifacts in the final images. The frequency of motion is derived directly from the MPI raw data signal without the need to capture an additional navigator signal. Using a motion phantom, it is shown that the proposed method is capable of averaging experimental data with reduced motion artifacts.
The methods are further validated on in vivo data from mouse experiments to compensate for the heartbeat.
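The phase-binned averaging idea can be sketched generically, assuming the motion frequency has already been estimated from the raw data as the abstract describes. This is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def phase_binned_average(frames, t, freq, n_bins=8):
    """Average repeated frames that fall into the same phase bin of a
    periodic motion with known frequency `freq` (Hz).
    frames: (n_frames, n_samples) raw-data frames; t: frame start times."""
    phase = (t * freq) % 1.0                       # motion phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    out = np.zeros((n_bins, frames.shape[1]))
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            out[b] = frames[sel].mean(axis=0)      # average same-phase frames
    return out
```

Averaging within a phase bin raises the signal-to-noise ratio by roughly the square root of the number of repetitions while preserving one image per motion state.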

Luong Nguyen;Akif Burak Tosun;Jeffrey L. Fine;Adrian V. Lee;D. Lansing Taylor;S. Chakra Chennubhotla; "Spatial Statistics for Segmenting Histological Structures in H&E Stained Tissue Images," vol.36(7), pp.1522-1532, July 2017. Segmenting a broad class of histological structures in transmitted light and/or fluorescence-based images is a prerequisite for determining the pathological basis of cancer, elucidating spatial interactions between histological structures in tumor microenvironments (e.g., tumor infiltrating lymphocytes), facilitating precision medicine studies with deep molecular profiling, and providing an exploratory tool for pathologists. This paper focuses on segmenting histological structures in hematoxylin- and eosin-stained images of breast tissues, e.g., invasive carcinoma, carcinoma in situ, atypical and normal ducts, adipose tissue, and lymphocytes. We propose two graph-theoretic segmentation methods based on local spatial color and nuclei neighborhood statistics. For benchmarking, we curated a data set of 232 high-power field breast tissue images together with expertly annotated ground truth. To accurately model the preference for histological structures (ducts, vessels, tumor nets, adipose, etc.) over the remaining connective tissue and non-tissue areas in ground truth annotations, we propose a new region-based score for evaluating segmentation algorithms. We demonstrate the improvement of our proposed methods over the state-of-the-art algorithms in both region- and boundary-based performance measures.

Rongjian Li;Tao Zeng;Hanchuan Peng;Shuiwang Ji; "Deep Learning Segmentation of Optical Microscopy Images Improves 3-D Neuron Reconstruction," vol.36(7), pp.1533-1541, July 2017. Digital reconstruction, or tracing, of 3-D neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or have discontinuous segments of neurite patterns. An approach for addressing such problems is to identify the locations of neuronal voxels using image segmentation methods, prior to applying tracing or reconstruction techniques. This preprocessing step is expected to remove noise in the data, thereby leading to improved reconstruction results. In this paper, we propose to use 3-D convolutional neural networks (CNNs) for segmenting the neuronal microscopy images. Specifically, we designed a novel CNN architecture that takes volumetric images as the inputs and their voxel-wise segmentation maps as the outputs. The developed architecture allows us to train and predict using large microscopy images in an end-to-end manner. We evaluated the performance of our model on a variety of challenging 3-D microscopy images from different organisms. Results showed that the proposed methods improved the tracing performance significantly when combined with different reconstruction algorithms.

Duygu Sarikaya;Jason J. Corso;Khurshid A. Guru; "Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection," vol.36(7), pp.1542-1549, July 2017. Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human–robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach is the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results, with an average precision of 91% and a mean computation time of 0.1 s per test frame, indicate that our approach is superior to conventionally used methods in medical imaging, while also emphasizing the benefits of using an RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.

Neeraj Kumar;Ruchika Verma;Sanuj Sharma;Surabhi Bhargava;Abhishek Vahadane;Amit Sethi; "A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology," vol.36(7), pp.1550-1560, July 2017. Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
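The unified object- and pixel-level metric proposed above is in the spirit of an aggregated Jaccard index over labelled instance masks. The sketch below is a simplified illustration of that idea and is not guaranteed to match the paper's exact definition:

```python
import numpy as np

def aggregated_jaccard(gt, pred):
    """Aggregated Jaccard-style score between two labelled masks
    (0 = background). Matched objects contribute intersection/union;
    unmatched predictions inflate the union, penalizing false detections."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pr_ids = [j for j in np.unique(pred) if j != 0]
    used, inter_sum, union_sum = set(), 0, 0
    for i in gt_ids:
        g = gt == i
        best_j, best_iou, best_i, best_u = None, -1.0, 0, int(g.sum())
        for j in pr_ids:
            p = pred == j
            inter = int(np.logical_and(g, p).sum())
            union = int(np.logical_or(g, p).sum())
            iou = inter / union
            if iou > best_iou:
                best_j, best_iou, best_i, best_u = j, iou, inter, union
        inter_sum += best_i
        union_sum += best_u
        if best_j is not None and best_iou > 0:
            used.add(best_j)
    for j in pr_ids:
        if j not in used:
            union_sum += int((pred == j).sum())  # spurious detection penalty
    return inter_sum / union_sum
```

A perfect segmentation scores 1.0; either a missed nucleus or a spurious one lowers the score through the shared union term, which is what "penalizes object- and pixel-level errors in a unified manner" means in practice.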

Yuexiang Li;Linlin Shen;Shiqi Yu; "HEp-2 Specimen Image Segmentation and Classification Using Very Deep Fully Convolutional Network," vol.36(7), pp.1561-1572, July 2017. Reliable identification of Human Epithelial-2 (HEp-2) cell patterns can facilitate the diagnosis of systemic autoimmune diseases. However, traditional approach requires experienced experts to manually recognize the cell patterns, which suffers from the inter-observer variability. In this paper, an automatic pattern recognition system using fully convolutional network (FCN) was proposed to simultaneously address the segmentation and classification problem of HEp-2 specimen images. The proposed system transforms the residual network (ResNet) to fully convolutional ResNet (FCRN) enabling the network to perform semantic segmentation task. A sand-clock shape residual module is proposed to effectively and economically improve the performance of FCRN. The publicly available I3A-2014 data set was used to train the FCRN model to classify HEp-2 specimen images into seven catalogs: homogeneous, speckled, nucleolar, centromere, golgi, nuclear membrane, and mitotic spindle. The proposed system achieves a mean class accuracy of 94.94% for leave-one-out tests, which outperforms the winner of ICPR 2014, i.e., 89.93%. At the same time, our model also achieves a segmentation accuracy of 89.03%, which is 19.05% higher than that of the benchmark approach, i.e., 69.98%.

Alan Miranda;Steven Staelens;Sigrid Stroobants;Jeroen Verhaeghe; "Fast and Accurate Rat Head Motion Tracking With Point Sources for Awake Brain PET," vol.36(7), pp.1573-1582, July 2017. To avoid the confounding effects of anesthesia and immobilization stress in rat brain positron emission tomography (PET), motion tracking-based unrestrained awake rat brain imaging is being developed. In this paper, we propose a fast and accurate rat head motion tracking method based on small PET point sources. PET point sources (3-4) attached to the rat’s head are tracked in image space using 15–32-ms time frames. Our point source tracking (PST) method was validated using a manually moved microDerenzo phantom that was simultaneously tracked with an optical tracker (OT) for comparison. The PST method was further validated in three awake [18F]FDG rat brain scans. Compared with the OT, the PST-based correction at the same frame rate (31.2 Hz) reduced the reconstructed FWHM by 0.39–0.66 mm for the different tested rod sizes of the microDerenzo phantom. The FWHM could be further reduced by another 0.07–0.13 mm when increasing the PST frame rate (66.7 Hz). Regional brain [18F]FDG uptake in the motion corrected scan was strongly correlated (<inline-formula> <tex-math notation="LaTeX">$p<0.0001$ </tex-math></inline-formula>) with that of the anesthetized reference scan for all three cases (<inline-formula> <tex-math notation="LaTeX">$0.94 < r < 0.97$ </tex-math></inline-formula>). The proposed PST method allowed excellent and reproducible motion correction in awake in vivo experiments. In addition, there is no need of specialized tracking equipment or additional calibrations to be performed, the point sources are practically imperceptible to the rat, and PST is ideally suitable for small bore scanners, where optical tracking might be challenging.
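Recovering the rigid head pose from 3-4 tracked point sources is a classic absolute-orientation problem. A generic Kabsch-style least-squares solver (not the authors' code) might look like:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) mapping point set P onto Q via the
    Kabsch algorithm. P, Q: (n_points, 3) matched point-source positions."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (no reflection)
    t = cq - R @ cp
    return R, t
```

Applying the inverse of each frame's (R, t) to the list-mode events (or to the reconstruction geometry) yields the motion-corrected scan.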

Dror Haor;Reuven Shavit;Moshe Shapiro;Amir B. Geva; "Back-Projection Cortical Potential Imaging: Theory and Results," vol.36(7), pp.1583-1595, July 2017. Electroencephalography (EEG) is the single brain monitoring technique that is non-invasive, portable, passive, exhibits high temporal resolution, and gives a direct measurement of the scalp electrical potential. A major disadvantage of the EEG is its low spatial resolution, which is the result of the low-conductive skull that “smears” the currents coming from within the brain. Recording brain activity with both high temporal and spatial resolution is crucial for the localization of confined brain activations and the study of brain mechanism functionality, which is then followed by diagnosis of brain-related diseases. In this paper, a new cortical potential imaging (CPI) method is presented. The new method gives an estimation of the electrical activity on the cortex surface and thus removes the “smearing effect” caused by the skull. The scalp potentials are back-projected onto the cortex surface (back-projection CPI, BP-CPI) by building a well-posed problem for the Laplace equation that is solved by means of the finite elements method on a realistic head model. A unique solution to the CPI problem is obtained by introducing a cortical normal current estimation technique. The technique is based on the same mechanism used in the well-known surface Laplacian calculation, followed by a scalp-cortex back-projection routine. The BP-CPI passed four stages of validation, including validation on spherical and realistic head models, probabilistic analysis (Monte Carlo simulation), and noise sensitivity tests. In addition, the BP-CPI was compared with the minimum norm estimate CPI approach and found to be superior for multi-source cortical potential distributions, with very good estimation results (CC >0.97) on a realistic head model in the regions of interest, for two representative cases.
The BP-CPI can be easily incorporated in different monitoring tools and help researchers by maintaining an accurate estimation for the cortical potential of ongoing or event-related potentials in order to have better neurological inferences from the EEG.

* "40th International Conference of the IEEE Engineering in Medicine and Biology Society," vol.36(7), pp.1596-1596, July 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE Transactions on Medical Imaging information for authors," vol.36(7), pp.C3-C3, July 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

IET Image Processing - new TOC (2017 July 20) [Website]

Yuanping Zhu;Kuang Zhang; "Text segmentation using superpixel clustering," vol.11(7), pp.455-464, 7 2017. Text segmentation is important for text image analysis and recognition; however, it is challenging due to noise and complex background in natural scenes. Superpixel-based image representation can enhance robustness to noise and local disturbances, but it is difficult for conventional superpixel algorithms to obtain complete stroke regions and accurate boundaries for text images. In this study, a text segmentation method based on superpixel clustering is proposed. First, to generate accurate superpixels for text images, an adaptive simple linear iterative clustering-based text superpixel generation algorithm is proposed. The adaptive superpixel size and compactness are calculated to enhance boundary adherence. Second, to increase the complete coverage of strokes from superpixels, superpixel clustering merges homogeneous superpixels into larger regions for both strokes and the background. A modified density-based spatial clustering of applications with noise (DBSCAN) algorithm is proposed for this purpose. Finally, stroke superpixel verification assigns each region to a stroke or to the background and the text segmentation result is obtained. The proposed method shows promising robustness to noise and complex background textures. Experimental results on the Korea Advanced Institute of Science and Technology (KAIST) scene text dataset, International Conference on Document Analysis and Recognition (ICDAR) 2003 natural scene text image dataset and Street View Text dataset verify that this method is effective and significantly outperforms existing methods.
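The superpixel-merging step can be sketched generically: merge adjacent superpixels whose mean colours are close, using union-find to propagate the merges. This is a simplified stand-in for the modified DBSCAN clustering described above, not the authors' algorithm:

```python
import numpy as np

def merge_similar_regions(labels, image, thresh=0.1):
    """Merge adjacent superpixels whose mean colours differ by less than
    `thresh`. labels: (H, W) integer superpixel map; image: (H, W, 3)."""
    ids = np.unique(labels)
    means = {i: image[labels == i].mean(axis=0) for i in ids}
    parent = {i: i for i in ids}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Collect pairs of horizontally/vertically adjacent, distinct labels.
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        m = a != b
        pairs |= set(zip(a[m].ravel(), b[m].ravel()))
    for i, j in pairs:
        if np.linalg.norm(means[i] - means[j]) < thresh:
            parent[find(i)] = find(j)      # union the two regions
    return np.vectorize(find)(labels)
```

In the paper's pipeline, the merged regions would then go to the stroke-verification stage that labels each one as stroke or background.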

Huicong Wu;Liang Xiao;Hiuk Jae Shim;Songze Tang; "Video stabilisation with total warping variation model," vol.11(7), pp.465-474, 7 2017. This study proposes a robust approach to stabilise videos with a new variational minimising model. In video stabilisation, accumulation error often occurs in cascaded transformation chain-based methods. To alleviate accumulation error, a new total warping variation (TWV) model is proposed, which describes the smoothness of stabilised camera motion and calculates all the warping transformations efficiently. After estimating original motion parameters based on a 2D similarity transformation model, the corresponding warping parameters are calculated under the TWV minimising framework, where the separable property of the motion parameters is utilised to obtain a closed-form solution. The proposed method provides robust, smooth and precise motion trajectories after stabilisation. Furthermore, an iterative TWV method is introduced to reduce high-frequency jitters as well as low-frequency motions. Moreover, an online TWV method is presented for a long video sequence streaming by adopting a sliding windowed approach. Experimental results on various shaky video sequences show the effectiveness of the proposed method.
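The smooth-then-correct idea behind trajectory-based stabilisation can be sketched for a single 1-D motion parameter. The moving-average filter below is only an illustration of the principle; the paper's TWV model instead solves a variational problem over all warping transformations jointly:

```python
import numpy as np

def stabilise_trajectory(raw_path, window=15):
    """Smooth a per-frame motion-parameter trajectory with an odd-length
    moving average and return (corrective warp, smoothed path).
    The correction is the warp to apply per frame: smoothed minus raw."""
    pad = window // 2
    padded = np.pad(raw_path, (pad, pad), mode="edge")  # replicate ends
    kernel = np.ones(window) / window
    smooth = np.convolve(padded, kernel, mode="valid")
    return smooth - raw_path, smooth
```

Warping frame k by the returned correction[k] removes the high-frequency jitter while preserving the intended low-frequency camera motion (which the iterative TWV variant can additionally attenuate).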

Amir Reza Sadri;Sepideh Azarianpour;Maryam Zekri;Mehmet Emre Celebi;Saeid Sadri; "WN-based approach to melanoma diagnosis from dermoscopy images," vol.11(7), pp.475-482, Jul. 2017. A new computer-aided diagnosis (CAD) system for detecting malignant melanoma from dermoscopy images based on a fixed grid wavelet network (FGWN) is proposed. This novel approach is unique in at least three ways: (i) the FGWN is a fixed WN which does not require gradient-type algorithms for its construction, (ii) the construction of FGWN is based on a new regressor selection technique: D-optimality orthogonal matching pursuit (DOOMP), and (iii) the entire CAD system relies on the proposed FGWN. These characteristics enhance the integrity and reliability of the results obtained from different stages of automatic melanoma diagnosis. The DOOMP algorithm optimises the network model approximation ability rapidly while improving the model adequacy and robustness. This FGWN is then used to build a CAD system, which performs image enhancement, segmentation, and classification. To classify the images, in the first stage, 441 features with respect to colour, texture, and shape of each lesion are extracted. By means of feature selection, these 441 features are then reduced to 10. The proposed CAD system achieved an accuracy of 91.82%, sensitivity of 92.61%, specificity of 91%, and area under the curve value of 0.944 on a challenging set of 1039 dermoscopy images.
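DOOMP builds on the matching-pursuit family of greedy regressor selection. As background, a minimal plain matching pursuit over a toy dictionary is sketched below; the paper's DOOMP adds orthogonalisation and a D-optimality criterion on top of this greedy loop:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter=3):
    """Greedy sparse approximation: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its projection.
    (Plain MP; illustrative background for DOOMP, not the paper's algorithm.)"""
    residual = list(signal)
    chosen = []
    for _ in range(n_iter):
        scores = [abs(dot(residual, a)) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: scores[i])
        coef = dot(residual, atoms[k])
        residual = [r - coef * a for r, a in zip(residual, atoms[k])]
        chosen.append((k, coef))
    return chosen, residual
```

With an orthonormal dictionary the residual vanishes once the active atoms are selected, which is the behaviour orthogonal variants guarantee more generally.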

Mohsen Biglari;Ali Soleimani;Hamid Hassanpour; "Part-based recognition of vehicle make and model," vol.11(7), pp.483-491, Jul. 2017. Fine-grained recognition is a challenge that the computer vision community faces nowadays. The main category of the object is known in this problem and the goal is to determine the subcategory or fine-grained category. Vehicle make and model recognition (VMMR) is a hard fine-grained classification problem, due to the large number of classes, substantial intra-class variation and small inter-class distance. In this study, a novel approach has been proposed for VMMR based on latent SVM formulation. This approach automatically finds a set of discriminative parts in each class of vehicles by employing a novel greedy parts localisation algorithm, while learning a model per class using both features extracted from these parts and the spatial relationship between them. An effective and practical multi-class data mining method is proposed to filter out hard negative samples in the training procedure. Employing these trained individual models together, the authors' system can classify vehicle make and model with high accuracy. For evaluation purposes, a new dataset including more than 5000 vehicles of 28 different makes and models has been collected and fully annotated. The experimental results on this dataset and the CompCars dataset indicate the outstanding performance of the authors' approach.

Xu Qiao;Xiaoqing Liu;Yen-wei Chen;Zhi-Ping Liu; "Multi-dimensional data representation using linear tensor coding," vol.11(7), pp.492-501, Jul. 2017. Linear coding is widely used to concisely represent data sets by discovering basis functions that capture high-level features. However, the efficient identification of linear codes for representing multi-dimensional data remains very challenging. In this study, the authors address the problem by proposing a linear tensor coding algorithm to represent multi-dimensional data succinctly via a linear combination of tensor-formed bases without data expansion. Motivated by the amalgamation of linear image coding and multi-linear algebra, each basis function in the authors' algorithm captures some specific variability. The basis-associated coefficients can be used for data representation, compression and classification. When the authors apply the algorithm to both simulated phantom data and real facial data, the experimental results demonstrate that their algorithm not only preserves the original information of the input data, but also produces localised bases with concrete physical meanings.

Hong Liu;Meng Yan;Enmin Song;Yuejing Qian;Xiangyang Xu;Renchao Jin;Lianghai Jin;Chih-Cheng Hung; "Label fusion method based on sparse patch representation for the brain MRI image segmentation," vol.11(7), pp.502-511, Jul. 2017. The multi-atlas patch-based label fusion method (MAS-PBM) has emerged as a promising technique for magnetic resonance imaging (MRI) image segmentation. The state-of-the-art MAS-PBM approach measures the patch similarity between the target image and each atlas image using features extracted from image intensity only. It is well known that each atlas consists of both an MRI image and a labelled image (also called the map). In other words, the map information is not used in calculating the similarity in the existing MAS-PBM. To improve the segmentation result, the authors propose an enhanced MAS-PBM in which the maps are used for similarity measurement. The first component of the proposed method is that an initial segmentation result (i.e. an appropriate map for the target) is obtained by using either the non-local patch-based label fusion method (NPBM) or the sparse patch-based label fusion method (SPBM) based on the grey scales of patches. Then, the SPBM is applied again to obtain a finer segmentation based on the labels of patches. The authors call these two versions of the proposed fusion method MAS-PBM-NPBM and MAS-PBM-SPBM. Experimental results show that more accurate segmentation results are achieved compared with those of majority voting, NPBM, SPBM, STEPS and the hierarchical multi-atlas label fusion with multi-scale feature representation and label-specific patch partition.
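The core of non-local patch-based label fusion is weighted voting: each atlas patch votes for its centre label with a Gaussian similarity weight. A simplified sketch on toy 1D patches follows; the paper additionally uses sparse coding and label-based similarity, which are omitted here:

```python
import math

def fuse_label(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Weighted label voting: each atlas patch votes for its label with
    weight exp(-||patch difference||^2 / h^2), the usual non-local patch
    weight. (Simplified sketch of the label-fusion principle.)"""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        d2 = sum((a - b) ** 2 for a, b in zip(target_patch, patch))
        w = math.exp(-d2 / (h * h))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

Patches similar to the target dominate the vote, so a dissimilar atlas patch contributes almost nothing even if it is in the search window.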

Ba Thai;Mukhalad Al-nasrawi;Guang Deng;Zhuo Su; "Semi-guided bilateral filter," vol.11(7), pp.512-521, Jul. 2017. The bilateral filter (BF) is a non-linear filter that spatially smooths images with awareness of large structures such as edges. The level of smoothness applied to a pixel is constrained by a photometric weight, which can be obtained from the same image to be filtered (in case of the original BF) or from a guided image (in case of the joint/cross BF). In this study, the authors propose a new filter called the semi-guided BF which is derived from solving a non-linear constraint least square problem. The proposed filter's photometric weight incorporates information from the image to be filtered and the guided image. They propose a fast implementation of the filter based on layer approximation. They also study the iterative application of the proposed filter and show that the filter can preserve large structures while smoothing out small structures. This makes the proposed filter an efficient and effective tool for structure-aware image smoothing. Experimental results have demonstrated that the performance of the proposed filter is comparable to that of state-of-the-art algorithms.
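The original and joint/cross bilateral filters mentioned above can be sketched in 1D: the photometric weight is computed from a guide signal, and setting the guide equal to the input recovers the original BF. This is an illustrative sketch, not the authors' semi-guided weight, which mixes both sources:

```python
import math

def joint_bilateral_1d(signal, guide, sigma_s=2.0, sigma_r=0.5, radius=3):
    """Each output sample is a weighted mean of its neighbours; the spatial
    weight decays with distance, the photometric weight with the *guide*
    difference (set guide=signal for the original bilateral filter)."""
    out = []
    for i in range(len(signal)):
        acc = norm = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            ws = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            wr = math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2))
            acc += ws * wr * signal[j]
            norm += ws * wr
        out.append(acc / norm)
    return out
```

On a step edge the photometric weight suppresses contributions from across the edge, so the edge survives while flat regions are averaged.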

Tingya Yang;Houshou Chen; "Matrix embedding in steganography with binary Reed–Muller codes," vol.11(7), pp.522-529, Jul. 2017. This study presents a modified majority-logic decoding algorithm of Reed-Muller (RM) codes for matrix embedding (ME) in steganography. An ME algorithm uses linear block code to improve the embedding efficiency in steganography. The optimal embedding algorithm in steganography is equivalent to the maximum likelihood decoding (MLD) algorithm in error-correcting codes. The main disadvantage of ME is that the equivalent MLD algorithm of lengthy embedding codes requires highly complex embedding. This study used RM codes to embed data in binary host images. The authors propose a novel low-complexity embedding algorithm that uses a modified majority-logic algorithm to decode RM codes, in which a message-passing algorithm (i.e. sum-product, min-sum, or bias propagation) is performed on the highest order of information bits in the RM codes. The experimental results indicate that integrating bias propagation into the proposed scheme achieves superior embedding efficiency (relative to when the sum-product or min-sum algorithm is used) and can even achieve the embedding bound of RM codes.
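Matrix embedding with a linear block code can be shown concretely with the small [7,4] Hamming code: 3 message bits are embedded into 7 host bits by flipping at most one bit, since the decoder maps any syndrome to a low-weight flip pattern. The RM codes in the paper generalise this idea; the sketch below is the textbook Hamming instance, not the authors' scheme:

```python
# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# representation of j (1..7), so every nonzero 3-bit syndrome names a column.
H = [[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)]

def syndrome(bits):
    return tuple(sum(h * x for h, x in zip(row, bits)) % 2 for row in H)

def embed(host, msg):
    """Flip at most one of the 7 host bits so the stego syndrome equals msg."""
    need = tuple(m ^ s for m, s in zip(msg, syndrome(host)))
    stego = list(host)
    if any(need):
        j = int("".join(map(str, need)), 2) - 1  # index of the column equal to `need`
        stego[j] ^= 1
    return stego

def extract(stego):
    return syndrome(stego)
```

Embedding 3 bits per at most one changed pixel gives the embedding-efficiency gain that motivates ME in the first place.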

K. Raghesh Krishnan;Sudhakar Radhakrishnan; "Hybrid approach to classification of focal and diffused liver disorders using ultrasound images with wavelets and texture features," vol.11(7), pp.530-538, Jul. 2017. This study presents a computer-based approach to classify ten different kinds of focal and diffused liver disorders using ultrasound images. The diseased portion is isolated from the ultrasound image by applying an active contour segmentation technique. The segmented region is further decomposed into horizontal, vertical and diagonal component images by applying the biorthogonal wavelet transform. From the above wavelet-filtered component images, grey-level run-length matrix features are extracted and classified using random forests with a ten-fold cross-validation strategy. The results are compared with spatial feature extraction techniques such as intensity histogram, invariant moment features and spatial texture features such as grey-level co-occurrence matrices, grey-level run-length matrices and fractal texture features. The proposed technique, which applies texture feature extraction to transform-domain images, gives an overall classification accuracy of 91% for a combination of ten classes of similar-looking diseases, an appreciable improvement over spatial-domain-only techniques for liver disease classification from ultrasound images.
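The grey-level run-length matrix (GLRLM) used above counts, per grey level, the runs of each length along a scan direction. A minimal horizontal GLRLM with one classic derived feature (short-run emphasis) can be sketched as follows; the feature set and direction handling in the paper are richer:

```python
def glrlm(image, levels):
    """Horizontal grey-level run-length matrix: entry [g][r-1] counts runs of
    grey level g with length r (r capped at the image width)."""
    width = len(image[0])
    m = [[0] * width for _ in range(levels)]
    for row in image:
        run, prev = 0, None
        for px in row + [None]:            # sentinel flushes the last run
            if px == prev:
                run += 1
            else:
                if prev is not None:
                    m[prev][run - 1] += 1
                prev, run = px, 1
    return m

def short_run_emphasis(m):
    # Short-run emphasis: weights each run count by 1/length^2.
    total = sum(sum(r) for r in m)
    return sum(m[g][r] / (r + 1) ** 2
               for g in range(len(m)) for r in range(len(m[g]))) / total
```

Fine textures yield many short runs (high SRE); coarse textures yield long runs (low SRE), which is what makes these counts discriminative for tissue classification.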

Zhihua Chen;Zhenzhu Wang;Bin Sheng;Chao Li;Ruimin Shen;Ping Li; "Dynamic RGB-to-CMYK conversion using visual contrast optimisation," vol.11(7), pp.539-549, Jul. 2017. The Cyan, Magenta, Yellow, Black (CMYK) colour model, the standard colour space used by printers, is a subtractive colour space that describes the printing process. Existing CMYK conversion methods rely on a static conversion table, which may not preserve the subtle visual structures of images, due to the local visual contrast loss caused by the static colour mapping. Therefore, the authors propose a novel dynamic Red, Green, Blue (RGB)-to-CMYK colour conversion, which utilises weighted entropy to extract pixels whose filter responses change dramatically. They obtain the image activity map by combining these pixels with high skin probability regions, and optimise the colour conversion of each pixel so that ink is saved while visual contrast is preserved. In this way, their proposed technique achieves dynamic CMYK colour conversion, in which the consumption of ink is reduced without loss of visual contrast. The experimental results have shown that their dynamic CMYK colour conversion reduces ink consumption by 10-25% compared with the static conversion method, while maintaining high visual quality for the converted images.
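For reference, the static baseline that such dynamic methods improve upon is the classic fixed RGB-to-CMYK mapping with full black extraction; the paper instead optimises the mapping per pixel:

```python
def rgb_to_cmyk(r, g, b):
    """Classic static RGB->CMYK conversion with full black extraction
    (the fixed mapping that dynamic methods replace).
    Inputs in [0, 1]; returns (c, m, y, k)."""
    k = 1.0 - max(r, g, b)
    if k >= 1.0:                 # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)
```

Because the same formula is applied to every pixel regardless of local content, nearby but distinct RGB values can land on nearly identical ink amounts, which is the local-contrast loss the dynamic method targets.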

Sagar Shriram Salwe;Karamtot Krishna Naik; "Discrete image data transmission in heterogeneous wireless network using vertical handover mechanism," vol.11(7), pp.550-558, Jul. 2017. Vertical handover (VHO) plays an important role in providing seamless connectivity between heterogeneous wireless networks. The authors propose a VHO mechanism using an image as dynamic discrete data for transmission and a received signal strength (RSS)-based switching mechanism. The novelty of the work lies in the image data used for simulation, RSS calculation using the free-space propagation model, receive delay calculation and sample-based time series analysis of the received data. Results show that when the VHO mechanism is carried out in ISM-band operated devices, it provides seamless connectivity between diverse communication protocols. Simulation results exhibit the continuous transmission of data and synchronisation of the received image data.
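The two ingredients named above, free-space RSS and RSS-based switching, can be sketched as follows; the hysteresis margin and decision rule here are illustrative assumptions, not the paper's exact mechanism:

```python
import math

def rss_dbm(tx_dbm, freq_hz, dist_m):
    """Received power under the free-space (Friis) model:
    FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55."""
    fspl = 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55
    return tx_dbm - fspl

def handover(current, candidates, margin_db=5.0):
    """Switch only if some network beats the current one by the hysteresis
    margin, suppressing ping-pong handovers. `candidates` maps network
    name -> RSS in dBm. (Illustrative decision rule.)"""
    best = max(candidates, key=candidates.get)
    if best != current and candidates[best] >= candidates[current] + margin_db:
        return best
    return current
```

At 2.4 GHz and 10 m, a 20 dBm transmitter yields roughly -40 dBm at the receiver under this model; the hysteresis margin keeps the terminal on its current network for small RSS fluctuations.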

IEEE Transactions on Signal Processing - new TOC (2017 July 20) [Website]

Xiliang Luo;Penghao Cai;Xiaoyu Zhang;Die Hu;Cong Shen; "A Scalable Framework for CSI Feedback in FDD Massive MIMO via DL Path Aligning," vol.65(18), pp.4702-4716, Sept. 15, 2017. Unlike the time-division duplexing systems, the downlink (DL) and uplink (UL) channels are not reciprocal in the case of frequency-division duplexing (FDD). However, some long-term parameters, e.g., the time delays and angles of arrival of the channel paths, enjoy reciprocity. In this paper, by efficiently exploiting the aforementioned limited reciprocity, we address the DL channel state information (CSI) feedback in a practical wideband massive multiple-input multiple-output system operating in the FDD mode. With orthogonal frequency-division multiplexing waveform and assuming frequency-selective fading channels, we propose a scalable framework for the DL pilots design, DL CSI acquisition, and the corresponding CSI feedback in the UL. In particular, the base station (BS) can transmit the FFT-based pilots with carefully selected phase shifts. Then, the user can rely on the so-called time-domain aggregate channel to derive the feedback of reduced dimensionality according to either its own knowledge about the statistics of the DL channels or the instruction from the serving BS. We demonstrate that each user can just feed back one scalar number per DL channel path for the BS to recover the DL CSIs. Comprehensive numerical results further corroborate our designs.

Zhaoqiang Liu;Vincent Y. F. Tan; "Rank-One NMF-Based Initialization for NMF and Relative Error Bounds Under a Geometric Assumption," vol.65(18), pp.4717-4731, Sept. 15, 2017. We propose a geometric assumption on nonnegative data matrices such that under this assumption, we are able to provide upper bounds (both deterministic and probabilistic) on the relative error of nonnegative matrix factorization (NMF). The algorithm we propose first uses the geometric assumption to obtain an exact clustering of the columns of the data matrix; subsequently, it employs several rank-one NMFs to obtain the final decomposition. When applied to data matrices generated from our statistical model, we observe that our proposed algorithm produces factor matrices with comparable relative errors vis-à-vis classical NMF algorithms but with much faster speeds. On face image and hyperspectral imaging datasets, we demonstrate that our algorithm provides an excellent initialization for applying other NMF algorithms at a low computational cost. Finally, we show on face and text datasets that the combinations of our algorithm and several classical NMF algorithms outperform other algorithms in terms of clustering performance.
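The rank-one NMF step applied per cluster has a clean justification: the best rank-one approximation of a nonnegative matrix has entrywise nonnegative factors (Perron-Frobenius), so simple alternating updates converging to the leading singular pair already give an exact rank-one NMF. A minimal sketch of that per-cluster step (not the full clustering algorithm):

```python
def rank_one_nmf(A, iters=50):
    """Alternating updates converging to the leading singular pair of a
    nonnegative matrix A; by Perron-Frobenius the pair is entrywise
    nonnegative. Returns u (len m) and v (len n) with A ~ outer(u, v)."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nrm = sum(x * x for x in u) ** 0.5
        u = [x / nrm for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
    return u, v
```

For a matrix that is exactly rank one the reconstruction is exact, which is why clustering columns first makes the per-cluster problem easy.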

Samer Bazzi;Wen Xu; "Downlink Training Sequence Design for FDD Multiuser Massive MIMO Systems," vol.65(18), pp.4732-4744, Sept. 15, 2017. We consider the problem of downlink training sequence design for frequency-division-duplex multiuser massive multiple-input multiple-output systems in the general case where users have distinct spatial correlations. The training sequences leverage spatial correlations and are designed to minimize the channel estimation weighted sum mean square error (MSE) under the assumption that users employ minimum MSE estimators. Noting that the weighted sum MSE function is invariant to unitary rotations of its argument, a solution is obtained using a steepest descent method on the Grassmannian manifold. We extend the proposed design to scenarios with temporal correlations combined with Kalman filters at the users, and design sequences exploiting the multiuser spatio-temporal channel structure. Finally, we consider scenarios where only a limited number of bits $B$ are available at the base station (BS) to inform the users of each chosen sequence, e.g., sequences are chosen from a set of $2^B$ vectors known to the BS and users, and develop a subspace version of matching pursuit techniques to choose the desired sequences. Simulation results using realistic channel models show that the proposed solutions improve user fairness with a proper choice of weights, lead to accurate channel estimates with training durations that can be much smaller than the number of BS antennas, and show substantial gains over randomly chosen sequences for even small values of $B$.

Seifallah Jardak;Sajid Ahmed;Mohamed-Slim Alouini; "Low Complexity Moving Target Parameter Estimation for MIMO Radar Using 2D-FFT," vol.65(18), pp.4745-4755, Sept. 15, 2017. In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution. Increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this paper, two reduced complexity algorithms are derived based on Capon and amplitude and phase estimation (APES) to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using low resolution 2D-FFT, the proposed algorithm identifies suboptimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to the grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-the-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér–Rao lower bound are derived to assess the performance of the proposed algorithm.

Guolong Cui;Xianxiang Yu;Goffredo Foglia;Yongwei Huang;Jian Li; "Quadratic Optimization With Similarity Constraint for Unimodular Sequence Synthesis," vol.65(18), pp.4756-4769, Sept. 15, 2017. This paper considers unimodular sequence synthesis under similarity constraint for both the continuous and discrete phase cases. A computationally efficient iterative algorithm for the continuous phase case (IA-CPC) is proposed to sequentially optimize the quadratic objective function. The quadratic optimization problem is turned into multiple one-dimensional optimization problems with closed-form solutions. For the discrete phase case, we present an iterative block optimization algorithm. Specifically, we partition the design variables into $K$ blocks, and then, we sequentially optimize each block via exhaustive search while fixing the remaining $K-1$ blocks. Finally, we evaluate the computational costs and performance gains of the proposed algorithms in comparison with power method-like and semidefinite relaxation related techniques.
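The "multiple one-dimensional problems with closed-form solutions" idea can be sketched for the unconstrained unimodular quadratic problem: with all other entries fixed, the terms of x^H R x involving x[k] reduce to 2*Re(conj(x[k])*c_k) plus a constant, minimised on the unit circle by x[k] = -c_k/|c_k|. This bare-bones version omits the paper's similarity constraint:

```python
def unimodular_min(R, x, sweeps=20):
    """Minimise x^H R x over unit-modulus entries by cyclic 1-D updates:
    with the other entries fixed, the x[k] terms are 2*Re(conj(x[k])*c_k)
    + R[k][k], minimised by x[k] = -c_k/|c_k|.
    (Sketch of sequential phase optimisation without the similarity
    constraint handled by the paper's IA-CPC.)"""
    n = len(R)
    x = list(x)
    for _ in range(sweeps):
        for k in range(n):
            c = sum(R[k][i] * x[i] for i in range(n) if i != k)
            if abs(c) > 1e-12:
                x[k] = -c / abs(c)
    return x

def quad(R, x):
    n = len(R)
    return sum(x[i].conjugate() * R[i][j] * x[j]
               for i in range(n) for j in range(n)).real
```

Each update can only decrease the objective, so the sweep is monotonically convergent, which is the property that makes the sequential strategy attractive.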

Emilie Chouzenoux;Jean-Christophe Pesquet; "A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation," vol.65(18), pp.4770-4783, Sept. 15, 2017. Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori or their direct computation is too intensive, and they have thus to be estimated online from the observed signals. For batch optimization of an objective function being the sum of a data fidelity term and a penalization (e.g., a sparsity promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the data fidelity term corresponds to a least squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and we study its convergence by using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm associated with a memory gradient subspace, when applied to both nonadaptive and adaptive filter identification problems.

Shanxiang Lyu;Cong Ling; "Boosted KZ and LLL Algorithms," vol.65(18), pp.4784-4796, Sept. 15, 2017. Two issues in popular lattice reduction algorithms warrant concern. The first is that the Korkine–Zolotarev (KZ) and Lenstra–Lenstra–Lovász (LLL) algorithms may increase the lengths of basis vectors. The other is that KZ reduction suffers worse performance than Minkowski reduction in terms of providing short basis vectors, despite its superior theoretical upper bounds. To address these limitations, we improve the size reduction steps in KZ and LLL to set up two new efficient algorithms, referred to as boosted KZ and LLL, for solving the shortest basis problem with exponential and polynomial complexity, respectively. Both of them offer better actual performance than their classic counterparts, and the performance bounds for KZ are also improved. We apply them to designing integer-forcing (IF) linear receivers for multiple-input multiple-output communications. Our simulations confirm their rate and complexity advantages.
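For context, the classic LLL baseline whose size reduction step the boosted variants refine looks as follows (textbook formulation with delta = 0.75; recomputing the Gram-Schmidt data each pass keeps the sketch simple at the cost of efficiency):

```python
def lll(basis, delta=0.75):
    """Textbook LLL lattice reduction (size reduction + Lovasz swaps).
    `basis` is a list of integer vectors; returns a reduced basis of the
    same lattice. (Classic baseline, not the boosted variant.)"""
    b = [list(v) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bs, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = [float(x) for x in b[i]]
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bs[j])]
            bs.append(v)
        return bs, mu

    k = 1
    while k < n:
        bs, mu = gram_schmidt()
        for j in range(k - 1, -1, -1):       # size-reduce b[k] against b[j]
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bs, mu = gram_schmidt()
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1                           # Lovasz condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]  # swap and step back
            k = max(k - 1, 1)
    return b
```

On the standard example basis {(1,1,1), (-1,0,2), (3,5,6)} this yields the short basis {(0,1,0), (1,0,1), (-1,0,2)} while preserving the lattice determinant.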

Jun Shi;Xiaoping Liu;Xuejun Sha;Qinyu Zhang;Naitong Zhang; "A Sampling Theorem for Fractional Wavelet Transform With Error Estimates," vol.65(18), pp.4797-4811, Sept. 15, 2017. As a generalization of the ordinary wavelet transform, the fractional wavelet transform (FRWT) is a very promising tool for signal analysis and processing. Many of its fundamental properties are already known; however, little attention has been paid to its sampling theory. In this paper, we first introduce the concept of multiresolution analysis associated with the FRWT, and then propose a sampling theorem for signals in FRWT-based multiresolution subspaces. The necessary and sufficient condition for the sampling theorem is derived. Moreover, sampling errors due to truncation and aliasing are discussed. The validity of the theoretical derivations is demonstrated via simulations.

Zeyu Wang;Ming Li;Hongmeng Chen;Lei Zuo;Peng Zhang;Yan Wu; "Adaptive Detection of a Subspace Signal in Signal-Dependent Interference," vol.65(18), pp.4812-4820, Sept. 15, 2017. This paper deals with the problem of adaptive detection of subspace signals embedded in thermal noise and clutter that depends on the transmitted signal. To this end, at the design stage, we assume that the signal-dependent (SD) clutter shares the same subspace as the target signals. As customary, a set of secondary data, free of signal components, is also assumed available. Two adaptive detectors, referred to as the SD Rao and SD Wald, are proposed by resorting to the Rao test and Wald test design criteria. Unlike the classical Rao and Wald tests, which are derived by dividing the complex parameter into the real and imaginary parts, the proposed detectors treat the complex parameter as a single quantity to reduce the computational burden. Moreover, we derive the theoretical false alarm probabilities and detection probabilities and show that both proposed detectors exhibit the constant false alarm rate property. Numerical results show that the proposed detectors achieve a detection performance improvement over the conventional multidimensional detectors.

Bryan Paul;Christian D. Chapman;Alex Rajan Chiriyath;Daniel W. Bliss; "Bridging Mixture Model Estimation and Information Bounds Using I-MMSE," vol.65(18), pp.4821-4832, Sept. 15, 2017. We derive bounds on mutual information for arbitrary estimation problems in additive noise, modeled using Gaussian mixtures. Previous work exploiting the I-minimum-mean-squared-error (MMSE) formula to formulate a bridge between bounds on the MMSE for Gaussian mixture model estimation problems and bounds on the mutual information are generalized to allow arbitrary noise modeling. A novel upper bound on estimation information is also developed for the general estimation case. In addition, limits are analyzed to develop bounds on arbitrary entropy, asymptotic behavior of all bounds, and bound errors with some results bridged back to the MMSE domain.

David B. H. Tay;Antonio Ortega; "Bipartite Graph Filter Banks: Polyphase Analysis and Generalization," vol.65(18), pp.4833-4846, Sept. 15, 2017. The work by Narang and Ortega [“Perfect reconstruction two-channel wavelet filter banks for graph structured data,” IEEE Trans. Signal Process., vol. 60, no. 6, pp. 2786–2799, Jun. 2012], [“Compact support biorthogonal wavelet filterbanks for arbitrary undirected graphs,” IEEE Trans. Signal Process., vol. 61, no. 19, pp. 4673–4685, Oct. 2013] laid the foundations for the two-channel critically sampled perfect reconstruction filter bank for signals defined on undirected graphs. This basic filter bank is applicable only to bipartite graphs but using the notion of separable filtering, the basic filter bank can be applied to any arbitrary undirected graphs. In this paper, several new theoretical results are presented. In particular, the proposed polyphase analysis yields filtering structures in the downsampled domain that are equivalent to those before downsampling and, thus, can be exploited for efficient implementation. These theoretical results also provide new insights that can be exploited in the design of these systems. These insights allow us to generalize these filter banks to directed graphs and to using a variety of graph base matrices, while also providing a link to the $\text{DSP}_G$ framework of Sandryhaila and Moura [“Discrete signal processing on graphs,” IEEE Trans. Signal Process., vol. 61, no. 7, pp. 1644–1656, Apr. 2013], [“Discrete signal processing on graphs: Frequency analysis,” IEEE Trans. Signal Process., vol. 62, no. 12, pp. 3042–3054, Jun. 2014]. Experiments show evidence that better nonlinear approximation and denoising results may be obtained by a better selection of these base matrices.

Gilberto Oliveira Corrêa;Alvaro Talavera; "Competitive Robust Estimation for Uncertain Linear Dynamic Models," vol.65(18), pp.4847-4861, Sept. 15, 2017. In this paper, two types of robust linear estimation problems for dynamic channel model uncertainty are considered with the aim of characterizing (in computationally effective ways) competitive robust estimators, i.e., robust estimators that improve on the pointwise performance of minimax MSE (mean-squared error) estimators over the uncertain model set, at the expense of a moderate increase in the worst case MSE. The first one corresponds to the minimization of the worst case value of an approximate-regret function defined by a quadratic approximation of the “lower MSE envelope” on the uncertain model set. The second one corresponds to minimizing the nominal MSE error while ensuring that the worst case estimation error does not exceed a prescribed value. For uncertain classes defined by $H_{2}$-norm balls, it is shown that these two types of estimation problems can be recast as “semidefinite programming problems (SDPs, for short).” Numerical examples are presented for both the case of linear, finite-dimensional model classes (FIRs of a given length) and the case of nonparametric uncertain sets of causal, real-rational frequency-responses, suggesting that these two types of estimators can be attractive alternatives to the min–max MSE estimator. For the case of spectral-norm (in the finite-dimensional case) or $H_{\infty }$-norm (in the nonparametric case), the worst case MSE and approximate-regret for each candidate estimator are replaced by upper bounds obtained by Lagrangian relaxation and (somewhat conservative) versions of the estimation problems previously mentioned are posed. It is shown that these problems can also be recast as SDPs.

Alla Tarighati;James Gross;Joakim Jaldén; "Decentralized Hypothesis Testing in Energy Harvesting Wireless Sensor Networks," vol.65(18), pp.4862-4873, Sept. 15, 2017. We consider the problem of decentralized hypothesis testing in a network of energy harvesting sensors, where sensors make noisy observations of a phenomenon and send quantized information about the phenomenon towards a fusion center. The fusion center makes a decision about the present hypothesis using the aggregate received data during a time interval. We explicitly consider a scenario under which the messages are sent through parallel access channels towards the fusion center. To avoid limited lifetime issues, we assume each sensor is capable of harvesting all the energy it needs for the communication from the environment. Each sensor has an energy buffer (battery) to save its harvested energy for use in other time intervals. Our key contribution is to formulate the problem of decentralized detection in a sensor network with energy harvesting devices. Our analysis is based on a queuing-theoretic model for the battery and we propose a sensor decision design method by considering long-term energy management at the sensors. We show how the performance of the system changes for different battery capacities. We then numerically show how our findings can be used in the design of sensor networks with energy harvesting sensors.
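The battery-as-queue view can be made concrete with a toy deterministic simulation: harvested energy arrives each slot, the buffer is capped at its capacity, and the sensor transmits only when the stored energy covers the transmission cost. This is an illustrative sketch of the queuing model, not the paper's stochastic analysis:

```python
def simulate_battery(harvest, cost, capacity):
    """Battery as a finite queue: each slot adds the harvested energy
    (clipped at capacity); the sensor transmits only if the stored energy
    covers the transmission cost. Returns per-slot transmit flags."""
    level, sent = 0.0, []
    for e in harvest:
        level = min(capacity, level + e)
        if level >= cost:
            level -= cost
            sent.append(True)
        else:
            sent.append(False)
    return sent
```

Shrinking the capacity below the transmission cost silences the sensor entirely, a simple instance of how battery capacity shapes detection performance.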

Yanqing Xu;Chao Shen;Zhiguo Ding;Xiaofang Sun;Shi Yan;Gang Zhu;Zhangdui Zhong; "Joint Beamforming and Power-Splitting Control in Downlink Cooperative SWIPT NOMA Systems," vol.65(18), pp.4874-4886, Sept. 15, 2017. This paper investigates the application of simultaneous wireless information and power transfer (SWIPT) to cooperative non-orthogonal multiple access (NOMA). A new cooperative multiple-input single-output (MISO) SWIPT NOMA protocol is proposed, where a user with a strong channel condition acts as an energy-harvesting (EH) relay by adopting a power splitting (PS) scheme to help a user with a poor channel condition. By jointly optimizing the PS ratio and the beamforming vectors, we aim at maximizing the data rate of the “strong user” while satisfying the QoS requirement of the “weak user”. To resolve the formulated nonconvex problem, the semidefinite relaxation (SDR) technique is applied to reformulate the original problem, by proving the rank-one optimality. And then an iterative algorithm based on successive convex approximation (SCA) is proposed for complexity reduction, which efficiently attains at least a stationary point. In view of the potential application scenarios, e.g., Internet of Things (IoT), the single-input single-output (SISO) case is also studied. The formulated problem is proved to be strictly unimodal with respect to the PS ratio. Hence, a golden section search (GSS) based algorithm with closed-form solution at each step is proposed to find the unique global optimal solution. It is worth pointing out that the SCA method can also converge to the optimal solution in SISO cases. In the numerical simulation, the proposed algorithm is numerically shown to converge within a few iterations, and the SWIPT-aided NOMA protocol outperforms the existing transmission protocols.
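The golden section search invoked for the PS ratio exploits exactly the unimodality proved above: each step shrinks the bracket by the golden ratio using one new function evaluation. A generic implementation (the example objective in the test is illustrative, not the paper's rate function):

```python
import math

def golden_section_max(f, lo, hi, tol=1e-9):
    """Locate the maximiser of a unimodal function on [lo, hi] by golden-ratio
    bracket shrinking; one new evaluation per step."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    a, b = lo + (1 - invphi) * (hi - lo), lo + invphi * (hi - lo)
    fa, fb = f(a), f(b)
    while hi - lo > tol:
        if fa < fb:                          # maximiser lies right of a
            lo, a, fa = a, b, fb
            b = lo + invphi * (hi - lo)
            fb = f(b)
        else:                                # maximiser lies left of b
            hi, b, fb = b, a, fa
            a = lo + (1 - invphi) * (hi - lo)
            fa = f(a)
    return (lo + hi) / 2
```

Because unimodality guarantees the discarded sub-interval cannot contain the maximiser, GSS converges to the unique global optimum, which is why the strict-unimodality proof matters.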

Juening Jin;Yahong Rosa Zheng;Wen Chen;Chengshan Xiao; "Generalized Quadratic Matrix Programming: A Unified Framework for Linear Precoding With Arbitrary Input Distributions," vol.65(18), pp.4887-4901, Sept. 15, 2017. This paper investigates a new class of nonconvex optimization, which provides a unified framework for linear precoding in single/multiuser multiple-input multiple-output channels with arbitrary input distributions. The new optimization is called generalized quadratic matrix programming (GQMP). Due to the nondeterministic polynomial time hardness of GQMP problems, instead of seeking globally optimal solutions, we propose an efficient algorithm that is guaranteed to converge to a Karush-Kuhn-Tucker point. The idea behind this algorithm is to construct explicit concave lower bounds for nonconvex objective and constraint functions, and then solve a sequence of concave maximization problems until convergence. In terms of application, we consider a downlink underlay secure cognitive radio network, where each node has multiple antennas. We design linear precoders to maximize the average secrecy (sum) rate with finite-alphabet inputs and statistical channel state information at the transmitter. The precoding problems under secure multicast/broadcast scenarios are GQMP problems, and thus, they can be solved efficiently by our proposed algorithm. Several numerical examples are provided to show the efficacy of our algorithm.

Ehsan Olfat;Mats Bengtsson; "Joint Channel and Clipping Level Estimation for OFDM in IoT-based Networks," vol.65(18), pp.4902-4911, Sept. 15, 2017. We consider scenarios such as IoT-based 5G or IoT-based machine-type communication, where a low-cost low-power transmitter communicates with a high-quality receiver. In such settings, digital predistortion of the nonlinear power amplifier may be too expensive. In order to investigate the feasibility of receiver-side compensation of the transmitter RF impairments, we study joint maximum-likelihood estimation of the channel and clipping level in multipath fading OFDM systems. In particular, we propose an alternating optimization algorithm, which uses frequency-domain block-type training symbols, and prove that this algorithm always converges, at least to a local optimum. Then, we calculate the Cramér-Rao lower bound and show that the proposed estimator attains it at high signal-to-noise ratios. Finally, we perform numerical evaluations to illustrate the performance of the estimator, and show that iterative decoding can be done using the estimated channel and clipping level with almost the same performance as a genie-aided scenario, where the channel and clipping level are perfectly known.

Ziyang Cheng;Zishu He;Shengmiao Zhang;Jian Li; "Constant Modulus Waveform Design for MIMO Radar Transmit Beampattern," vol.65(18), pp.4912-4923, Sept. 15, 2017. A multiple-input multiple-output radar has great flexibility to design the transmit beampattern by selecting the probing waveform. Current transmit beampattern designs aim to approximate the desired transmit beampattern and to minimize the cross-correlation sidelobes. In this paper, under the constant modulus constraint, two algorithms are proposed to design the probing waveform directly. In the first algorithm, the optimization criterion is to minimize the squared error between the designed beampattern and the given beampattern. Since the objective function is a nonconvex fourth-order polynomial and the constant modulus constraint can be regarded as many nonconvex quadratic equality constraints, an efficient alternating direction method of multipliers (ADMM) algorithm with fast convergence is proposed to solve it. In the second algorithm, the criterion is to minimize the absolute error between the designed beampattern and the given beampattern. This nonconvex problem can be formulated as an <inline-formula> <tex-math notation="LaTeX">$l_1$</tex-math></inline-formula>-norm problem, which can be solved through a double-ADMM algorithm. Finally, we assess the performance of the two proposed algorithms via numerical results.
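An elementary ingredient of ADMM schemes with constant-modulus constraints is the projection onto the constant-modulus set, which keeps each entry's phase and fixes its magnitude. A sketch of just this step (a generic building block, not the authors' full double-ADMM iteration):

```python
import numpy as np

def project_constant_modulus(x, amplitude=1.0):
    """Nearest constant-modulus vector to x: retain each entry's phase,
    set each entry's magnitude to the prescribed amplitude."""
    return amplitude * np.exp(1j * np.angle(x))

x = np.array([3 + 4j, 0.5j, -2.0 + 0j])
y = project_constant_modulus(x)      # every entry now has |y_i| = 1
```

Because the constraint set is a product of circles, this projection decouples entrywise, which is what makes the ADMM subproblem cheap.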

Arash Mohammadi;Konstantinos N. Plataniotis; "Event-Based Estimation With Information-Based Triggering and Adaptive Update," vol.65(18), pp.4924-4939, Sept. 15, 2017. This paper is motivated by recent advancements in cyber-physical systems and the significance of managing limited communication resources in their applications. We propose an open-loop estimation strategy with an information-based triggering mechanism coupled with an adaptive event-based fusion framework. In the open-loop topology considered in this paper, a sensor transfers its measurements to a remote estimator only upon the occurrence of specific events (i.e., asynchronously). Each event is identified using a local stochastic triggering mechanism without feedback from the remote estimator and/or a local filter at the sensor level. We propose a particular stochastic triggering criterion based on the projection of the local observation into the state space, which in turn is a measure of the achievable gain in the local information state vector. Then, we investigate an unsupervised fusion model at the estimation side, where the estimator blindly listens to its communication channel without a priori information about the triggering mechanism of the sensor. An update mechanism with a Bayesian collapsing strategy is proposed to adaptively form state estimates at the estimator side in an unsupervised fashion. The estimator is adaptive in the sense that it is able to distinguish between having received an actual measurement or noise. The simulation results show that the proposed information-based triggering mechanism significantly outperforms its counterparts, especially at low communication rates, and confirm the effectiveness of the proposed unsupervised fusion methodology.

Po-Chih Chen;Borching Su;Yenming Huang; "Matrix Characterization for GFDM: Low Complexity MMSE Receivers and Optimal Filters," vol.65(18), pp.4940-4955, Sept. 15, 2017. In this paper, a new matrix-based characterization of generalized-frequency-division-multiplexing (GFDM) transmitter matrices is proposed, as opposed to the traditional vector-based characterization with prototype filters. The characterization facilitates deriving properties of GFDM (transmitter) matrices, including conditions for GFDM matrices to be nonsingular and unitary, respectively. Using the new characterization, necessary and sufficient conditions for the existence of a form of low-complexity implementation of a minimum mean square error (MMSE) receiver are derived. Such an implementation exists under multipath channels if the GFDM transmitter matrix is selected to be unitary. For cases where this implementation does not exist, a low-complexity suboptimal MMSE receiver is proposed, with performance approximating that of an MMSE receiver. The new characterization also enables derivations of optimal prototype filters in terms of minimizing the receiver mean square error (MSE). They are found to correspond to the use of unitary GFDM matrices under many scenarios. The use of such optimal filters in GFDM systems does not cause noise enhancement, thereby achieving the same MSE performance as orthogonal frequency division multiplexing. Moreover, unitary GFDM matrices whose size is a power of two are verified to exist. Finally, while the out-of-band (OOB) radiation performance of systems using a unitary GFDM matrix is not optimal in general, it is shown that the OOB radiation can be satisfactorily low if the parameters in the new characterization are carefully chosen.

Jian Lan;X. Rong Li; "Multiple Conversions of Measurements for Nonlinear Estimation," vol.65(18), pp.4956-4970, Sept. 15, 2017. A multiple conversion approach (MCA) to nonlinear estimation is proposed in this paper. It jointly considers multiple hypotheses on the joint distribution of the quantity to be estimated and its measurement. The overall MCA estimate is a probabilistically weighted sum of the hypothesis conditional estimates. To describe the hypothesized joint distributions used to match the truth, a general distribution form characterized by a (linear or nonlinear) measurement conversion is found. This form is more general than Gaussian and includes Gaussian as a special case. Moreover, the minimum mean square error (MMSE) optimal estimate, given a hypothesized distribution in this form, is simply the linear MMSE (LMMSE) estimate using the converted measurement. LMMSE-based estimators, including the original LMMSE estimator and its generalization—the recently proposed uncorrelated conversion based filter—can all be incorporated into the MCA framework. Given a nonlinear problem, a specific form of the hypothesized distribution can be optimally obtained by quadratic programming using the information in the nonlinear measurement function and the measurement conversion. Then, the MCA estimate can be obtained easily. For dynamic problems, an interacting multiple conversion algorithm is proposed for recursive estimation. The MCA approach has a simple and flexible structure and takes advantage of multiple LMMSE-based nonlinear estimators. The overall estimates are obtained adaptively depending on the performance of the candidate estimators. Simulation results demonstrate the effectiveness of the proposed approach compared with other nonlinear filters.

David A. Castañón;Theodoris Tsiligkaridis;Alfred O. Hero; "Corrections to “On Decentralized Estimation With Active Queries”," vol.65(18), pp.4971-4972, Sept. 15, 2017. We provide a counterexample to a key lemma used in the proofs of the convergence of decentralized estimation algorithms in [2]. We also provide an alternative lemma that establishes a new proof of the convergence results in the paper [2].

Rémy Boyer;Behtash Babadi;Nicholas Kalouptsidis;Vahid Tarokh; "Corrections to “Asymptotic Achievability of the Cramér–Rao Bound for Noisy Compressive Sampling”," vol.65(18), pp.4973-4974, Sept. 15, 2017. Given <inline-formula><tex-math notation="LaTeX">$N$</tex-math></inline-formula> noisy measurements denoted by <inline-formula><tex-math notation="LaTeX">${\mathbf y}$</tex-math></inline-formula> and an overcomplete Gaussian dictionary, <inline-formula><tex-math notation="LaTeX">${\mathbf A}$</tex-math></inline-formula>, the authors of [1] establish the existence and the asymptotic statistical efficiency of an unbiased estimator unaware of the locations of the nonzero entries, collected in set <inline-formula> <tex-math notation="LaTeX">$\mathcal {I}$</tex-math></inline-formula>, in the deterministic <inline-formula> <tex-math notation="LaTeX">$L$</tex-math></inline-formula>-sparse signal <inline-formula><tex-math notation="LaTeX"> ${\mathbf x}$</tex-math></inline-formula>. More precisely, there exists an estimator <inline-formula> <tex-math notation="LaTeX">${\hat{\mathbf {x}}}({\mathbf y}, {\mathbf A})$</tex-math></inline-formula> unaware of set <inline-formula><tex-math notation="LaTeX">$\mathcal {I}$</tex-math></inline-formula> whose variance reaches the oracle Cramér–Rao bound in the asymptotic scenario, i.e., for <inline-formula><tex-math notation="LaTeX"> $N,L\rightarrow \infty$</tex-math></inline-formula> and <inline-formula><tex-math notation="LaTeX">$L/N \rightarrow \alpha \in (0,1)$</tex-math></inline-formula>. As was noted in the paper “Fundamental limits and constructive methods for estimation and sensing of sparse signals” by B. Babadi, the existence proof remains true even though Lemma 3.5 and (20) of the paper “Asymptotic achievability of the Cramer–Rao bound for noisy compressive sampling” are inexact.
In this note, the exact closed-form expression of the variance of the estimator <inline-formula><tex-math notation="LaTeX">${\hat{\mathbf {x}}}({\mathbf y}, {\mathbf A})$</tex-math></inline-formula> is provided, and its practical usefulness is numerically illustrated with the orthogonal matching pursuit estimator.

IEEE Signal Processing Letters - new TOC (2017 July 20) [Website]

Bogdan Dumitrescu; "Designing Incoherent Frames With Only Matrix–Vector Multiplications," vol.24(9), pp.1265-1269, Sept. 2017. Designing frames with low mutual coherence is a challenging problem, with many applications in signal processing. We adopt an atom-by-atom optimization strategy for obtaining frames with a given mutual coherence. The underlying min-max problem is transformed into a weighted least squares problem, approximately solved with the shifted power method. The resulting algorithm is extremely simple, works directly with the frame, and consists of almost only matrix–vector multiplications. Numerical experiments show that it is especially efficient for frames with a large overcompleteness factor and can design, in reasonable time, frames much larger than those obtained by other existing methods.
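The quantity being minimized here, mutual coherence, is the largest absolute inner product between distinct unit-norm atoms. A minimal NumPy sketch of that definition (the frame dimensions are arbitrary illustrations; the letter's algorithm avoids forming the full Gram matrix):

```python
import numpy as np

def mutual_coherence(F):
    """Largest absolute inner product between distinct unit-norm columns of F."""
    F = F / np.linalg.norm(F, axis=0)    # normalize the atoms
    G = np.abs(F.T @ F)                  # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
F = rng.standard_normal((16, 40))        # 40 atoms in dimension 16
mu = mutual_coherence(F)                 # Welch bound for (16, 40) is ≈ 0.196
```

For any 40-atom frame in dimension 16, mu cannot go below the Welch bound sqrt((40-16)/(16*39)) ≈ 0.196, which is the target such design algorithms approach.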

Xudong Li;Zilong Liu;Yong Liang Guan;Pingzhi Fan; "Two-Valued Periodic Complementary Sequences," vol.24(9), pp.1270-1274, Sept. 2017. We present a novel transform for periodic complementary sets (PCSs) over two-valued alphabets, obtained from a large set of difference families. This is achieved by generalizing Golomb's 1992 idea, which applied only to transforms of perfect sequences with zero autocorrelation. Based on the properties of difference families, a sufficient condition for such two-valued PCSs is derived. Systematic constructions of two-valued periodic complementary pairs are presented. It is shown that many lengths for which binary PCSs do not exist become admissible for our proposed two-valued PCSs.

Batu K. Chalise;Moeness G. Amin;Braham Himed; "Performance Tradeoff in a Unified Passive Radar and Communications System," vol.24(9), pp.1275-1279, Sept. 2017. Although radar and communication systems have so far been considered separately, recent advances in passive radar systems have motivated us to propose a unified system capable of fulfilling the requirements of both radar and communications. In this paper, we provide a performance tradeoff analysis for a system consisting of a transmitter, a passive radar receiver, and a communication receiver (CR). The total power is allocated between the radar waveforms and the information signals in such a way that the probability of detection (PD) is maximized while the information rate requirement of the CR is satisfied. An exact closed-form expression for the probability of false alarm (PFA) is derived, whereas PD is approximated by assuming that the signal-to-noise ratio of the reference channel is typically much larger than that of the surveillance channel. The performance tradeoff between the radar and communication subsystems is then characterized by the boundaries of the PFA-rate and PD-rate regions.

Simon Foucart;Guillaume Lecué; "An IHT Algorithm for Sparse Recovery From Subexponential Measurements," vol.24(9), pp.1280-1283, Sept. 2017. A matrix whose entries are independent subexponential random variables is not likely to satisfy the classical restricted isometry property in the optimal regime of parameters. However, it is known that uniform sparse recovery is still possible with high probability in the optimal regime if one uses <inline-formula><tex-math notation="LaTeX"> $\ell _1$</tex-math></inline-formula>-minimization as a recovery algorithm. We show in this letter that such a statement remains valid if one uses a new variation of iterative hard thresholding as a recovery algorithm. The argument is based on a modified restricted isometry property featuring the <inline-formula><tex-math notation="LaTeX"> $\ell _1$</tex-math></inline-formula>-norm as the inner norm.
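For reference, classical iterative hard thresholding alternates a gradient step on the residual with keeping only the s largest entries; the letter's variation changes the restricted-isometry analysis rather than this basic loop. A standard-IHT sketch on a toy noiseless Gaussian problem (dimensions are illustrative):

```python
import numpy as np

def iht(A, y, s, iters=300):
    """Classical iterative hard thresholding for s-sparse recovery from y = A x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)         # gradient step on ||y - A x||^2 / 2
        small = np.argsort(np.abs(x))[:-s]       # indices of all but the s largest
        x[small] = 0.0                           # hard thresholding
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)  # random measurement matrix
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]            # 3-sparse ground truth
x_hat = iht(A, A @ x_true, s=3)
```

Under a suitable restricted isometry condition the iteration contracts linearly toward the sparse ground truth.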

Akbar Assa;Konstantinos N. Plataniotis; "Adaptive Kalman Filtering by Covariance Sampling," vol.24(9), pp.1288-1292, Sept. 2017. It is well known that the performance of the Kalman filter deteriorates when the system noise statistics are not available a priori. In particular, the adjustment of the measurement noise covariance is deemed paramount, as it directly affects the estimation accuracy and plays a key role in applications such as sensor selection and sensor fusion. This letter proposes a novel adaptive scheme that approximates the measurement noise covariance distribution through finite samples, assuming the noise to be white with a normal distribution. Exploiting these samples to approximate the a posteriori system state leads to a Gaussian mixture model (GMM), the components of which are acquired by Kalman filtering. The resultant GMM is then reduced to the closest normal distribution and also used to estimate the measurement noise covariance. Compared to previous adaptive techniques, the proposed method adapts faster to the unknown parameters and thus provides higher estimation accuracy, which is confirmed by the simulation results.
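Reducing a Gaussian mixture to "the closest normal distribution" is commonly done by moment matching, i.e., keeping the mixture's overall mean and covariance. A sketch under that assumption (not necessarily the authors' exact reduction rule):

```python
import numpy as np

def collapse_gmm(weights, means, covs):
    """Moment-match a Gaussian mixture to a single Gaussian."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    means = np.asarray(means, dtype=float)
    mu = w @ means                                    # overall mixture mean
    cov = np.zeros((means.shape[1],) * 2)
    for wi, mi, Pi in zip(w, means, covs):
        d = (mi - mu)[:, None]
        cov += wi * (Pi + d @ d.T)                    # within- plus between-component spread
    return mu, cov

mu, cov = collapse_gmm([0.5, 0.5],
                       [[0.0, 0.0], [2.0, 0.0]],
                       [np.eye(2), np.eye(2)])        # mu = [1, 0], cov = diag(2, 1)
```

The between-component term d @ d.T is what inflates the collapsed covariance beyond the individual component covariances.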

Jonatan Ostrometzky;Hagit Messer; "Comparison of Different Methodologies of Parameter-Estimation From Extreme Values," vol.24(9), pp.1293-1297, Sept. 2017. This letter deals with the case where parameter estimation is required, but only observations of extreme values (i.e., the minimum observed value and/or the maximum observed value per interval) are available. We describe the theoretical grounds of the three leading methodologies of estimation from extremes, discuss the relations between them, and analyze the tradeoffs of the different methodologies with respect to the performance (accuracy), complexity, and robustness of the estimates. We then demonstrate our evaluations via a specially designed simulation, which validates our results.

Amjad Saeed Khan;Ioannis Chatzigeorgiou; "Non-Orthogonal Multiple Access Combined With Random Linear Network Coded Cooperation," vol.24(9), pp.1298-1302, Sept. 2017. This letter considers two groups of source nodes. Each group transmits packets to its own designated destination node over single-hop links and via a cluster of relay nodes shared by both groups. In an effort to boost reliability without sacrificing throughput, a scheme is proposed whereby packets at the relay nodes are combined using two methods: packets delivered by different groups are mixed using non-orthogonal multiple access principles, while packets originating from the same group are mixed using random linear network coding. An analytical framework that characterizes the performance of the proposed scheme is developed, compared to simulation results, and benchmarked against a counterpart scheme that is based on orthogonal multiple access.

Wei Jiang;Alexander M. Haimovich; "Cramer–Rao Bound for Noncoherent Direction of Arrival Estimation in the Presence of Sensor Location Errors," vol.24(9), pp.1303-1307, Sept. 2017. In previous work, we have shown that the noncoherent direction of arrival (DOA) estimation by an array of sensors is insensitive to sensor phase errors. Here, we analyze the effects of sensor location errors on noncoherent DOA estimation. In the presence of additive white Gaussian noise, we show that the distribution of magnitude-only measurements collected by a sensor array may be approximated by a Gaussian distribution. Further, if one of the sources is received with high signal-to-noise ratio, the mean of the Gaussian distribution is a function of location errors while the variance of the Gaussian distribution is invariant with location errors. We derive a closed-form analytical expression for the hybrid Cramer–Rao bound of the spatial frequency difference between two sources. Numerical results illustrate that, unlike phase errors that have no effect on noncoherent DOA estimation, both noncoherent and coherent DOA estimation are affected by location errors.

IEEE Journal of Selected Topics in Signal Processing - new TOC (2017 July 20) [Website]

* "Frontcover," vol.11(4), pp.C1-C1, June 2017.* Presents the front cover for this issue of the publication.

* "IEEE Journal of Selected Topics in Signal Processing publication information," vol.11(4), pp.C2-C2, June 2017.* Presents a listing of the editorial board, current staff, committee members, and society editors for this publication.

* "Table of Contents," vol.11(4), pp.583-583, June 2017.* Presents the table of contents for this issue of the publication.

Junichi Yamagishi;Tomi H. Kinnunen;Nicholas Evans;Phillip De Leon;Isabel Trancoso; "Introduction to the Issue on Spoofing and Countermeasures for Automatic Speaker Verification," vol.11(4), pp.585-587, June 2017. The papers in this special issue focus on automatic speaker verification (ASV) technologies and applications for their use. ASV offers a low-cost and flexible solution to biometric authentication. While the reliability of ASV systems is now considered sufficient to support mass-market adoption, there are concerns that the technology is vulnerable to spoofing, also referred to as presentation attacks. Spoofing refers to an attack whereby a fraudster attempts to manipulate a biometric system by masquerading as another, enrolled person. Replayed, synthesized, and converted speech spoofing attacks can all be used to present high-quality, convincing speech signals that are representative of other, specific speakers and thus present a genuine threat to the reliability of ASV authentication systems.

Zhizheng Wu;Junichi Yamagishi;Tomi Kinnunen;Cemal Hanilçi;Mohammed Sahidullah;Aleksandr Sizov;Nicholas Evans;Massimiliano Todisco; "ASVspoof: The Automatic Speaker Verification Spoofing and Countermeasures Challenge," vol.11(4), pp.588-604, June 2017. Concerns regarding the vulnerability of automatic speaker verification (ASV) technology to spoofing can undermine confidence in its reliability and form a barrier to exploitation. The absence of competitive evaluations and the lack of common datasets have hampered progress in developing effective spoofing countermeasures. This paper describes the ASV Spoofing and Countermeasures (ASVspoof) initiative, which aims to fill this void. Through the provision of a common dataset, protocols, and metrics, ASVspoof promotes a sound research methodology and fosters technological progress. This paper also describes the ASVspoof 2015 dataset, evaluation, and results with detailed analyses. A review of postevaluation studies conducted using the same dataset illustrates the rapid progress stemming from ASVspoof and outlines the need for further investigation. Priority future research directions are presented in the scope of the next ASVspoof evaluation, planned for 2017.

Dipjyoti Paul;Monisankha Pal;Goutam Saha; "Spectral Features for Synthetic Speech Detection," vol.11(4), pp.605-617, June 2017. Recent advancements in voice conversion (VC) and speech synthesis research make speech-based biometric systems highly prone to spoofing attacks. This can provoke an increase in the false acceptance rate of such systems and requires countermeasures to mitigate such spoofing attacks. In this paper, we first study the characteristics of synthetic speech vis-à-vis natural speech and then propose a set of novel short-term spectral features that can efficiently capture the discriminative information between them. The proposed features are computed using an inverted frequency warping scale and overlapped block transformation of filter bank log energies. Our study presents a detailed analysis of antispoofing performance with respect to the variations in the warping scale for inverted frequency and the block size for the block transform. For performance analysis, a Gaussian mixture model (GMM) based synthetic speech detector is used as a classifier on a stand-alone basis and also integrated with automatic speaker verification (ASV) systems. For the ASV systems, standard mel-frequency cepstral coefficients are used as features, while GMM with universal background model and i-vector are used as classifiers. The experiments are conducted on ten different kinds of synthetic data from the ASVspoof 2015 corpus. The results show that countermeasures based on the proposed features outperform other spectral features for both known and unknown attacks. An average equal error rate (EER) of 0.00% is achieved for the nine attacks that use VC or SS speech, and an EER of 7.12% is achieved for the remaining spoofing attack, which is based on natural speech concatenation.
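The equal error rate quoted in this and the following papers is the operating point at which the false-acceptance and false-rejection rates coincide. A minimal sketch on synthetic detector scores (the score distributions are illustrative only):

```python
import numpy as np

def equal_error_rate(genuine, spoof):
    """Approximate EER: sweep thresholds, find where FAR and FRR balance."""
    best = 1.0
    for t in np.sort(np.concatenate([genuine, spoof])):
        far = np.mean(spoof >= t)        # spoofed trials accepted
        frr = np.mean(genuine < t)       # genuine trials rejected
        best = min(best, max(far, frr))  # closest balanced operating point
    return best

rng = np.random.default_rng(2)
genuine = rng.normal(2.0, 1.0, 1000)     # detector scores for genuine speech
spoof = rng.normal(0.0, 1.0, 1000)       # detector scores for spoofed speech
eer = equal_error_rate(genuine, spoof)   # theoretically Q(1) ≈ 0.159 here
```

Production toolkits interpolate the ROC to find the exact crossing; the min-max sweep above is the simplest empirical approximation.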

Tanvina B. Patel;Hemant A. Patil; "Cochlear Filter and Instantaneous Frequency Based Features for Spoofed Speech Detection," vol.11(4), pp.618-631, June 2017. The vulnerability of voice biometrics systems to spoofing attacks by synthetic speech (SS) and voice-converted (VC) speech has given rise to the need for standalone spoofed speech detection (SSD) systems. This paper is an extension of our previously proposed features (used in the relatively best performing SSD system) at the first ASVspoof 2015 challenge held at INTERSPEECH 2015. For the challenge, the authors proposed novel features based on cochlear filter cepstral coefficients (CFCC) and instantaneous frequency (IF), i.e., CFCCIF. The basic motivation behind this is that the human ear processes speech in subbands. The envelope of each subband and its IF are important for the perception of speech. In addition, the transient information also adds to the perceptual information that is captured. We observed that subband energy variations across CFCCIF, when estimated by symmetric difference (CFCCIFS), gave better discriminative properties than CFCCIF. The features are extracted at the frame level, and a Gaussian mixture model based classification system was used. Experiments were conducted on the ASVspoof 2015 challenge database with MFCC, CFCC, CFCCIF, and CFCCIFS features. On the evaluation dataset, after score-level fusion with MFCC, the CFCCIFS features gave an overall equal error rate (EER) of 1.45% as compared to 1.87% and 1.61% with CFCCIF and CFCC, respectively. In addition to detecting known and unknown attacks, intensive experiments were conducted to study the effectiveness of the features under the condition that either only SS or only VC speech is available for training. It was observed that when only VC speech is used in training, both VC as well as SS speech can be detected. However, when only SS is used in training, VC speech was not detected.
In general, amongst vocoder-based spoofs, it was observed that VC speech is more difficult to detect than SS by the SSD system. However, vocoder-independent SS was the toughest, with the highest EER (i.e., > 10%).

Kaavya Sriskandaraja;Vidhyasaharan Sethu;Eliathamby Ambikairajah;Haizhou Li; "Front-End for Antispoofing Countermeasures in Speaker Verification: Scattering Spectral Decomposition," vol.11(4), pp.632-643, June 2017. As speaker verification is widely used as a means of verifying personal identity in commercial applications, the study of antispoofing countermeasures has become increasingly important. By choosing appropriate spectral and prosodic feature mappings, spoofing methods based on voice conversion and speech synthesis are both capable of deceiving speaker verification systems that typically rely on these features. Consequently, alternative front-ends are required for effective spoofing detection. This paper investigates the use of the recently proposed hierarchical scattering decomposition technique, which can be viewed as a generalization of all constant-Q spectral decompositions, to implement front-ends for stand-alone spoofing detection. The coefficients obtained using this decomposition are converted to a feature vector of Scattering Cepstral Coefficients (SCCs). We evaluate the performance of SCCs on the recent Spoofing and Antispoofing (SAS) corpus as well as the ASVspoof 2015 challenge corpus and show that SCCs are superior to all other front-ends that have previously been benchmarked on the ASVspoof corpus.

Tanvina B. Patel;Hemant A. Patil; "Significance of Source–Filter Interaction for Classification of Natural vs. Spoofed Speech," vol.11(4), pp.644-659, June 2017. Countermeasures used to detect synthetic and voice-converted spoofed speech are usually based on excitation source or system features. However, in the natural speech production mechanism, there exists nonlinear source–filter (S–F) interaction as well. This interaction is an attribute of natural speech and is rarely present in synthetic or voice-converted speech. Therefore, we propose features based on the S–F interaction for a spoofed speech detection (SSD) task. To that effect, we estimate the voice excitation source (i.e., differenced glottal flow waveform, <inline-formula><tex-math notation="LaTeX">$\dot{g} (t))$</tex-math></inline-formula> and model it using the well-known Liljencrants–Fant model to get coarse structure, <inline-formula><tex-math notation="LaTeX"> $g_{c}(t)$</tex-math></inline-formula>. The residue or difference, <inline-formula><tex-math notation="LaTeX"> $g_{r}(t)$</tex-math></inline-formula>, between <inline-formula><tex-math notation="LaTeX">$\dot{g} (t)$</tex-math> </inline-formula> and <inline-formula><tex-math notation="LaTeX">$g_{c}(t)$</tex-math></inline-formula> is known to capture the nonlinear S–F interaction. In the time domain, the <inline-formula><tex-math notation="LaTeX"> $L^{2}$</tex-math></inline-formula> norms of <inline-formula><tex-math notation="LaTeX">$g_{r}(t)$</tex-math> </inline-formula> in the closed, open, and return phases of the glottis are considered as features. In the frequency domain, the Mel representation of <inline-formula><tex-math notation="LaTeX">$g_{r}(t)$</tex-math></inline-formula> showed a significant contribution to the SSD task. The proposed features are evaluated on the first ASVspoof 2015 challenge database using a Gaussian mixture model based classification system. 
On the evaluation set, for vocoder-based spoofs (i.e., S1–S9), the score-level fusion of residual energy features, the Mel representation of the residual signal, and Mel frequency cepstral coefficient (MFCC) features gave an equal error rate (EER) of 0.017%, which is much less than the 0.319% obtained with MFCC alone. Furthermore, the residues of the spectrogram (as well as the Mel-warped spectrogram) of the estimated <inline-formula> <tex-math notation="LaTeX">$\dot{g} (t)$</tex-math></inline-formula> and <inline-formula><tex-math notation="LaTeX"> $g_{c}(t)$</tex-math></inline-formula> are also explored as features for the SSD task. The features are evaluated for robustness in the presence of additive white, babble, and car noise at various signal-to-noise-ratio levels on the ASVspoof 2015 database and under a channel mismatch condition on the Blizzard Challenge 2012 dataset. For both cases, the proposed features gave significantly lower EER than that obtained by MFCC on the evaluation set.

Longbiao Wang;Seiichi Nakagawa;Zhaofeng Zhang;Yohei Yoshida;Yuta Kawakami; "Spoofing Speech Detection Using Modified Relative Phase Information," vol.11(4), pp.660-670, June 2017. The detection of human versus spoofed (synthetic or converted) speech has started to receive an increasing amount of attention. In this paper, modified relative phase (MRP) information extracted from a Fourier spectrum is proposed for spoofing speech detection. Because original phase information is almost entirely lost in spoofed speech under current synthesis or conversion techniques, phase information extraction methods, such as the modified group delay feature and the cosine phase feature, have been shown to be effective for distinguishing human speech from spoofed speech. However, existing phase information-based features cannot achieve very high spoofing speech detection performance because they do not extract precise phase information from speech. Relative phase (RP) information, which extracts phase information precisely, has been shown to be effective for speaker recognition. In this paper, RP information is applied to spoofing speech detection, where it is expected to achieve better spoofing detection performance. Furthermore, two modified processing techniques of the original RP, namely pseudo pitch synchronization and linear discriminant analysis based full-band RP extraction, are proposed in this paper. In this study, MRP information is also combined with the Mel-frequency cepstral coefficient (MFCC) and the modified group delay. The proposed method was evaluated using the ASVspoof 2015: Automatic Speaker Verification Spoofing and Countermeasures Challenge dataset. The results show that the proposed MRP information significantly outperforms the MFCC, the modified group delay, and other phase information based features. For the development dataset, the equal error rate (EER) was reduced from 1.883% with the MFCC and 0.567% with the modified group delay to 0.013% with the MRP.
By combining the RP with the MFCC and modified group delay, the EER was reduced to 0.003%. For the evaluation dataset, the MRP obtained much better performance than the magnitude-based feature and other phase-based features, except for the S10 spoofing speech.

Cenk Demiroglu;Osman Buyuk;Ali Khodabakhsh;Ranniery Maia; "Postprocessing Synthetic Speech With a Complex Cepstrum Vocoder for Spoofing Phase-Based Synthetic Speech Detectors," vol.11(4), pp.671-683, June 2017. State-of-the-art speaker verification systems are vulnerable to spoofing attacks. To address the issue, high-performance synthetic speech detectors (SSDs) for existing spoofing methods have been proposed. Phase-based SSDs that exploit the fact that most of the parametric speech coders use minimum-phase filters are particularly successful when synthetic speech is generated with a parametric vocoder. Here, we propose a new attack strategy to spoof phase-based SSDs with the objective of increasing the security of voice verification systems by enabling the development of more generalized SSDs. As opposed to other parametric vocoders, the complex cepstrum approach uses mixed-phase filters, which makes it an ideal candidate for spoofing the phase-based SSDs. We propose using a complex cepstrum vocoder as a postprocessor to existing techniques to spoof the speaker verification system as well as the phase-based SSDs. Once synthetic speech is generated with a speech synthesis or a voice conversion technique, for each synthetic speech frame, a natural frame is selected from a training database using a spectral distance measure. Then, complex cepstrum parameters of the natural frame are used for resynthesizing the synthetic frame. In the proposed method, complex cepstrum-based resynthesis is used as a postprocessor. Hence, it can be used in tandem with any synthetic speech generator. Experimental results showed that the approach is successful at spoofing four phase-based SSDs across nine parametric attack algorithms. Moreover, performance at spoofing the speaker verification system did not substantially degrade compared to the case when no postprocessor is employed.

Chunlei Zhang;Chengzhu Yu;John H. L. Hansen; "An Investigation of Deep-Learning Frameworks for Speaker Verification Antispoofing," vol.11(4), pp.684-694, June 2017. In this study, we explore the use of deep-learning approaches for spoofing detection in speaker verification. Most spoofing detection systems that have achieved recent success employ hand-craft features with specific spoofing prior knowledge, which may limit the feasibility to unseen spoofing attacks. We aim to investigate the genuine-spoofing discriminative ability from the back-end stage, utilizing recent advancements in deep-learning research. In this paper, alternative network architectures are exploited to target spoofed speech. Based on this analysis, a novel spoofing detection system, which simultaneously employs convolutional neural networks (CNNs) and recurrent neural networks (RNNs) is proposed. In this framework, CNN is treated as a convolutional feature extractor applied on the speech input. On top of the CNN processed output, recurrent networks are employed to capture long-term dependencies across the time domain. Novel features including Teager energy operator critical band autocorrelation envelope, perceptual minimum variance distortionless response, and a more general spectrogram are also investigated as inputs to our proposed deep-learning frameworks. Experiments using the ASVspoof 2015 Corpus show that the integrated CNN–RNN framework achieves state-of-the-art single-system performance. The addition of score-level fusion further improves system robustness. A detailed analysis shows that our proposed approach can potentially compensate for the issue due to short duration test utterances, which is also an issue in the evaluation corpus.

Pavel Korshunov;Sébastien Marcel; "Impact of Score Fusion on Voice Biometrics and Presentation Attack Detection in Cross-Database Evaluations," vol.11(4), pp.695-705, June 2017. Research in the area of automatic speaker verification (ASV) has been advanced enough for the industry to start using ASV systems in practical applications. However, these systems are highly vulnerable to spoofing or presentation attacks, limiting their wide deployment. Therefore, it is important to develop mechanisms that can detect such attacks, and it is equally important for these mechanisms to be seamlessly integrated into existing ASV systems for practical and attack-resistant solutions. To be practical, however, an attack detection should (i) have high accuracy, (ii) be well-generalized for different attacks, and (iii) be simple and efficient. Several audio-based presentation attack detection (PAD) methods have been proposed recently but their evaluation was usually done on a single, often obscure, database with limited number of attacks. Therefore, in this paper, we conduct an extensive study of eight state-of-the-art PAD methods and evaluate their ability to detect known and unknown attacks (e.g., in a cross-database scenario) using two major publicly available speaker databases with spoofing attacks: AVspoof and ASVspoof. We investigate whether combining several PAD systems via score fusion can improve attack detection accuracy. We also study the impact of fusing PAD systems (via parallel and cascading schemes) with two i-vector and inter-session variability based ASV systems on the overall performance in both bona fide (no attacks) and spoof scenarios. The evaluation results question the efficiency and practicality of the existing PAD systems, especially when comparing results for individual databases and cross-database data. Fusing several PAD systems can lead to a slightly improved performance; however, how to select which systems to fuse remains an open question. 
Joint ASV-PAD systems show a significantly increased resistance to the attacks at the expense of slightly degraded performance for bona fide scenarios.

* "IEEE Journal of Selected Topics in Signal Processing information for authors," vol.11(4), pp.706-707, June 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "Introducing IEEE collabratec," vol.11(4), pp.708-708, June 2017.* IEEE Collabratec is a new, integrated online community where IEEE members, researchers, authors, and technology professionals with similar fields of interest can network and collaborate, as well as create and manage content. Featuring a suite of powerful online networking and collaboration tools, IEEE Collabratec allows you to connect according to geographic location, technical interests, or career pursuits. You can also create and share a professional identity that showcases key accomplishments and participate in groups focused around mutual interests, actively learning from and contributing to knowledgeable communities. All in one place! Learn about IEEE Collabratec at ieeecollabratec.org.

* "Become a published author in 4 to 6 weeks," vol.11(4), pp.709-709, June 2017.* Advertisement, IEEE.

* "Expand Your Network, Get Rewarded," vol.11(4), pp.710-710, June 2017.* Advertisement, IEEE.

* "IEEE Signal Processing Society Information," vol.11(4), pp.C3-C3, June 2017.* Presents a listing of the editorial board, current staff, committee members, and society editors for this publication.

* "Blank Page," vol.11(4), pp.C4-C4, June 2017.* This page or pages intentionally left blank.

IEEE Signal Processing Magazine - new TOC (2017 July 20) [Website]

* "Front Cover," vol.34(4), pp.C1-C1, July 2017.* Presents the front cover for this issue of the publication.

* "ICASSP18 Announcement," vol.34(4), pp.C2-C2, July 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "Table of Contents," vol.34(4), pp.1-2, July 2017.* Presents the table of contents for this issue of the publication.

* "Masthead," vol.34(4), pp.2-2, July 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Min Wu; "Innovations Powered by Signal Processing [From the Editor]," vol.34(4), pp.3-9, July 2017. Presents the introductory editorial for this issue of the publication.

Rabab Ward; "Mind the (Gender) Gap [President's Message]," vol.34(4), pp.4-5, July 2017. Presents the President’s message for this issue of the publication.

Chungshui Zhang; "Top Downloads in IEEE Xplore [Reader's Choice]," vol.34(4), pp.6-7, July 2017. Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles.

* "Nominations Open for 2017 IEEE Signal Processing Society Awards [Society News]," vol.34(4), pp.8-9, July 2017.* Presents the guidelines for SPS society awards.

Andres Kwasinski;Min Wu; "What Do You Consider a “Successful” Career?: Perspectives from signal processing-trained professionals [Community Voices]," vol.34(4), pp.10-12, July 2017. The motivation behind this column is to strengthen ties with readers and members in the signal processing community. In doing so, we set out to collect reflections from diverse members of our community on questions that are of interest to many. This installment examines the training necessary for a career in signal processing and explores how one can determine success in that career.

John Edwards; "Innovative Sensors Promise Longer and Healthier Lives: Signal processing leads to devices that provide faster and more insightful monitoring and diagnoses [Special Reports]," vol.34(4), pp.14-17, July 2017. Reports on new and innovative sensor technologies for the health-care market. The health-care and medical applications sensor market is projected to expand at a compound annual growth rate of 13.1% between 2016 and 2022, according to a report issued in March 2017 by the research firm Frost & Sullivan. A key factor driving sensor sales is the growing availability of consumer and clinical devices that use sensor technology to diagnose, monitor, and track disease and fitness. Within the next few years, an emerging generation of smaller, less expensive, and highly sophisticated sensors will find their way into a wide range of personal and professional devices. With more patient care moving out of hospitals, the use of sensor-enabled home diagnostic and monitoring devices is expected to soar, the report notes. The market for sensors used in wearable health and fitness devices is also poised to grow rapidly.

* "Membership filler," vol.34(4), pp.17-17, July 2017.* Advertisement, IEEE.

Michael M. Bronstein;Joan Bruna;Yann LeCun;Arthur Szlam;Pierre Vandergheynst; "Geometric Deep Learning: Going beyond Euclidean data," vol.34(4), pp.18-42, July 2017. Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them.

Soheil Kolouri;Se Rim Park;Matthew Thorpe;Dejan Slepcev;Gustavo K. Rohde; "Optimal Mass Transport: Signal processing and machine-learning applications," vol.34(4), pp.43-59, July 2017. Transport-based techniques for signal and data analysis have recently received increased interest. Given their ability to provide accurate generative models for signal intensities and other data distributions, they have been used in a variety of applications, including content-based retrieval, cancer detection, image superresolution, and statistical machine learning, to name a few, and they have been shown to produce state-of-the-art results. Moreover, the geometric characteristics of transport-related metrics have inspired new kinds of algorithms for interpreting the meaning of data distributions. Here, we provide a practical overview of the mathematical underpinnings of mass transport-related methods, including numerical implementation, as well as a review, with demonstrations, of several applications. Software accompanying this article is available from [43].

Monica F. Bugallo;Victor Elvira;Luca Martino;David Luengo;Joaquin Miguez;Petar M. Djuric; "Adaptive Importance Sampling: The past, the present, and the future," vol.34(4), pp.60-79, July 2017. A fundamental problem in signal processing is the estimation of unknown parameters or functions from noisy observations. Important examples include localization of objects in wireless sensor networks [1] and the Internet of Things [2]; multiple source reconstruction from electroencephalograms [3]; estimation of power spectral density for speech enhancement [4]; or inference in genomic signal processing [5]. Within the Bayesian signal processing framework, these problems are addressed by constructing posterior probability distributions of the unknowns. The posteriors combine optimally all of the information about the unknowns in the observations with the information that is present in their prior probability distributions. Given the posterior, one often wants to make inference about the unknowns, e.g., if we are estimating parameters, finding the values that maximize their posterior or the values that minimize some cost function given the uncertainty of the parameters. Unfortunately, obtaining closed-form solutions to these types of problems is infeasible in most practical applications, and therefore, developing approximate inference techniques is of utmost interest.
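The generic adaptive importance sampling loop the article surveys — sample from a proposal, compute self-normalized importance weights against the (possibly unnormalized) posterior, then adapt the proposal from the weighted sample — can be sketched in a few lines. The 1-D Gaussian target and the simple moment-matching adaptation below are illustrative assumptions, not a specific scheme from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Unnormalized target density: here a Gaussian N(3, 0.5^2)."""
    return np.exp(-0.5 * ((x - 3.0) / 0.5) ** 2)

def proposal_pdf(x, mu, sigma):
    """Gaussian proposal density N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

mu, sigma = 0.0, 2.0                       # deliberately mismatched initial proposal
for _ in range(20):                        # adaptation iterations
    x = rng.normal(mu, sigma, size=2000)   # 1. sample from the current proposal
    w = target(x) / proposal_pdf(x, mu, sigma)  # 2. importance weights
    w /= w.sum()                           # 3. self-normalize (target is unnormalized)
    mu = np.sum(w * x)                     # 4. moment-match the proposal to the
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))  #    weighted sample

posterior_mean = mu                        # estimate of the posterior mean
```

After a few iterations the proposal has moved onto the target, so the weights stop degenerating and the self-normalized estimate concentrates near the true mean of 3.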

Yubin Deng;Chen Change Loy;Xiaoou Tang; "Image Aesthetic Assessment: An experimental survey," vol.34(4), pp.80-106, July 2017. This article reviews recent computer vision techniques used in the assessment of image aesthetic quality. Image aesthetic assessment aims at computationally distinguishing high-quality from low-quality photos based on photographic rules, typically in the form of binary classification or quality scoring. A variety of approaches has been proposed in the literature to try to solve this challenging problem. In this article, we summarize these approaches based on visual feature types (hand-crafted features and deep features) and evaluation criteria (data set characteristics and evaluation metrics). The main contributions and novelties of the reviewed approaches are highlighted and discussed. In addition, following the emergence of deep-learning techniques, we systematically evaluate recent deep-learning settings that are useful for developing a robust deep model for aesthetic scoring.

Zixing Zhang;Nicholas Cummins;Bjoern Schuller; "Advanced Data Exploitation in Speech Analysis: An overview," vol.34(4), pp.107-129, July 2017. With recent advances in machine-learning techniques for automatic speech analysis (ASA)-the computerized extraction of information from speech signals-there is a greater need for high-quality, diverse, and very large amounts of data. Such data could be game-changing in terms of ASA system accuracy and robustness, enabling the extraction of feature representations or the learning of model parameters immune to confounding factors, such as acoustic variations, unrelated to the task at hand. However, many current ASA data sets do not meet the desired properties. Instead, they are often recorded under less than ideal conditions, with the corresponding labels sparse or unreliable.

John H.L. Hansen;Carlos Busso;Yang Zheng;Amardeep Sathyanarayana; "Driver Modeling for Detection and Assessment of Driver Distraction: Examples from the UTDrive Test Bed," vol.34(4), pp.130-142, July 2017. Vehicle technologies have advanced significantly over the past 20 years, especially with respect to novel in-vehicle systems for route navigation, information access, infotainment, and connected vehicle advancements for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity and communications. While there is great interest in migrating to fully automated, self-driving vehicles, factors such as technology performance, cost barriers, public safety, insurance issues, legal implications, and government regulations suggest it is more likely that the first step in the progression will be multifunctional vehicles. Today, embedded controllers as well as a variety of sensors and high-performance computing in present-day cars allow for a smooth transition from complete human control toward semisupervised or assisted control, then to fully automated vehicles. Next-generation vehicles will need to be more active in assessing driver awareness, vehicle capabilities, and traffic and environmental settings, plus how these factors come together to determine a collaborative safe and effective driver-vehicle engagement for vehicle operation. This article reviews a range of issues pertaining to driver modeling for the detection and assessment of distraction. Examples from the UTDrive project are used whenever possible, along with a comparison to existing research programs. The areas addressed include 1) understanding driver behavior and distraction, 2) maneuver recognition and distraction analysis, 3) glance behavior and visual tracking, and 4) mobile platform advancements for in-vehicle data collection and human-machine interface. 
This article highlights challenges in achieving effective modeling, detection, and assessment of driver distraction using both UTDrive instrumented-vehicle data and naturalistic driving data.

Craig T. Jin;Matthew E.P. Davies;Patrizio Campisi; "Embedded Systems Feel the Beat in New Orleans: Highlights from the IEEE Signal Processing Cup 2017 Student Competition [SP Competitions]," vol.34(4), pp.143-170, July 2017. Presents information and highlights from the IEEE Signal Processing Cup 2017 Student Competition.

Gianni Pasolini;Alessandro Bazzi;Flavio Zabini; "A Raspberry Pi-Based Platform for Signal Processing Education [SP Education]," vol.34(4), pp.151-158, July 2017. One of the most important application areas of signal processing (SP) is, without a doubt, the software-defined radio (SDR) field [1]-[3]. Although their introduction dates back to the 1980s, SDRs are now becoming the dominant technology in radio communications, thanks to the dramatic development of SP-optimized programmable hardware, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs). Today, the computational throughput of these devices is such that sophisticated SP tasks can be efficiently handled, so that both the baseband and intermediate frequency (IF) sections of current communication systems are usually implemented, according to the SDR paradigm, by the FPGA's reconfigurable circuitry (e.g., [4]-[6]), or by the software running on DSPs.

Waheed U. Bajwa; "On “Flipping” a Large Signal Processing Class [SP Education]," vol.34(4), pp.158-170, July 2017. The modern academy traces its roots back to the medieval universities established between the 12th and the 14th centuries [1]. Much has changed in the world of academia during the millennium that separates a modern university from a medieval one.

Heinrich Edgar Arnold Laue; "Demystifying Compressive Sensing [Lecture Notes]," vol.34(4), pp.171-176, July 2017. The conventional Nyquist-Shannon sampling theorem has been fundamental to the acquisition of signals for decades, relating a uniform sampling rate to the bandwidth of a signal. However, many signals can be compressed after sampling, implying a high level of redundancy. The theory of compressive sensing/sampling (CS) presents a sampling framework based on the rate of information of a signal and not the bandwidth, thereby minimizing redundancy during sampling. This means that a signal can be recovered from far fewer samples than conventionally required.
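To make the "far fewer samples than conventionally required" claim concrete, here is a minimal sketch of sparse recovery with orthogonal matching pursuit, one standard CS reconstruction algorithm (the problem sizes, sensing matrix, and coefficient values are arbitrary illustrative choices, not taken from the lecture note):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # select the dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares re-fit of the coefficients on the enlarged support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 256, 64, 4                         # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [5.0, -3.0, 4.0, -2.0]
y = A @ x                                    # only m = n/4 linear measurements
x_hat = omp(A, y, k)                         # exact recovery in the noiseless case
```

With a 4-sparse signal of length 256, the 64 random measurements suffice for OMP to identify the support and recover the coefficients exactly, far below the 256 samples a uniform-sampling view would demand.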

Ljubisa Stankovic;Milos Dakovic;Ervin Sejdic; "Vertex-Frequency Analysis: A Way to Localize Graph Spectral Components [Lecture Notes]," vol.34(4), pp.176-182, July 2017. Currently, brain and social networks are examples of new data types that are massively acquired and disseminated [1]. These networks typically consist of vertices (nodes) and edges (connections between nodes). Usually, information is conveyed through the strength of connection among nodes, but in recent years, it has been discovered that valuable information may also be conveyed in signals that occur on each vertex. However, traditional signal processing often does not offer reliable tools and algorithms to analyze such new data types. This is especially true for cases where networks (e.g., the strength of connections), or signals on vertices, have properties that change over the network.

Richard Lyons; "Digital Envelope Detection: The Good, the Bad, and the Ugly [Tips and Tricks]," vol.34(4), pp.183-187, July 2017. During a recent consulting job to analyze acoustic telemetry signals transmitted by a deep-sea drill pipe, I was forced to investigate a process called digital envelope detection. This process is used to estimate the instantaneous magnitude of a zero-mean fluctuating-amplitude digital signal. While much tutorial information regarding envelope detection is available, that information is spread out over a number of communications textbooks and many websites. The purpose of this article is to collect and describe various digital envelope detection methods in one concise and consistent lesson.
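One classic method in this family is envelope detection via the analytic signal; a minimal FFT-based Hilbert-transform sketch (equivalent in effect to `scipy.signal.hilbert`; the AM test signal below is an illustrative assumption, not an example from the article) is:

```python
import numpy as np

def envelope(x):
    """Instantaneous magnitude via the analytic signal
    (FFT-based discrete Hilbert transform)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                     # keep DC as-is
    if n % 2 == 0:
        h[n // 2] = 1.0            # keep the Nyquist bin as-is
        h[1:n // 2] = 2.0          # double positive frequencies,
    else:                          # zero negative frequencies
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# AM test signal: 100 Hz carrier, 5 Hz modulation, 1 s at fs = 1 kHz
fs = 1000
t = np.arange(fs) / fs
am = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)   # true envelope
x = am * np.cos(2 * np.pi * 100 * t)          # zero-mean modulated carrier
env = envelope(x)                             # recovers am to numerical precision
```

Because the carrier and modulation complete whole cycles in the record, the analytic signal here is exact and the detected envelope matches the true amplitude to machine precision; for general signals the FFT method has edge effects the article's "bad and ugly" cases cover.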

Magdy Bayoumi; "It was Really Lagniappe!: Highlights from ICASSP 2017 in New Orleans [Conference Highlights]," vol.34(4), pp.188-191, July 2017. Presents highlights from the ICASSP 2017 Conference.

* "Calendar [Dates Ahead]," vol.34(4), pp.192-192, July 2017.* Presents upcoming SPS society calendar of events and meetings.

* "ICIP Call for Participation," vol.34(4), pp.193-193, July 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

Nasir Memon; "How Biometric Authentication Poses New Challenges to Our Security and Privacy [In the Spotlight]," vol.34(4), pp.196-194, July 2017. Discusses the challenges that face biometric authentication in the areas of privacy and network security. The use of biometric data — an individual’s measurable physical and behavioral characteristics — isn’t new. Government and law enforcement agencies have long used it. The Federal Bureau of Investigation (FBI) has been building a biometric recognition database; the U.S. Department of Homeland Security is sharing its iris and facial recognition of foreigners with the FBI. But the use of biometric data by consumer goods manufacturers for authentication purposes has skyrocketed in recent years. For example, Apple’s iPhone allows users to scan their fingerprints to unlock the device, secure mobile bill records, and authenticate payments. Lenovo and Dell are companies that leverage fingerprints to enable users to sign onto their computers with just a swipe. Using biometric data to access our personal devices is increasing as a way to get around the limitations of the commonly used password-based mechanism: it’s easier, more convenient, and (theoretically) more secure. But biometric data can also be stolen and used in malicious ways. Capturing fingerprints at scale isn’t as easy as lifting a credit card or Social Security number, but experience and history tell us that once something is used extensively, criminals will figure out how to misuse and monetize it.

IET Signal Processing - new TOC (2017 July 20) [Website]

Mojtaba AminiOmam;Farah Torkamani-Azar;Seyed Ali Ghorashi; "Generalised Kalman-consensus filter," vol.11(5), pp.495-502, 7 2017. In this study, the authors propose a distributed form of the Kalman filter for non-linear dynamics as a generalised Kalman consensus filter (GKCF) and analytically prove its stability. More specifically, they obtain a sufficient condition for asymptotic convergence using Lyapunov analysis. For this purpose, the authors propose four lemmas and show that the Kalman consensus filter (KCF) in a linear system is a special case of the proposed GKCF.

Li Wanchun;Tang Qiu;Huang Chengfeng;Li Yingxiang; "Location algorithms for moving target in non-coherent distributed multiple-input multiple-output radar systems," vol.11(5), pp.503-514, 7 2017. In this study, for the problem of estimating the location and speed of a moving target in non-coherent multiple-input multiple-output (MIMO) radar systems with widely separated antennas, the authors propose two new methods, which jointly use bearing, elevation, frequency-of-arrival (FOA) and time-of-arrival (TOA) measurements. The two proposed methods are based on non-coherent MIMO radar systems, but method 1 centralises all measured parameters in one linear equation and processes them together, while method 2 divides the measurements into several groups according to the different transmitter-receiver pairs. In this study, the authors assume that the bearing, elevation, FOA and TOA parameters have already been measured by a preprocessing algorithm. For both methods, an initial guess is acquired through a best linear unbiased estimator. Then, for method 1, a more explicit solution is acquired by employing a maximum likelihood estimator for decorrelation, while method 2 applies maximum likelihood estimation with a first-order Taylor expansion for a better solution. The simulations show that these two methods are effective and both of them can attain the Cramér-Rao lower bound under sufficiently moderate noise conditions.

Mingyu You;Huihui Wang;Zeqin Liu;Chong Chen;Jiaming Liu;Xiang-Huai Xu;Zhong-Min Qiu; "Novel feature extraction method for cough detection using NMF," vol.11(5), pp.515-520, 7 2017. Cough is a common symptom in respiratory diseases. To provide valuable clinical information for cough diagnosis and monitoring, the quantity and intensity of cough must be evaluated objectively, based on cough detection by pattern recognition technologies. Cough detection aims to extract the boundaries of cough events from an audio stream. From spectral visualisation, it is found that the energy spectrum of a cough signal spreads widely across the whole frequency band, which is very different from a speech signal. However, almost all feature extraction methods for cough detection in previous work are inherited from the speech recognition domain. In this study, to capture the difference between cough and other audio in a more compact representation, non-negative matrix factorisation (NMF) is exploited to extract the spectral structure from signals. Furthermore, the spectral structure of the cough signal can be used as the filter banks of feature extraction methods, which makes the filter banks more suitable for cough detection than manually designed ones. In addition, parameterisation of the spectral structure provides an optimisation strategy for the authors' NMF-based feature extraction method. Experiments are conducted on real data. The results demonstrate that the NMF-based feature extraction method has considerable potential for improving cough detection performance.
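The core idea — learning spectral structure as a non-negative factorisation of a spectrogram-like matrix — can be sketched with the standard Lee-Seung multiplicative updates. This is generic NMF on synthetic data, not the authors' full filter-bank construction; the matrix sizes are arbitrary:

```python
import numpy as np

def nmf(V, r, iters=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V ~= W @ H using Lee-Seung multiplicative
    updates for the Frobenius-norm cost. Columns of W play the role of learned
    spectral structures; rows of H are their activations over time."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral bases
    return W, H

# Toy "spectrogram": exactly rank 2, so NMF with r = 2 can fit it closely
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 50))   # 20 frequency bins x 50 frames
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form guarantees that W and H stay non-negative at every iteration, which is what lets the learned columns of W be interpreted as spectral templates for a filter bank.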

Jun-Zheng Jiang;Bingo Wing-Kuen Ling;Shan Ouyang; "Efficient design of prototype filter for large scale filter bank-based multicarrier systems," vol.11(5), pp.521-526, 7 2017. This study presents a new property of the filter bank-based multicarrier (FBMC) system, together with an efficient iterative algorithm for designing systems with a large number of subcarriers and a very long prototype filter. For the system, compact-form conditions are derived for both intersymbol-interference-free and interchannel-interference (ICI)-free operation. Based on these new conditions, the design of the prototype filter is formulated as an unconstrained optimisation problem whose objective function is the weighted sum of the total distortion of the system and the stopband energy. By deriving the gradient vector of the objective function, an efficient iterative algorithm is proposed for finding the solution of the optimisation problem. In addition, an efficient matrix inversion approach is presented that greatly reduces the computational complexity of the iterative algorithm. As a result, it is feasible to design FBMC systems with thousands of subcarriers. The convergence of the iterative algorithm is proved. Computer numerical simulation results with comparisons to the existing methods are presented, showing that the proposed design algorithm is more effective and efficient than the existing methods.

Sabita Langkam;Alok Kanti Deb; "Dual estimation approach to blind source separation," vol.11(5), pp.527-536, 7 2017. A dual estimation approach is proposed for solving the blind source separation (BSS) problem. The states and the parameters of the dynamical system are estimated simultaneously when only noisy observations are available. The framework assumed for dual estimation is that of dual Kalman filtering: two separate Kalman filters run simultaneously, a state filter that estimates the states using the current parameter estimates, and a parameter filter that estimates the parameters using the current state estimates. An information-theoretic analysis of the Kalman filter shows that it maximises the mutual information between a state and its estimate, indicating that the filter may be a potential solution for BSS problems. The proposed state-space-based dual estimation approach, dual Kalman BSS, has been studied for linear instantaneous BSS, and the simulation results validate the proposed approach. A performance comparison against FastICA, joint-approximate diagonalisation of eigenmatrices, k-temporal decorrelation separation and the BSS algorithm by Zhang, Woo and Dlay for post-non-linear convolutive modelling, in terms of signal-to-interference ratio and signal-to-distortion ratio, shows that, given a good initialization, the proposed approach achieves better separation.

Wenbo Cai;Chen Chen;Lin Bai;Ye Jin;Jinho Choi; "Power allocation scheme and spectral efficiency analysis for downlink non-orthogonal multiple access systems," vol.11(5), pp.537-543, 7 2017. In this study, the authors investigate the power allocation (PA) problem and spectral efficiency (SE) analysis for the single-antenna downlink non-orthogonal multiple access (NOMA) system under the sum rate maximising criteria with minimum rate constraints (SRMC-MRC). For the PA problem, they propose a duality scheme for SRMC-MRC, which is regarded as the optimal solution of the PA problem on one orthogonal subband in the single-antenna NOMA system. They further propose the specific user rate maximising criteria with minimum rate constraints (SURMC-MRC) scheme to decrease the computational complexity of SRMC-MRC, and prove that the PA problem on one subband under SURMC-MRC is equivalent to that under SRMC-MRC. Numerical results show that both the SE and the fairness performance of SURMC-MRC closely approach those of the duality scheme over the whole signal-to-noise ratio (SNR) region. They also prove that NOMA under SRMC-MRC offers an SE advantage over the orthogonal multiple access system in the high-SNR region, at the cost of some disadvantage in the low-SNR region. This conclusion is verified through numerical results under different parameter conditions.
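The trade-off such a PA scheme optimises can be illustrated for a two-user downlink with successive interference cancellation (SIC); the channel gains, power budget, minimum rate, and brute-force grid search below are hypothetical stand-ins for the paper's duality-based solution:

```python
import numpy as np

def noma_rates(p1, p2, g1, g2, n0=1.0):
    """Achievable rates (bit/s/Hz) for two downlink NOMA users on one subband.
    User 1 (weaker channel, g1 < g2) decodes its signal treating user 2's as
    noise; user 2 (stronger channel) first cancels user 1's signal via SIC."""
    r1 = np.log2(1 + p1 * g1 / (p2 * g1 + n0))
    r2 = np.log2(1 + p2 * g2 / n0)
    return r1, r2

# Hypothetical setup: power budget P, channel gains g1 < g2, minimum rate r_min
P, g1, g2, r_min = 10.0, 0.2, 1.0, 0.5
best_sum, best_split = -np.inf, None
for a in np.linspace(0.01, 0.99, 99):          # fraction of P given to user 1
    r1, r2 = noma_rates(a * P, (1 - a) * P, g1, g2)
    if r1 >= r_min and r2 >= r_min and r1 + r2 > best_sum:
        best_sum, best_split = r1 + r2, a      # feasible split with best sum rate
```

Because giving more power to the weak user always costs sum rate, the constrained optimum sits at the smallest power split that still meets the weak user's minimum rate — the kind of boundary structure the paper's duality and SURMC-MRC arguments exploit in closed form.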

Keqing Duan;Zetao Wang;Wenchong Xie;Hui Chen;Yongliang Wang; "Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar," vol.11(5), pp.544-553, 7 2017. To improve the performance of the recently developed parameter-dependent sparse recovery (SR) space-time adaptive processing (STAP) algorithms in real-world applications, the authors propose a novel clutter suppression algorithm with multiple measurement vectors (MMVs) using sparse Bayesian learning (SBL) strategy. First, the necessary and sufficient condition for uniqueness of sparse solutions to the SR STAP with MMV is derived. Then the SBL STAP algorithm in MMV case is introduced, and the process for hyperparameters estimation via expectation maximisation is given. Finally, a computational complexity comparison with the existing algorithms and an analysis of the proposed algorithm are conducted. Results with both simulated and the Mountain-Top data demonstrate the fast convergence and good performance of the proposed algorithm.

Wei Guo;Yili Yan;Min Jia;Xiaoming Wu;Chi Tang; "Cascaded interpolation-filter-delay-decimation algorithm without additional delay," vol.11(5), pp.554-565, 7 2017. Accurate time delay is indispensable for many kinds of signal processing applications, particularly in the fields of communication and detection. Traditionally, owing to its easy operation, the interpolation-filter-delay-decimation (IFDD) algorithm has been regarded as an intuitive and straightforward way to delay specific signals. However, the computational complexity of this approach increases sharply with the interpolation factor. In addition, the IFDD algorithm induces a fixed additional delay in the delayed signal, limiting its widespread application. To solve these issues, this study presents a cascaded IFDD algorithm without additional delay (C-IFDD-WAD), obtained by decomposing the large interpolation factor into several small interpolation factors. The authors derive the expression of the C-IFDD-WAD algorithm and analyse its spectral characteristics. Theoretical analysis combined with simulation results shows that the cascaded form significantly reduces the computational cost compared with the traditional approach. Moreover, the additional delay of the traditional IFDD algorithm is eliminated by the proposed method.

Foad Fereidoony;Somayyeh Chamaani;Syed Abdullah Mirtaheri;Mohammad Ali Sebt; "Continuous basis compressive time-delay estimation in overlapped echoes," vol.11(5), pp.566-571, 7 2017. High-accuracy time-delay estimation is required in several research areas. L1-minimisation is a compressive sensing (CS) approach which solves this problem with high resolution and accuracy in the case of sparse signals. Band-excluded orthogonal matching pursuit is another CS method which uses a greedy algorithm to retrieve time delays and has lower complexity compared with the L1-minimisation method; however, it is only applicable when the signals are well spaced or orthogonal. Moreover, both approaches are established on a discrete basis, which inherently limits their accuracy owing to the constraint on the sampling rate of the system. To mitigate these challenges, the authors first incorporate the L1-minimisation method in a greedy algorithm to achieve a high resolution on the discrete grid. In the next step, to overcome the limitation caused by the sampling rate and refine the obtained time delays, the algorithm is combined with complex continuous basis pursuit (CCBP) by using a polar interpolation. Their simulation and experiment results show that the proposed combination of L1-minimisation and CCBP can recover time delays in very closely spaced echoes not only with high accuracy but also with low computational time and sampling rate.
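The discrete-grid greedy stage can be sketched with plain orthogonal matching pursuit over a dictionary of shifted pulses; the Gaussian echo shape and the delays below are illustrative assumptions, and the continuous refinement via CCBP with polar interpolation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
pulse = lambda d: np.exp(-0.5 * ((t - d) / 3.0) ** 2)   # assumed Gaussian echo shape
D = np.stack([pulse(d) for d in range(n)], axis=1)       # dictionary of delayed pulses
D /= np.linalg.norm(D, axis=0)

# Two well-separated echoes on the discrete delay grid, plus weak noise
y = D[:, 40] + 0.7 * D[:, 80] + 0.01 * rng.standard_normal(n)

# Orthogonal matching pursuit over the delay grid
support, residual = [], y.copy()
for _ in range(2):
    k = int(np.argmax(np.abs(D.T @ residual)))           # best-matching delay atom
    support.append(k)
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    residual = y - D[:, support] @ coef                   # re-fit and deflate

print(sorted(support))   # [40, 80]
```

The recovered grid delays would then seed an off-grid refinement step to beat the sampling-rate resolution limit.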

Donghun Lee; "Performance analysis of scheduled TAS-MRC MIMO systems with multiple interferers," vol.11(5), pp.572-578, 7 2017. The authors present the performance analysis of a scheduled transmit antenna selection with maximal ratio combining (TAS-MRC) for multiple-input multiple-output transmission in the presence of multiple interferers. They derive the probability density function of the instantaneous post-processing signal to interference plus noise ratio for the scheduled TAS with MRC over the Rayleigh fading channels under multiple interferers. Using the distribution, the closed-form expressions of the scheduled TAS with MRC are derived for symbol error rate (SER), outage probability and ergodic capacity in the presence of multiple interferers. From the analysis results, the performance of the scheduled TAS-MRC in the presence of multiple interferers is mainly determined by the signal-to-interference ratio. The performance of SER and ergodic capacity is less sensitive to the number of interferers.

Prasanna Kumar Mundodu Krishna;Kumaraswamy Ramaswamy; "Single Channel speech separation based on empirical mode decomposition and Hilbert Transform," vol.11(5), pp.579-586, 7 2017. In this study, the authors discuss unsupervised separation of two speakers from a single microphone recording using empirical mode decomposition (EMD) and the Hilbert transform (HT), together generally known as the Hilbert-Huang transform. A two-stage separation procedure is proposed for single-channel (SC) speech separation. The initial stage of separation is done using EMD, HT and instantaneous frequencies. EMD decomposes the mixed signal into oscillatory functions known as intrinsic mode functions (IMFs). Suitable IMFs are selected using successive EMD decomposition, and HT is applied to extract the instantaneous frequencies. The speech frames are grouped into two speakers using the correlation of instantaneous frequencies between the mixed signal and the selected IMFs. Second-stage separation is done by further decomposing the estimated speakers into IMFs and finding the instantaneous amplitudes using HT. A ratio of the instantaneous amplitudes of the mixed speech and the stage 1 recovered speech signal is computed for both speakers. The histogram of this ratio is used to estimate the ideal binary mask for each speaker. These masks are applied to the speech mixture and the underlying speakers are estimated. The proposed method was compared with existing unsupervised SC source separation algorithms. The results show significant improvement in objective measures.
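The instantaneous-frequency extraction used for grouping can be sketched via the analytic signal (a minimal FFT-based Hilbert transform, assuming an even-length signal); the EMD decomposition into IMFs is not reproduced here.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the idea behind scipy.signal.hilbert).

    Assumes an even-length real input.
    """
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:len(x) // 2] = 2      # double positive frequencies
    h[len(x) // 2] = 1        # Nyquist bin kept once
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)           # a 50 Hz tone as a stand-in for an IMF
z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency in Hz
print(round(float(np.median(inst_freq)), 1))    # 50.0
```

For a narrowband IMF the instantaneous frequency trace is what gets correlated against the mixture to assign frames to speakers.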

Wissam A. Jassim;Raveendran Paramesran;Naomi Harte; "Speech emotion classification using combined neurogram and INTERSPEECH 2010 paralinguistic challenge features," vol.11(5), pp.587-595, 7 2017. Recently, increasing attention has been directed to studying and identifying the emotional content of a spoken utterance. This study introduces a method to improve emotion classification performance under clean and noisy environments by combining two types of features: the proposed neural-responses-based features and the traditional INTERSPEECH 2010 paralinguistic emotion challenge features. The neural-responses-based features are represented by the responses of a computational model of the auditory system for listeners with normal hearing. The model simulates the responses of an auditory-nerve fibre with a characteristic frequency to a speech signal. The simulated responses of the model are represented by a 2D neurogram (time-frequency representation). The neurogram image is sub-divided into non-overlapping blocks and the averaged value of each block is computed. The neurogram features and the traditional emotion features are combined to form the feature vector for each speech signal. The features are trained using support vector machines to predict the emotion of speech. The performance of the proposed method is evaluated on two well-known databases: the eNTERFACE and the Berlin emotional speech data set. The results show that the proposed method performed better than the classification results obtained using the neurogram and INTERSPEECH features separately.
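The block-averaging step that turns a neurogram image into a feature vector can be sketched as follows; the toy 8x8 "neurogram" is a placeholder for the auditory-model output, which is not simulated here.

```python
import numpy as np

def block_features(neurogram, block_shape):
    """Average non-overlapping blocks of a 2D time-frequency image."""
    F, T = neurogram.shape
    bf, bt = block_shape
    trimmed = neurogram[:F - F % bf, :T - T % bt]        # drop ragged edges
    blocks = trimmed.reshape(trimmed.shape[0] // bf, bf,
                             trimmed.shape[1] // bt, bt)
    return blocks.mean(axis=(1, 3)).ravel()              # one value per block

ng = np.arange(64, dtype=float).reshape(8, 8)            # toy "neurogram"
feat = block_features(ng, (4, 4))
print(feat.tolist())   # [13.5, 17.5, 45.5, 49.5]
```

The resulting low-dimensional vector is what gets concatenated with the INTERSPEECH 2010 feature set before SVM training.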

Zhibin Yan;Yanhua Yuan; "Particle filter based on one-step smoothing with adaptive iteration," vol.11(5), pp.596-603, 7 2017. A new one-step particle smoother is explicitly given in the form of properly weighted samples. It is employed iteratively to improve the importance sampling in particle filtering by incorporating the current measurement information into the a priori distribution. An adaptive iteration strategy is proposed to accelerate the computation, which introduces a parameter into the weight increment to adjust the iteration process. A new particle filtering method is then constructed by combining the one-step smoothing and the adaptive iteration strategy.
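For context, a plain bootstrap particle filter on a scalar linear-Gaussian model is sketched below; the one-step smoothing and adaptive iteration refinements proposed by the authors are not implemented, and all model parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
T, Np = 50, 500
# Simulate x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t  (both noises N(0, 0.25))
x, xs, ys = 0.0, [], []
for _ in range(T):
    x = 0.9 * x + rng.normal(0, 0.5)
    xs.append(x)
    ys.append(x + rng.normal(0, 0.5))

particles = rng.normal(0, 1, Np)
est = []
for y in ys:
    particles = 0.9 * particles + rng.normal(0, 0.5, Np)   # propagate (prior proposal)
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)        # likelihood weights
    w /= w.sum()
    est.append(float(w @ particles))                       # weighted posterior mean
    idx = rng.choice(Np, Np, p=w)                          # multinomial resampling
    particles = particles[idx]

rmse = float(np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))
print(round(rmse, 2))
```

The authors' contribution targets exactly the weakness visible here: the prior proposal ignores the current measurement, which their one-step smoother feeds back into the importance distribution.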

Parvathi Sangeetha;Sathish Hemamalini; "Dyadic wavelet transform-based acoustic signal analysis for torque prediction of a three-phase induction motor," vol.11(5), pp.604-612, 7 2017. Condition monitoring and predictive maintenance play a key role in the maintenance of an electrical machine. In this study, to monitor the condition of the induction machine, torque is predicted from the acquired acoustic signals. The acoustic signals collected from different locations of an induction motor are analysed using the dyadic wavelet transform for various load conditions. The predicted torque is computed using a multiple regression method by extracting the root mean square and mean statistical features of the processed acoustic signal. The percentage error is approximately 5-10% at different locations, validating the feasibility of using acoustic signals for condition monitoring of machines. In addition, the harmonics induced in the healthy machine due to various acoustic sources are verified using the acoustic spectrum. Also, using the pseudospectrum MUltiple SIgnal Classification (MUSIC) algorithm, the pattern and peak decibel level for the acquired acoustic signal at different locations are analysed. For any number of samples, the patterns are unique for each location at different speeds. The results obtained validate that the pattern analysis method can also be used for condition monitoring and predictive maintenance of electric machines.
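The regression step can be sketched as an ordinary multiple linear regression from RMS/mean features to torque; the features and coefficients below are synthetic stand-ins for the wavelet-processed acoustic measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical features extracted from the processed acoustic signal:
# column 0 = RMS, column 1 = mean of the wavelet-processed signal
X = rng.uniform(0.5, 2.0, size=(40, 2))
true_coef = np.array([3.0, 1.5])                       # assumed ground-truth relation
torque = X @ true_coef + 0.7 + 0.05 * rng.standard_normal(40)

# Multiple linear regression with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, torque, rcond=None)
pred = A @ coef
pct_err = 100 * np.abs(pred - torque) / torque
print(round(float(pct_err.mean()), 2))                 # mean percentage error
```

On real data, the reported 5-10% error band would come from this kind of percentage-error comparison between predicted and measured torque at each sensor location.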

Junjun Guo;Xianghui Yuan;Chongzhao Han; "Sensor selection based on maximum entropy fuzzy clustering for target tracking in large-scale sensor networks," vol.11(5), pp.613-621, 7 2017. This study proposes a sensor selection approach based on maximum entropy fuzzy clustering to address the target tracking problem in large-scale sensor networks. The authors deal with this problem at two levels: (i) sensor-level tracking: the data association problem and sensor-level tracking are carried out at the local level, and only the track outputs are transmitted to the fusion centre for data fusion; (ii) global-level fusion: two sensor selection strategies are adopted at the fusion centre, which seek to choose only a subset of reliable sensors for track-to-track fusion and bias registration. In addition, an improved sensor selection approach is proposed for data fusion in both sparse and dense target environments, and a new fuzzy membership reconstruction strategy is introduced for data association in dense target environments. Furthermore, the proposed sensor selection strategy is also effective in the presence of possibly changing sensor biases. Simulation results are given to evaluate the performance of the proposed approaches.
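A generic maximum-entropy fuzzy clustering update (memberships proportional to exp(-d/beta)) is sketched below on toy 2-D features; the authors' sensor-selection and fusion logic built on top of such memberships is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy sensor/track features: two well-separated groups of ten points each
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(5.0, 0.3, (10, 2))])
C = X[[0, 10]].copy()          # crude initial cluster centres
beta = 1.0                     # entropy "temperature" parameter

for _ in range(20):
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared distances
    U = np.exp(-d / beta)
    U /= U.sum(axis=1, keepdims=True)                    # maximum-entropy memberships
    C = (U.T @ X) / U.sum(axis=0)[:, None]               # membership-weighted centres

labels = U.argmax(axis=1)
print(labels.tolist())
```

Smaller beta makes the memberships harder (closer to k-means); larger beta spreads membership mass, which is the entropy regularisation that gives the method its name.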

Mohammad Mahdi Chitgarha;Mojtaba Radmard;Mohammad Nazari Majd;Mohammad Mahdi Nayebi; "Improving MIMO radar's performance through receivers' positioning," vol.11(5), pp.622-630, 7 2017. Employing multiple-input-multiple-output (MIMO) technology in radar raised several new problems that had to be solved before the MIMO gains could be realised in radar. One such obstacle is determining the positions of the receive antennas in a MIMO radar system with widely separated antennas (WS MIMO radar), since it has been shown that the antennas' positions affect the whole system's performance considerably. In this study, a proper receiver positioning procedure is proposed. To this end, four criteria are developed based on the proposed MIMO detector and the MIMO ambiguity function. The simulations verify that the proposed positioning procedure improves the radar's performance in many aspects, such as the overall probability of detection.

Bokamoso Basutli;Sangarapillai Lambotharan; "Game-theoretic beamforming techniques for multiuser multi-cell networks under mixed quality of service constraints," vol.11(5), pp.631-639, 7 2017. The authors propose a game-theoretic approach for downlink beamformer design for a multiuser multi-cell wireless network under a mixed quality of service (QoS) criterion. The network has real-time users (RTUs) that must attain a specific set of signal-to-interference-plus-noise ratios (SINRs), and non-RTUs whose SINRs should be balanced and maximised. They propose a mixed QoS strategic non-cooperative game wherein base stations determine their downlink beamformers in a fully distributed manner. For the case of infeasibility, they propose a fallback mechanism which converts the problem to a pure max-min optimisation. They further propose a mixed QoS bargain game to improve the Nash equilibrium operating point through Egalitarian and Kalai-Smorodinsky bargaining solutions. They show that the results of the bargaining games are comparable to those of the optimal solutions.

Ning Wang;Weiwei Li;Ting Jiang;Shichao Lv; "Physical layer spoofing detection based on sparse signal processing and fuzzy recognition," vol.11(5), pp.640-646, 7 2017. Spoofing is one of the most critical attacks in wireless communication security. Traditional solutions are based on cryptology, which is performed in the upper layers, and face many challenges, especially in resource-limited applications. To overcome this hurdle, physical-layer security has received a lot of attention recently. In this study, the authors propose a physical-layer spoofing detection scheme in which signal processing and feature recognition are utilised to improve the detection performance. They present a pretreatment process based on sparse representation (SR) to reinforce the characteristics of the signal. Furthermore, they formulate the problem of spoofing detection as one of feature extraction and recognition, and employ a developed fuzzy C-means algorithm to further increase the recognition accuracy. In addition, to verify the proposed method, they conduct experiments and use numerical simulation and analysis to evaluate the detection performance. The results show that the proposed approach can improve the recognition accuracy significantly (by one order of magnitude) at acceptable (polynomial) complexity. By combining SR with feature extraction and recognition, the proposed method offers a practical route to higher-accuracy spoofing detection.

IEEE Transactions on Geoscience and Remote Sensing - new TOC (2017 July 20) [Website]

Jian Kang;Yuanyuan Wang;Marco Körner;Xiao Xiang Zhu; "Robust Object-Based Multipass InSAR Deformation Reconstruction," vol.55(8), pp.4239-4251, Aug. 2017. Deformation monitoring by multipass synthetic aperture radar (SAR) interferometry (InSAR) is, so far, the only imaging-based method to assess millimeter-level deformation over large areas from space. Past research mostly focused on the optimal retrieval of deformation parameters on the basis of a single pixel or a pixel cluster. Only recently was the first demonstration of object-based urban infrastructure monitoring presented by Wang et al., by fusing InSAR and the semantic classification labels derived from optical images. Given such classification labels in the SAR image, we propose a general framework for object-based InSAR parameter retrieval, where the parameters of the whole object are jointly estimated by the inversion of a regularized tensor model instead of pixelwise. Our approach does not assume the stationarity of each sample in the object, which is usually assumed in other pixel cluster-based methods, such as SqueeSAR. In addition, to handle outliers in real data, a robust phase recovery step prior to parameter retrieval is also introduced. In typical settings, the proposed method outperforms the current pixelwise estimators, e.g., the periodogram, by a factor of several tens in the accuracy of the linear deformation estimates. Last but not least, for a practical demonstration on bridge monitoring, we present a full workflow of long-term bridge monitoring using the proposed approach.

Xiangyu Wang;Robert Wang;Yunkai Deng;Pei Wang;Ning Li;Weidong Yu;Wei Wang; "Precise Calibration of Channel Imbalance for Very High Resolution SAR With Stepped Frequency," vol.55(8), pp.4252-4261, Aug. 2017. Synthetic aperture radar (SAR) images require a high-resolution system for accurate interpretation. This high range resolution can be achieved by stepped frequency chirp signals. To reconstruct a wideband waveform from each subband signal, the amplitude/phase/delay imbalance between the channels should be precisely compensated. In this paper, the system configuration is first presented. Further, a calibration strategy is proposed based on three calibration loops, the reference, transmitting, and receiving calibration loops, for coarsely compensating the channel imbalance. Then, two different methods based on cost functions are proposed to remove the residual channel imbalance. The proposed methods were validated using stepped frequency SAR data acquired by an X-band airborne SAR system with a total bandwidth of 3.6 GHz, yielding an (unweighted) 3-dB range resolution of 4 cm.

Yiqing Guo;Xiuping Jia;David Paull; "Superpixel-Based Adaptive Kernel Selection for Angular Effect Normalization of Remote Sensing Images With Kernel Learning," vol.55(8), pp.4262-4271, Aug. 2017. Considering that satellites rarely acquire data from the exact nadir direction, angular effect normalization needs to be conducted as an important preprocessing step to correct reflectance observations from off-nadir directions into the nadir direction. Kernel-based bidirectional reflectance distribution function models have been employed for angular effect correction. The kernels used in the model are often predetermined and fixed for an entire image. However, the fixed kernels are unable to accommodate the various reflective characteristics of different ground cover types present in the imaged area. In this paper, we propose a kernel learning procedure that enables the flexible selection of kernels for different land cover types within a scene. The kernels are selected from kernel dictionaries that contain multiple candidate kernels. The selection is conducted on the superpixel level instead of the pixel level in order to reduce within-class variation and overcome the overfitting problem. Experiments are conducted on multiangular images acquired by the Sentinel-2A satellite over a rural area in southeastern Australia. Cross-validation results show that the proposed method is able to adaptively select appropriate kernels for different land cover types, leading to an improved performance for image normalization.
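The kernel-driven fitting step behind such BRDF models reduces to a linear least-squares problem once kernel values are computed; the sketch below uses random stand-ins for the volumetric and geometric kernel values rather than actual Ross-Thick/Li-Sparse kernels.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs = 12                                   # multi-angular observations
K_vol = rng.uniform(-0.2, 0.4, n_obs)        # hypothetical volumetric kernel values
K_geo = rng.uniform(-1.5, 0.0, n_obs)        # hypothetical geometric kernel values
f_true = np.array([0.25, 0.10, 0.05])        # iso / vol / geo coefficients

# Linear kernel-driven BRDF model: R = f_iso + f_vol*K_vol + f_geo*K_geo
A = np.column_stack([np.ones(n_obs), K_vol, K_geo])
refl = A @ f_true + 0.002 * rng.standard_normal(n_obs)

f_hat, *_ = np.linalg.lstsq(A, refl, rcond=None)
print(np.round(f_hat, 3))
```

Kernel selection, as in the paper, amounts to choosing which kernel columns enter the design matrix A for each superpixel; the fitted coefficients then let observations be re-evaluated at the nadir geometry.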

Xiaohua Tong;Zhen Ye;Lingyun Li;Shijie Liu;Yanmin Jin;Peng Chen;Huan Xie;Songlin Zhang; "Detection and Estimation of Along-Track Attitude Jitter From Ziyuan-3 Three-Line-Array Images Based on Back-Projection Residuals," vol.55(8), pp.4272-4284, Aug. 2017. High-resolution satellite images (HRSIs) obtained from linear array charge-coupled device sensors always suffer from geometric instability in the presence of attitude jitter. Therefore, detection and compensation of spacecraft attitude jitter in both the cross-track and along-track directions are crucial to improve the geometric accuracy of HRSIs. A number of reports have been made on the detection and estimation of cross-track attitude jitter. However, the detection of the attitude jitter in the along-track direction is more complicated due to the impact of topographic change. This paper presents a novel approach to achieve accurate estimation of the along-track attitude jitter by eliminating the influence of topographic information based on the back-projection residuals of three-line-array (TLA) images. The principle of detection and estimation of along-track attitude jitter is described, and the proposed approach consists of three main components as follows: 1) dense image matching of the TLA images using a comprehensive matching strategy; 2) detection of the back-projection residuals in the line direction caused by attitude jitter; and 3) estimation of the along-track attitude jitter from the back-projection residuals using a genetic algorithm. Experiments were conducted using China’s Ziyuan-3 (ZY-3) TLA images, and the experimental results reveal that the frequency of the attitude jitter in the along-track direction ranges between 0.6 and 0.7 Hz, which is consistent with the frequency in the cross-track direction observed in our previous study. 
In addition, a comparison of the results of the proposed approach with those from direct attitude observations shows good consistency, with as little as 0.1-pixel disparity, which demonstrates the feasibility and reliability of the proposed approach. Furthermore, the geometric accuracy is further improved from a pixel level to a subpixel level and the periodic trend is removed with the compensation of the estimated attitude jitter in addition to the conventional affine compensation, which validates the potential of the proposed approach for geometric accuracy improvement with ZY-3 TLA images.

Xudong Jin;Yanfeng Gu; "Superpixel-Based Intrinsic Image Decomposition of Hyperspectral Images," vol.55(8), pp.4285-4295, Aug. 2017. In this paper, we propose a novel superpixel-based intrinsic image decomposition (SIID) framework for hyperspectral images. Intrinsic image decomposition usually refers to the separation of shading and reflectance components from an input image. Considering the high dimensionality of hyperspectral images, we further decompose the shading component into the product of environment illumination and surface orientation changes, thus modeling the problem more properly. The proposed method consists of the following steps. First, we build two superpixel segmentation maps of different scales, i.e., a finer one that is oversegmented and a coarser one that is undersegmented. Based on the observation that the finer superpixel map achieves a higher segmentation accuracy, whereas the coarser superpixel map tends to preserve the objectness of the original image, we model the SIID decomposition problem in a matrix form based on the finer superpixel map and define a constraint matrix by integrating the information in the coarser superpixel map. The constraint matrix is introduced as a secondary constraint in order to make the ill-posed IID problem solvable. Finally, we transform the original decomposition problem into minimizing the Frobenius norm of the proposed matrix energy function and iteratively derive the solution. Our experimental results demonstrate that the proposed method achieves a performance outperforming the state-of-the-art while making a great improvement in efficiency.

David Bebbington;Laura Carrea; "Geometric Polarimetry—Part II: The Antenna Height Spinor and the Bistatic Scattering Matrix," vol.55(8), pp.4296-4313, Aug. 2017. This paper completes the fundamental development of the basic coherent entities in radar polarimetry for coherent reciprocal scattering involving polarized wave states, antenna states, and scattering matrices. The concept of antenna polarization states as contravariant spinors is validated from fundamental principles in terms of Schelkunoff’s reaction theorem and the Lorentz reciprocity theorem. In the general bistatic case, polarization states of different wavevectors must be related by the linear scattering matrix. It is shown that the relationship can be expressed geometrically, and that each scattering matrix has a unique complex scalar invariant characterizing a homographic mapping relating pairs of transmit/receive states for which the scattering amplitude vanishes. We show how the scalar invariant is related to the properties of the bistatic Huynen fork in both its conventional form and according to a new definition. Results are presented illustrating the invariant $k$ for a range of spheroidal Rayleigh scatterers.

Tetsuya Fukuhara;Toru Kouyama;Soushi Kato;Ryosuke Nakamura;Yukihiro Takahashi;Hiroaki Akiyama; "Detection of Small Wildfire by Thermal Infrared Camera With the Uncooled Microbolometer Array for 50-kg Class Satellite," vol.55(8), pp.4314-4324, Aug. 2017. A thermal infrared camera with an uncooled microbolometer array based on commercial products was developed in a Japanese university laboratory and mounted on a 50-kg class small satellite specialized for discovering wildfire. The satellite was launched in 2014 and has successfully detected considerable hotspots, from not only wildfires but also volcanoes. The brightness temperature derived from observation has been verified, and the scale of the observed wildfires has been provisionally estimated; the smallest wildfire ever detected has a flame zone of less than ~300 m² and a fire radiative power of ~35.4 MW. It is 1/30th the size of the initial requirement estimated in the design process. The thermal infrared camera, developed in a short time and at low cost, has attained enough ability to discover small wildfires while they can still be suppressed by an initial attack.

Andrea Marinoni;Antonio Plaza;Paolo Gamba; "A Novel Preunmixing Framework for Efficient Detection of Linear Mixtures in Hyperspectral Images," vol.55(8), pp.4325-4333, Aug. 2017. In order to provide reliable information about the instantaneous field of view considered in hyperspectral images through spectral unmixing, understanding the kind of mixture that occurs over each pixel plays a crucial role. In this paper, in order to detect nonlinear mixtures, a method for fast identification of linear mixtures is introduced. The proposed method does not need statistical information and performs an a priori test on the spectral linearity of each pixel. It uses standard least squares optimization to estimate the likelihood of occurrence of linear combinations of endmembers by taking advantage of the geometrical properties of hyperspectral signatures. Experimental results on both real and synthetic data sets show that the proposed algorithm is able to deliver a reliable and thorough assessment of the kind of mixtures present in the pixels of the scene.
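The core linearity test can be sketched as an unconstrained least-squares abundance fit followed by a residual check: pixels well explained by a linear combination of endmembers leave a small residual, while a nonlinear (e.g. bilinear) contribution inflates it. The endmembers, abundances, and interaction term below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
B, P = 50, 3                                  # spectral bands, endmembers
E = rng.uniform(0, 1, (B, P))                 # endmember signatures
a = np.array([0.6, 0.3, 0.1])                 # abundances (sum to one)

linear_px = E @ a + 0.001 * rng.standard_normal(B)
nonlinear_px = E @ a + 0.2 * (E[:, 0] * E[:, 1])   # bilinear interaction term

def residual(pixel, E):
    """Least-squares abundance fit; residual norm serves as a linearity score."""
    a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return float(np.linalg.norm(pixel - E @ a_hat))

print(residual(linear_px, E) < residual(nonlinear_px, E))   # True
```

Thresholding such a residual gives a cheap per-pixel linear/nonlinear flag before any expensive nonlinear unmixing is attempted.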

Han Ma;Qiang Liu;Shunlin Liang;Zhiqiang Xiao; "Simultaneous Estimation of Leaf Area Index, Fraction of Absorbed Photosynthetically Active Radiation, and Surface Albedo From Multiple-Satellite Data," vol.55(8), pp.4334-4354, Aug. 2017. Leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and surface broadband albedo are three routinely generated land-surface parameters from satellite observations, which have been widely used in land-surface modeling and environmental monitoring. Currently, most global land products are retrieved separately from individual satellite data. Many issues, such as data gaps, spatial and temporal inconsistencies, and insufficient accuracy under certain conditions resulting from the inadequacies of single-sensor observations, have made the incorporation of multiple sensors a reasonable solution. In this paper, an approach to simultaneous estimation of LAI, broadband albedo, and FAPAR from multiple-satellite sensors is further refined. The method, improved from that proposed in an earlier study using Moderate Resolution Imaging Spectroradiometer (MODIS) data, consists of several steps. First, a coupled dynamic and radiative-transfer model based on MODIS, SPOT/VEGETATION, and Multiangle Imaging SpectroRadiometer data was developed to retrieve LAI values and use them to construct a time-evolving dynamic model. Second, an iteration process with predefined exit criteria was developed to obtain consistent gap-filled LAI estimates. Third, a spectral albedo based on the retrieved LAI values was simulated using a radiative-transfer model and then converted to a broadband albedo using empirical methods. Snow-covered pixels identified by normalized difference snow index thresholds were adjusted to the weighted average of the underlying albedo and the maximum snow albedo. 
Finally, the FAPAR of green vegetation was calculated as a combination of the albedo at the top of the canopy, the soil albedo, and the transmittance of the PAR down to the background. Validation of retrieved LAI, albedo, and FAPAR values obtained from multiple-satellite data over ten study sites has demonstrated that the proposed method can produce more accurate products than presently distributed global products.

Saurabh Prasad;Demetrio Labate;Minshan Cui;Yuhang Zhang; "Morphologically Decoupled Structured Sparsity for Rotation-Invariant Hyperspectral Image Analysis," vol.55(8), pp.4355-4366, Aug. 2017. Hyperspectral imagery has emerged as a popular sensing modality for a variety of applications, and sparsity-based methods were shown to be very effective to deal with challenges coming from high dimensionality in most hyperspectral classification problems. In this paper, we challenge the conventional approach to hyperspectral classification that typically builds sparsity-based classifiers directly on spectral reflectance features or features derived directly from the data. We assert that hyperspectral image (HSI) processing can benefit very significantly by decoupling data into geometrically distinct components since the resulting decoupled components are much more suitable for sparse representation-based classifiers. Specifically, we apply morphological separation to decouple data into texture and cartoon-like components, which are sparsely represented using local discrete cosine bases and multiscale shearlets, respectively. In addition to providing a structured sparse representation, this approach allows us to build classifiers with invariance properties specific to each geometrically distinct component of the data. The experimental results using real-world HSI data sets demonstrate the efficacy of the proposed framework for classifying multichannel imagery under a variety of adverse conditions—in particular, small training sample size, additive noise, and rotational variabilities between training and test samples.

Yan Huang;Guisheng Liao;Jingwei Xu;Jie Li; "GMTI and Parameter Estimation via Time-Doppler Chirp-Varying Approach for Single-Channel Airborne SAR System," vol.55(8), pp.4367-4383, Aug. 2017. Conventionally, a single-channel synthetic aperture radar (SC-SAR) system can hardly detect weak moving targets. In this paper, a time-Doppler chirp-varying (TDCV) filter is proposed for ground moving target indication and parameter estimation with an airborne SC-SAR system. The proposed method is easy to implement and mainly includes three steps. First, a traditional 2-D frequency range-Doppler algorithm is used to generate an original image. Second, the second-order range cell migration (RCM) phase term is partly compensated in the range-frequency and azimuth-time domain, and the rest of the second-order RCM phase term is compensated in the 2-D frequency domain. The whole processing, which is referred to as the TDCV approach, is employed to acquire a new TDCV image. Third, comparing the original image with the new image, the clutter scatterers are nearly motionless while the moving targets are translated along the range direction due to their nonzero radial velocities. After the cancellation between the two normalized images, the clutter background is significantly suppressed, since the two images are generated from the same data. As a result, the moving targets can be indicated, and the range difference of a moving target between the two images can be exploited to estimate its radial velocity. The results obtained by applying the proposed method to a set of real SAR data are consistent with the analysis presented in this paper.

Satoru Yamamoto;Tsuneo Matsunaga;Ryosuke Nakamura;Yasuhito Sekine;Naru Hirata;Yasushi Yamaguchi; "An Automated Method for Crater Counting Using Rotational Pixel Swapping Method," vol.55(8), pp.4384-4397, Aug. 2017. We develop a fully automated algorithm for determining geological ages by crater counting from the digital terrain model (DTM) and digital elevation model (DEM) data obtained by remote-sensing observations. The algorithm is based on the rotational pixel swapping method, which uses a multiplication operation between the original DTM/DEM data and the rotated data to detect impact craters. Our method does not need binarization and/or noise reduction, because noise components are automatically erased. We show that our method can detect not only simple craters but also complex circular structures such as imperfect, degraded, or overlapping craters. We demonstrate that this method succeeds in the automatic detection of hundreds to thousands of impact craters, and the estimated ages are consistent with those obtained by manual counting in previous works. In addition, the calculation by this method is more than several hundred times faster than that by previous methods.

Ting Lu;Shutao Li;Leyuan Fang;Xiuping Jia;Jón Atli Benediktsson; "From Subpixel to Superpixel: A Novel Fusion Framework for Hyperspectral Image Classification," vol.55(8), pp.4398-4411, Aug. 2017. Supervised classification of hyperspectral images (HSI) is a very challenging task due to the existence of noisy and mixed spectral characteristics. Recently, the widely developed spectral unmixing techniques offer the possibility to extract spectral mixture information at a subpixel level, which can contribute to the categorization of seriously mixed spectral pixels. Besides, it has been demonstrated that the discrimination between different materials will be improved by integrating the geometry and structure information, which can be derived from the variance between neighboring pixels. Furthermore, by incorporating the spatial context, the superpixel-based spectral–spatial similarity information can be used to smooth classification results in homogeneous regions. Therefore, a novel fusion framework for HSI classification that combines subpixel, pixel, and superpixel-based complementary information is proposed in this paper. Here, both feature fusion and decision fusion schemes are introduced. For the feature fusion scheme, the first step is to extract subpixel-level, pixel-level, and superpixel-level features from HSI, respectively. Then, the multiple feature-induced kernels are fused to form one composite kernel, which is incorporated with a support vector machine (SVM) classifier for label assignment. For the decision fusion scheme, class probabilities based on three different features are estimated by the probabilistic SVM classifier first. Then, the class probabilities are adaptively fused to form a probabilistic decision rule for classification. 
Experimental results on different real HSI data sets demonstrate the effectiveness of the proposed fusion schemes in improving discrimination capability, compared with classification results that rely on each individual feature.
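The feature-fusion scheme's composite kernel can be sketched as a convex combination of per-feature Gram matrices, which remains a valid kernel for an SVM; the RBF form and the weights below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gram matrix of the Gaussian RBF kernel between rows of A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def composite_kernel(feature_sets, weights):
    """Convex combination of per-feature kernels (e.g. subpixel, pixel,
    superpixel features). A weighted sum of valid kernels is itself a
    valid (PSD) kernel, so it can be passed to a kernel SVM as a
    precomputed Gram matrix."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * rbf_kernel(F, F) for w, F in zip(weights, feature_sets))

rng = np.random.default_rng(1)
subpix, pix, superpix = (rng.normal(size=(20, d)) for d in (4, 8, 3))
K = composite_kernel([subpix, pix, superpix], weights=[0.3, 0.4, 0.3])
```

In a library such as scikit-learn, `K` would be supplied to an SVM via a precomputed-kernel option rather than raw features.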

Tian Hu;Biao Cao;Yongming Du;Hua Li;Cong Wang;Zunjian Bian;Donglian Sun;Qinhuo Liu; "Estimation of Surface Upward Longwave Radiation Using a Direct Physical Algorithm," vol.55(8), pp.4412-4426, Aug. 2017. Surface upward longwave radiation (SULR) is a significant component of the surface radiation budget and is closely linked with evapotranspiration, soil moisture, and surface cooling on clear nights. Therefore, accurately estimating SULR is essential to better understand its spatiotemporal dynamics or to characterize the thermal environment of a given land surface. Currently, most methods for estimating SULR (including the physical and hybrid methods) fail to account for the thermal anisotropy, which can introduce significant errors into the calculation. We previously proposed a combined algorithm that considers the thermal anisotropy to estimate the SULR more accurately. However, that method has several shortcomings. It considers the directionality of the emissivity and the effective temperature separately, with the support of a parametric directional emissivity model; such directional emissivity models are not maturely developed for different land surface types, especially nonvegetated surfaces, and the separation of land surface temperature and emissivity may undermine the estimation accuracy. Furthermore, the method requires a series of input parameters that are not always available, limiting its applicability. In this paper, we present a refined algorithm that uses a kernel-driven model and the technique of band conversion to calculate the SULR directly from surface-leaving radiances. This direct physical algorithm is then applied to the Wide-angle infrared Dual-mode line/area Array Scanner data set and validated using longwave radiation data collected by automatic meteorological stations from the Heihe Watershed Allied Telemetry Experimental Research experiment.
The results of these tests suggest that the direct algorithm works effectively. The root-mean-square error (RMSE) and mean bias error (MBE) of the direct algorithm on maize surfaces are 4.417 and 0.474 <inline-formula> <tex-math notation="LaTeX">${\mathrm{ W}}\cdot {\mathrm{ m}}^{-2}$ </tex-math></inline-formula>, respectively. When the thermal anisotropy is incorporated, the RMSE and absolute MBE decrease by a maximum of 4.734 and 7.414 <inline-formula> <tex-math notation="LaTeX">${\mathrm{ W}}\cdot {\mathrm{ m}}^{\mathrm {-2}}$ </tex-math></inline-formula>, respectively. Different land types yield different results: for vegetable surfaces, the estimation biases of the direct model are approximately <inline-formula> <tex-math notation="LaTeX">$-2~{\mathrm{ W}}\cdot {\mathrm{ m}}^{\mathrm {-2}}$ </tex-math></inline-formula>, whereas orchard surfaces yield biases between −2 and <inline-formula> <tex-math notation="LaTeX">$-3.5~{\mathrm{ W}}\cdot {\mathrm{ m}}^{-2}$ </tex-math></inline-formula>, and village surfaces yield biases exceeding <inline-formula> <tex-math notation="LaTeX">$-10~{\mathrm{ W}}\cdot {\mathrm{ m}}^{-2}$ </tex-math></inline-formula>. These differences can be attributed to the varying effects of the kernel-driven model across different types of land surfaces. The RMSE and absolute MBE obtained using the direct algorithm are slightly smaller (by 0.587 and 1.685 <inline-formula> <tex-math notation="LaTeX">${\mathrm{ W}}\cdot {\mathrm{ m}}^{\mathrm {-2}}$ </tex-math></inline-formula>, respectively) than those obtained using the combined algorithm; they are also smaller than the results of the traditional temperature–emissivity algorithm (by 8.7 and 11.7 <inline-formula> <tex-math notation="LaTeX">${\mathrm{ W}}\cdot {\mathrm{ m}}^{-2}$ </tex-math></inline-formula>, respectively).

Yuebin Wang;Liqiang Zhang;Hao Deng;Jiwen Lu;Haiyang Huang;Liang Zhang;Jun Liu;Hong Tang;Xiaoyue Xing; "Learning a Discriminative Distance Metric With Label Consistency for Scene Classification," vol.55(8), pp.4427-4440, Aug. 2017. To achieve high scene classification performance on high spatial resolution remote sensing images (HSR-RSIs), it is important to learn a discriminative space in which the distance metric can precisely measure both similarity and dissimilarity of features and labels between images. While traditional metric learning methods focus on preserving interclass separability, label consistency (LC) is rarely considered, which might degrade scene image classification accuracy. Aiming to account for intraclass compactness in HSR-RSIs, we propose a discriminative distance metric learning method with LC (DDML-LC). The DDML-LC starts from the dense scale invariant feature transformation features extracted from HSR-RSIs, and then uses spatial pyramid maximum pooling with sparse coding to encode the features. In the learning process, the intraclass compactness and interclass separability are enforced while the global and local LC after the feature transformation is constrained, leading to a joint optimization of feature manifold, distance metric, and label distribution. The learned metric space can scale to discriminate out-of-sample HSR-RSIs that do not appear in the metric learning process. Experimental results on three data sets demonstrate the superior performance of the DDML-LC over state-of-the-art techniques in HSR-RSI classification.
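Learned distance metrics of the kind optimized here are typically Mahalanobis-type metrics parameterized as M = LᵀL, which guarantees a valid (PSD) metric; a minimal sketch with a toy diagonal L (the actual DDML-LC objective and features are not reproduced):

```python
import numpy as np

def mahalanobis(x, y, L):
    """Distance under the learned metric M = L.T @ L:
    d(x, y) = ||L (x - y)||_2. Parameterizing via L keeps M PSD."""
    return np.linalg.norm(L @ (x - y))

# Toy learned transform: stretch the discriminative dimension, shrink the other.
L = np.diag([2.0, 0.1])
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
d_learned = mahalanobis(a, b, L)
d_euclid = np.linalg.norm(a - b)
```

Training adjusts L so that same-class pairs shrink (intraclass compactness) while different-class pairs stretch (interclass separability).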

Esam Othman;Yakoub Bazi;Farid Melgani;Haikel Alhichri;Naif Alajlan;Mansour Zuair; "Domain Adaptation Network for Cross-Scene Classification," vol.55(8), pp.4441-4456, Aug. 2017. In this paper, we present a domain adaptation network to deal with classification scenarios subjected to the data shift problem (i.e., labeled and unlabeled images acquired with different sensors and over completely different geographical areas). We rely on the power of pretrained convolutional neural networks (CNNs) to generate an initial feature representation of the labeled and unlabeled images under analysis, referred to as source and target domains, respectively. Then we feed the resulting features to an extra network placed on top of the pretrained CNN for further learning. During the fine-tuning phase, we learn the weights of this network by jointly minimizing three regularization terms, which are: 1) the cross-entropy error on the labeled source data; 2) the maximum mean discrepancy between the source and target data distributions; and 3) the geometrical structure of the target data. Furthermore, to obtain robust hidden representations, we propose a mini-batch gradient-based optimization method with a dynamic sample size for the local alignment of the source and target distributions. To validate the method, we use the University of California Merced data set and a new multisensor data set acquired over several regions of the Kingdom of Saudi Arabia in the experiments. The experiments show that: 1) pretrained CNNs offer an interesting solution for image classification compared to state-of-the-art methods; 2) their performances can be degraded when dealing with data sets subjected to the data shift problem; and 3) the proposed approach represents a promising solution for effectively handling this issue.
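The second regularization term, the maximum mean discrepancy (MMD) between source and target feature distributions, can be sketched as follows; the Gaussian kernel and its bandwidth are illustrative assumptions:

```python
import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of the squared maximum mean discrepancy.

    MMD^2 = mean k(x,x') + mean k(y,y') - 2 mean k(x,y); it is near zero
    when the two samples come from the same distribution and grows with
    the distribution shift the network is trained to minimize.
    """
    def k(A, B):
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 5))        # "source" features
tgt_same = rng.normal(0.0, 1.0, size=(200, 5))   # same distribution
tgt_shift = rng.normal(2.0, 1.0, size=(200, 5))  # simulated data shift
```

During fine-tuning, this quantity (computed on mini-batches of hidden activations) would be added to the loss so the source and target representations align.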

Fernando Luis Bordignon;Leandro Passos de Figueiredo;Leonardo Azevedo;Amilcar Soares;Mauro Roisenberg;Guenther Schwedersky Neto; "Hybrid Global Stochastic and Bayesian Linearized Acoustic Seismic Inversion Methodology," vol.55(8), pp.4457-4464, Aug. 2017. Seismic inversion is an important technique for reservoir modeling and characterization due to its potential for inferring the spatial distribution of the subsurface elastic properties of interest. Two of the most common seismic inversion methodologies in the oil and gas industry are iterative geostatistical seismic inversion and Bayesian linearized seismic inversion. Although the first technique is able to explore the uncertainty space of the inverse solution more comprehensively, it is also very computationally expensive compared with the Bayesian linearized approach. In this paper, we introduce a novel hybrid seismic inversion procedure that takes advantage of both frameworks: an iterative geostatistical seismic inversion methodology is started from an initial guess model provided by a Bayesian inversion solution. We also propose a new approach to model the uncertainty of the retrieved inverse solution by means of kernel density estimation. The proposed approach is applied to two real data sets with different signal-to-noise ratios. The results show the robustness of the hybrid inverse methodology and the usefulness of modeling the uncertainty of the retrieved inverse solution.
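The kernel-density-estimation step for modeling uncertainty can be sketched generically; the 1-D Gaussian KDE below, with Silverman's rule-of-thumb bandwidth applied to a synthetic ensemble of impedance realizations (values and units are illustrative), is a stand-in for the paper's implementation:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=None):
    """1-D Gaussian kernel density estimate evaluated on `grid`.

    Silverman's rule of thumb sets the bandwidth when none is given.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

# Toy ensemble of acoustic-impedance realizations at one trace sample.
rng = np.random.default_rng(7)
realizations = rng.normal(6500.0, 150.0, size=500)   # hypothetical values
grid = np.linspace(6000.0, 7000.0, 201)
pdf = gaussian_kde(realizations, grid)
```

Evaluating such a density at every model cell turns an ensemble of realizations into a pointwise uncertainty description of the inverse solution.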

Nariman Firoozy;Thomas Neusitzer;Durell S. Desmond;Tyler Tiede;Marcos J. L. Lemes;Jack Landy;Puyan Mojabi;Søren Rysgaard;Gary Stern;David G. Barber; "An Electromagnetic Detection Case Study on Crude Oil Injection in a Young Sea Ice Environment," vol.55(8), pp.4465-4475, Aug. 2017. This paper presents a multidisciplinary case study on a crude oil injection experiment in an artificially grown young sea ice environment under controlled conditions. In particular, the changes in the geophysical and electromagnetic responses of the sea ice to oil introduction are investigated for this experiment. Furthermore, we perform a preliminary study on the detection of oil spills utilizing the normalized radar cross section (NRCS) data collected by a C-band scatterometer. To this end, an inversion scheme is introduced that retrieves the effective complex permittivity of the domain prior to and after oil injection by comparing the simulated and calibrated measured NRCS data, while roughness parameters calculated using lidar are utilized as prior information. Once the complex permittivity values are obtained, the volume fraction of oil within the sea ice is found using a mixture formula. Based on this volume fraction, a binary detection of oil presence seems to be possible for this test case. Finally, the possible sources of error in the retrieved effective volume fraction, which is an overestimate of the actual value, are identified and discussed by macrolevel and microlevel analyses through bulk salinity measurements and X-ray imagery of the samples, as well as a brief chemical analysis.

Qing Cheng;Huiqing Liu;Huanfeng Shen;Penghai Wu;Liangpei Zhang; "A Spatial and Temporal Nonlocal Filter-Based Data Fusion Method," vol.55(8), pp.4476-4488, Aug. 2017. The tradeoff in remote sensing instruments that balances the spatial resolution and temporal frequency limits our capacity to monitor spatial and temporal dynamics effectively. The spatiotemporal data fusion technique is considered as a cost-effective way to obtain remote sensing data with both high spatial resolution and high temporal frequency, by blending observations from multiple sensors with different advantages or characteristics. In this paper, we develop the spatial and temporal nonlocal filter-based fusion model (STNLFFM) to enhance the prediction capacity and accuracy, especially for complex changed landscapes. The STNLFFM method provides a new transformation relationship between the fine-resolution reflectance images acquired from the same sensor at different dates with the help of coarse-resolution reflectance data, and makes full use of the high degree of spatiotemporal redundancy in the remote sensing image sequence to produce the final prediction. The proposed method was tested over both the Coleambally Irrigation Area study site and the Lower Gwydir Catchment study site. The results show that the proposed method can provide a more accurate and robust prediction, especially for heterogeneous landscapes and temporally dynamic areas.

Miguel Ángel Manso-Callejo;Fu-Kiong Kenneth Chan;Teresa Iturrioz-Aguirre;María Teresa Manrique-Sancho; "Using Bivariate Gaussian Distribution Confidence Ellipses of Lightning Flashes for Efficiently Computing Reliable Large Area Density Maps," vol.55(8), pp.4489-4499, Aug. 2017. This paper presents an enhanced method that uses the 50th percentile location uncertainty ellipses given by lightning locating systems to compute lightning flash density maps with very fine detail of about 100 m <inline-formula> <tex-math notation="LaTeX">$\times100$ </tex-math></inline-formula> m (or less) over a vast land region spanning thousands of kilometers. Because it is founded on a rigorous mathematical method (without using convolution operators), this new technique for computing the probabilistic lightning density map coverage represents a significant improvement in precision compared with other techniques identified in the literature. The technique was adapted from a method used to calculate the probability of debris collision for spacecraft, and is based on a probabilistic treatment of lightning flash impact over a facility. As a proof of concept, an algorithm was implemented in the Java language and applied to a set of 11.34 million lightning flashes registered over the territory of Spain between 2003 and 2012, producing a lightning density map coverage with a cell size of 2 arcseconds (~60 m).
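For a bivariate Gaussian, the 50th-percentile ellipse satisfies a Mahalanobis-radius condition (squared radius 2 ln 2, the median of a chi-square with two degrees of freedom), which lets the reported ellipse be converted to a covariance and the probability mass in each map cell integrated numerically. A hedged sketch with illustrative numbers, not the paper's optimized algorithm:

```python
import numpy as np

CHI2_50 = 2.0 * np.log(2.0)  # Mahalanobis radius^2 of the 50th-percentile ellipse

def cov_from_ellipse(semi_major, semi_minor, angle_rad):
    """Covariance of the bivariate Gaussian whose median ellipse is given.

    The 50% ellipse satisfies x' C^-1 x = 2 ln 2, so the semi-axes equal
    sqrt(2 ln 2) times the principal standard deviations.
    """
    s1, s2 = semi_major / np.sqrt(CHI2_50), semi_minor / np.sqrt(CHI2_50)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([s1**2, s2**2]) @ R.T

def cell_probability(mu, cov, x0, x1, y0, y1, n=64):
    """Probability mass of N(mu, cov) inside [x0,x1] x [y0,y1],
    by midpoint-rule integration of the density on an n x n subgrid."""
    xs = np.linspace(x0, x1, n, endpoint=False) + (x1 - x0) / (2 * n)
    ys = np.linspace(y0, y1, n, endpoint=False) + (y1 - y0) / (2 * n)
    X, Y = np.meshgrid(xs, ys)
    d = np.stack([X - mu[0], Y - mu[1]], axis=-1)
    P = np.linalg.inv(cov)
    m = np.einsum('...i,ij,...j->...', d, P, d)
    dens = np.exp(-0.5 * m) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return dens.sum() * (x1 - x0) * (y1 - y0) / n**2

cov = cov_from_ellipse(300.0, 150.0, np.deg2rad(30.0))   # metres, illustrative
p = cell_probability((0.0, 0.0), cov, -50.0, 50.0, -50.0, 50.0)
```

Summing such per-cell masses over all flashes yields a probabilistic density map rather than a simple point count.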

Steffen Wollstadt;Paco López-Dekker;Francesco De Zan;Marwan Younis; "Design Principles and Considerations for Spaceborne ATI SAR-Based Observations of Ocean Surface Velocity Vectors," vol.55(8), pp.4500-4519, Aug. 2017. This paper presents a methodology to design a spaceborne dual-beam along-track synthetic aperture radar interferometer to retrieve ocean surface velocity vectors. All related aspects and necessary tradeoffs are identified and discussed or reviewed, respectively. This includes a review of the measurement principle and the relation between baseline and sensitivity, the relation between wind and radar backscatter, a discussion of the observation geometry, including the antenna concept, polarization diversity, and all main error contributions. The design methodology consists of a sensitivity-based derivation of explicit instrument requirements from scientific requirements. In turn, this derivation is based on a statistical model for the interferometric phase error. This allows a quantitative, well-grounded instrument design offering an additional degree of freedom to the approach, which we call “noise-equivalent-sigma-zero requirement space.” Crucial tradeoffs for the system design, such as the resolution, the number of independent looks, the minimum wind speed, and the coherence and ambiguities, are pointed out and discussed. Finally, this paper concludes with a single platform system concept operating in Ku-band, which provides the measurement quality needed to achieve a surface velocity estimation accuracy of 5 cm/s, 200-km swath coverage, for 4 <inline-formula> <tex-math notation="LaTeX">$\times$ </tex-math></inline-formula> 4 km2 L2-product resolution and winds starting at 3 m/s.

Shaohui Mei;Jingyu Ji;Junhui Hou;Xu Li;Qian Du; "Learning Sensor-Specific Spatial-Spectral Features of Hyperspectral Images via Convolutional Neural Networks," vol.55(8), pp.4520-4533, Aug. 2017. Convolutional neural network (CNN) is well known for its capability of feature learning and has made revolutionary achievements in many applications, such as scene recognition and target detection. In this paper, its capability of feature learning in hyperspectral images is explored by constructing a five-layer CNN for classification (C-CNN). The proposed C-CNN is constructed by including recent advances in deep learning area, such as batch normalization, dropout, and parametric rectified linear unit (PReLU) activation function. In addition, both spatial context and spectral information are elegantly integrated into the C-CNN such that spatial-spectral features are learned for hyperspectral images. A companion feature-learning CNN (FL-CNN) is constructed by extracting fully connected feature layers in this C-CNN. Both supervised and unsupervised modes are designed for the proposed FL-CNN to learn sensor-specific spatial-spectral features. Extensive experimental results on four benchmark data sets from two well-known hyperspectral sensors, namely airborne visible/infrared imaging spectrometer (AVIRIS) and reflective optics system imaging spectrometer (ROSIS) sensors, demonstrate that our proposed C-CNN outperforms the state-of-the-art CNN-based classification methods, and its corresponding FL-CNN is very effective to extract sensor-specific spatial-spectral features for hyperspectral applications under both supervised and unsupervised modes.

Qi Wei;Marcus Chen;Jean-Yves Tourneret;Simon Godsill; "Unsupervised Nonlinear Spectral Unmixing Based on a Multilinear Mixing Model," vol.55(8), pp.4534-4544, Aug. 2017. In the community of remote sensing, nonlinear mixture models have recently received particular attention in hyperspectral image processing. In this paper, we present a novel nonlinear spectral unmixing method following the recent multilinear mixing model of Heylen and Scheunders, which includes an infinite number of terms related to interactions between different endmembers. The proposed unmixing method is unsupervised in the sense that the endmembers are estimated jointly with the abundances and other parameters of interest, i.e., the transition probability of undergoing further interactions. Nonnegativity and sum-to-one constraints are imposed on abundances while only nonnegativity is considered for endmembers. The resulting unmixing problem is formulated as a constrained nonlinear optimization problem, which is solved by a block coordinate descent strategy, consisting of updating the endmembers, abundances, and transition probability iteratively. The proposed method is evaluated and compared with existing linear and nonlinear unmixing methods for both synthetic and real hyperspectral data sets acquired by the airborne visible/infrared imaging spectrometer sensor. The advantage of using nonlinear unmixing as opposed to linear unmixing is clearly shown in these examples.
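The multilinear mixing model of Heylen and Scheunders has a closed form in terms of the linear mixture z = Ea and the transition probability P of undergoing a further interaction; a forward-model sketch with toy endmembers (no unmixing loop, which in the paper is a block coordinate descent):

```python
import numpy as np

def mlm_forward(E, a, P):
    """Multilinear mixing model reflectance, y = (1 - P) z / (1 - P z),
    with z = E a the linear mixture.

    E : (bands, endmembers) endmember matrix; a : abundances (nonnegative,
    sum-to-one); P : interaction probability. With P = 0 the model reduces
    to the linear mixing model.
    """
    z = E @ a
    return (1.0 - P) * z / (1.0 - P * z)

E = np.array([[0.1, 0.6],
              [0.3, 0.4],
              [0.5, 0.2]])          # toy 3-band, 2-endmember library
a = np.array([0.7, 0.3])
y_lin = mlm_forward(E, a, P=0.0)
y_nl = mlm_forward(E, a, P=0.3)
```

For reflectances below one, the multilinear terms darken the pixel relative to the linear prediction, which is the nonlinearity the unmixing must account for.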

Cheng Wang;Liang Chen;Lu Liu; "A New Analytical Model to Study the Ionospheric Effects on VHF/UHF Wideband SAR Imaging," vol.55(8), pp.4545-4557, Aug. 2017. With a view to detecting foliage-obscured/ground-obscured targets on a global scale, low frequency (i.e., very high frequency (VHF)/ultrahigh frequency (UHF) band) and wide bandwidth is a trend in future spaceborne synthetic aperture radar (SAR) system design. However, due to the dispersive nature of the ionosphere, VHF/UHF wide bandwidth SAR signals inevitably experience adverse effects. In contrast to narrow bandwidth SAR at VHF/UHF, quadratic and cubic ionospheric phase errors will introduce noticeable effects on future wide-bandwidth SAR systems. Traditional evaluation models based on Taylor series expansion may become inaccurate when an extremely wide bandwidth is considered. Focusing on this issue, this paper first briefly discusses the shortcoming of the Taylor series expansion of ionospheric phase errors. Then, a new analytical model based on Legendre orthogonal polynomials is developed, which is expected to be widely applicable, especially for low-frequency and wide-bandwidth SAR systems. Finally, numerical simulations and evaluations show the superiority of the new model over previous models based on Taylor series expansion.
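The Legendre-expansion idea can be sketched by fitting the dispersive ionospheric phase, which varies as 1/f across the band, with numpy's Legendre module after mapping the band to [−1, 1]; the center frequency, bandwidth, and constant K below are illustrative, not the paper's system parameters:

```python
import numpy as np
from numpy.polynomial import legendre

# Dispersive ionospheric phase ~ K / f across a wide VHF band.
# K is an illustrative constant standing in for the TEC-dependent factor.
f0, B = 300e6, 200e6                        # centre frequency, bandwidth (Hz)
f = np.linspace(f0 - B / 2, f0 + B / 2, 501)
K = 1.0e10
phase = K / f                               # phase error in radians

# Map the band to [-1, 1] and fit a degree-3 Legendre expansion.
x = 2 * (f - f0) / B
coef = legendre.legfit(x, phase, deg=3)
resid = phase - legendre.legval(x, coef)
```

Because the Legendre polynomials are orthogonal over the whole band, a low-order expansion controls the error across the full bandwidth instead of only near the center frequency, which is where a truncated Taylor series degrades.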

David M. Tratt;Stephen J. Young;John A. Hackwell;Donald J. Rudy;David W. Warren;Adam G. Vore;Patrick D. Johnson; "MAHI: An Airborne Mid-Infrared Imaging Spectrometer for Industrial Emissions Monitoring," vol.55(8), pp.4558-4566, Aug. 2017. An airborne hyperspectral imager operating in the midwave-infrared spectral range is described. The Mid-infrared Airborne Hyperspectral Imager (MAHI) features 3.3-nm spectral sampling over its 3.3–<inline-formula> <tex-math notation="LaTeX">$5.4~\mu \text{m}$ </tex-math></inline-formula> wavelength range. MAHI operates in a roll-stabilized pushbroom configuration with 480 cross-track pixels, each with an instantaneous field-of-view (IFOV) of 0.94 mrad, to provide for a total FOV of 25.8°. The sensor spectroradiometric performance is illustrated by case studies featuring the detection, identification, and quantification of a number of fugitive gaseous emissions from industrial sources.

Guillaume Tochon;Jocelyn Chanussot;Mauro Dalla Mura;Andrea L. Bertozzi; "Object Tracking by Hierarchical Decomposition of Hyperspectral Video Sequences: Application to Chemical Gas Plume Tracking," vol.55(8), pp.4567-4585, Aug. 2017. It is now possible to collect hyperspectral video sequences at a near real-time frame rate. The wealth of spectral, spatial, and temporal information of those sequences is appealing for various applications, but classical video processing techniques must be adapted to handle the high dimensionality and huge size of the data to process. In this paper, we introduce a novel method based on the hierarchical analysis of hyperspectral video sequences to perform object tracking. This latter operation is tackled as a sequential object detection process, conducted on the hierarchical representation of the hyperspectral video frames. We apply the proposed methodology to the chemical gas plume tracking scenario and compare its performances with state-of-the-art methods, for two real hyperspectral video sequences, and show that the proposed approach performs at least equally well.

Derek A. Houtz;William Emery;Dazhen Gu;Karl Jacob;Axel Murk;David K. Walker;Richard J. Wylde; "Electromagnetic Design and Performance of a Conical Microwave Blackbody Target for Radiometer Calibration," vol.55(8), pp.4586-4596, Aug. 2017. A conical cavity has been designed and fabricated for use as a broadband passive microwave calibration source, or blackbody, at the National Institute of Standards and Technology. The blackbody will be used as a national primary standard for brightness temperature and will allow for the prelaunch calibration of spaceborne radiometers and calibration of ground-based systems to provide traceability among radiometric data. The conical geometry provides performance independent of polarization and minimizes reflections and standing waves, thus yielding a high microwave emissivity. The conical blackbody has advantages over typical pyramidal array geometries, including reduced temperature gradients and excellent broadband electromagnetic performance over more than a frequency decade. The blackbody is designed for use between 18 and 230 GHz, at temperatures between 80 and 350 K, and is vacuum compatible. To approximate theoretical blackbody behavior, the design maximizes emissivity and thus minimizes reflectivity. A newly developed microwave absorber is demonstrated that uses a cryogenically compatible, thermally conductive two-part epoxy with magnetic carbonyl iron (CBI) powder loading. We measured the complex permittivity and permeability for different CBI-loading percentages; the conical absorber was then designed and optimized with geometric optics and finite-element modeling, and finally, the reflectivity of the fabricated structure was measured. We demonstrated normal-incidence reflectivity considerably below −40 dB at all relevant remote sensing frequencies.

Gaofei Yin;Ainong Li;Wei Zhao;Huaan Jin;Jinhu Bian;Shengbiao Wu; "Modeling Canopy Reflectance Over Sloping Terrain Based on Path Length Correction," vol.55(8), pp.4597-4609, Aug. 2017. Sloping terrain induces distortion of canopy reflectance (CR), and the retrieval of biophysical variables from remote sensing data needs to account for topographic effects. We developed a 1-D model (the path length correction (PLC)-based model) for simulating CR over sloping terrain. The effects of sloping terrain on single-order and diffuse scatterings are accounted for by PLC and modification of the fraction of incoming diffuse irradiance, respectively. The PLC model was validated via both Monte Carlo and remote sensing image simulations. The comparison with the Monte Carlo simulation revealed that the PLC model can capture the pattern of slope-induced reflectance distortion with high accuracy (red band: <inline-formula> <tex-math notation="LaTeX">$R^{2} = 0.88$ </tex-math></inline-formula>; root-mean-square error (RMSE) = 0.0045; relative RMSE (RRMSE) = 15%; near-infrared (NIR) band: <inline-formula> <tex-math notation="LaTeX">$R^{2} = 0.79$ </tex-math></inline-formula>; RMSE = 0.041; RRMSE = 16%). The comparison of the PLC-simulated results with remote sensing observations acquired by the Landsat8-OLI sensor revealed an accuracy similar to that with the Monte Carlo simulation (red band: <inline-formula> <tex-math notation="LaTeX">$R^{2} = 0.83$ </tex-math></inline-formula>; RMSE = 0.0053; RRMSE = 13%; NIR band: <inline-formula> <tex-math notation="LaTeX">$R^{2} = 0.77$ </tex-math></inline-formula>; RMSE = 0.023; RRMSE = 8%). To further validate the PLC model, we used it to implement topographic normalization; the results showed a large reduction in topographic effects after normalization, which implied that the PLC model captures reflectance variations caused by terrain.
The PLC model provides a promising tool to improve the simulation of CR and the retrieval of biophysical variables over mountainous regions.
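The PLC model itself involves canopy radiative transfer, but the slope-induced distortion it corrects can be illustrated with the classic cosine topographic normalization (explicitly not the authors' model); the sun and slope geometry below are toy values:

```python
import numpy as np

def local_incidence_cos(sun_zen, sun_az, slope, aspect):
    """Cosine of the local solar incidence angle on a tilted surface.

    Standard terrain-illumination geometry; all angles in radians.
    """
    return (np.cos(sun_zen) * np.cos(slope)
            + np.sin(sun_zen) * np.sin(slope) * np.cos(sun_az - aspect))

def cosine_correction(reflectance, sun_zen, sun_az, slope, aspect):
    """Classic cosine topographic normalization (not the PLC model):
    rho_flat = rho_slope * cos(theta_s) / cos(i)."""
    ci = local_incidence_cos(sun_zen, sun_az, slope, aspect)
    return reflectance * np.cos(sun_zen) / ci

sz, sa = np.deg2rad(30.0), np.deg2rad(135.0)
# A sun-facing 20 deg slope receives more irradiance, so it appears brighter.
ci_facing = local_incidence_cos(sz, sa, np.deg2rad(20.0), sa)
rho_obs = 0.30 * ci_facing / np.cos(sz)     # simulated slope-distorted value
rho_corr = cosine_correction(rho_obs, sz, sa, np.deg2rad(20.0), sa)
```

Normalization recovers the flat-terrain reflectance from the slope-brightened observation; the PLC model refines this picture by tracking path lengths through the canopy rather than surface illumination alone.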

Nicolas Brodu; "Super-Resolving Multiresolution Images With Band-Independent Geometry of Multispectral Pixels," vol.55(8), pp.4610-4617, Aug. 2017. A new resolution enhancement method is presented for multispectral and multiresolution images, such as those provided by the Sentinel-2 satellites. Starting from the highest resolution bands, band-dependent information (reflectance) is separated from information that is common to all bands (geometry of scene elements). This model is then applied to unmix low-resolution bands, preserving their reflectance, while propagating band-independent information to preserve the subpixel details. A reference implementation is provided, with an application example for super-resolving Sentinel-2 data.
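The core idea of propagating band-independent geometry while preserving band-dependent reflectance can be caricatured as redistributing each low-resolution value over its subpixels in proportion to a high-resolution intensity pattern; a toy sketch, not the reference implementation:

```python
import numpy as np

def propagate_geometry(lowres_value, hires_patch):
    """Redistribute one low-resolution value over its subpixels using the
    normalized geometry (relative intensity pattern) of a high-resolution
    band, preserving the low-resolution mean. A toy sketch of the idea."""
    w = hires_patch / hires_patch.mean()
    return lowres_value * w

patch = np.array([[1.0, 3.0], [2.0, 2.0]])   # hypothetical 2x2 hi-res geometry
sub = propagate_geometry(0.4, patch)
```

The subpixel values inherit the high-resolution spatial detail while their mean remains the observed low-resolution reflectance, the invariant the unmixing must respect.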

Hadi AliAkbarpour;Kannappan Palaniappan;Guna Seetharaman; "Parallax-Tolerant Aerial Image Georegistration and Efficient Camera Pose Refinement—Without Piecewise Homographies," vol.55(8), pp.4618-4637, Aug. 2017. We describe a fast and efficient camera pose refinement and Structure from Motion (SfM) method for sequential aerial imagery with applications to georegistration and 3-D reconstruction. Inputs to the system are 2-D images combined with initial noisy camera metadata measurements, available from on-board sensors (e.g., camera, global positioning system, and inertial measurement unit). Georegistration is required to stabilize the ground-plane motion to separate camera-induced motion from object motion to support vehicle tracking in aerial imagery. In the proposed approach, we recover accurate camera pose and (sparse) 3-D structure using bundle adjustment for sequential imagery (BA4S) and then stabilize the video from the moving platform by analytically solving for the image-plane-to-ground-plane homography transformation. Using this approach, we avoid relying upon image-to-image registration, which requires estimating feature correspondences (i.e., matching) followed by warping between images (in a 2-D space), an error-prone process for complex scenes with parallax, appearance, and illumination changes. Both our SfM (BA4S) and our analytical ground-plane georegistration method avoid the use of iterative consensus combinatorial methods like RANdom SAmple Consensus, which is a core part of many published approaches. BA4S is very efficient for long sequential imagery and is more than 130 times faster than VisualSfM, 35 times faster than MavMap, and about 274 times faster than Pix4D. Various experimental results demonstrate the efficiency and robustness of the proposed pipeline for the refinement of camera parameters in sequential aerial imagery and georegistration.
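Once the image-plane-to-ground-plane homography is solved analytically, stabilization reduces to mapping points through a 3 × 3 matrix in homogeneous coordinates; a minimal sketch of that mapping (the translation-only H is a toy example):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography (image plane -> ground plane).

    Points are promoted to homogeneous coordinates and de-homogenized
    afterwards by the third coordinate.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

pts = np.array([[0.0, 0.0], [10.0, 5.0]])
H_shift = np.array([[1.0, 0.0, 3.0],
                    [0.0, 1.0, -2.0],
                    [0.0, 0.0, 1.0]])   # toy translation-only homography
mapped = apply_homography(H_shift, pts)
```

In the pipeline, H comes from the refined camera pose rather than from image-to-image feature matching, which is what makes the approach parallax tolerant.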

Benson Kipkemboi Kenduiywo;Damian Bargiel;Uwe Soergel; "Higher Order Dynamic Conditional Random Fields Ensemble for Crop Type Classification in Radar Images," vol.55(8), pp.4638-4654, Aug. 2017. The rising food demand requires regular agriculture land-cover updates to support food security initiatives. Agricultural areas undergo dynamic changes throughout the year, which manifest varying radar backscatter due to crop phenology. Certain crops can show similar backscatter if their phenology intersects, but vary later when their phenology differs. Hence, classification techniques based on single-date remote sensing images may not offer optimal results for crops with similar phenology. Moreover, methods that stack images within a cropping season as composite bands for classification limit discrimination to one feature space vector, which can suffer from overlapping classes. Nonetheless, phenology can aid classification of crops, because their backscatter varies with time. This paper fills this gap by introducing a crop sequence-based ensemble classification method where expert knowledge and TerraSAR-X multitemporal image-based phenological information are explored. We designed first-order and higher order dynamic conditional random fields (DCRFs) including an ensemble technique. The DCRF models have a duplicated structure of temporally connected CRFs, which encode image-based phenology and expert-based phenology knowledge during classification. On the other hand, our ensemble generates an optimal map based on class posterior probabilities estimated by DCRFs. These techniques improved crop delineation at each epoch, with higher order DCRFs (HDCRFs) giving the best accuracy. The ensemble method was evaluated against the conventional technique of stacking multitemporal images as composite bands for classification using maximum likelihood classifier (MLC) and CRFs. 
It surpassed MLC and CRFs based on class posterior probabilities estimated by both first-order DCRFs and HDCRFs.

Jiapeng Yin;Christine M. H. Unal;Herman W. J. Russchenberg; "Narrow-Band Clutter Mitigation in Spectral Polarimetric Weather Radar," vol.55(8), pp.4655-4667, Aug. 2017. In this paper, a new clutter suppression method, named the moving double spectral linear depolarization ratio (MDsLDR) filter, is put forward to mitigate narrow-band clutter in weather radars. The narrow-band clutter observed in the Doppler domain includes: 1) stationary clutter such as ground clutter and 2) nonstationary clutter such as artifacts caused by the radar system itself or external sources. These artifacts are difficult to remove, because they are not confined to specific azimuth and range bins. Based on the difference in the spectral-polarization feature and the spectral continuity of precipitation and clutter, the MDsLDR filter can remove ground clutter, artifacts, and noise. The performance of the newly proposed filter is assessed using data collected by the Doppler-polarimetric IRCTR Drizzle Radar. Three precipitation cases are considered in this paper: moderate/light precipitation, convective precipitation with hook-echo signature, and light precipitation with severe artifact contamination. Furthermore, the implementation of the MDsLDR filter requires relatively low computation complexity, so the filter can be operated in real time.

Guanghui Li;Yue Li;Baojun Yang; "Seismic Exploration Random Noise on Land: Modeling and Application to Noise Suppression," vol.55(8), pp.4668-4681, Aug. 2017. In seismic exploration, random noise suppression is one of the key problems in seismic data processing. For random noise attenuation, the most important prerequisite is an understanding of how seismic random noise is generated and how it propagates. Seismic random noise is considered as temporal and spatial random processes, and it can be analyzed only qualitatively for now, due to its high variability. In this paper, we classify seismic random noise sources by their generation factors and simulate the random noise of the desert in West China. According to Green’s function, it can be assumed that seismic random noise sources are point-like sources distributed around geophones. A seismic random noise record is taken as the superimposed wave field excited by all the independent sources in a homogeneous isotropic half-space. Based on wind vibration theory and preliminary studies of ambient vibrations, the noise source functions are determined. We obtain the waveforms of different kinds of noise by solving the inhomogeneous wave equations and analyze their characteristics qualitatively and quantitatively. A synthetic seismic record covering 1.6 s in time and 250 m in distance is obtained, and its characteristics are compared with a real noise record in the time domain and the space domain, respectively. The comparative results show the same characteristics for the simulated noise and the real noise, which demonstrates the feasibility of the proposed method. From the noise modeling, it is known that near-field cultural noise is the main component of the random noise in the desert, on the basis of which complex diffusion filtering is selected.
The results of complex diffusion filtering are compared with those of time–frequency peak filtering, a popular method for seismic random noise suppression in recent years. The comparison shows that complex diffusion filtering is more suitable for the noise of the desert in the Tarim Basin. This result proves that seismic random noise modeling can provide guidance for noise attenuation, and it lays a foundation for studying the propagation characteristics and better attenuation of seismic random noise in the future.

Luciano Alparone;Andrea Garzelli;Gemine Vivone; "Intersensor Statistical Matching for Pansharpening: Theoretical Issues and Practical Solutions," vol.55(8), pp.4682-4695, Aug. 2017. In this paper, the authors investigate the statistical matching of the panchromatic (Pan) image to the multispectral (MS) bands, also known as histogram matching, for the two main classes of pansharpening methods, i.e., those based on component substitution (CS), or spectral methods, and those based on multiresolution analysis (MRA), or spatial methods. Hybrid methods combining CS with MRA, like the widespread additive wavelet luminance proportional (AWLP), are also investigated. It is shown that all spectral, spatial, and hybrid methods must perform a dynamics matching of the enhancing Pan to the individual MS bands (for MRA) or to a combination of them, the component to be substituted (for CS). For hybrid methods, the problem is more complex, and both types of histogram matching may be suitable. Such an intersensor balance may be either explicit or implicitly performed by the detail-injection model, e.g., the popular projective and multiplicative injection models. An experimental setup exploiting IKONOS and WorldView-2 data sets demonstrates that a correct histogram matching is the key to attaining extra performance from established methods. As a first result of this paper, the AWLP method has been revisited and its performance significantly improved by simply performing the histogram matching of Pan to the individual MS bands, rather than to the intensity component, thereby giving up the original proportionality feature.
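
The dynamics matching the abstract refers to is, in its simplest first-order form, an adjustment of the Pan image's mean and standard deviation to those of a target MS band. A minimal sketch of that first-moment form, assuming flattened 1-D lists of pixel values (the function name and this moment-based variant are illustrative, not the authors' exact procedure):

```python
def match_histogram_moments(pan, ms_band):
    """Match the mean and standard deviation of the Pan image to an MS band.

    First-order 'dynamics matching' as commonly used in pansharpening;
    full histogram matching via cumulative distributions is an alternative.
    """
    n, m = len(pan), len(ms_band)
    mean_p = sum(pan) / n
    mean_m = sum(ms_band) / m
    std_p = (sum((v - mean_p) ** 2 for v in pan) / n) ** 0.5
    std_m = (sum((v - mean_m) ** 2 for v in ms_band) / m) ** 0.5
    gain = std_m / std_p
    # Shift and scale Pan so its first two moments equal those of the band.
    return [(v - mean_p) * gain + mean_m for v in pan]
```

In a CS scheme the same routine would be applied with the intensity component as the target instead of an individual band.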

Yangkang Chen;Yatong Zhou;Wei Chen;Shaohuan Zu;Weilin Huang;Dong Zhang; "Empirical Low-Rank Approximation for Seismic Noise Attenuation," vol.55(8), pp.4696-4711, Aug. 2017. The low-rank approximation method is one of the most effective approaches recently proposed for attenuating random noise in seismic data. However, the low-rank approximation approach assumes that the seismic data has low rank for its <inline-formula> <tex-math notation="LaTeX">$f-x$ </tex-math></inline-formula> domain Hankel matrix. This assumption is seldom satisfied for complicated seismic data. Moreover, the low-rank approximation approach is usually implemented in local windows in order to satisfy the principal assumption required by the algorithm itself. When implemented in local windows, the rank is even more difficult to choose because the seismic data is highly nonstationary in both time and spatial dimensions, and the optimal ranks for different local windows are inconsistent with each other. In order to preserve enough useful energy, one needs to set a relatively large rank when implementing the low-rank approximation method, which makes the traditional method incapable of attenuating enough noise. Considering these difficulties, we propose an empirical low-rank approximation approach. We adaptively decompose the input data into several components that have truly low ranks via empirical mode decomposition. An interpretation of the proposed empirical low-rank approximation method is that we empirically decompose a multi-dip seismic image that is not of low rank into multiple single-dip seismic images that are individually low rank. We use both synthetic and field data examples to demonstrate the superior performance of the proposed approach over traditional alternatives.
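
The numerical core of methods in this family is a truncated SVD of a Hankel matrix built from each frequency slice. A minimal sketch of that building block only, not the authors' empirical-mode pipeline (function names are illustrative):

```python
import numpy as np

def hankel_matrix(x, L):
    """Build an L x (len(x)-L+1) Hankel matrix from a 1-D signal."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def rank_k_approx(M, k):
    """Best rank-k approximation of M in the least-squares sense
    (Eckart-Young), obtained by truncating the SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

For a single dipping event the Hankel matrix of a frequency slice is (nearly) rank-1, which is exactly why decomposing a multi-dip image into single-dip components makes low-rank truncation effective.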

Hanwen Yu;Yang Lan;Junyi Xu;Daoxiang An;Hyongki Lee; "Large-Scale ${L} ^{0}$ -Norm and ${L} ^{1}$ -Norm 2-D Phase Unwrapping," vol.55(8), pp.4712-4728, Aug. 2017. Two-dimensional phase unwrapping (PU) is a crucial processing step of synthetic aperture radar interferometry (InSAR). With the rapid advance of InSAR technology, the scale of interferograms is becoming increasingly larger. When the size of the input interferogram exceeds computer hardware capabilities, PU becomes more problematic in terms of computational and memory requirements. In the case of “big-data” PU, the input interferogram needs to be first tiled into a number of subinterferograms, unwrapped separately, and then spliced together. Hence, whether the PU result of each subinterferogram is consistent with that of the whole interferogram is critical to the large-scale PU process. To effectively solve this problem, the <inline-formula> <tex-math notation="LaTeX">$L^{1}$ </tex-math></inline-formula>-norm envelope-sparsity theorem, which gives a sufficient condition to exactly guarantee the consistency between local and global <inline-formula> <tex-math notation="LaTeX">$L^{1}$ </tex-math></inline-formula>-norm PU solutions, is put forward and proved. Furthermore, the <inline-formula> <tex-math notation="LaTeX">$L^{0}$ </tex-math></inline-formula>-norm envelope-sparsity theorem, which gives a sufficient condition to exactly guarantee the consistency between local and global <inline-formula> <tex-math notation="LaTeX">${L} ^{0}$ </tex-math></inline-formula>-norm PU solutions, is also proposed and proved. Afterward, based on these two theorems, two tiling strategies are put forward for the large-scale <inline-formula> <tex-math notation="LaTeX">$L^{0}$ </tex-math></inline-formula>-norm and <inline-formula> <tex-math notation="LaTeX">$L^{1}$ </tex-math></inline-formula>-norm PU methods. 
In addition, this paper presents the concepts of the tiling accuracy and the tiling resolution, which are the criteria used to evaluate the effectiveness of a tiling strategy, and we use them to quantitatively analyze the aforementioned tiling strategies. Both theoretical analysis and experimental results show that the proposed tiling strategies are effective for the large-scale <inline-formula> <tex-math notation="LaTeX">$L^{0}$ </tex-math></inline-formula>-norm and <inline-formula> <tex-math notation="LaTeX">$L^{1}$ </tex-math></inline-formula>-norm PU problems.

Jingxiang Yang;Yong-Qiang Zhao;Jonathan Cheung-Wai Chan; "Learning and Transferring Deep Joint Spectral–Spatial Features for Hyperspectral Classification," vol.55(8), pp.4729-4742, Aug. 2017. Feature extraction is of significance for hyperspectral image (HSI) classification. Compared with conventional hand-crafted feature extraction, deep learning can automatically learn features with discriminative information. However, two issues exist in applying deep learning to HSIs. One issue is how to jointly extract spectral and spatial features, and the other is how to train the deep model when training samples are scarce. In this paper, a deep convolutional neural network with a two-branch architecture is proposed to extract joint spectral–spatial features from HSIs. The two branches of the proposed network are devoted to features from the spectral domain and the spatial domain, respectively. The learned spectral and spatial features are then concatenated and fed to fully connected layers to extract the joint spectral–spatial features for classification. When the training samples are limited, we investigate transfer learning to improve the performance. Low and mid layers of the network are pretrained and transferred from other data sources; only the top layers are trained with limited training samples extracted from the target scene. Experiments on Airborne Visible/Infrared Imaging Spectrometer and Reflective Optics System Imaging Spectrometer data demonstrate that the learned deep joint spectral–spatial features are discriminative, and competitive classification results can be achieved when compared with state-of-the-art methods. The experiments also reveal that the transferred features boost the classification performance.

Ji Zhou;Xiaodong Zhang;Wenfeng Zhan;Frank-Michael Göttsche;Shaomin Liu;Folke-Sören Olesen;Wenxing Hu;Fengnan Dai; "A Thermal Sampling Depth Correction Method for Land Surface Temperature Estimation From Satellite Passive Microwave Observation Over Barren Land," vol.55(8), pp.4743-4756, Aug. 2017. Satellite passive microwave (MW) remote sensing has a better ability to observe land surface temperature (LST) in cloudy conditions than thermal infrared (TIR) remote sensing. Due to the much greater thermal sampling depth (TSD) of MW, currently available MW LST do not represent the thermodynamic temperature of the land surface and, therefore, yield systematic differences from TIR LST. The TSD effect is particularly prominent over barren land and sparsely vegetated surfaces. Here, we present a novel TSD correction (TSDC) method to estimate the MW LST over barren land. The core of this method is a new formulation of the passive MW radiation balance equation, which allows linking MW effective physical temperature to the soil temperature at a specific depth. The TSDC method is applied to the 6.9-GHz channel of AMSR-E in northwestern China-western Mongolia and western Namibia (WN). Evaluation shows that LST estimated by the TSDC method agrees well with the MODIS LST. Validation based on in situ LSTs measured at the Gobabeb site in WN demonstrates the high accuracy of the TSDC method: it yields a root mean squared error of about 2–3 K and slight systematic error. In contrast, other methods without TSDC yield lower accuracies and significantly underestimate LST. Therefore, the TSDC method has the potential to generate MW LST with the same physical meaning and similar accuracy as TIR LST. This study provides implications for developing practical and accurate methods to estimate MW LST over other land surface types and at the global scale.

Gang Chen;Han Jin;Jing-Ye Yan;Xiao Cui;Shao-Dong Zhang;Chun-Xiao Yan;Guo-Tao Yang;Ai-Lan Lan;Wan-Lin Gong;Lei Qiao;Chen Wu;Jin Wang; "Hainan Coherent Scatter Phased Array Radar (HCOPAR): System Design and Ionospheric Irregularity Observations," vol.55(8), pp.4757-4765, Aug. 2017. The Hainan coherent scatter phased array radar (HCOPAR) is one of the most important radio systems of the Chinese Meridian Space Weather Monitoring Project (Meridian Project). The radar is located at Fuke station (19.5°N, 109.1°E, dip latitude 8.1°N), Hainan island, China, to observe the field-aligned irregularities in ionospheric <inline-formula> <tex-math notation="LaTeX">$E$ </tex-math></inline-formula>- and <inline-formula> <tex-math notation="LaTeX">$F$ </tex-math></inline-formula>-regions. It is operated with a peak power of 54 kW and uses the Barker codes for good sensitivity. Its antenna array is composed of <inline-formula> <tex-math notation="LaTeX">$18 \times 4$ </tex-math></inline-formula> five-element Yagi antennas and is used for both transmitting and receiving. The Yagi antennas are arranged in a rectangular grid 100 m long and 20 m wide, covering an area of 2000 m2. This antenna arrangement forms seven radar beams from −22.5° to 22.5° in azimuth with an angle step of 7.5°. The central beam points to due geographic north. A brief description of the radar system, data processing, and signal characteristics is also provided. The intensity, Doppler velocities, and spectral widths of the quasi-periodic echoes measured in the nighttime of June 23, 2013 are presented as typical data of type II irregularities in the <inline-formula> <tex-math notation="LaTeX">$E$ </tex-math></inline-formula>-region. The data recorded by the HCOPAR in the fan-beam mode on September 15, 2013 are displayed as the typical <inline-formula> <tex-math notation="LaTeX">$F$ </tex-math></inline-formula>-region plasma bubbles.

Guosheng Zhang;Xiaofeng Li;William Perrie;Paul A. Hwang;Biao Zhang;Xiaofeng Yang; "A Hurricane Wind Speed Retrieval Model for C-Band RADARSAT-2 Cross-Polarization ScanSAR Images," vol.55(8), pp.4766-4774, Aug. 2017. A hybrid backscattering model is built to provide a consistent description for C-band VH- and VV-polarized normalized radar cross sections (NRCSs). Ocean surface co- and cross-polarized NRCS are both treated as a sum of Bragg and non-Bragg scattering components. To better understand the synthetic aperture radar (SAR) observed NRCS signals under high-wind conditions, five C-band RADARSAT-2 dual-polarization SAR hurricane images and the collocated wind vectors measured by the airborne stepped-frequency microwave radiometer (SFMR) are collected. Based on the match-up data, we add a non-Bragg term in the composite Bragg theory to explain the discrepancy between the measurements in the cross-polarization channel and the existing theory results. The non-Bragg scattering to Bragg scattering ratio (<inline-formula> <tex-math notation="LaTeX">$B_{r}$ </tex-math></inline-formula>) is found to be a constant. We build the hybrid backscattering model with <inline-formula> <tex-math notation="LaTeX">$B_{r}$ </tex-math></inline-formula> and establish a relationship between the cross-polarization NRCS and the radar incidence angle under different wind conditions. The NRCS dependence on incidence angle is simulated by the hybrid backscattering model. Finally, a C-band Cross-Polarization Coupled-Parameters Ocean (C-3PO) model is developed to retrieve hurricane winds using VH-polarized ScanSAR by including the radar incidence angle. The collocated SAR and SFMR data sets are separated into two parts: data set-A, for hybrid backscattering model derivation and C-3PO model coefficients tuning, and data set-B, for hurricane wind validation. 
C-3PO model validation results show that the model is suitable for ocean surface wind mapping from RADARSAT-2 cross-polarization ScanSAR images. The retrieval has a root-mean-square error less than 3 m/s for wind speed up to 40 m/s.

Souleyman Chaib;Huan Liu;Yanfeng Gu;Hongxun Yao; "Deep Feature Fusion for VHR Remote Sensing Scene Classification," vol.55(8), pp.4775-4784, Aug. 2017. The rapid development of remote sensing technology allows us to acquire images with high and very high resolution (VHR). VHR imagery scene classification has become an important and challenging problem. In this paper, we introduce a framework for VHR scene understanding. First, the pretrained visual geometry group network (VGG-Net) model is used as a deep feature extractor to extract informative features from the original VHR images. Second, we select the fully connected layers of VGG-Net, regarding each layer as a separate feature descriptor, and then combine them to construct the final representation of the VHR image scenes. Third, discriminant correlation analysis (DCA) is adopted as the feature fusion strategy to further refine the original features extracted from VGG-Net, which allows a more efficient fusion at smaller cost than traditional feature fusion strategies. We apply our approach to three challenging data sets: 1) the UC MERCED data set, which contains 21 different aerial scene categories with submeter resolution; 2) the WHU-RS data set, which contains 19 challenging scene categories with various resolutions; and 3) the Aerial Image data set, which contains 10 000 images in 30 challenging scene categories with various resolutions. The experimental results demonstrate that our proposed method outperforms the state-of-the-art approaches. The feature fusion technique achieves a higher accuracy than solely using the raw deep features. Moreover, the proposed method based on DCA fusion produces informative features that describe the image scenes with much lower dimension.

Christopher McGuinness;Eric Balster; "Enabling Reliable Change Detection for Independently Compressed SAR Images," vol.55(8), pp.4785-4794, Aug. 2017. This paper develops distortion metrics for compressed synthetic aperture radar (SAR) imagery from change detection test statistics. These metrics are used to predict lossy image compression’s impact on change detection performance. The metrics do not require the intended change detection comparison image to provide these benefits. An SAR compression system leveraging the distortion metrics is proposed. The system generates a bad-pixel mask highlighting potential false alarms that are generated due to compression and are subsequently discarded in the change detection process. The proposed system’s performance is demonstrated through noncoherent change detection analysis after JPEG2000 and JPEG image compression. Similarly, a coherent change detection system is evaluated after JPEG2000 image compression. For noncoherent change detection at large compression ratios (CRs) using JPEG2000, the proposed system provides a 33% reduction in false alarms at a 0.1 probability of detection as well as the ability to maintain near-distortionless false alarm rates across a wide range of CRs. At a 0.1 probability of detection for coherent change detection, the system provides a 37% reduction in false alarms at modest CRs. The coherent change detection system is also demonstrated to maintain low false alarm rates across a range of CRs.

Gui Gao;Gongtao Shi; "Ship Detection in Dual-Channel ATI-SAR Based on the Notch Filter," vol.55(8), pp.4795-4810, Aug. 2017. Synthetic aperture radar (SAR) in along-track interferometry (ATI) mode has been extensively applied in velocity measurements of ocean currents and ship detection. A notch filter was recently proposed and was demonstrated to be a promising tool for ship detection exploiting quad- or dual-polarization SAR information. In this paper, the investigation of the notch filter is extended to the dual-channel ATI-SAR mechanism for ship detection on the ocean surface. First, a theoretical proof that the interferometric magnitude performs better in ship detection than single-channel amplitude/intensity is given based on the signal-clutter-ratio improvement, which validates the advantages of interferometric SAR against conventional single-channel SAR for the purpose of ship observations on the ocean surface. Second, based on the proof, the version of the notch filter that is suitable for the ATI-SAR, that is, the interferometric notch filter (INF), is proposed. We then analyze the statistical models of the INF distance, which can facilitate the adaptive realization of INF for ship detection in ATI-SAR. Finally, the experimental results for dual-channel ATI-SAR data measured by NASA/JPL L-band AIRSAR verify the accuracy of the theoretical analysis and effectiveness of the INF.

Gui Gao;Gongtao Shi; "CFAR Ship Detection in Nonhomogeneous Sea Clutter Using Polarimetric SAR Data Based on the Notch Filter," vol.55(8), pp.4811-4824, Aug. 2017. Synthetic aperture radar (SAR) ship detection is an important research topic in the field of maritime applications. The geometrical perturbation-polarimetric notch filter (GP–PNF) was recently proposed to be a promising tool and its usefulness in exploiting polarimetric SAR information for ship detection was demonstrated. The work in this paper is devoted to developing a statistical model of the filter in nonhomogeneous sea clutter to achieve constant false alarm rate (CFAR) detection based on the model. First, within the framework of a multiplicative model, the reciprocal of the gamma distribution is used to describe the texture component of sea clutter in nonhomogeneous background. As a result, a statistical model of the GP–PNF is analytically derived and found suitable for sea clutter scenes with a wide range of homogeneity. Second, we theoretically demonstrate that CFAR detection using GP–PNF is unrelated to the parameter in the original GP–PNF. Therefore, a simplified version of the GP–PNF is given. Third, the CFAR threshold of the simplified filter is mathematically derived. Experiments performed on measured L-band ALOS-PALSAR and C-band RADARSAT-2 SAR data verify the good performance of the developed statistical model and demonstrate the usefulness of the CFAR detection based on the simplified filter.

Joan Bartrina-Rapesta;Ian Blanes;Francesc Aulí-Llinàs;Joan Serra-Sagristà;Victor Sanchez;Michael W. Marcellin; "A Lightweight Contextual Arithmetic Coder for On-Board Remote Sensing Data Compression," vol.55(8), pp.4825-4835, Aug. 2017. The Consultative Committee for Space Data Systems (CCSDS) has issued several data compression standards devised to reduce the amount of data transmitted from satellites to ground stations. This paper introduces a contextual arithmetic encoder for on-board data compression. The proposed arithmetic encoder examines at most the causal adjacent neighbors to form the context and uses only bitwise operations to estimate the related probabilities. As a result, the encoder consumes few computational resources, making it suitable for on-board operation. Our coding approach is based on the prediction and mapping stages of the CCSDS-123 lossless compression standard, an optional quantizer stage to yield lossless or near-lossless compression, and our proposed arithmetic encoder. For both lossless and near-lossless compression, the achieved coding performance is superior to that of CCSDS-123, M-CALIC, and JPEG-LS. Considering the entropy encoders alone, the proposed fixed-length codeword coder is slightly better than the MQ and interleaved entropy coders.
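
As a rough illustration of the two ideas the abstract names, contexts formed from causal neighbors and probabilities estimated with bitwise operations, here is a generic sketch. The paper's actual context definition, precision, and adaptation rate are not given here, so every constant and name below is an assumption:

```python
class ContextModel:
    """Adaptive per-context probability of a '1' bit, stored as an integer
    count so that p1 = count / 2**PREC.  Updates use only shifts and adds,
    mimicking hardware-friendly coders."""
    PREC = 12   # probability precision in bits (illustrative)
    RATE = 5    # adaptation rate: larger = slower adaptation (illustrative)

    def __init__(self, n_contexts):
        self.p1 = [1 << (self.PREC - 1)] * n_contexts  # start at p = 0.5

    def update(self, ctx, bit):
        # Move the estimate toward 1 or 0 by a shifted fraction of the gap.
        if bit:
            self.p1[ctx] += ((1 << self.PREC) - self.p1[ctx]) >> self.RATE
        else:
            self.p1[ctx] -= self.p1[ctx] >> self.RATE

def causal_context(west_bit, north_bit):
    """2-bit context index formed from two causal neighbors of a pixel."""
    return (north_bit << 1) | west_bit
```

An arithmetic coder would query `p1[ctx]` before coding each bit and call `update` afterward, so encoder and decoder stay synchronized.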

IEEE Geoscience and Remote Sensing Letters - new TOC (2017 July 20) [Website]

* "Front Cover," vol.14(8), pp.C1-C1, Aug. 2017.*

* "IEEE Geoscience and Remote Sensing Letters publication information," vol.14(8), pp.C2-C2, Aug. 2017.*

* "Table of contents," vol.14(8), pp.1181-1428, Aug. 2017.*

Peng Zhang;Xin Niu;Yong Dou;Fei Xia; "Airport Detection on Optical Satellite Images Using Deep Convolutional Neural Networks," vol.14(8), pp.1183-1187, Aug. 2017. This letter proposes a method using convolutional neural networks (CNNs) for airport detection on optical satellite images. To efficiently build a deep CNN with limited satellite image samples, a transfer learning approach is employed that shares the common image features of natural images. To decrease the computing cost, an efficient region proposal method is proposed based on prior knowledge of the distribution of line segments in an airport. The transfer learning ability of deep CNNs for airport detection on satellite images is evaluated for the first time in this letter. The proposed method was tested on an image data set including 170 different airports and 30 nonairports. The detection rate reaches 88.8% with computation times of a few seconds, a great improvement over other state-of-the-art methods.

Chun Wu;Qianqing Qin;Guorui Ma;Zhitao Fu;Zhenliang Xu; "Large-Rotation-Angle Photogrammetric Resection Based on Least-Squares Homotopy Iteration Method," vol.14(8), pp.1188-1192, Aug. 2017. Inclined imaging is an advanced sensing technique that has been extensively used. Therefore, large-rotation-angle photogrammetric resection has become an important topic. However, traditional iterative methods are limited by requiring good initial values. By contrast, noniterative methods do not require an initial value, although they exhibit relatively low accuracy and robustness. To obtain results with superior precision and universality, this letter proposes an improved approach that modifies the initial value acquisition and the iterative method. The algorithm uses nonlinear iteration to reduce the model error, thereby achieving exceptional convergence for large-rotation-angle photogrammetric resection. Experimental results on real data indicate that the proposed algorithm outperforms previous methods.

Ettien Lazare Kpré;Cyril Decroze; "Passive Coding Technique Applied to Synthetic Aperture Interferometric Radiometer," vol.14(8), pp.1193-1197, Aug. 2017. For real-time and high-resolution radiometric imaging, the synthetic aperture interferometric radiometer (SAIR) technique seems to solve the tradeoff between imaging resolution and aperture size. Indeed, it can synthesize a large antenna array by sparsely arranging a small number of antennas to achieve high spatial resolution. Nevertheless, a conventional interferometric radiometer requires as many receivers as antennas. This constraint is one of the limitations of SAIR regarding hardware cost, system complexity, and computation load. In this letter, a compressed acquisition technique is proposed to collect the antenna signals. In this method, the antenna signals are coded and combined with a passive microwave device. From the waveforms measured at the receivers’ outputs, a decoding process is performed to estimate the signal received by each antenna. This allows the computation of the visibility function required for image reconstruction. The effectiveness of the proposed method has been tested by means of simulation analysis and has been applied experimentally to the detection of a point noise source. The results reveal that this method can be an efficient alternative for simplifying conventional interferometric radiometer architectures.

Jiaxing Zhang;Chao Tao;Zhengrong Zou; "An On-Road Vehicle Detection Method for High-Resolution Aerial Images Based on Local and Global Structure Learning," vol.14(8), pp.1198-1202, Aug. 2017. With the continuous improvement of image resolution, the details in aerial images provide abundant information for vehicle detection. Nevertheless, traditional works mainly exploited the overall appearance of the vehicles while ignoring local details, such as the front and rear windshields, and thus there were usually more than 15% false alarms in the final vehicle detection results. In this letter, we propose a vehicle detection method that makes full use of high-level details in aerial images. In the training stage, we use front-windshield samples to train a part detector and whole-vehicle samples to train a root detector. In the matching stage, we first use the root detector to locate an entire vehicle and obtain the root response; the part detector is then scanned within the root bounding box to locate the front windshield and obtain the part response. Afterward, the part response is transformed by a weight w based on the part position offset. More importantly, contextual information is appropriately used in determining the part position offset. The final detection score is the combination of the root response and the transformed part response. We have demonstrated that the proposed method achieves better performance, with more than a 6.43% increase in correct detection rate and more than a 5.63% decrease in false detection rate compared with state-of-the-art approaches.
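
A root-plus-part score of the kind described above can be sketched as the root response plus a displacement-penalized part response. This hypothetical form uses a Gaussian penalty on the part's offset from its expected position; the weight w, the penalty shape, and all names are illustrative assumptions, not the paper's exact model:

```python
import math

def detection_score(root_response, part_response, offset, sigma=1.0, w=0.5):
    """Combine a whole-vehicle (root) detector response with a
    front-windshield (part) response.  The part contributes less the
    farther it lies from its expected position inside the root box."""
    dx, dy = offset
    spatial_weight = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
    return root_response + w * spatial_weight * part_response
```

With a well-placed part the score approaches `root + w * part`; a displaced part contributes almost nothing, which is how the local detail filters out root-only false alarms.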

Naoufal Amrani;Joan Serra-Sagristà;Pascal Peter;Joachim Weickert; "Diffusion-Based Inpainting for Coding Remote-Sensing Data," vol.14(8), pp.1203-1207, Aug. 2017. Inpainting techniques based on partial differential equations (PDEs), such as diffusion processes, are gaining importance as a novel family of image compression methods. Nevertheless, the application of inpainting in the field of hyperspectral imagery has mainly focused on filling in missing information or dead pixels due to sensor failures. In this letter, we propose a novel PDE-based inpainting algorithm to compress hyperspectral images. The method inpaints from the known data separately in the spatial and spectral dimensions. Then, it applies a prediction model to the final inpainting solution to obtain a representation much closer to the original image. Experimental results over a set of hyperspectral images indicate that the proposed algorithm can perform better than a recently proposed extension to the prediction-based standard CCSDS-123.0 at low bit rates, better than JPEG 2000 Part 2 with the DWT 9/7 as a spectral transform at all bit rates, and competitively with JPEG 2000 with principal component analysis, the optimal spectral decorrelation transform for Gaussian sources.
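
A minimal example of the basic mechanism of PDE-based inpainting, in the simplest homogeneous-diffusion case: unknown pixels are relaxed toward the average of their neighbors while known (stored) pixels stay clamped. The authors' method uses more elaborate operators and a spectral prediction stage; this sketch and its names are illustrative only:

```python
def diffusion_inpaint(img, mask, n_iter=500, tau=0.2):
    """Fill unknown pixels (mask == 0) of a 2-D list-of-lists image by
    explicit homogeneous diffusion; known pixels (mask == 1) are clamped."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(n_iter):
        nxt = [row[:] for row in u]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    continue  # known sample: never modified
                # 4-neighbour Laplacian with reflecting boundaries
                lap = 0.0
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    lap += u[ny][nx] - u[y][x]
                nxt[y][x] = u[y][x] + tau * lap  # tau <= 0.25 for stability
        u = nxt
    return u
```

In a compression setting only the masked pixels are transmitted, and the decoder runs the same diffusion to reconstruct the rest.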

Guoqiang Tang;Ziyue Zeng;Meihong Ma;Ronghua Liu;Yixin Wen;Yang Hong; "Can Near-Real-Time Satellite Precipitation Products Capture Rainstorms and Guide Flood Warning for the 2016 Summer in South China?," vol.14(8), pp.1208-1212, Aug. 2017. In the summer of 2016, severe storms caused serious casualties and destruction of facilities and properties over South China. Near-real-time (NRT) satellite precipitation products are attractive for rainstorm monitoring and flood warning guidance owing to their combination of timeliness, high spatiotemporal resolution, and broad coverage. We evaluate the performance of four NRT satellite products, i.e., Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks, 3B42RT, Global Satellite Mapping of Precipitation (GSMaP) NRT, and Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) Late run, against a high-quality merged product for the rainy June over South China. In addition, a method based on empirical flash flood guidance and the Flash Flood Potential Index is proposed to examine the applicability of satellite products in guiding flood warning. The IMERG Late run and GSMaP NRT agree most closely with ground observations. 3B42RT detects the most flood warning events due to its notable overestimation of actual precipitation. We conclude that the IMERG Late run is the best NRT satellite product for capturing flood hazard events according to the Pareto efficiency of jointly optimizing a higher hit ratio and lower false alarms.

Jianyong Xie;Wei Chen;Dong Zhang;Shaohuan Zu;Yangkang Chen; "Application of Principal Component Analysis in Weighted Stacking of Seismic Data," vol.14(8), pp.1213-1217, Aug. 2017. Optimal stacking of multiple data sets plays a significant role in many scientific domains. The quality of stacking affects the signal-to-noise ratio and amplitude fidelity of the stacked image. In seismic data processing, similarity-weighted stacking uses the local similarity between each trace and a reference trace as the weight to stack the flattened prestack seismic data after normal moveout correction. The traditional reference trace is an approximated zero-offset trace calculated as a direct arithmetic mean of the data matrix along the spatial direction. However, when the data matrix contains abnormal misaligned traces, erratic noise, and non-Gaussian random noise, the accuracy of the approximated zero-offset trace is greatly affected, which in turn degrades the quality of stacking. We propose a novel weighted stacking method based on principal component analysis. The principal components of the data matrix, namely, the useful signals, are extracted by a low-rank decomposition method, solving an optimization problem with a low-rank constraint. The optimization problem is solved via a common singular value decomposition algorithm. The low-rank decomposition of the data matrix alleviates the influence of abnormal traces, erratic noise, and non-Gaussian random noise, and is thus more robust than the traditional alternatives. We use both synthetic and field data examples to demonstrate the successful performance of the proposed approach.
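
The idea of replacing the arithmetic-mean reference trace with one derived from a low-rank decomposition can be sketched as follows. The rank handling, the squared-correlation weights, and all names are simplified assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def pca_weighted_stack(data, k=1):
    """Similarity-weighted stack of an (n_traces, n_samples) gather.

    The reference trace comes from a rank-k SVD approximation of the data
    matrix instead of the plain arithmetic mean, which damps erratic
    traces; weights are squared correlations with the reference."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    low_rank = (U[:, :k] * s[:k]) @ Vt[:k, :]   # principal-component part
    ref = low_rank.mean(axis=0)                 # robust reference trace
    ref_n = ref / np.linalg.norm(ref)
    weights = np.array([max(np.dot(tr, ref_n) / (np.linalg.norm(tr) + 1e-12), 0.0) ** 2
                        for tr in data])
    weights /= weights.sum()
    return weights @ data
```

Because an erratic trace correlates poorly with the low-rank reference, it receives a near-zero weight, whereas the arithmetic mean would let it contaminate both the reference and the stack.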

Wenxue Fu;Huadong Guo;Pengfei Song;Bangsen Tian;Xinwu Li;Zhongchang Sun; "Combination of PolInSAR and LiDAR Techniques for Forest Height Estimation," vol.14(8), pp.1218-1222, Aug. 2017. In the random volume over ground (RVoG) model, which has been extensively applied to polarimetric synthetic aperture radar (SAR) interferometry for forest height estimation, forests are simplified as homogeneous volumes of randomly oriented uniform particles characterized by a constant extinction coefficient. This letter takes into account the heterogeneous vertical structure, reflected by a vertically varying extinction coefficient curve in the forest volume layer, and modifies the RVoG model to make it more suitable for height inversion of forests with complex structures. For this purpose, the normalized extinction coefficient curve is fitted to large-footprint light detection and ranging (LiDAR) full-waveform data using the Gaussian function. Finally, the varying-extinction RVoG model is applied to forest height estimation using airborne L-band SAR data acquired by the E-SAR system. The results are compared with in situ measurements, and indicate that the varying-extinction RVoG model obtains more accurate results for forest height inversion.

M. Parrens;A. Al Bitar;A. Mialon;R. Fernandez-Moran;P. Ferrazzoli;Y. Kerr;J.-P. Wigneron; "Estimation of the L-Band Effective Scattering Albedo of Tropical Forests Using SMOS Observations," vol.14(8), pp.1223-1227, Aug. 2017. This letter aims to estimate the effective scattering albedo (<inline-formula> <tex-math notation="LaTeX">$\omega _{p}$ </tex-math></inline-formula>) over the tropical forests using L-band (1.4 GHz) microwave remote sensing. It is carried out using Soil Moisture and Ocean Salinity (SMOS) mission data over five years (2011–2015). We find similar values of <inline-formula> <tex-math notation="LaTeX">$\omega _{p}$ </tex-math></inline-formula> computed over the Congo and Amazon forests. The <inline-formula> <tex-math notation="LaTeX">$\omega _{p }$ </tex-math></inline-formula> values depend slightly on the polarization. The values of <inline-formula> <tex-math notation="LaTeX">$\omega _{p }$ </tex-math></inline-formula> at H-polarization and at 52° ± 5° (40° ± 5°) of incidence angle are within the range 0.064 – 0.069 ± 0.01 (0.061 – 0.067 ± 0.012). At V-polarization, the values of <inline-formula> <tex-math notation="LaTeX">$\omega _{p }$ </tex-math></inline-formula> are slightly lower (0.060 – 0.061 ± 0.013 at 52° ± 5° of incidence angle and 0.052 – 0.055 ± 0.013 at 40° ± 5° of incidence angle). These findings should contribute to a better calibration of the value of <inline-formula> <tex-math notation="LaTeX">$\omega _{p }$ </tex-math></inline-formula> over the tropical forests in both the SMOS and SM active and passive retrieval algorithms, leading to increased SM retrieval accuracy over heterogeneous pixels.

Hongzhu Cai;Michael S. Zhdanov; "Joint Inversion of Gravity and Magnetotelluric Data for the Depth-to-Basement Estimation," vol.14(8), pp.1228-1232, Aug. 2017. It is well known that both gravity and magnetotelluric (MT) methods can be used for the depth-to-basement estimation due to the density and conductivity contrast between the sedimentary basin and the underlying basement rocks. In this case, the primary targets for both methods are the interface between the basement and sedimentary rocks as well as the physical properties of the rocks (density and conductivity). The solution of this inverse problem is typically nonunique and unstable, especially for gravity inversion. In order to overcome this difficulty and provide a more robust solution, we have developed a method of joint inversion to recover both the depth to the basement and the physical properties of the sediments and basement using gravity and MT data simultaneously. The joint inversion algorithm is based on the regularized conjugate gradient method. To speed up the inversion, we use an effective forward modeling method based on the surface Cauchy-type integrals for the gravity field and the surface integral equation representations for the MT field, respectively. We demonstrate the effectiveness of the developed method using several realistic model studies.

Yongcai Liu;Wei Wang;Shaoqi Dai;Bin Rao;Guoyu Wang; "A Unified Multimode SAR Raw Signal Simulation Method Based on Acquisition Mode Mutation," vol.14(8), pp.1233-1237, Aug. 2017. Raw signal simulators for synthetic aperture radar (SAR) in different acquisition modes have been studied individually. This letter is dedicated to presenting a method for simulating multimode SAR raw signal in a unified framework. To this end, we first simulate stripmap SAR raw signal, and then mutate the raw signal from stripmap mode into the desired acquisition mode. The acquisition mode mutation is implemented by azimuth-time-variant and range-frequency-dependent bandpass filtering. While processing in range-frequency domain brings forth high accuracy, processing in azimuth-time domain makes the method applicable for flexible antenna steering laws specified by multiple SAR acquisition modes, such as staring spotlight mode, sliding spotlight mode, and terrain observation by progressive scans mode. The proposed method is validated by the simulation results.

Zhen Lv;Yonghong Jia;Qian Zhang;Yifu Chen; "An Adaptive Multifeature Sparsity-Based Model for Semiautomatic Road Extraction From High-Resolution Satellite Images in Urban Areas," vol.14(8), pp.1238-1242, Aug. 2017. Despite its ability to handle occlusions and noise, sparse tracking may be inadequate to describe complex noise corruption, for instance, in urban road tracking, where road surfaces in high-resolution (HR) satellite imagery are often significantly disrupted by occlusions and noise. To address this issue, this letter presents a semiautomatic approach for road extraction from HR satellite images. First, a multifeature sparse model is introduced to represent the road target appearance. Next, a novel sparse-constraint-regularized mean-shift algorithm is used to support the road tracking. Furthermore, multiple features are combined by weighting their contributions using a novel reliability measure derived to distinguish the target from the background. The experiments confirm that the proposed method performs better than the current state-of-the-art methods for the extraction of roads from HR imagery, in terms of reliability, robustness, and accuracy.

Sen Lei;Zhenwei Shi;Zhengxia Zou; "Super-Resolution for Remote Sensing Images via Local–Global Combined Network," vol.14(8), pp.1243-1247, Aug. 2017. Super-resolution is an image processing technique that recovers a high-resolution image from a single low-resolution image or a sequence of low-resolution images. Recently, deep convolutional neural networks (CNNs) have made huge breakthroughs in many tasks, including super-resolution. In this letter, we propose a new single-image super-resolution algorithm named the local–global combined network (LGCNet) for remote sensing images, based on deep CNNs. Our LGCNet is elaborately designed with a “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors. Experimental results on a public remote sensing data set (UC Merced) demonstrate an overall improvement in both accuracy and visual performance over several state-of-the-art algorithms.

Qian Bao;Yun Lin;Wen Hong;Wenjie Shen;Yue Zhao;Xueming Peng; "Holographic SAR Tomography Image Reconstruction by Combination of Adaptive Imaging and Sparse Bayesian Inference," vol.14(8), pp.1248-1252, Aug. 2017. In this letter, we propose an imaging algorithm for the holographic synthetic aperture radar tomography in the circumstance of sparse and nonuniform elevation circular passes. Considering the anisotropic behavior of scatterers and the off-grid effect of sparse signal recovery, the algorithm combines the 2-D adaptive imaging method for circular SAR and the sparse Bayesian inference-based method for elevation reconstruction. For each circular pass, the azimuth-range 2-D image can be formed by the adaptive imaging method, which depends on the preretrieved maximum azimuth response angle and the azimuth persistence width. To deal with the off-grid effect in elevation reconstruction, which is caused by the deviation between the true scatterers and the discretized imaging grids, the off-grid sparse Bayesian inference method jointly estimates the scatterers and elevation off-grid error by applying their hierarchical priors. Compared with the conventional compressive sensing method that does not concern the off-grid effect, the proposed algorithm can provide more accurate 3-D reconstruction for pointlike targets, which is verified by the real-data experiments.

Yushi Chen;Chunyang Li;Pedram Ghamisi;Xiuping Jia;Yanfeng Gu; "Deep Fusion of Remote Sensing Data for Accurate Classification," vol.14(8), pp.1253-1257, Aug. 2017. The multisensor fusion of remote sensing data has attracted great attention in recent years. In this letter, we propose a new feature fusion framework based on deep neural networks (DNNs). The proposed framework employs deep convolutional neural networks (CNNs) to effectively extract features of multi-/hyperspectral and light detection and ranging data. Then, a fully connected DNN is designed to fuse the heterogeneous features obtained by the previous CNNs. Through the aforementioned deep networks, one can extract the discriminant and invariant features of remote sensing data, which are useful for further processing. Finally, logistic regression is used to produce the final classification results. Dropout and batch normalization strategies are adopted in the deep fusion framework to further improve classification accuracy. The obtained results reveal that the proposed deep fusion model provides competitive results in terms of classification accuracy. Furthermore, the proposed deep learning idea opens a new window for future remote sensing data fusion.

Xiao Wang;Craig Glennie;Zhigang Pan; "An Adaptive Ellipsoid Searching Filter for Airborne Single-Photon Lidar," vol.14(8), pp.1258-1262, Aug. 2017. Recent light detection and ranging (lidar) systems using photon-counting technology are able to collect data with significantly higher efficiency compared with the current commercially available linear-mode lidar systems. However, the high quantum sensitivity of single-photon lidar (SPL) systems results in noisy point clouds due to the influence of solar noise and dark count returns. Therefore, an effective noise removal algorithm is required to interpret SPL data. The uneven distribution of noise returns and the removal of noise close to signal returns are two significant challenges for SPL filtering. In this letter, a novel adaptive ellipsoid searching (AES) method is proposed. The AES uses a spherical noise density estimation model and a morphing ellipsoid determined by local principal components. The proposed method was tested on Sigma Space high-resolution quantum lidar system (HRQLS) SPL data sets and the results were compared with voxel-based filtering of the same data. Independent comparisons of each filtered result with coincident linear-mode airborne lidar data were also undertaken. We find that the root mean square error of the AES results on solid planes is 0.09 versus 0.11 m for voxel-based, 0.12 versus 0.14 m for bare ground, and 2.07 versus 2.55 m for vegetation canopy. We also used manually selected solid planar surfaces as a reference and find that the proposed method successfully removed twice as many noise points as the voxel-based method.
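A toy version of the adaptive-ellipsoid idea — estimate a covariance from each point's spherical neighborhood, then count neighbors inside the Mahalanobis ellipsoid it defines — might look as follows. The radius, the thresholds, and the function name are illustrative assumptions, not the HRQLS processing parameters.

```python
import numpy as np

def ellipsoid_density_filter(points, radius=1.0, min_neighbors=4):
    """Toy adaptive-ellipsoid noise filter for a 3-D point cloud.

    For each point, fit a covariance to its spherical neighborhood and
    count the points inside the 3-sigma Mahalanobis ellipsoid defined
    by that covariance; isolated (noise) points fail the count.
    Returns a boolean mask of points kept as signal.
    """
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = points - p
        sq = np.einsum('ij,ij->i', d, d)
        near = d[sq < radius**2]
        if len(near) <= min_neighbors:
            continue                                # too sparse: noise
        cov = np.cov(near.T) + 1e-9 * np.eye(3)     # regularize degenerate axes
        m = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
        keep[i] = int((m < 9.0).sum()) > min_neighbors
    return keep
```

Because the ellipsoid morphs with the local principal axes, planar or linear signal structures keep their neighbors while isolated solar-noise returns do not.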

Cheng-Wei Sang;Hong Sun; "Two-Step Sparse Decomposition for SAR Image Despeckling," vol.14(8), pp.1263-1267, Aug. 2017. In this letter, we propose a new despeckling method based on a two-step sparse decomposition. First, grouping by block matching identifies similar image patches and stacks them into a group, so that each group of similar patches is mostly homogeneous, which suits the subsequent sparse decomposition. Then, the proposed two-step sparse decomposition is applied to each group. The first sparse decomposition is a classical sparse representation that yields an overcomplete dictionary and the sparse coefficients. The second sparse decomposition is a subspace decomposition over the dictionary, for which we propose a measure derived from the sparse coefficients as the criterion to identify a principal signal subdictionary. Finally, the image is reconstructed as a linear combination of the atoms of the principal subdictionary. The proposed method benefits from the learned overcomplete dictionary, which fully explores details, and from the principal subdictionary, which reduces strong noise. Experimental results demonstrate the efficiency of the proposed method in denoising synthetic aperture radar images. Our method achieves high performance in terms of both structure-detail preservation and speckle noise reduction.

Ping Zhou;Xinming Tang;Zhenming Wang;Ning Cao;Xia Wang; "Vertical Accuracy Effect Verification for Satellite Imagery With Different GCPs," vol.14(8), pp.1268-1272, Aug. 2017. This letter extends the block adjustment method by adopting high-precision global positioning system (GPS) points, geoscience laser altimeter system (GLAS) data, and the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM), respectively, as vertical control data. Block adjustment experiments were conducted with these control data, with 388 ZY-3 satellite stereo image pairs covering over 18600 km2 of land in Hubei province of China used as experimental images and 180 GPS points as checkpoints. The experimental results show that, with the adoption of 23 GPS points as control points, the horizontal root-mean-square error (RMSE) and vertical RMSE of the ZY-3 image improved from 10.97 to 5.72 m and from 7.12 to 1.65 m, respectively. Furthermore, with the adoption of 326 GLAS data points as vertical control points, the vertical RMSE of the ZY-3 images improved to 1.69 m for flat terrain areas and 3.62 m for mountain terrain areas. Through block adjustment constrained by the SRTM-DEM, the vertical RMSE of the ZY-3 images was 1.44 m for flat terrain areas and 3.05 m for mountain terrain areas. The vertical accuracy of the satellite images was significantly enhanced, especially in flat terrain areas, when GLAS and SRTM-DEM data were used for vertically controlled block adjustment, which is similar to the vertical accuracy of block adjustments adopting GPS points as control.

Yuqi Meng;Yue Li;Chao Zhang;Haitao Zhao; "A Time Picking Method Based on Spectral Multimanifold Clustering in Microseismic Data," vol.14(8), pp.1273-1277, Aug. 2017. P-wave time picking is of great significance in microseismic data processing. However, traditional time picking methods do not consider the difference in low-dimensional manifold features between signal and noise, which can be extracted more effectively in low signal-to-noise ratio scenarios. In this letter, we develop a new method named spectral multimanifold clustering for picking P-wave arrivals. It extracts low-dimensional manifold features from a suitable affinity matrix. In this approach, the manifold features of the data are concentrated by residual statics estimation, and a suitable affinity matrix is constructed using structural similarity and local similarity. Then, by using unnormalized spectral clustering, the low-dimensional manifold features extracted from the affinity matrix can be classified into a noise cluster and a signal cluster. Finally, the initial time of the signal cluster is taken as the first arrival time in the microseismic data. We design a series of experiments using both synthetic and field microseismic data. Our proposed method demonstrates higher accuracy, better stability, and stronger noise immunity than either the short- and long-time-average method or the Akaike information criterion method.
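For two clusters, the unnormalized spectral clustering step reduces to thresholding the Fiedler vector of the graph Laplacian. A minimal sketch, with the affinity matrix taken as given (the letter builds it from structural and local similarity):

```python
import numpy as np

def spectral_bipartition(W):
    """Unnormalized spectral clustering into two clusters.

    W : symmetric nonnegative affinity matrix.
    The eigenvector of the second-smallest eigenvalue of L = D - W
    (the Fiedler vector) is thresholded at zero to assign labels.
    """
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)
```

With a well-separated affinity matrix, the two sign groups of the Fiedler vector recover the noise cluster and the signal cluster directly.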

Horst Hammer;Silvia Kuny;Karsten Schulz; "Simulation-Based Signature Analysis of Fuel Storage Tanks in High-Resolution SAR Images," vol.14(8), pp.1278-1282, Aug. 2017. Many synthetic aperture radar (SAR) image exploitation tasks rely on the presence of certain features in the SAR images to be investigated. SAR simulation is an important tool to study such signatures for a wide range of aspect and incidence angles, for which real data usually cannot be provided. Furthermore, SAR simulation offers the opportunity to change the 3-D model, so that many features can be traced back directly to features in the underlying 3-D scene. In this letter, the second approach is used to investigate the SAR signature of oil tanks with fixed and floating roofs. The knowledge gained by SAR simulation is then used to estimate the height of both types of tanks and the filling height of floating-roof tanks.

Junhao Xie;Guowei Yao;Minglei Sun;Zhenyuan Ji;Gaopeng Li;Jun Geng; "Ocean Surface Wind Direction Inversion Using Shipborne High-Frequency Surface Wave Radar," vol.14(8), pp.1283-1287, Aug. 2017. Shipborne high-frequency surface wave radar (SHFSWR) has exhibited great advantages over onshore HFSWR (OHFSWR) in ocean remote sensing. Unlike OHFSWR, SHFSWR suffers from Doppler spectrum spreading owing to platform movement, a great challenge that prevents the extraction of ocean surface parameters for SHFSWR. To address this challenge, in this letter, a mathematical model of the ocean surface wind direction is first investigated based on the first-order SHFSWR cross section. Furthermore, a method for unambiguous wind direction inversion from the spread Doppler spectrum is proposed using a single receiving antenna. The wind directions over the sea area covered by the radar can then be obtained by applying the proposed method sequentially, which makes it well suited to SHFSWR, with its limited deck space, at lower cost. Experimental results on real data collected in the Taiwan Strait preliminarily verify the detection accuracy and the distance limit of the wind direction inversion: the root-mean-square error and the detection range are 9.85° and 120 km, respectively.

Shaobo Xia;Ruisheng Wang; "A Fast Edge Extraction Method for Mobile Lidar Point Clouds," vol.14(8), pp.1288-1292, Aug. 2017. Edges in mobile light detection and ranging (lidar) point clouds are important for many applications but usually overlooked. In this letter, we propose a fast edge extraction method for mobile lidar. First, an edge index based on the geometric center is introduced, and gradients in unorganized 3-D point clouds are then defined. By analyzing the ratio between eigenvalues, edge candidates can be detected. Finally, an edge linking algorithm named graph snapping is proposed. The method is tested extensively, and the experimental results demonstrate that the proposed method is able to quickly extract most of the 3-D edges with higher accuracy than the existing methods.
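An illustrative candidate test combining the two cues named above — a geometric-center index (offset of the neighborhood centroid from the point) and an eigenvalue-ratio test on the local covariance — might look as follows. The radius and thresholds are hypothetical, not the letter's settings.

```python
import numpy as np

def edge_candidates(points, radius=0.5, ratio_thresh=0.9):
    """Flag edge candidates in a 3-D point cloud: points whose
    neighborhood centroid is strongly offset from the point itself
    (geometric-center index) and whose local covariance has one
    dominant eigenvalue (linear structure)."""
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = points - p
        nb = points[np.einsum('ij,ij->i', d, d) < radius**2]
        if len(nb) < 3:
            continue
        shift = np.linalg.norm(nb.mean(axis=0) - p)        # geometric-center index
        lam = np.sort(np.linalg.eigvalsh(np.cov(nb.T)))[::-1]
        dominance = lam[0] / max(lam.sum(), 1e-12)          # eigenvalue ratio
        flags[i] = shift > 0.25 * radius and dominance > ratio_thresh
    return flags
```

Interior points of a line or plane have symmetric neighborhoods (small shift), and planar interiors additionally lack a single dominant eigenvalue, so only boundary points of linear structures survive both tests.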

Yiding Yang;Yin Zhuang;Fukun Bi;Hao Shi;Yizhuang Xie; "M-FCN: Effective Fully Convolutional Network-Based Airplane Detection Framework," vol.14(8), pp.1293-1297, Aug. 2017. Airplane detection is a challenging problem in complex remote sensing imaging. In this letter, an effective airplane detection framework called Markov random field-fully convolutional network (M-FCN) is proposed. The M-FCN uses a cascade strategy that consists of an FCN-based coarse candidate extraction stage, a multi-Markov random field (multi-MRF)-based region proposal (RP) generation stage, and a final classification stage. In the first stage, the FCN model is trained to be sensitive to airplanes, and a coarse candidate map is generated. This model is scale-, direction-, and color-invariant and does not require many training examples. After the first stage, the coarse candidate map is used as the initial labeling field for a multi-MRF algorithm, and RPs are generated according to the multi-MRF output. This RP-generating strategy can yield more accurate locations with fewer RPs. In the last stage, a convolutional neural network-based classifier is used to improve the precision of the entire framework. Experiments show that the M-FCN has high precision, recall, and location accuracy.

Gilson A. O. P. Costa;Cristiana Bentes;Rodrigo S. Ferreira;Raul Q. Feitosa;Dário A. B. Oliveira; "Exploiting Different Types of Parallelism in Distributed Analysis of Remote Sensing Data," vol.14(8), pp.1298-1302, Aug. 2017. The vast amount of data obtained from current remote sensing data acquisition technologies represents a wealth of useful and affordable geospatial data for policy and decision makers. However, the consequent computational cost of analyzing these data may become prohibitive. This letter extends previous efforts in exploiting distributed processing to speed up the image interpretation process. In this letter, we propose and evaluate a mechanism to exploit task parallelism in addition to data parallelism. Experiments conducted on cloud computing infrastructure, following an object-based interpretation model, demonstrated that substantial performance gains can be obtained with the proposed mechanism.

Miao Li;Xiongbin Wu;Lan Zhang;Xianchang Yue;Chuan Li;Jianfei Liu; "A New Algorithm for Surface Currents Inversion With High-Frequency Over-the-Horizon Radar," vol.14(8), pp.1303-1307, Aug. 2017. The conventional method of extracting ocean surface currents by high-frequency over-the-horizon radar is based on the fixed first-order Bragg frequency formula and ignores the effects caused by the environment, especially in near-shore areas. In this letter, a current inversion model based on a 2-D Fourier series expansion is developed. In the proposed method, the first-order Bragg frequency and the Doppler offset induced by the radial current are treated as bivariate functions of group distance and azimuth angle. By solving an overdetermined matrix equation with the least-squares fitting method, the current at each detection grid can be estimated. As the Bragg frequency obtained by this new algorithm is adaptive to the environment, the accuracy of current measurement is improved. The feasibility and effectiveness of the new method are verified with simulations and experimental results. The currents estimated by the traditional method and the new algorithm are compared with two in situ buoys. Results indicate that the new algorithm possesses comparable accuracy for far-shore areas and better accuracy for near-shore areas when compared with the conventional method.
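The fitting step — express the observed Bragg frequency as a low-order 2-D expansion in group distance and azimuth and solve the overdetermined system by least squares — can be sketched as below. The specific basis (Fourier harmonics in azimuth times a linear trend in distance) is an illustrative choice, not the authors' exact expansion.

```python
import numpy as np

def fit_bragg_surface(dist, azim, f_obs, n_harm=2):
    """Least-squares fit of observed Bragg frequencies f_obs sampled at
    (group distance, azimuth) pairs to a truncated 2-D expansion:
    Fourier harmonics in azimuth, each with a constant and a linear
    distance-dependent coefficient. Returns coefficients and the fit."""
    cols = [np.ones_like(azim)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(k * azim), np.sin(k * azim)]
    A = np.column_stack([c * dist ** p for p in (0, 1) for c in cols])
    coef, *_ = np.linalg.lstsq(A, f_obs, rcond=None)
    return coef, A @ coef
```

Because every detection grid contributes one row to the overdetermined system, the fitted surface adapts the Bragg frequency to the local environment instead of holding it fixed.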

Ting Yang;Wei Wan;Xiuwan Chen;Tianxing Chu;Yang Hong; "Using BDS SNR Observations to Measure Near-Surface Soil Moisture Fluctuations: Results From Low Vegetated Surface," vol.14(8), pp.1308-1312, Aug. 2017. The feasibility of using China’s Beidou Navigation Satellite System (BDS) signal-to-noise ratio (SNR) data for estimating soil moisture is demonstrated in this letter. Previous studies of the Global Navigation Satellite System (GNSS) multipath for estimating soil moisture have concentrated on Global Positioning System (GPS) SNR data. The SNR data recorded by BDS receivers can also be strongly impacted by the relative permittivity, which, however, has not been comprehensively investigated. This letter used a commercially available geodetic-quality BDS/GPS receiver to collect BDS SNR data on both the B1 and B2 frequencies. Two methods were investigated to demonstrate the relationship between BDS SNR metrics and soil moisture: the phase method proposed by Larson et al., and an interference model proposed by Peng et al. The results show that both the B1 and B2 frequencies perform well in reflecting the moisture fluctuations, particularly large precipitation events. Comparisons of BDS B1 and B2 with GPS L2C and L5 demonstrate that the results from these two constellations are comparable. BDS could thus be a new data source for producing global high-temporal-resolution soil moisture products using GNSS-based approaches.

Maryam Imani; "RX Anomaly Detector With Rectified Background," vol.14(8), pp.1313-1317, Aug. 2017. An improved version of Reed–Xiaoli (RX) detector is proposed in this letter, which uses the benefits of median-mean line (MML) metric. The background data may be contaminated by anomalies. The anomalous outliers contributed in the estimate of background statistics decrease the differences between anomalous targets and background clutter. So, the performance of an RX detector is degraded. To deal with the negative effects of anomalous outliers, and to rectify the position of background data, the MML metric is used for providing more reliable background samples. Therefore, more stable background statistics (mean and covariance matrix) are estimated. The experimental results show the better performance of the proposed MML-RX method compared with some state-of-the-art anomaly detection methods with reasonable computation time.

Yulang Liu;Zhiqin Zhao;Xiaozhang Zhu;Wei Yang;Zaiping Nie;Qing-Huo Liu; "A Diagonal Subspace-Based Optimization Method for Reconstruction of 2-D Isotropic and Uniaxial Anisotropic Dielectric Objects," vol.14(8), pp.1318-1322, Aug. 2017. In this letter, a diagonal approximation is introduced in the framework of the subspace-based optimization method (SOM) to reduce computational complexity. Owing to this approximation, the operator that relates the electric field and the equivalent current becomes a diagonal one, instead of the nonlinear one in full-wave inversion. Consequently, the proposed method is named diagonal SOM (DSOM). Compared with the original SOM, DSOM has a more simplified objective function with much less computational cost. DSOM can be applied to inverse scattering problems involving not only isotropic objects but also uniaxial anisotropic objects, as demonstrated by numerical examples. Furthermore, DSOM provides reconstruction results that are comparable in quality to those obtained using SOM, but with a much lower computational load.

Leping Chen;Daoxiang An;Xiaotao Huang; "Extended Autofocus Backprojection Algorithm for Low-Frequency SAR Imaging," vol.14(8), pp.1323-1327, Aug. 2017. Since the trajectory deviations of a radar platform cause serious phase errors that degrade the focusing quality of synthetic aperture radar (SAR) imagery, an autofocus method is very important for high-resolution airborne SAR imaging. In this letter, an extended autofocus backprojection (EABP) algorithm is developed to accommodate the phase errors. Under the criterion of maximum image sharpness, the traditional ABP algorithm supports a broader class of collection and imaging geometries. However, it neglects the influence of the SAR image energy distribution on the estimation of phase errors, which makes it inapplicable to SAR imaging with a high dynamic range, such as low-frequency SAR imaging. By choosing regions and balancing the energy distribution of the data, the EABP algorithm is more efficient and avoids the estimation error caused by an unbalanced energy distribution. Its performance has been demonstrated using experimental data acquired by a P-band airborne SAR system with a low-accuracy global positioning system.

Lian He;Qiming Qin;Rocco Panciera;Mihai Tanase;Jeffrey P. Walker;Yang Hong; "An Extension of the Alpha Approximation Method for Soil Moisture Estimation Using Time-Series SAR Data Over Bare Soil Surfaces," vol.14(8), pp.1328-1332, Aug. 2017. The objective of this letter is to extend the alpha approximation method, a method proposed by Balenzano et al., for soil moisture retrieval from multitemporal synthetic aperture radar (SAR) data. The original alpha approach requires an initial estimate of the upper and lower bound soil moisture values to constrain the soil moisture retrieval. This letter demonstrates an extension of the alpha approach by employing the juxtaposition method to adaptively set the soil moisture bounds using the absolute radar backscatter values. This extended alpha method was tested using an airborne time series of L-band SAR data and coincident ground measurements acquired during the SMAPEx-3 experiment over bare agricultural fields. The agreement between estimated and measured soil moisture values was within a root-mean-square error of 0.07 cm3/cm3 for each of the three polarization combinations used (i.e., HH, VV, and HH and VV). Moreover, inclusion of the two-polarization combination (HH and VV) slightly improved the retrieval performance. The proposed extension to the alpha method makes the most of the information contained in the SAR data time series by using dynamic, spatially explicit soil moisture bounds retrieved from the SAR data themselves.

Yanguang Bi;Xiangzhi Bai;Ting Jin;Sheng Guo; "Multiple Feature Analysis for Infrared Small Target Detection," vol.14(8), pp.1333-1337, Aug. 2017. Detection of small targets has been an important and challenging task in infrared systems. Most detection algorithms that use only a single metric have difficulty separating targets from clutter completely, and the false alarm rate may be high against complex backgrounds. In this letter, multiple novel features are proposed from four aspects to establish an elaborate description; each feature reflects a specific characteristic of small targets. The best feature vector is selected to apply these features for detection. Then, a learning-based classifier is trained to screen candidate targets obtained by an initial segmentation. Experimental results demonstrate that the proposed features can discriminate small targets from various clutters effectively, and better detection performance is achieved than with other methods in different infrared backgrounds.

Xiaoji Song;Deliang Xiang;Kai Zhou;Yi Su; "Improving RPCA-Based Clutter Suppression in GPR Detection of Antipersonnel Mines," vol.14(8), pp.1338-1342, Aug. 2017. Detecting shallowly buried antipersonnel mines (APMs) with a ground-penetrating radar (GPR) is a challenging task because of clutter contamination, which often obscures the APM response. In this letter, a novel method that combines migration imaging with low-rank and sparse representation to suppress clutter and extract the target image is presented. The proposed method first focuses and strengthens the target response with migration imaging. Then, since the focused target response and the clutter, respectively, constitute the sparse component and the low-rank component of the recorded data, the recently proposed robust principal component analysis (RPCA) can be applied to the recorded data to separate the target response (sparse component) from the clutter (low-rank component). Numerical simulations and experiments with real GPR systems are conducted. The results demonstrate the effectiveness of the proposed method in improving the signal-to-clutter ratio and retrieving geometrical information of the target, which permits better APM identification in heavy-clutter environments.
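The separation step relies on robust principal component analysis: M = L + S with L low-rank (clutter) and S sparse (focused target). A compact inexact-ALM-style sketch — standard textbook RPCA, not the authors' tuned implementation — alternates singular-value thresholding and elementwise soft shrinkage:

```python
import numpy as np

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S by minimizing
    ||L||_* + lam * ||S||_1  subject to  L + S = M,
    via an inexact augmented-Lagrange-multiplier iteration."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    norm_M = np.linalg.norm(M)
    spec = np.linalg.norm(M, 2)
    Y = M / max(spec, np.abs(M).max() / lam)   # dual variable init
    mu, rho = 1.25 / spec, 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # L-update: singular-value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: elementwise soft thresholding at level lam/mu
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = M - L - S                           # constraint residual
        Y = Y + mu * Z
        mu = mu * rho
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S
```

In the GPR setting, L would collect the across-trace clutter (low rank along the scan direction) while S retains the focused, spatially localized target response.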

DaHan Liao; "Application of Discrete Scatterer Technique for Scene Response Estimation in FOPEN Radar Simulations," vol.14(8), pp.1343-1347, Aug. 2017. An analytical solver is developed for characterizing the coherent scattering responses of tree scenes. Realistic 3-D tree structures are first constructed using an open-source random tree generation engine. The trees are then parsed into discrete, canonical scatterers, such as cylinders and disks, and a multiray approach is applied to calculate the aggregate response of the scene, with the transmissivity of each ray determined from a cell-based representation of the computational domain. As each scatterer in the outlined framework is assigned a deterministic position, the spatial distribution of the trees and their canopy structures is fully preserved. A cell-by-cell strategy is also proposed for speeding up the calculations of the responses from small components, such as secondary stems and leaves, which are expected to far outnumber those scatterers composing the trunks and primary branches. The accuracy of the analytical solver is assessed by comparing simulation results for a forest stand with solutions from a large-scale, full-wave solver. In addition, as an application of interest, the detection and imaging of a tree-obscured walking human target is demonstrated.

Xiong Zhou;Saurabh Prasad; "Active and Semisupervised Learning With Morphological Component Analysis for Hyperspectral Image Classification," vol.14(8), pp.1348-1352, Aug. 2017. Classification of hyperspectral images has recently gained significant popularity due to both the development of remote sensing technologies and the advances in image analysis approaches. One crucial step toward accurate classification is acquiring sufficient high-quality training data, which is often a time-consuming and expensive process. To alleviate this burden, in this letter, we propose an active and semisupervised learning (SSL) approach that utilizes morphological component analysis (MCA) for the classification of hyperspectral images. First, the original hyperspectral data are decomposed into morphological components via MCA. In each feature domain, active learning (AL) and SSL are combined to enlarge the training data set based on superpixels. Finally, decision fusion is carried out to integrate the predictions from the two components. The proposed method is tested on both benchmark and real-world application hyperspectral data sets. Experimental results indicate that the proposed method leads to better classification than conventional AL approaches.

Ding Nie;Min Zhang;Wangqiang Jiang;Jia Zhao; "Spectral Investigation of Doppler Signals From Surfaces With a Mixture of Wind Wave and Swell," vol.14(8), pp.1353-1357, Aug. 2017. The mixture of wind wave and swell is simulated based on a bimodal spectrum and nonlinear hydrodynamic theory, and the Doppler spectra of signals backscattered from such sea surfaces in a time-varying marine environment are investigated by applying the second-order small-slope approximation model. The results are also compared with several popular data sets; the comparison shows that the variation trends of the Doppler shift and the Doppler spectral bandwidth differ for mixed waves with various weights, which qualitatively reflects the distinct ways in which wind wave and swell components influence the Doppler signature.

Haoyang Yu;Lianru Gao;Wei Li;Qian Du;Bing Zhang; "Locality Sensitive Discriminant Analysis for Group Sparse Representation-Based Hyperspectral Imagery Classification," vol.14(8), pp.1358-1362, Aug. 2017. This letter proposes to integrate locality sensitive discriminant analysis (LSDA) with group sparse representation (GSR) for hyperspectral imagery classification. LSDA projects the data set onto a lower-dimensional subspace that preserves the local manifold structure and discriminant information, while GSR encodes the projected testing set as a sparse linear combination of group-structured training samples for classification. The proposed approach, denoted as the LSDA-GSR classifier (LSDA-GSRC), is evaluated using two real hyperspectral data sets. Experimental results demonstrate that it provides considerable improvement over its original counterparts, i.e., the SRC and GSRC, at a relatively low computational cost.

Baofeng Guo;Honghai Shen;Mingyu Yang; "Improving Hyperspectral Image Classification by Fusing Spectra and Absorption Features," vol.14(8), pp.1363-1367, Aug. 2017. Many features can be extracted to classify hyperspectral imagery, but classification relying on a single feature set may lose useful information due to the intrinsic limitations of each feature extraction model. To improve classification accuracy, we propose an information fusion approach in which both the global and the local aspects of hyperspectral data are taken into account and combined by a decision-level fusion method. The global features are hyperspectral reflectance curves representing the holistic response to the incident light, and the local features are absorption characteristics reflecting materials’ individual constituents. The decision-level fusion is carried out by analyzing the entropy of the classification output from the global feature set and modifying this output via the results of a multilabel classification using the local feature set. Simulations of classification performance on 16 classes of vegetation from the AVIRIS 92AV3C and Salinas data sets show the effectiveness of the method, which increases classification accuracy compared to a popular support vector machine-based method and a production-rule-based decision fusion method.
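The entropy-based gating in the abstract above lends itself to a small illustration. The sketch below is not the authors’ code; the function name and probability vectors are hypothetical. It computes the Shannon entropy of a classifier’s output distribution, the quantity used to decide which pixels are uncertain enough to be revised with the local-feature result:

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy (bits) of a class-probability vector; high
    entropy marks pixels whose global-feature decision is uncertain."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                 # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

confident = prediction_entropy([0.9, 0.05, 0.05])   # peaked posterior
uncertain = prediction_entropy([0.4, 0.35, 0.25])   # flat posterior
```

A threshold on this entropy would then select the uncertain pixels whose labels are modified by the multilabel, absorption-feature classifier.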

Amin Alizadeh Naeini;Sayyed Hamed Alizadeh Moghaddam;Sayyed Mohammad Javad Mirzadeh;Saeid Homayouni;Sayyed Bagher Fatemi; "Multiobjective Genetic Optimization of Terrain-Independent RFMs for VHSR Satellite Images," vol.14(8), pp.1368-1372, Aug. 2017. Rational polynomial coefficient (RPC) biases and the over-fitting phenomenon are two major issues in terrain-independent rational function models. These problems degrade the accuracy of spatial information extracted from very high spatial resolution (VHSR) satellite images. This study focuses on overcoming the over-fitting problem through an optimal term selection approach. To this end, a multiobjective genetic algorithm was used to optimize three objective functions: the RMSE of the ground control points (GCPs), and the number and distribution of both RPCs and GCPs. Finally, the technique for order of preference by similarity to ideal solution (TOPSIS), an efficient multicriteria decision-making method, was applied to select the best solution, i.e., the optimum RPC terms, by ranking the solutions in the optimum set. The performance of the proposed method was evaluated using three VHSR images acquired by the GeoEye-1, Worldview-3, and Pleiades satellite sensors. Experimental results show that subpixel accuracy can be nearly achieved in all data sets when the over-fitting problem is addressed. The optimally selected terms led to a significant improvement over the original RPCs. Indeed, our method, which is independent of the GCP distribution, not only requires a small number of GCPs but also yields a 30% to 75% improvement compared to the original RPCs. This improvement usually eliminates the need to remove the RPC biases in VHSR images.
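The ranking step with the technique for order of preference by similarity to ideal solution (TOPSIS) can be sketched in a few lines. This is a generic TOPSIS sketch under stated assumptions (vector normalization, a toy two-criterion decision matrix); the weights and candidate scores are hypothetical, not the paper’s:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (n_alternatives, n_criteria) scores
    weights: criterion weights summing to 1
    benefit: per-criterion flag, True to maximize, False to minimize
    """
    M = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = w * M / np.linalg.norm(M, axis=0)
    benefit = np.asarray(benefit)
    # Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    # Closeness coefficient in [0, 1]; higher is better.
    return d_neg / (d_pos + d_neg)

# Three candidate RPC term sets scored on (RMSE, number of terms):
# both are cost criteria, so benefit=False for each.
scores = topsis([[0.8, 20], [0.5, 35], [0.6, 25]],
                weights=[0.6, 0.4], benefit=[False, False])
best = int(np.argmax(scores))
```

Each row is a candidate term set and each column a criterion to minimize; the candidate with the highest closeness coefficient is ranked best.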

Jidong Qiu;Biao Zhang;Zhongbiao Chen;Yijun He; "A New Modulation Transfer Function With Range and Azimuth Dependence for Ocean Wave Spectra Retrieval From X-Band Marine Radar Observations," vol.14(8), pp.1373-1377, Aug. 2017. The conventional linear modulation transfer function (MTF) was derived using HH-polarized marine radar observations in deepwater conditions, which may limit its applicability to ocean surface wave spectra retrieval in coastal shallow waters. In this letter, we propose a new MTF with both range and azimuth dependence based on VV-polarized radar measurements acquired from heterogeneous coastal wave fields. The new MTF is determined using a radar-observed image spectrum and an in situ buoy-measured wave frequency spectrum. To assess the proposed MTF, we compare the buoy-measured 1-D wavenumber spectrum with those obtained using different MTFs. Compared to the conventional linear MTF, the new MTF-derived wavenumber spectrum is closer to the buoy measurements. The retrieved peak and mean wave periods are also validated against concurrent wave buoy measurements, and their retrieval accuracies with the new MTF are better than those of the conventional MTF: the bias and root mean square error are 0.52 and 0.95 s for the peak wave period and 0.26 and 0.48 s for the mean wave period, respectively. This suggests that the proposed MTF is more appropriate for retrieving integral wave parameters than the conventional linear MTF.

Shaheera Rashwan;Nicolas Dobigeon; "A Split-and-Merge Approach for Hyperspectral Band Selection," vol.14(8), pp.1378-1382, Aug. 2017. The problem of band selection (BS) is of great importance to handle the curse of dimensionality for hyperspectral image (HSI) applications (e.g., classification). This letter proposes an unsupervised BS approach based on a split-and-merge concept. This new approach provides relevant spectral sub-bands by splitting the adjacent bands without violating the physical meaning of the spectral data. Next, it merges highly correlated bands and sub-bands to reduce the dimensionality of the HSI. Experiments on three public data sets and comparison with state-of-the-art approaches show the efficiency of the proposed approach.
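The merge stage, grouping adjacent highly correlated bands, can be illustrated with a greedy sketch. This is only an assumption-level reading of the letter’s split-and-merge idea, not its actual algorithm; the threshold and function name are hypothetical:

```python
import numpy as np

def merge_correlated_bands(cube, threshold=0.95):
    """Greedily merge adjacent spectral bands whose correlation exceeds
    `threshold`; returns a list of band-index groups.

    cube: (n_pixels, n_bands) hyperspectral data, bands in spectral order.
    """
    X = np.asarray(cube, dtype=float)
    n_bands = X.shape[1]
    corr = np.corrcoef(X.T)          # band-to-band correlation matrix
    groups = [[0]]
    for b in range(1, n_bands):
        # Merge only adjacent bands, preserving spectral contiguity
        # (and hence the physical meaning of each sub-band).
        if corr[b, groups[-1][-1]] >= threshold:
            groups[-1].append(b)
        else:
            groups.append([b])
    return groups

# Synthetic cube: bands 0-1 nearly identical, band 2 independent.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
cube = np.stack([base, base + 0.01 * rng.normal(size=100),
                 rng.normal(size=100)], axis=1)
groups = merge_correlated_bands(cube, threshold=0.9)
```

One representative band (e.g., the group mean) per group would then give the reduced-dimensionality image.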

Xiaojiang Guo;Yesheng Gao;Xingzhao Liu; "Azimuth-Variant Phase Error Calibration Technique for Multichannel SAR Systems," vol.14(8), pp.1383-1387, Aug. 2017. Multiple azimuth channels are usually employed to overcome the inherent trade-off between high resolution and wide swath in synthetic aperture radar systems. However, unavoidable channel phase errors significantly degrade the performance of ambiguity suppression. Conventional calibration methods usually regard these phase errors as constant over the whole observation time and ignore their azimuth-variant component, which may leave azimuth ambiguities incompletely suppressed, especially for very strong targets. This letter presents an azimuth-variant phase error calibration technique. The proposed technique first selects a strong point-like target as the calibration source. The azimuth-variant phase errors are then estimated by comparing the phase of the calibration source with that of the corresponding steering vector in the range-compressed signals. In addition, a preprocessing method is presented to improve the calibration accuracy when the selected calibration source is affected by noise or interference. Theoretical analysis and experiments demonstrate the feasibility of the proposed technique.
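The core phase comparison, taking the phase of the calibration-source samples relative to the ideal steering vector, reduces to one line on complex data. The following sketch is a toy version under the simplest possible assumption of a constant channel phase offset; the signal model and function name are illustrative, not the letter’s:

```python
import numpy as np

def estimate_phase_error(measured, steering):
    """Per-sample channel phase error (radians): the phase of the
    measured calibration-source signal relative to the ideal
    steering vector."""
    return np.angle(np.asarray(measured) * np.conj(np.asarray(steering)))

# Toy example: a constant 0.3 rad channel offset on a synthetic signal.
n = 128
phi = np.linspace(0.0, 2.0 * np.pi, n)
steering = np.exp(1j * phi)
measured = steering * np.exp(1j * 0.3)
err = estimate_phase_error(measured, steering)
```

In the azimuth-variant case, `err` would vary along the aperture; fitting or smoothing it over azimuth samples would give the calibration curve.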

Florian Klopfer;Martin Hämmerle;Bernhard Höfle; "Assessing the Potential of a Low-Cost 3-D Sensor in Shallow-Water Bathymetry," vol.14(8), pp.1388-1392, Aug. 2017. Highly detailed 3-D geoinformation about bathymetry is crucial to understand a wide range of processes and conditions in the geosciences. Recently, low-cost sensors such as Microsoft’s structured-light 3-D camera Kinect for Xbox 360 have been deployed to complement established sources of 3-D bathymetric data like light detection and ranging or sound navigation and ranging. In this letter, we assess the Kinect’s applicability for capturing the bathymetry of shallow waters. To this end, the maximum capturing range through water and the accuracy and precision of Kinect measurements are examined. Additionally, we test a recording setup that mitigates waves and offers advantages in terms of refraction correction on a scene containing submerged gravels. As a result, water depths of 30 cm (outdoors) and 40 cm (indoors) can be penetrated. The achieved accuracy [mean standard deviation (SD) 7 mm] and precision (mean SD 3.1 mm) are similar to those achieved by terrestrial laser scanning bathymetry. Derived gravel sizes correspond closely to the manual reference measurements. Overall, the findings show the Kinect’s applicability in researching shallow natural water bodies.

Roshan Kumar;P. Sumathi;Ashok Kumar; "Synchrosqueezing Transform-Based Frequency Shifting Detection for Earthquake-Damaged Structures," vol.14(8), pp.1393-1397, Aug. 2017. Analysis based on time–frequency representations provides meaningful interpretations for seismic signals. An application of the synchrosqueezing transform (SST) based on continuous wavelet transform for the detection of frequency shifting of earthquake-damaged structures is introduced in this letter. The analysis of real-time seismic records of the damaged structures is performed with Wigner–Ville distribution, <inline-formula> <tex-math notation="LaTeX">$S$ </tex-math></inline-formula>-transform, and hybrid transform, which is known as Gabor–Wigner transform. The detection of frequency shifting with SST for synthetic and seismic signals outperforms the other time–frequency distributions tested. SST improves the frequency localization by reducing smearing around frequency components and it also yields good time localization, which is comparable with <inline-formula> <tex-math notation="LaTeX">$S$ </tex-math></inline-formula>-transform. The small variation in the natural frequency of earthquake-damaged structures is detected through SST-based analysis. Hence, we show that SST improves the readability and interpretation of the time–frequency representation of frequency shifting in damaged buildings caused by large earthquakes.

Peng Xu;Kun-Shan Chen; "Circularly Polarized Bistatic Scattering From Sastrugi Snow Surfaces," vol.14(8), pp.1398-1402, Aug. 2017. In this letter, we investigate circularly polarized bistatic scattering from sastrugi snow surfaces at the <inline-formula> <tex-math notation="LaTeX">$L$ </tex-math></inline-formula>-band (1.575 GHz) at different azimuthal angles. Comparisons are made between the likewise configurations (left-hand circularly polarized transmitting and receiving, or right-hand circularly polarized transmitting and receiving) and the counterwise configurations (RL or LR). Numerical simulations show that the counterwise configurations are preferred, in the sense of maximum received power, only for randomly distributed sastrugi surfaces; this is not always the case when the surface structure is double-layered, for which the likewise polarized scattering strength is comparable to, and at certain azimuthal angles even larger than, that of the counterwise polarized scattering. The physical mechanism of this behavior is explained by local Fresnel reflections. The results offer physical insights for bistatic sensing of layered snow surfaces. It is suggested that microwave remote sensing of snow surfaces by bistatic signals (e.g., the global positioning system) adopt both likewise and counterwise transmit-receive polarization combinations.

Dongsheng Fang;Xiaolei Lv;Ye Yun;Fangfang Li; "An InSAR Fine Registration Algorithm Using Uniform Tie Points Based on Voronoi Diagram," vol.14(8), pp.1403-1407, Aug. 2017. Interferometric synthetic aperture radar (InSAR) image coregistration is a nontrivial task because of skewed and distorted image pairs, especially in severely decorrelated areas. In this letter, a new InSAR coregistration method that considers both the coherence of the reference points and their topographical distribution is proposed. First, a conventional cross-correlation registration is performed, and a number of highly coherent points are extracted as reference points. Then, well-distributed reference points are selected using a Voronoi diagram-based distribution optimization algorithm, and their subpixel correspondences are found at the maxima of the cross-correlation functions. Singular correspondences are rejected by the random sample consensus method. Based on the remaining accurate correspondences, the parameters of the polynomial mapping function are estimated via weighted least squares. This improves the accuracy of the geometrical mapping functions and overcomes the limitation of the conventional registration method when reference points are located in low-coherence areas. C-band airborne repeat-pass SAR data with 0.5-m resolution are used to validate the proposed algorithm by comparing its accuracy with that of the conventional registration algorithm. Experimental results prove the effectiveness of the proposed algorithm.
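The final estimation step, fitting the polynomial mapping by weighted least squares over the retained tie points, can be sketched with a first-order (affine) mapping. The coherence weighting and function name here are illustrative assumptions, not the letter’s implementation:

```python
import numpy as np

def fit_affine_wls(src, dst, weights):
    """Weighted least-squares fit of an affine mapping dst ≈ [x, y, 1] @ C.

    src, dst: (n, 2) tie-point coordinates in the two images;
    weights: per-point weights, e.g. coherence of the reference points.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    G = np.hstack([src, np.ones((len(src), 1))])   # design matrix [x, y, 1]
    # Scaling the rows by sqrt(w) makes lstsq minimize
    # sum_i w_i * ||residual_i||^2.
    C, *_ = np.linalg.lstsq(w * G, w * dst, rcond=None)
    return C                                        # (3, 2) coefficients

# Recover a known scale-and-shift mapping from exact tie points.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(50, 2))
dst = 1.01 * src + np.array([2.0, -3.0])
C = fit_affine_wls(src, dst, weights=np.ones(50))
mapped = np.hstack([src, np.ones((50, 1))]) @ C
```

Higher-order polynomial mappings follow the same pattern with additional columns (x², xy, y², …) in the design matrix.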

Jaakko Seppänen;Oleg Antropov;Thomas Jagdhuber;Martti Hallikainen;Janne Heiskanen;Jaan Praks; "Improved Characterization of Forest Transmissivity Within the L-MEB Model Using Multisensor SAR Data," vol.14(8), pp.1408-1412, Aug. 2017. This letter proposes a novel way to assimilate synthetic aperture radar (SAR) data to L-band Microwave Emission of the Biosphere (L-MEB) model to enhance model performance over forested areas. L- and C-band satellite SAR data are used in order to characterize the forest transmissivity within the emission model, instead of the optical satellite imagery-based leaf area index (LAI) parameter. Examination of several combinations of satellite SAR data as a substitute for LAI within the L-MEB model showed that when ALOS PALSAR (L-band) and multitemporal composite Sentinel-1 (C-band) data are applied, an improved agreement was achieved between the measured and simulated brightness temperatures (TBs) over forests. The root mean squared difference between modeled and measured TBs was reduced from 6.1 to 4.7 K with single PALSAR scene-based transmissivity correction and down to 4.1 K with multitemporal Sentinel-1 composite-based transmissivity correction. Suitability of single Sentinel-1 scenes varied based on seasonal and weather conditions. Overall, this indicates the potential of an SAR-based estimation of forest volume transmissivity and opens a possible way of fruitful active-passive microwave satellite data integration.

An Zhao;Kun Fu;Siyue Wang;Jiawei Zuo;Yuhang Zhang;Yanfeng Hu;Hongqi Wang; "Aircraft Recognition Based on Landmark Detection in Remote Sensing Images," vol.14(8), pp.1413-1417, Aug. 2017. Aircraft type recognition in remote sensing images is critical in both civil and military applications. In this letter, we propose a novel landmark-based aircraft recognition method that is highly accurate and efficient. First, we address the aircraft type recognition problem through landmark detection. The advantages of this idea are twofold. On the one hand, it needs less labeled data and alleviates the burden of human annotation. On the other hand, a trained model has strong expansibility because it can be applied, without retraining, to aircraft types not contained in the training data set. Then, we use a variant of a convolutional neural network, called the vanilla network, to regress all landmarks simultaneously; by implicitly encoding the geometric constraints among landmarks, it effectively avoids bad local minima. To handle aircraft in a wide range of poses, rotation jittering is used for data augmentation in preprocessing and multicrop fusion is used in postprocessing, yielding an 80% reduction in error rate. Finally, we use landmark template matching to recognize the aircraft. Our method shows competitive performance in both accuracy and efficiency.

Zhigang Pan;Preston Hartzell;Craig Glennie; "Calibration of an Airborne Single-Photon Lidar System With a Wedge Scanner," vol.14(8), pp.1418-1422, Aug. 2017. Over the past decade, boresight angle calibration of airborne laser scanning (ALS) systems has evolved from ad hoc methods often based on qualitative assessments of point cloud fidelity to rigorous self-calibration algorithms that optimize multiple sensor parameters by minimizing the spatial discrepancies between common features. Although the calibration of linear-mode ALS systems employing oscillating or rotating mirrors has been well developed, little work has addressed the calibration of emergent single-photon lidar (SPL) sensors with circular scan patterns. We adapt a least-squares algorithm employing planar-surface matching to accommodate a spinning wedge prism, employ a synthetic dynamic wedge angle by way of a trigonometric polynomial (TP) to model imperfections in the circular scanning mechanism, and address unique characteristics of SPL data within the stochastic model. Planar fit residuals are reduced by 40% with a boresight and wedge angle adjustment and a further 40% with the introduction of the synthetic wedge angle TP. The addition of the TP also improves the median vertical discrepancy between point clouds generated from fore and aft look angles by over 75%.

Yuying Wang;Zhimin Zhang;Ning Li;Feng Hong;Huaitao Fan;Xiangyu Wang; "Maritime Surveillance With Undersampled SAR," vol.14(8), pp.1423-1427, Aug. 2017. According to the minimum antenna area constraint, synthetic aperture radar (SAR) systems require a low pulse repetition frequency (PRF) to image wide swaths in ocean surface monitoring scenarios. However, a PRF lower than the Doppler bandwidth causes azimuth ambiguities. This letter proposes a novel method for mitigating azimuth ambiguities when using an undersampled SAR system for ship detection over the open sea. The concept of range subspectra is adopted to misregister the azimuth ambiguity signals. In addition, a change detection method that uses principal component analysis and <inline-formula> <tex-math notation="LaTeX">$k$ </tex-math></inline-formula>-means clustering is adopted to detect differences in the azimuth ambiguities between two range subspectra images in which the ambiguities are misregistered. By compensating for the ambiguities in the corresponding image, they can be mitigated. This method is appropriate only for bright targets over dark backgrounds, since some residual energy loss occurs for the useful signals. Both simulated and real data processing results validate the effectiveness of the proposed method and show that it is feasible for maritime surveillance.
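The change detection step, principal component analysis followed by k-means on the pair of subspectra images, can be sketched on toy data. This is a minimal stand-in (single-feature pixel differences, extreme-point initialization), not the letter’s implementation; the function name is hypothetical:

```python
import numpy as np

def pca_kmeans_change(img_a, img_b, n_iter=20):
    """Flag changed pixels between two co-registered images: project
    pixel differences onto principal components, then split the
    projections with a plain 2-means clustering."""
    diff = (np.asarray(img_a, float) - np.asarray(img_b, float)).reshape(-1, 1)
    X = diff - diff.mean(axis=0)
    # With a single feature, PCA is just centering; the SVD form is kept
    # so the sketch extends directly to patch-based feature vectors.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ Vt.T
    # 2-means on the projections, initialized at the extremes for stability.
    centers = np.array([[proj.min()], [proj.max()]])
    for _ in range(n_iter):
        labels = np.argmin(np.abs(proj - centers.T), axis=1)
        centers = np.array([[proj[labels == k].mean()] for k in (0, 1)])
    # The cluster with the larger |difference| is the "changed" class.
    changed = labels == np.argmax(np.abs(centers.ravel()))
    return changed.reshape(np.shape(img_a))

# Toy pair: identical images except for a small bright patch in b.
a = np.zeros((10, 10))
b = np.zeros((10, 10))
b[2:4, 2:4] = 5.0
changed = pca_kmeans_change(a, b)
```

Pixels flagged as changed would then be compensated in the corresponding subspectrum image.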

* "IEEE Geoscience and Remote Sensing Letters information for authors," vol.14(8), pp.C3-C3, Aug. 2017.*

* "IEEE Geoscience and Remote Sensing Letters Institutional Listings," vol.14(8), pp.C4-C4, Aug. 2017.*

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing - new TOC (2017 July 20) [Website]

* "Frontcover," vol.10(6), pp.C1-C1, June 2017.*

* "IEEE Geoscience and Remote Sensing Society," vol.10(6), pp.C2-C2, June 2017.*

* "Table of Contents," vol.10(6), pp.2425-2427, June 2017.*

J. Wu;Y.-Q. Jin;J. Shi;H. Guo;K.-S. Chen; "Special Issue on the 2016 IEEE International Geoscience and Remote Sensing Symposium," vol.10(6), pp.2428-2430, June 2017.

Yuanzhen Ren;Huadong Guo;Guang Liu;Hanlin Ye; "Simulation Study of Geometric Characteristics and Coverage for Moon-Based Earth Observation in the Electro-Optical Region," vol.10(6), pp.2431-2440, June 2017. Large-scale geoscience phenomena are attracting increasing attention because of their great scientific and social significance. However, many existing earth observation systems lack the ability to conduct long-term continuous observations at a regional-to-global scale because of spatial and temporal coverage limitations and systematic bias. In this work, we propose a new platform, the moon, and discuss its potential and optical geometrical characteristics for observing large-scale geoscience phenomena. Based on the Jet Propulsion Laboratory ephemerides, the reference systems transformation and a simulation system for moon-based earth observations were developed. Numerous experiments were carried out, and a series of simulation images is presented, illustrating the wide swath and continuous observation characteristics of such a lunar observatory. To quantify the performance of moon-based earth observations, a simplified geometrical model was constructed and the data were analyzed. The sublunar points were found to be unique parameters capable of characterizing the relative positions of moon and earth. We also defined an effective coverage parameter for assessing the optical coverage ability of moon-based earth observations. The calculation showed that the average value of the introduced coverage parameter from 1960 to 2050 was 0.500, and the coverage remained stable across different years. Furthermore, the total daily visible time and the number of repeated observations for different positions on the earth's surface are analyzed, and the effect of the observatory's position on the lunar surface is evaluated. The results show that such a moon-based approach could make significant contributions to the monitoring and understanding of large-scale geoscience phenomena.

Yuan Ma;Xiaolei Zou;Fuzhong Weng; "Potential Applications of Small Satellite Microwave Observations for Monitoring and Predicting Global Fast-Evolving Weathers," vol.10(6), pp.2441-2451, June 2017. Two new constellations comprising 14 small satellites with microwave instruments onboard are proposed in this study. Properly arranged, the first constellation is capable of covering the entire globe at an hourly interval, and the second constellation is more favorable for the tropical area. Compared to the current JPSS and MetOp satellite constellation, which carries the passive microwave sounding instruments ATMS or AMSU, a small satellite constellation is more cost effective, requires a shorter development cycle, and has a smaller launch-failure impact. For a designated microwave small satellite constellation, the brightness temperature distribution in space and time is simulated using operational forecast fields as inputs to the Community Radiative Transfer Model (CRTM). It is demonstrated that the structural change of fast-evolving weather systems, such as a middle-latitude cyclone, can be well captured from small satellite brightness temperatures.

Sergio Bernabé;Guillermo Botella;Gabriel Martín;Manuel Prieto-Matias;Antonio Plaza; "Parallel Implementation of a Full Hyperspectral Unmixing Chain Using OpenCL," vol.10(6), pp.2452-2461, June 2017. Spectral unmixing is an important task for remotely sensed hyperspectral data exploitation. Because the spatial resolution of the sensor may not be able to separate different spectrally pure components (endmembers), spectral unmixing faces important challenges in characterizing mixed pixels. As a result, several hyperspectral unmixing chains have been proposed to find the spectral signatures for each endmember and their associated abundance fractions. However, unmixing algorithms can be computationally expensive, which compromises their use in applications under real-time constraints. In this paper, we describe a new parallel hyperspectral unmixing chain based on three stages: 1) estimation of the number of endmembers using the geometry-based estimation of number of endmembers algorithm; 2) automatic identification of the spectral signatures of the endmembers using the simplex growing algorithm; and 3) estimation of the fractional abundance of each endmember in each pixel of the scene using the sum-to-one constrained least-squares unmixing algorithm. These algorithms have been specifically selected due to their successful performance in different applications. We have developed new parallel implementations of the aforementioned algorithms and assembled them in a fully operative unmixing chain using a hybrid implementation based on the OpenCL framework and the clMAGMA library. As a result, this is one of the first real-time implementations of a full unmixing chain in an open computing language. This methodology can be executed on different heterogeneous platforms such as CPU (multicore) and GPU platforms, for which accuracy, performance, and power consumption have been considered.
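Stage 3, sum-to-one constrained least-squares unmixing, has a closed form that a short sketch can make concrete. This is a serial NumPy toy derived with a Lagrange multiplier, not the paper’s OpenCL implementation; the endmember matrix and names are synthetic:

```python
import numpy as np

def scls_unmix(pixel, endmembers):
    """Sum-to-one constrained least-squares abundances: closed-form
    solution of min ||x - E a||^2 subject to sum(a) = 1."""
    E = np.asarray(endmembers, dtype=float)
    x = np.asarray(pixel, dtype=float)
    P = np.linalg.inv(E.T @ E)          # assumes E has full column rank
    a_ls = P @ E.T @ x                  # unconstrained LS abundances
    ones = np.ones(E.shape[1])
    # Shift a_ls along P @ 1 so the abundances sum exactly to one.
    return a_ls - P @ ones * (ones @ a_ls - 1.0) / (ones @ P @ ones)

# Recover known abundances from a noiseless mixed pixel.
rng = np.random.default_rng(3)
E = rng.uniform(0.1, 0.9, size=(20, 3))     # 20 bands, 3 endmembers
a_true = np.array([0.2, 0.5, 0.3])
a_hat = scls_unmix(E @ a_true, E)
```

Applying this per pixel is embarrassingly parallel, which is what makes the stage a natural fit for an OpenCL kernel.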

Ambar Murillo Montes de Oca;Reza Bahmanyar;Nicolae Nistor;Mihai Datcu; "Earth Observation Image Semantic Bias: A Collaborative User Annotation Approach," vol.10(6), pp.2462-2477, June 2017. Correctly annotated image datasets are important for developing and validating image mining methods. However, there is some doubt regarding the generalizability of models trained and validated on available datasets. This is due to dataset biases, which occur when the same semantic label is used in different ways across datasets, and/or when identical object categories are labeled differently across datasets. In this paper, we demonstrate the existence of dataset biases with a sample of eight remote sensing image datasets, first showing that they are readily discriminable from a feature perspective, and then demonstrating that a model trained on one dataset is not always valid on others. Past approaches to reducing dataset biases have relied on crowdsourcing; however, this is not always an option (e.g., due to public-accessibility restrictions on images), raising the question: how can annotation tasks be structured to efficiently and accurately annotate images with a limited number of nonexpert annotators? We propose a collaborative annotation methodology, conducting image annotation experiments in which users are placed in either a collaborative or an individual condition, and we analyze their annotation performance. Results show that collaborators produce more thorough and precise annotations in less time than individuals. Collaborators' labels show less variance around the consensus point, meaning their assigned labels are more predictable and more likely to be generally accepted by other users. Therefore, collaborative image annotation is a promising methodology for creating reliable datasets with a reduced number of nonexpert annotators. This in turn has implications for the creation of less biased image datasets.

Qingmiao Ma;Yingjie Li;Jane Liu;Jing M. Chen; "Long Temporal Analysis of 3-km MODIS Aerosol Product Over East China," vol.10(6), pp.2478-2490, June 2017. The 3-km resolution MODerate resolution Imaging Spectroradiometer (MODIS) aerosol product has advantages for local-scale aerosol monitoring over land. This study assessed the accuracy and feasibility of the product over East China and investigated its potential for aerosol climatology studies. The long-term aerosol optical depth (AOD) record of the product from 2002 to 2015 was collected and analyzed. Validation results show good overall accuracy: the correlation coefficient between MODIS AOD and ground measurements of the Aerosol Robotic Network (AERONET) is 0.79, and 63.1% of data points fall within the expected error range. However, in some areas the MODIS AODs are highly overestimated because of bias and noise. Seasonal average AOD maps indicate the spatio-temporal distributions of aerosol. In general, seasonal AOD values follow the sequence summer > spring > fall > winter. Higher AODs (>1.0) usually occur over urban areas and cropland, whereas lower values coincide with forest, shrub, and grassland. A simple moving average technique was applied to remove noise; trend slopes were calculated and their significance was tested. Most areas show remarkable increases in AOD values prior to 2010, followed by significant downward trends. Differences in MODIS AOD were calculated between the 2002-2009 and 2010-2015 periods: despite the significant downward trends after 2010, the AODs are still higher than before 2010. The study demonstrates the potential application of the 3-km product in aerosol climatology but confirms that it is crucial to first remove noise.
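The smoothing-plus-trend analysis applied to the AOD series can be sketched as follows; the window length, synthetic series, and significance check are illustrative assumptions rather than the study's exact settings:

```python
import numpy as np

def moving_average(series, window):
    """Simple moving average; returns only the fully overlapped
    ('valid') part of the series."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

def trend_slope(series):
    """Least-squares trend slope per time step, plus a t-statistic for
    a rough significance check (|t| > ~2 is about the 5% level)."""
    t = np.arange(len(series), dtype=float)
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)
    sxx = ((t - t.mean()) ** 2).sum()
    se = np.sqrt(resid @ resid / (len(series) - 2) / sxx)
    return slope, slope / se

# Synthetic AOD-like series: 0.01 per-step upward trend plus noise.
rng = np.random.default_rng(2)
aod = 0.5 + 0.01 * np.arange(60) + 0.02 * rng.normal(size=60)
smooth = moving_average(aod, window=5)
slope, t_stat = trend_slope(smooth)
```

Note that smoothing leaves the residuals autocorrelated, so the t-statistic here is only a rough screen; a formal test would adjust for that.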

Ailin Liang;Ge Han;Wei Gong;Jie Yang;Chengzhi Xiang; "Comparison of Global XCO2 Concentrations From OCO-2 With TCCON Data in Terms of Latitude Zones," vol.10(6), pp.2491-2498, June 2017. This work evaluated the performance of the orbiting carbon observatory 2 (OCO-2) in terms of global atmospheric CO2 observations over 20 months (September 2014 to April 2016). Three versions of the CO2 data are currently available, namely, version 7, version 7r, and the Lite File Product (Lite_FP). For the first time, we evaluated XCO2 measurements from the three versions of OCO-2 in terms of utilization efficiency, spatiotemporal coverage, and measurement accuracy against data (GGG2014) from the total carbon column observing network (TCCON). In terms of data utilization, Lite_FP usually provided the largest usable data volume and relatively stable spatial coverage, i.e., 42% on the global scale. In addition, the spatial coverage of XCO2 measurements over land and ocean displayed opposite periodic seasonal fluctuations. However, no data were obtained in some areas where research on carbon ecology is highly significant. In terms of measurement accuracy, we considered the latitude distribution of the TCCON sites and performed a site-by-site comparison at different latitude zones between XCO2 from the three versions of OCO-2 and TCCON. Results demonstrated that the periodic variation trend of XCO2 from OCO-2 was consistent with that from TCCON. Moreover, the amplitude was similar to that of TCCON, except that several sites showed significantly different seasonal variation amplitudes. The mean bias of OCO-2 was generally < 0.8 ppm, with 0.55% deviation. Among the three versions, Lite_FP showed good results in filtering and bias correction in the mid-low latitudes but still needs improvement in the high latitudes of the Northern and Southern Hemispheres.

Bomin Sun;Anthony Reale;Franklin H. Tilley;Michael E. Pettey;Nicholas R. Nalli;Christopher D. Barnet; "Assessment of NUCAPS S-NPP CrIS/ATMS Sounding Products Using Reference and Conventional Radiosonde Observations," vol.10(6), pp.2499-2509, June 2017. The NOAA unique combined atmospheric processing system (NUCAPS) sounding products derived from Suomi national polar-orbiting partnership (S-NPP) cross track infrared sounder/advanced technology microwave sounder (CrIS/ATMS) are assessed. This is done using collocated radiosondes from reference sites (i.e., the global reference upper air network and satellite synchronized launch sites) and conventional upper air observing sites as the target data. Analysis of satellite retrieval bias and root-mean-square (rms) error, conducted on a global scale and at individual sites with representative climate regimes, indicates the NUCAPS temperature and water vapor retrieval performance meets the operational uncertainty requirements. Caution, however, is needed in this type of approach. In our empirical analyses, we find that the satellite retrieval rms error is sensitive to 1) the time mismatch between radiosonde launch and satellite overpass, particularly near the surface and tropopause for temperature and around the midtroposphere for water vapor, 2) vertical resolution differences between the satellite retrieval and radiosonde that manifest as a larger rms error in the vicinity of the planetary boundary layer and tropopause, and 3) the accuracy of radiosonde water vapor measurements, particularly in the upper troposphere and lower stratosphere where dry biases are prevalent. Examples highlighting these issues in the context of satellite data calibration and validation are provided.

Geng-Ming Jiang;Shanshan Li;Zhong-Yi Wang; "Intercalibration of IRAS/FY-3B Infrared Channels With IASI/Metop-A 1C Data," vol.10(6), pp.2510-2517, June 2017.

Lijuan Wang;Ni Guo;Xiaoping Wang;Wei Wang; "Effects of Spatial Resolution for Evapotranspiration Estimation by Using the Triangular Method Over Heterogeneous Underling Surface," vol.10(6), pp.2518-2527, June 2017. In order to verify the applicability of different triangular methods for evapotranspiration (ET) estimation and the effect of spatial resolution on these methods, the applicability of the normalized-difference vegetation index-land surface temperature (NDVI-LST) and NDVI-albedo triangular methods was validated based on enhanced thematic mapper (ETM+) and moderate-resolution imaging spectrometer (MODIS) data. Considering the effect of soil moisture on ET, a new triangular method was developed using the perpendicular drought index (PDI). Compared to the measured values, the results showed that LSTs retrieved by a single-channel method using ETM+ data were close to the measured values, with a root-mean-square error (RMSE) of 5.7 K. Given the inhomogeneity of the underlying surface, low-spatial-resolution remote-sensing data blur the between-pixel differences. A higher spatial resolution of the remote sensing (RS) data corresponds to a greater homogeneity of the distribution of scatter plots in the eigenspace and greater differences between pixels, particularly in the NDVI-LST eigenspace. The eigenspace formed by the PDI and the NDVI possesses distinct triangular characteristics, particularly for the inversion results of the ETM+ data, with a mean absolute percent error of 14% and an RMSE of 103 W·m−2. The dry-edge slope introduced by the PDI in the expression increases the accuracy of the estimated ET. Compared to the measured data, the RMSEs of ET estimated by the NDVI-PDI method using the ETM+ and MODIS data were reduced to 92 and 121 W·m−2, respectively. The regional distribution of ET inverted by the NDVI-PDI method coincided well with the actual scenario of the underlying surface.

Xin Pan;Yuanbo Liu;Guojing Gan;Xingwang Fan;Yingbao Yang; "Estimation of Evapotranspiration Using a Nonparametric Approach Under All Sky: Accuracy Evaluation and Error Analysis," vol.10(6), pp.2528-2539, June 2017. Accurate estimation of regional evapotranspiration (ET) or latent heat flux (latent energy, LE) remains a challenge. On the basis of a nonparametric approach, this study proposed an all-sky algorithm based on moderate-resolution imaging spectroradiometer (MODIS) products and datasets of the China Meteorological Administration Land Data Assimilation System (CLDAS). Eddy covariance observations from three nonvegetated sites (desert, Gobi, and village) and three vegetated sites (orchard, vegetable, and wetland) over an arid/semiarid region were used as references to validate the new algorithm. Results showed that the spatial and temporal patterns of LE coincided with desert–oasis ecosystems. Comparison of the retrieved and reference values yielded the following results: R2 = 0.19–0.63, bias = −129 to 56 W/m2, relative error (RE) = 5%–29%, and root-mean-square error (RMSE) = 95–150 W/m2. Remote-sensing-retrieved LE (RSLE) exhibited relatively good accuracy but poor agreement with ground observations at the nonvegetated sites (RE: 5%–23%, R2: 0.19–0.40), whereas the opposite occurred at the vegetated sites (RE: 24%–29%, R2: 0.46–0.63). In the arid nonvegetated region, the ET error might have been caused by net radiation, soil heat flux, land surface temperature, and air temperature. In the vegetated region, the errors of the MODIS and CLDAS products were not the dominant error sources of RSLE. The validation supports the applicability of the proposed algorithm in the arid/semiarid region.

Wei Wang;Hui Lu;Tianjie Zhao;Lingmei Jiang;Jiancheng Shi; "Evaluation and Comparison of Daily Rainfall From Latest GPM and TRMM Products Over the Mekong River Basin," vol.10(6), pp.2540-2549, June 2017. Many satellite-based precipitation products have been developed and released during the past decades. This paper presents a primary evaluation and comparison of the two latest products, the Global Precipitation Measurement (GPM) mission Level 3 product Integrated Multi-satellitE Retrievals for GPM (IMERG) and the Version 7 Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) product. The comparison is based on daily scale data from April 1, 2014, to January 31, 2016, over the entire Mekong River Basin (MRB). Daily observation data from 53 rain gauge stations were obtained to carry out a pixel-point comparison. Various aspects were examined to assess the performance of these two satellite-based precipitation products. The results demonstrate the following: 1) both IMERG and TMPA overestimate light rain (less than 8 mm at the daily scale) and underestimate heavy rain (more than 14 mm at the daily scale), and IMERG performs better than TMPA, particularly in reducing the overestimation of light rain; 2) in terms of event detection, IMERG detects rainfall events more accurately than TMPA, especially for slight rainfall; 3) IMERG reproduces the probability density function (PDF) of precipitation intensity and captures its intra-annual variability better than TMPA, especially in the Lower Mekong Basin; and 4) both products still have room for improvement, particularly for applications during the dry seasons and in high-latitude/mountainous regions. This study provides useful information for efficiently deploying IMERG in hydrological studies and for improving IMERG algorithms.
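The event-detection comparison in point 2) of the abstract above is typically quantified with contingency-table skill scores. A minimal sketch (not the authors' code; the 0.1-mm/day rain/no-rain threshold and the toy station data are assumptions):

```python
import numpy as np

def categorical_scores(gauge_mm, sat_mm, threshold=0.1):
    """Contingency-table skill scores for daily rain detection."""
    g = np.asarray(gauge_mm) >= threshold   # gauge says "rain"
    s = np.asarray(sat_mm) >= threshold     # satellite says "rain"
    hits = np.sum(g & s)
    misses = np.sum(g & ~s)
    false_alarms = np.sum(~g & s)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false-alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Toy record: 6 station-days of gauge vs. satellite daily rainfall (mm)
gauge = [0.0, 2.5, 10.0, 0.0, 4.0, 0.0]
sat   = [0.5, 3.0,  0.0, 0.0, 5.0, 0.0]
pod, far, csi = categorical_scores(gauge, sat)  # pod=2/3, far=1/3, csi=1/2
```

A higher POD with a lower FAR for one product over another is what "more accurately detects rainfall events" amounts to in such evaluations.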

Yun Yang;Martha Anderson;Feng Gao;Christopher Hain;William Kustas;Tilden Meyers;Wade Crow;Raymond Finocchiaro;Jason Otkin;Liang Sun;Yang Yang; "Impact of Tile Drainage on Evapotranspiration in South Dakota, USA, Based on High Spatiotemporal Resolution Evapotranspiration Time Series From a Multisatellite Data Fusion System," vol.10(6), pp.2550-2564, June 2017. Soil drainage is a widely used agricultural practice in the midwest USA to remove excess soil water and potentially improve crop yield. Research shows an increasing trend in baseflow and streamflow in the midwest over the last 60 years, which may be related to artificial drainage. Subsurface drainage (i.e., tile) in particular may have strongly contributed to the increase in these flows, because of its extensive use and recent gain in popularity as a yield-enhancement practice. However, how evapotranspiration (ET) is impacted by tile drainage on a regional level is not well-documented. To explore spatial and temporal ET patterns and their relationship to tile drainage, we applied an energy balance-based multisensor data fusion method to estimate daily 30-m ET over an intensively tile-drained area in South Dakota, USA, from 2005 to 2013. Results suggest that tile drainage slightly decreases the annual cumulative ET, particularly during the early growing season. However, higher mid-season crop water use suppresses the extent of the decrease in annual cumulative ET that might be anticipated from widespread drainage. The regional water balance analysis during the growing season demonstrates good closure, with the average residual from 2005 to 2012 as low as −3 mm. As an independent check of the simulated ET at the regional scale, the water balance analysis lends additional confidence to the study. The results of this study improve our understanding of the influence of agricultural drainage practices on regional ET and can inform future decision making regarding tile drainage systems.

Shuyan Liu;Christopher Grassotti;Junye Chen;Quanhua Liu; "GPM Products From the Microwave-Integrated Retrieval System," vol.10(6), pp.2565-2574, June 2017. An updated version of the microwave-integrated retrieval system (MiRS), V11.2, was recently released. In addition to the previous capability to process multiple satellites/sensors, the new version has been extended to process global precipitation measurement (GPM) microwave imager (GMI) measurements. The main purpose of this study is to introduce the MiRS GPM products and to evaluate rain rate, total precipitable water (TPW), and snow water equivalent (SWE) using various independent datasets. Rain rate evaluations were performed for January, April, July, and October 2015, representing one full month in each season. TPW was evaluated on four days (9 January, 1 April, 13 July, and 1 October), representing one full day in each season. SWE was evaluated for a week in January 2015. Results show that MiRS performance is generally satisfactory with regard to both global/regional geographical distribution and quantified statistical/categorical scores. Histograms show that MiRS GPM rain rate estimates are capable of reproducing the moderate-to-heavy rain frequency distribution over land and the light rain distribution over ocean when compared with a ground-based reference. Evaluations of TPW show the best performance over ocean, with a correlation coefficient, bias, and standard deviation of 0.99, <1.25 mm, and <2.4 mm, respectively. Robust statistical results were also obtained for SWE, with a correlation coefficient, bias, and standard deviation of 0.77, 1.72 cm, and 3.61 cm, respectively. The examples shown demonstrate that MiRS, now extended to GPM/GMI, is capable of producing realistic retrieval products that can be used in broad applications, including extreme weather event monitoring, depiction of global rainfall distribution and water vapor patterns, and snow cover monitoring.

Guangjun He;Xuezhi Feng;Pengfeng Xiao;Zhenghuan Xia;Zuo Wang;Hao Chen;Hui Li;Jinjin Guo; "Dry and Wet Snow Cover Mapping in Mountain Areas Using SAR and Optical Remote Sensing Data," vol.10(6), pp.2575-2588, June 2017. Snow cover in mountain areas is a key factor controlling regional energy balances, the hydrological cycle, and water utilization. Optical remote sensing data offer an effective means of mapping snow cover, although their application is limited by solar illumination conditions; conversely, synthetic aperture radar (SAR) offers the ability to measure snow wetness changes in all weather. In this study, a novel two-step method using SAR and optical data has been developed for dry and wet snow cover recognition in mountain areas. First, two ground-based synchronous observations were implemented, respectively, for the snow-accumulation period and the snow-melt period. Then, the RADARSAT-2 interferometric coherence images and the backscattering coefficient images of the two periods were analyzed, adopting snow-covered and snow-free areas obtained from GF-1 satellite observations as the “ground truth.” A dynamic thresholding algorithm was proposed to identify snow cover by taking the polarization mode, local incidence angle, and underlying surface type into consideration. Finally, 36 polarimetric parameters obtained from the Pauli, H/A/α, Freeman, and Yamaguchi decompositions were analyzed; the results indicate that Pvol from Pauli, λ3 from H/A/α, and Yvol from Yamaguchi are more applicable for discriminating dry and wet snow. These three factors, combined with training samples from the Nagler algorithm and in situ data, were used to build a support vector machine to classify the extracted snow cover into dry and wet snow. The classification results demonstrate that the dry and wet snow cover extraction can achieve an accuracy of 90.3% compared with in situ measurements.

Paul A. Hwang;Xiaofeng Li;Biao Zhang; "Retrieving Hurricane Wind Speed From Dominant Wave Parameters," vol.10(6), pp.2589-2598, June 2017. One of the most difficult issues in wind measurement using microwave radar is the decrease or loss of sensitivity of the return signal in high winds. Recent analyses of wind speed, wave height, and wave period data from hurricane hunter missions show that surface waves inside hurricanes adhere to the nature of fetch- and duration-limited wind wave generation. Making use of this property, the hurricane wind speed is retrievable from the dominant wave parameters (significant wave height or dominant wave period) using the fetch- or duration-limited wave growth function. An algorithm based on this consideration is developed and applied to two hurricanes of different strengths (categories 2 and 4). The retrieved wind speeds are in good agreement with the reference wind speeds from hurricane hunter measurements. For example, combining the two hurricanes, the regression statistics of the bias, slope of linear fitting, root-mean-square difference, and correlation coefficient of wind retrieval from significant wave height are 0.50 m/s, 1.01, 4.03 m/s, and 0.87, respectively, using the fetch-limited wind wave growth function. The range of wind speeds in the combined data is from 22.4 to 65.4 m/s, and there is no indication of a saturation problem in the wind retrieval using the dominant wave parameters.

Tae-Sung Kim;Kyung-Ae Park;Xiaofeng Li;Alexis A. Mouche;Bertrand Chapron;Moonjin Lee; "Observation of Wind Direction Change on the Sea Surface Temperature Front Using High-Resolution Full Polarimetric SAR Data," vol.10(6), pp.2599-2607, June 2017. In this study, we derive high-resolution wind speeds and directions from full-polarization synthetic aperture radar (SAR) data. Previous wind retrievals from conventional single-polarization SAR data are limited in resolving small-scale structures in the surface wind because external wind direction data, with coarser spatial resolution than that of SAR, are commonly used as an input. Using fully polarimetric SAR data, however, both wind speed and direction can be derived at high resolution from the image itself without any ancillary data. We derive the wind field off the southern coast of Korea from Radarsat-2 quad-polarization data and investigate its spatial variation. The retrieved wind field from the Radarsat-2 image presents a detailed structure, including small-scale variations that are unobtainable from conventional wind observations. Comparison of the derived wind directions with in-situ buoy wind measurements shows a small difference of 8°, which is regarded as sufficient to analyze small-scale wind vector changes. The retrieved wind field off the southern coast of Korea demonstrates distinct patterns of direction changes. While blowing over the sea surface temperature (SST) frontal zone, the veering angles of wind vectors decrease and then are restored. The analysis of SAR-derived wind vectors with coinciding temperature distributions confirms that the variation in SAR-derived wind vectors on the SST fronts is mainly induced by the stability effect. This study also addresses the important role of precise wind direction retrieval in the accuracy of retrieved wind speed.

Yangdong Li;Chunyan Li;Xiaofeng Li; "Remote Sensing Studies of Suspended Sediment Concentration Variations in a Coastal Bay During the Passages of Atmospheric Cold Fronts," vol.10(6), pp.2608-2622, June 2017. The Mississippi River, the largest river in North America, carries a significant amount of sediment into the northern Gulf of Mexico coastal waters. In this region, recurring wind events associated with atmospheric cold fronts between fall and the following spring dominate resuspension and transport of the sediment. In this paper, based on a time series of moderate-resolution imaging spectroradiometer satellite images with a spatial resolution of 250 m acquired during the passages of three atmospheric cold fronts in March 2013, the distribution and variation of total suspended sediment (TSS) concentration and the underlying hydrodynamic mechanism were analyzed over the entire frontal process. The TSS concentration distribution and sediment transport in the Atchafalaya Bay within the Mississippi River estuary were found to be positively correlated with the magnitude of local winds and waves, and negatively correlated with bathymetry. During the prefrontal stage, concentrations in the offshore water always decreased as a result of the wind-induced landward sediment transport. In the shallower water, the variations of TSS concentration are dominated by the intensity of wind stress. In the postfrontal period, the variations of TSS were controlled by wind-driven currents, resuspension, and vertical mixing. Although there were some exceptions due to variations in the characteristics of the wind and wind-induced waves, in general, the TSS concentration decreased in the entire region.

Ruiyao Chen;Hamideh Ebrahimi;W. Linwood Jones; "Creating a Multidecadal Ocean Microwave Brightness Dataset: Three-Way Intersatellite Radiometric Calibration Among GMI, TMI, and WindSat," vol.10(6), pp.2623-2630, June 2017. The Tropical Rainfall Measuring Mission (TRMM), launched in late November 1997 into a low earth orbit, produced the longest satellite-derived precipitation time series, spanning 17 years. During the second half of this mission, a collection of cooperative weather satellites with microwave radiometers was combined to produce a 6-h tropical precipitation product, and the TRMM Microwave Imager (TMI) was used as the radiometric transfer standard to intercalibrate the constellation members. To continue this valuable precipitation climate data record, the Global Precipitation Measurement (GPM) observatory was launched in February 2014, and the GPM Microwave Imager (GMI) became the new transfer standard that normalized the microwave radiance measurements of the GPM constellation radiometers. Previously, the Central Florida Remote Sensing Lab conducted intercomparisons over oceans between TMI and the Naval Research Laboratory's WindSat polarimetric radiometer and found that the radiometric calibration of TMI relative to WindSat exhibited exceptional long-term radiometric stability over a period >8 years. Moreover, for purposes of assessing global climate change, it is crucial that a seamless transfer between the TRMM and GPM microwave brightness temperature time series be achieved. Therefore, this paper presents arguments that the three-way (WindSat, TMI, and GMI) intersatellite radiometric comparisons, performed during the 13-month overlap period, can be used to bridge the TRMM and GPM eras and assure a stable radiometric calibration between the diverse constellation's member radiometers.

Monidipa Das;Soumya K. Ghosh; "Measuring Moran's I in a Cost-Efficient Manner to Describe a Land-Cover Change Pattern in Large-Scale Remote Sensing Imagery," vol.10(6), pp.2631-2639, June 2017. Detection and analysis of land-cover change patterns from remotely sensed imagery have gained increasing research interest in recent years. A number of spatial statistics and landscape pattern metrics have been explored for this purpose. Moran's index (Moran's I) of spatial autocorrelation is one such spatiostatistical measure, which has proved useful in characterizing land-cover change, especially in Landsat data. However, since the Moran's I estimation needs to deal with the spatial weight between each pair of spatial data objects, it becomes almost infeasible to apply Moran's I to large-scale remote sensing data containing several millions of pixels. This paper proposes a method for computing Moran's I in the Hadoop MapReduce framework, thereby helping to describe spatial patterns in large-scale remotely sensed data. The contributions of the work include: 1) the exhaustive description of the Mapper and Reducer implementation for cost-effective estimation of Moran's I, and 2) the computational complexity analysis of the respective algorithms. Furthermore, two case studies are presented, considering both the rook case and the queen case of spatial contiguity. Case Study 1 demonstrates the computational efficiency of the proposed implementation, and Case Study 2 illustrates an application of Moran's I in describing the urban sprawl pattern in two large spatial zones in Kolkata, India.
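For intuition about the statistic being distributed in the paper above, global Moran's I under rook contiguity with binary weights can be sketched serially as follows (a minimal Python illustration of the statistic itself, not the paper's Hadoop MapReduce implementation):

```python
import numpy as np

def morans_i_rook(raster):
    """Global Moran's I on a 2-D array with binary rook (4-neighbour)
    contiguity weights: I = (n/W) * sum_ij w_ij z_i z_j / sum_i z_i^2."""
    x = np.asarray(raster, dtype=float)
    z = x - x.mean()
    n = x.size
    num = 0.0    # sum over all (directed) neighbour pairs of z_i * z_j
    w_sum = 0.0  # total weight W
    rows, cols = x.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (0, 1), (-1, 0), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += z[i, j] * z[ni, nj]
                    w_sum += 1.0
    return (n / w_sum) * num / np.sum(z * z)

# Two homogeneous halves -> strong positive spatial autocorrelation
raster = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 1, 1]]
I = morans_i_rook(raster)   # 2/3 for this raster
```

The O(n) pairwise neighbour accumulation in the double loop is exactly the part that MapReduce parallelizes for rasters with millions of pixels.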

Jean-Philippe Gastellu-Etchegorry;Nicolas Lauret;Tiangang Yin;Lucas Landier;Abdelaziz Kallel;Zbynek Malenovský;Ahmad Al Bitar;Josselin Aval;Sahar Benhmida;Jianbo Qi;Ghania Medjdoub;Jordan Guilleux;Eric Chavanon;Bruce Cook;Douglas Morton;Nektarios Chrysoulakis;Zina Mitraka; "DART: Recent Advances in Remote Sensing Data Modeling With Atmosphere, Polarization, and Chlorophyll Fluorescence," vol.10(6), pp.2640-2649, June 2017. To better understand the life-essential cycles and processes of our planet and to further develop remote sensing (RS) technology, there is an increasing need for models that simulate the radiative budget (RB) and RS acquisitions of urban and natural landscapes using physical approaches and considering the three-dimensional (3-D) architecture of Earth surfaces. Discrete anisotropic radiative transfer (DART) is one of the most comprehensive physically based 3-D models of Earth-atmosphere radiative transfer, covering the spectral domain from ultraviolet to thermal infrared wavelengths. It simulates the optical 3-D RB and optical signals of proximal, aerial, and satellite imaging spectrometers and laser scanners, for any urban and/or natural landscapes and for any experimental and instrumental configurations. It is freely available for research and teaching activities. In this paper, we briefly introduce DART theory and present recent advances in simulated sensors (LiDAR and cameras with finite field of view) and modeling mechanisms (atmosphere, specular reflectance with polarization and chlorophyll fluorescence). A case study demonstrating a novel application of DART to investigate urban landscapes is also presented.

Nazzareno Pierdicca;Luca Pulvirenti;Giorgio Boni;Giuseppe Squicciarino;Marco Chini; "Mapping Flooded Vegetation Using COSMO-SkyMed: Comparison With Polarimetric and Optical Data Over Rice Fields," vol.10(6), pp.2650-2662, June 2017. The capability of the COSMO-SkyMed (CSK) radar to remotely sense standing water beneath vegetation using an automatic algorithm working on a single image is investigated. The objective is to contribute to tackling the problem of missed detection of inundated vegetation by near real-time flood mapping algorithms using SAR data. The focus is on CSK because its four-satellite constellation is very suitable for rapid mapping. A set of CSK observations of an area in Northern Italy, where many rice fields are present and recurrent artificial inundations occur, was analyzed. Considering that double-bounce is the key process for detecting floodwater under vegetation and that polarimetry is potentially able to discriminate double-bounce among different scattering mechanisms, single-polarization CSK observations were compared with ALOS-2 and RADARSAT-2 fully polarimetric data. Such a multifrequency and multiangle dataset helped in understanding the multitemporal signature of the CSK data. A set of Landsat-8 images collected under cloud-free conditions was also used as a reference. Satellite acquisitions were gathered in order to ensure both spatial overlap among the images of the various sensors and temporal overlap along most of the rice growing season. The comparison between CSK and polarimetric data showed that, at least for a slender-leaf plant like rice, CSK is able to detect the enhancement of double-bounce backscattering involving water and vertical plant stems. For some selected fields, good agreement was found between CSK-derived floodwater maps and those produced using the normalized-difference water index derived from Landsat-8 images, as well as double-bounce detection from polarimetric data.

Daniel Clewley;Jane B. Whitcomb;Ruzbeh Akbar;Agnelo R. Silva;Aaron Berg;Justin R. Adams;Todd Caldwell;Dara Entekhabi;Mahta Moghaddam; "A Method for Upscaling In Situ Soil Moisture Measurements to Satellite Footprint Scale Using Random Forests," vol.10(6), pp.2663-2673, June 2017. Geophysical products generated from remotely sensed data require validation to evaluate their accuracy. Typically, in situ measurements are used for validation, as is the case for satellite-derived soil moisture products. However, a large disparity in scales often exists between in situ measurements (covering meters to tens of meters) and satellite footprints (often hundreds of meters to several kilometers), making direct comparison difficult. Before using in situ measurements for validation, they must be “upscaled” to provide the mean soil moisture within the satellite footprint. There are a number of existing upscaling methods previously applied to soil moisture measurements, but many place strict requirements on the number and spatial distribution of soil moisture sensors that are difficult to achieve with the permanent/semipermanent ground networks necessary for long-term validation efforts. A new method for upscaling is presented here, using Random Forests to fit a model between in situ measurements and a number of landscape parameters and variables impacting the spatial and temporal distributions of soil moisture. The method is specifically intended for validation of the NASA soil moisture active passive (SMAP) products at 36-, 9-, and 3-km scales. The method was applied to in situ data from the SoilSCAPE network in California and validated with data from the SMAPVEX12 campaign in Manitoba, Canada, with additional verification from the TxSON network in Texas. For the SMAPVEX12 site, the proposed method was compared to extensive field measurements and was able to predict mean soil moisture over a large area more accurately than other upscaling approaches.
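The upscaling idea above can be sketched with synthetic data. The covariates, the "true" moisture field, and the sensor count below are invented stand-ins for the paper's landscape parameters and the SoilSCAPE network (not the authors' implementation), using scikit-learn's RandomForestRegressor:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical covariates on a 50x50 grid covering one satellite footprint
# (columns standing in for, e.g., elevation, clay fraction, NDVI)
n_grid = 50 * 50
covariates = rng.uniform(0.0, 1.0, size=(n_grid, 3))
# Invented "true" soil moisture field (m^3/m^3) driven by the covariates
truth = 0.10 + 0.15 * covariates[:, 1] + 0.05 * covariates[:, 2]

# In situ sensors sample only 20 of the grid cells
idx = rng.choice(n_grid, size=20, replace=False)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(covariates[idx], truth[idx])

# Upscaled footprint value = mean of the model's prediction over every
# grid cell in the footprint, not just the sensor locations
footprint_mean = model.predict(covariates).mean()
```

Averaging the model over all cells, rather than averaging the sparse sensor readings directly, is what relaxes the strict requirements on sensor placement mentioned in the abstract.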

Weichao Sun;Xia Zhang;Nan Wang;Yi Cen; "Estimating Population Density Using DMSP-OLS Night-Time Imagery and Land Cover Data," vol.10(6), pp.2674-2684, June 2017. Population density is an essential indicator of human society. Night-time light (NTL) data provided by the Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) have been widely used in estimating population distribution, due to their capability of indicating human activity. The overglow effect of the DMSP-OLS NTL image, caused by reflection of light from adjacent areas, and the different population distribution patterns between urban and rural areas have limited its application in estimating population density. Therefore, a method was proposed to reduce the overglow effect and to model urban and rural population densities separately. The moderate resolution imaging spectroradiometer (MODIS) land cover product was applied to reduce the overglow effect and to separate urban and rural areas. In urban areas, the extracted urban DMSP-OLS NTL image was used to model population density. In rural areas, a slope-adjusted human settlement index (SAHSI), based on a digital elevation model, the MODIS enhanced vegetation index (EVI), and the DMSP-OLS NTL data, was proposed to estimate rural population density. Guangdong Province of China was taken as the study area because of its diverse population densities. The estimation in urban areas was compared with population densities derived from the normalized difference vegetation index adjusted NTL urban index (VANUI) and the EVI adjusted NTL urban index (VANUI-EVI). Population density in the rural areas was compared with results from the EVI adjusted human settlement index (HSI-EVI) and the NTL data. The mean relative error of the proposed method was 55.14% in urban areas, which was better than VANUI (60.10%) and VANUI-EVI (60.16%), and was 71% in rural areas, which was 6% lower than HSI-EVI and 3% lower than the NTL data.
The result indicates that the proposed method has the ability to reduce the overglow effect of the DMSP-OLS NTL image and to correct the impact of terrain on rural population density estimation.

Yue-Xia Wang;Ming Wei;Zhen-hui Wang;Shuai Zhang;Li-Xia Liu; "Novel Scanning Strategy for Future Spaceborne Doppler Weather Radar With Application to Tropical Cyclones," vol.10(6), pp.2685-2693, June 2017. Severe tropical cyclones (TCs) are among the most devastating natural disasters along coastal regions from tropical to temperate zones. Understanding the three-dimensional (3-D) wind fields in TCs helps in understanding their dynamics and predicting their evolution. Unfortunately, up to now, there has been no spaceborne weather radar with Doppler capability to measure the 3-D wind components on a global scale. This paper presents a novel scanning strategy for a future spaceborne Doppler weather radar mission to retrieve the 3-D wind fields of TCs, which has three downward-pointing (with three different fixed tilt angles) and conically scanning beams. With spaceborne Doppler detection, the radar system enables measurement of the vertical motion of hydrometeors, which is important in the estimation of latent heat fluxes and in the study of energy transportation in the structure of TCs on a global scale. The scanning strategy is optimized to construct three noncollinear radar observations, and the 3-D wind fields are retrieved using the least-squares method. A model simulation of TCs is used to validate the proposed scanning strategy. The proposed radar beams are able to cover the TC's full area of interest. The results demonstrate a suitable accuracy of the proposed scanning strategy in retrieving the TC's 3-D wind fields on a 10 km × 10 km × 0.25 km grid, showing promise for a future spaceborne Doppler radar.
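The least-squares retrieval step described above can be illustrated as follows. The tilt angles, azimuths, and coordinate convention here are assumptions for the sketch, not the optimized geometry of the proposal:

```python
import numpy as np

# Hypothetical geometry: three downward-pointing, conically scanning beams
# with different fixed tilt angles viewing the same grid cell from three
# azimuths (angles here are illustrative, not the optimized design)
tilts = np.radians([30.0, 40.0, 50.0])
azimuths = np.radians([0.0, 120.0, 240.0])

# Unit line-of-sight vectors (x east, y north, z up), beams pointing down
los = np.column_stack([np.sin(tilts) * np.sin(azimuths),
                       np.sin(tilts) * np.cos(azimuths),
                       -np.cos(tilts)])

wind_true = np.array([12.0, -5.0, 0.5])  # (u, v, w) in m/s
v_radial = los @ wind_true               # measured Doppler radial velocities

# Three noncollinear observations -> least-squares solve for the 3-D wind
wind_est, *_ = np.linalg.lstsq(los, v_radial, rcond=None)
```

With exactly three noncollinear line-of-sight vectors the system is determined; with more looks per cell the same `lstsq` call gives the noise-reducing overdetermined solution.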

Yujie Zheng;Howard A. Zebker; "Phase Correction of Single-Look Complex Radar Images for User-Friendly Efficient Interferogram Formation," vol.10(6), pp.2694-2701, June 2017. We present a new interferometric synthetic aperture radar processing approach that removes topography-dependent phase from single-look complex (SLC) radar images, making interferogram formation more efficient. We first adopt motion compensation techniques to resample SLC images with respect to an ideal reference orbit, then separate the residual topographic phase contributions into parts dependent only on individual SLC acquisitions, and generate topography-compensated images directly in latitude-longitude coordinates. Since the number of interferograms is typically much larger than the number of SLC images, our approach greatly reduces the needed computational resources. Furthermore, we move the need for precise knowledge of imaging geometry upstream from the end user to the data provider. We demonstrate our approach for both preprocessed SLC images and raw data using COSMO-SkyMed L1A and ALOS L0 products. The performance of our method depends on the quality of the digital elevation model (DEM) used: DEM error affects the correction phase proportionally to the baseline between radar scenes and the reference orbital path. With a 1000-m baseline and a nominal 30° incidence angle, we find that the uncertainty of estimated deformation increases by approximately 1 cm with every 3-m increase in the DEM error.
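The quoted DEM-error sensitivity is consistent with the standard baseline-to-height scaling of InSAR. A back-of-the-envelope check (the ~700-km slant range is an assumption for the sketch, as the abstract gives only the baseline, incidence angle, and resulting sensitivity):

```python
import numpy as np

# Rule-of-thumb propagation of DEM error into the deformation estimate:
#     delta_d ~ B_perp * delta_h / (r * sin(theta))
# The slant range r is an assumed value (~700 km is plausible for a
# low-earth-orbit SAR at ~30 deg incidence)
B_perp = 1000.0            # perpendicular baseline (m)
theta = np.radians(30.0)   # incidence angle
r = 700e3                  # assumed slant range (m)
delta_h = 3.0              # DEM error (m)

delta_d_cm = 100.0 * B_perp * delta_h / (r * np.sin(theta))  # ~0.9 cm
```

The linear dependence on B_perp is why the error is negligible for short baselines but matters at the 1000-m baseline quoted above.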

Liling Liu;Xiaolong Dong;Wenming Lin;Jintai Zhu;Di Zhu; "Regularized Deconvolution Method for the Resolution Enhancement of a Dual-Frequency Polarized Scatterometer on WCOM," vol.10(6), pp.2702-2712, June 2017. A dual-frequency polarized scatterometer (DPSCAT) is proposed for the Chinese Water Cycle Observation Mission (WCOM), to be launched around 2020. DPSCAT is used to measure the snow water equivalent (SWE) and the freeze/thaw state, which requires a measurement precision of 0.5 dB and a higher spatial resolution (2–5 km) than that of regular scatterometers (about 25 km). Therefore, conventional range-gate dechirping along with the Doppler beam sharpening (DBS) technique is used by DPSCAT to achieve high range and azimuth resolution simultaneously. However, DBS cannot improve the azimuth resolution over the nadir swath; thus, a new data processing method, the regularized deconvolution method (RDM), is explored to address this problem. In this paper, a quantitative analysis model is developed for RDM in order to study two crucial issues, i.e., the spatial resolution (mainly for the nadir swath) and the accuracy/precision of the backscatter measurements after resolution enhancement. Normally, the measurement precision and spatial resolution cannot be improved simultaneously using RDM. The accuracy/precision degrades as the spatial resolution improves, and vice versa. Moreover, both degrade as the measurement noise or uncertainty increases, the latter usually defined as the normalized standard deviation of the measurements ( <inline-formula><tex-math notation="LaTeX">$K_p$</tex-math></inline-formula>). 
In case of SWE retrieval that requires a reconstructed measurement precision of 0.5 dB, the best spatial resolution resolved by RDM is 3 km for <inline-formula> <tex-math notation="LaTeX">$K_p$</tex-math></inline-formula> = 7%, 4 km for <inline-formula> <tex-math notation="LaTeX">$K_p$</tex-math></inline-formula> = 10%, and 5 km for <inline-formula> <tex-math notation="LaTeX">$- _p$</tex-math></inline-formula> = 12%.
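The precision/resolution tradeoff the authors quantify is a general property of regularized deconvolution. A minimal one-dimensional sketch, assuming a generic Tikhonov regularizer rather than the paper's specific RDM formulation (the smearing matrix, noise level, and regularization weight below are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model y = A x + noise: A smears the scene like an antenna
# footprint, and the additive noise level plays the role of Kp.
n = 50
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[20:30] = 1.0
y = A @ x_true + 0.07 * rng.standard_normal(n)  # Kp-like 7% noise

# Tikhonov-regularized deconvolution: minimize ||Ax - y||^2 + lam * ||x||^2.
# Increasing lam improves measurement precision (noise suppression) at the
# cost of spatial resolution -- the tradeoff quantified in the abstract.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```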

Seung-Kuk Lee;Joo-Hyung Ryu; "High-Accuracy Tidal Flat Digital Elevation Model Construction Using TanDEM-X Science Phase Data," vol.10(6), pp.2713-2724, June 2017. This study explored the feasibility of using TanDEM-X (TDX) interferometric observations of tidal flats for digital elevation model (DEM) construction. Our goal was to generate high-precision DEMs in tidal flat areas, because accurate intertidal zone data are essential for monitoring coastal environments and erosion processes. To monitor dynamic coastal changes caused by waves, currents, and tides, very accurate DEMs with high spatial resolution are required. The bi- and monostatic modes of the TDX interferometer employed during the TDX science phase provided a great opportunity for highly accurate intertidal DEM construction using radar interferometry with no time lag (bistatic mode) or an approximately 10-s temporal baseline (monostatic mode) between the master and slave synthetic aperture radar image acquisitions. In this study, DEM construction in tidal flat areas was first optimized based on the TDX system parameters used in various TDX modes. We successfully generated intertidal zone DEMs with 5–7-m spatial resolutions and interferometric height accuracies better than 0.15 m for three representative tidal flats on the west coast of the Korean Peninsula. Finally, we validated these TDX DEMs against real-time kinematic-GPS measurements acquired in two tidal flat areas; the correlation coefficient was 0.97 with a root mean square error of 0.20 m.

Weiwei Fan;Feng Zhou;Mingliang Tao;Xueru Bai;Xiaoran Shi;Hanyang Xu; "An Automatic Ship Detection Method for PolSAR Data Based on K-Wishart Distribution," vol.10(6), pp.2725-2737, June 2017. For polarimetric synthetic aperture radar (PolSAR) data, abundant structural and textural information significantly enhances the ability to detect ships. This paper presents an automatic ship detection algorithm for PolSAR data, termed the K-Wishart detector, which utilizes a non-Gaussian K-Wishart classifier and incorporates the polarimetric SPAN parameter to identify ships. The fundamental assumption is that the PolSAR data can be well characterized by the non-Gaussian K-Wishart distribution. The automatic ship detection scheme mainly consists of two steps. First, the PolSAR data are divided into different unlabeled clusters by the automatic non-Gaussian K-Wishart classifier. Then, the SPAN information is used to extract ships among the multiple unlabeled clusters based on their energy difference from the ambient environment. Finally, the proposed method is validated using real measured NASA/JPL AIRSAR and UAVSAR datasets by comparing its performance with a modified CFAR detector, the SPAN Wishart (SPWH) detector, and the Wishart detector. The comparison results show that the proposed algorithm improves target detection while reducing the false alarm and missed detection rates.

Tianheng Yan;Wen Yang;Xiangli Yang;Carlos López-Martínez;Heng-Chao Li;Mingsheng Liao; "Polarimetric SAR Despeckling by Integrating Stochastic Sampling and Contextual Patch Dissimilarity Exploration," vol.10(6), pp.2738-2753, June 2017. Speckle reduction has been a longstanding task since the invention of synthetic aperture radar (SAR). Further image analysis and interpretation usually demand better speckle suppression that preserves the spatial and polarimetric information of polarimetric SAR (PolSAR) images. In this paper, a new PolSAR filtering algorithm is proposed that combines stochastic sampling based on nonlocal means, a random walk model, and contextual patch dissimilarity. The nonlocal means approach suppresses speckle effectively while preserving details and polarimetric information well. However, the size of the nonlocal search window is fixed without taking homogeneous and heterogeneous areas into consideration, which usually leads to computational redundancy. Correspondingly, we propose to apply a random walk to reduce the search domain. A contextual patch is also proposed to represent the large surroundings of a patch in a compact fashion. More precisely, the random walk model is utilized to determine the sampling path. Then, the traditional center patch dissimilarity and the contextual patch dissimilarity are employed to measure the transition probability of the random walk sequence. Finally, the denoised estimate of the PolSAR image is obtained with a weight function derived from the transition probability. Since the stochastic sampling combines a spatial random walk with polarimetric dissimilarity measurements, the proposed algorithm fully exploits both spatial and polarimetric information. Experimental results on synthetic and real PolSAR data demonstrate the effectiveness of the proposed method in terms of spatial and polarimetric information preservation.

Feifei Yan;Wenge Chang;Qilei Zhang;Xiangyang Li; "Analysis and Validation of Transmitter's Beam Footprint Detection and Tracking for Noncooperative Bistatic SAR," vol.10(6), pp.2754-2767, June 2017. In noncooperative bistatic synthetic aperture radar (SAR), the position of the transmitter's beam footprint should be detected and tracked in real time to perform beam synchronization. Theoretical analysis shows that the signal-to-noise ratio (SNR) of the signals reflected from the observed scene is too low to apply conventional detection and tracking methods. Based on the cross correlation and Doppler frequency information of the raw data, a transmitter's beam footprint detection and tracking method is proposed in this paper. This method can detect the beam footprint in the sidelobe of the receiver's beam footprint and obtain the relative position between the transmitter and the receiver, thereby significantly improving the performance of beam footprint detection and tracking in noncooperative scenarios. Meanwhile, a vehicle-based bistatic SAR experiment and an airborne bistatic SAR experiment are performed to evaluate the proposed method. Experimental results show that the proposed method performs well for real-time transmitter's beam footprint detection and tracking.

Alessandra Budillon;Angel Caroline Johnsy;Gilda Schirinzi; "A Fast Support Detector for Superresolution Localization of Multiple Scatterers in SAR Tomography," vol.10(6), pp.2768-2779, June 2017. This paper is focused on the problem of the detection of multiple scatterers in synthetic aperture radar (SAR) tomography. The method presented exploits the a priori information that at most <inline-formula> <tex-math notation="LaTeX">$K_{{\rm{max}}}$</tex-math></inline-formula> different scatterers are present in the same range-azimuth resolution cell. In particular, a simplified version of a generalized-likelihood ratio test (GLRT) detector, based on support estimation (Sup-GLRT), is proposed. The Sup-GLRT is a constant false alarm rate sequential test that detects the presence of scatterers, one after another, and estimates their positions, detecting the support of the unknown signal. The proposed simplified test, denoted the Fast-Sup-GLRT detector, despite still being a multistep statistical hypothesis test, exploits, at each step i, an approximated maximum-likelihood estimate of the signal support of cardinality i−1, based on the sequential estimation of i−1 supports of cardinality one. The introduced approximation allows a considerable reduction of the computational complexity, which drops from the combinatorial cost of Sup-GLRT to the linear cost of Fast-Sup-GLRT, without significantly impairing the detection probability. The performance of the proposed approach is analyzed using TerraSAR-X system parameters, with particular reference to the elevation superresolution achievable for an assigned probability of false alarm and a given number of acquisitions. Numerical results on simulated and real data are presented and discussed.

Wei Yang;Jie Chen;Wei Liu;Pengbo Wang; "Moving Target Azimuth Velocity Estimation for the MASA Mode Based on Sequential SAR Images," vol.10(6), pp.2780-2790, June 2017. A novel azimuth velocity estimation method is proposed based on the multiple azimuth squint angles (MASA) imaging mode, which acquires sequential synthetic aperture radar images with different squint angles and time lags. The MASA mode acquisition geometry is given first, and the effect of target motion on azimuth offset and slant range offset is discussed in detail. Then, the azimuth velocity estimation accuracy is analyzed, considering the errors caused by registration, defocusing, and range velocity. Moreover, the interaction between target azimuth velocity and range velocity is studied for a better understanding of the azimuth velocity estimation error caused by the range velocity. With the proposed error compensation step, the new method can achieve a very high accuracy in azimuth velocity estimation, as verified by experimental results based on both simulated data and the TerraSAR-X data.

Matus Bakon;Irene Oliveira;Daniele Perissin;Joaquim Joao Sousa;Juraj Papco; "A Data Mining Approach for Multivariate Outlier Detection in Postprocessing of Multitemporal InSAR Results," vol.10(6), pp.2791-2798, June 2017. Displacement maps from multitemporal InSAR (MTI) are usually noisy and fragmented. Thresholding on ensemble coherence is a common practice for identifying radar scatterers that are less affected by decorrelation noise. Thresholding on coherence might, however, cause loss of information over areas undergoing more complex deformation scenarios. If the discrepancies in areas of moderate coherence share similar behavior, it is important to take their spatial correlation into account for correct inference. The information over low-coherence areas might then be used in a similar way that coherence is used in thematic mapping applications such as change detection. We propose an approach based on data mining and statistical procedures for mitigating the impact of outliers in MTI results. Our approach minimizes outliers in the final results while preserving spatial and statistical dependence among observations. Tests on monitoring slope failures and undermined areas performed in this work have shown that this approach is beneficial: 1) for better evaluation of low-coherence scatterers that are commonly discarded by the standard thresholding procedure, 2) for tackling outlying observations with extremes in any variable, 3) for improving the spatial density of standard persistent scatterers, 4) for evaluating areas undergoing more complex deformation scenarios, and 5) for visualization purposes.

Claus Gebhardt;Jean-Raymond Bidlot;Sven Jacobsen;Susanne Lehner;P. Ola G. Persson;Andrey L. Pleskachevsky; "The Potential of TerraSAR-X to Observe Wind Wave Interaction at the Ice Edge," vol.10(6), pp.2799-2809, June 2017. This paper studies the sea state and wind fields at the ice edge boundary by utilizing information from different sources, including synthetic aperture radar (SAR) satellite imagery, weather and sea state analyses from the European Centre for Medium-Range Weather Forecasts, shipborne in-situ measurements, and AMSR2 ice charts. The basis is a Stripmap scene from the TerraSAR-X satellite acquired on October 18, 2015, at <inline-formula> <tex-math notation="LaTeX">$\sim$</tex-math></inline-formula>18 UTC, in support of the cruise of the research vessel R/V Sikuliaq in the Beaufort/Chukchi Sea. This scene covers an area more than 100 km long and comprises both the marginal ice zone and, for the largest part, open water. The wave and wind fields are retrieved from the satellite data at high spatial resolution using the empirical retrieval algorithms XWAVE and XMOD-2, developed specifically for X-band SAR. XWAVE allows the significant wave height to be determined not only for long swell waves, but also for short waves whose wave pattern is hardly visible in SAR; the latter is based on the analysis of image spectrum parameters and spectral noise. In addition, the possibility that SAR-specific nonlinear imaging effects degrade the imaging quality of the longer waves visible in SAR is narrowed down. Both the wave and wind fields are found to exhibit considerable spatial variability, and their relationship is analyzed. The relevance of the findings of this study with respect to wave/ice modeling is discussed.

Lizwe Wandile Mdakane;Waldo Kleynhans; "An Image-Segmentation-Based Framework to Detect Oil Slicks From Moving Vessels in the Southern African Oceans Using SAR Imagery," vol.10(6), pp.2810-2818, June 2017. Oil slick events caused by bilge leakage/dumping from ships and by other anthropogenic sources pose a threat to the aquatic ecosystem and need to be monitored on a regular basis. An automatic image-segmentation-based framework to detect oil slicks from moving vessels using spaceborne synthetic aperture radar (SAR) images over the Southern African oceans was proposed. The study uses an automated threshold-based algorithm and a region-based algorithm to achieve more efficient oil slick detection. The proposed framework consisted of two parts: first, a threshold-based method was used to detect areas with a high oil slick probability; second, a region-based method was used to extract the full extent of the detected oil slick. The proposed framework was tested on both real and synthetic SAR images; it was robust to intensity variations and weak boundaries, and was also more computationally efficient than the region-based method without the threshold-based input.

Shengli Song;Bin Xu;Jian Yang; "Ship Detection in Polarimetric SAR Images via Variational Bayesian Inference," vol.10(6), pp.2819-2829, June 2017. In this paper, we propose a novel ship detection approach for polarimetric synthetic aperture radar (SAR) images via variational Bayesian inference. First, we express the polarimetric SAR image as a tensor and decompose it as the sum of a sparse component associated with ships and a sea clutter component. These components are denoted by latent variables. Then, we introduce hierarchical priors on the latent variables to establish the probabilistic model of ship detection. Using variational Bayesian inference, we estimate the posterior distributions of the latent variables. Finally, the ship detection result is obtained in the iterative Bayesian inference process. By virtue of the tensor representation of the polarimetric SAR image, the proposed approach explicitly uses all the polarization channels of the SAR image and avoids the possible information loss of scalar polarimetric feature representations. Moreover, the proposed approach needs no sliding windows: the variational Bayesian inference process uses all the pixels instead of the limited pixels in sliding windows. Thus, the proposed approach has good ship detection performance and shape-preserving ability, which is especially suitable for congested sea areas. Experimental results obtained over C-band RADARSAT-2 polarimetric SAR images demonstrate that the proposed approach achieves state-of-the-art ship detection performance.

Quentin Oliveau;Hichem Sahbi; "Learning Attribute Representations for Remote Sensing Ship Category Classification," vol.10(6), pp.2830-2840, June 2017. Object category classification in remote sensing applications usually relies on exemplar-based training. The latter is achieved by modeling the intricate relationships between visual features and their corresponding object categories. However, these models might fail when applied to fine-grained object classification problems especially when training examples are scarce and when objects exhibit complex visual appearances and strong variability. In this paper, we introduce a framework dedicated to object category classification in the context of scarce datasets. Our method builds discriminative mid-level image representations (also referred to as attributes) by learning a nonlinear mapping between the input image features and the attribute space. Moreover, we also enforce these learned attributes to be highly discriminative and easy to predict. We compare our proposed framework to existing attribute and related dictionary-based methods and apply it to two challenging tasks with scarce datasets: binary ship classification on synthetic aperture radar images and multiclass ship category recognition on optical images. These experiments show that our proposed framework is indeed highly effective and generalizes well despite the scarcity of training data.

Maryam Salehi;Ali Mohammadzadeh;Yasser Maghsoudi; "Adaptive Speckle Filtering for Time Series of Polarimetric SAR Images," vol.10(6), pp.2841-2848, June 2017. The temporal evolution of polarimetric SAR data provides an extremely valuable source of information for its various applications. In this paper, the evolution of agricultural crops over time is employed for multitemporal multidimensional speckle filtering. The proposed filtering method is based on a temporal statistical criterion that depends on all elements of the covariance matrix. This method also adaptively uses pixels with the same scattering characteristics in the spatial and temporal domains. Experimental results on a time series of Radarsat-2 images show that the proposed filter preserves the polarimetric information and performs better than the other filters considered, based on criteria such as the equivalent number of looks and an edge-preservation parameter.

Hamideh Ebrahimi;Ruiyao Chen;W. Linwood Jones; "Calibration of Millimeter Wave Sounder Radiometers on Polar Orbiting Satellites," vol.10(6), pp.2849-2854, June 2017. This paper discusses the radiometric calibration of millimeter-wave sounder radiometers on polar-orbiting satellites in the NASA Global Precipitation Measurement (GPM) constellation, and presents radiometric bias results. Because the Tropical Rainfall Measuring Mission (TRMM) operated for over 17 years, it is important to combine the TRMM and GPM precipitation datasets to produce a climate data record for global climate change studies. In the last decade of TRMM's operation, sounder radiometers were introduced into the TRMM constellation, including the Advanced Microwave Sounding Unit-B sensors flown on NOAA weather satellites and the microwave humidity sounder sensors flown on NOAA and Meteorological Operational (MetOp) satellites. These sensors have provided an invaluable dataset of radiance measurements with full earth coverage, which has been used in precipitation measurement, weather prediction, and climate studies.

Shengwei Zhong;Ye Zhang;Yushi Chen;Di Wu; "Combining Component Substitution and Multiresolution Analysis: A Novel Generalized BDSD Pansharpening Algorithm," vol.10(6), pp.2867-2875, June 2017. Modern optical satellites can acquire bundles of panchromatic (PAN) and multispectral (MS) images of a scene simultaneously. Because of the complexity of the sensors and the amount of data involved, an MS image always has lower spatial resolution than the corresponding PAN image. Pansharpening aims at fusing MS and PAN images to obtain products characterized by the spectral content of the former and the spatial details of the latter. There are two main families of pansharpening algorithms, i.e., component substitution (CS) and multiresolution analysis (MRA). Generally speaking, CS algorithms perform better at spatial detail injection, while MRA shows better spectral content preservation. In this paper, we propose a novel pansharpening algorithm that combines the concepts of CS and MRA. The proposed algorithm can be regarded as a generalized version of the existing band-dependent spatial-detail (BDSD) algorithm. A semisimulated dataset and three real datasets are adopted to compare the performance of the generalized-BDSD algorithm with six popular existing pansharpening algorithms. The results show that the proposed method has much lower spectral distortion and good visual appearance. In other words, the proposed method aggregates the advantages of CS and MRA, which demonstrates its effectiveness in practice.
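Both families the paper combines can be written as detail injection into the upsampled MS bands: fused_k = MS_k + g_k·(P − P_L), where P_L is a low-pass version of the PAN image; CS and MRA differ in how P_L and the gains g_k are built. A hedged sketch of that common form (the box low-pass filter and unit gains here are placeholders, not the paper's generalized-BDSD estimates):

```python
import numpy as np

def inject_details(ms_up, pan, gains):
    """Generic detail-injection pansharpening: add per-band scaled PAN
    detail (PAN minus a low-pass PAN) to the upsampled MS bands.
    ms_up: (bands, rows, cols); pan: (rows, cols); gains: (bands,)."""
    k = 5  # crude 5x5 box low-pass (an assumption for this sketch)
    pad = np.pad(pan, k // 2, mode="edge")
    pan_low = np.mean(
        [pad[i:i + pan.shape[0], j:j + pan.shape[1]]
         for i in range(k) for j in range(k)], axis=0)
    detail = pan - pan_low
    return ms_up + gains[:, None, None] * detail[None, :, :]

# A constant PAN image carries no spatial detail, so nothing is injected.
fused = inject_details(np.zeros((4, 8, 8)), np.ones((8, 8)), np.ones(4))
```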

Xi Chen;Wei Liu;Fulin Su;Gongjian Zhou; "Semisupervised Multiview Feature Selection for VHR Remote Sensing Images With Label Learning and Automatic View Generation," vol.10(6), pp.2876-2888, June 2017. The features of very high resolution (VHR) images can be considered as multiview data. For better analysis of the intrinsic data structure, a semisupervised multiview feature selection (SemiMFS) method is proposed in this paper to exploit the multiple views. In SemiMFS, feature views are automatically generated by decomposing features into multiple disjoint and meaningful groups. Each feature group represents a view, and each view describes a data characteristic. Then, features are evaluated and selected within each view. The contributions of SemiMFS are as follows: 1) A SemiMFS is proposed for VHR images. 2) <inline-formula><tex-math notation="LaTeX">${\ell _{1,2}}$ </tex-math></inline-formula>-norm regularization and automatic view generation are utilized in semisupervised feature selection for intragroup sparsity, rather than the conventional intergroup sparsity, without any prior knowledge. Thus, SemiMFS reduces the redundancy within views by selecting features within each view, and simultaneously preserves as much information as possible by only shrinking the weights corresponding to different views. 3) An improved iterative method is developed for an <inline-formula><tex-math notation="LaTeX">${\ell _{1,2}}$</tex-math> </inline-formula>-norm-based minimization problem together with label learning of unlabeled objects. Experiments on three VHR satellite images verify the effectiveness and practicability of the method compared with traditional single-view algorithms. The experiments demonstrate that the views and the intraview features are meaningful and offer a new way to analyze the data structure of VHR images.

Xiaoyong Bian;Chen Chen;Long Tian;Qian Du; "Fusing Local and Global Features for High-Resolution Scene Classification," vol.10(6), pp.2889-2901, June 2017. In this paper, a fused global saliency-based multiscale multiresolution multistructure local binary pattern (salM3LBP) feature and local codebookless model (CLM) feature is proposed for high-resolution image scene classification. First, two different but complementary types of descriptors (pixel intensities and differences) are developed to extract global features, characterizing the dominant spatial features in a multiple scale, multiple resolution, and multiple structure manner. The micro/macrostructure information and rotation invariance are guaranteed in the global feature extraction process. For dense local feature extraction, CLM is utilized to model the local enrichment scale invariant feature transform descriptor, and dimension reduction is conducted via joint low-rank learning with a support vector machine. Finally, a fused feature representation of salM3LBP and CLM is presented as the scene descriptor to train a kernel-based extreme learning machine for scene classification. The proposed approach is extensively evaluated on three challenging benchmark scene datasets (the 21-class land-use scene, 19-class satellite scene, and a newly available 30-class aerial scene), and the experimental results show that the proposed approach leads to superior classification performance compared with the state-of-the-art classification methods.
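The salM3LBP descriptor builds on multiscale, multiresolution variants of the basic local binary pattern operator. For orientation, a minimal sketch of the basic 3x3 LBP only (not the paper's saliency-weighted, multiscale descriptor):

```python
import numpy as np

def lbp8(img):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors of each
    interior pixel at the center value and pack the comparison bits into
    an 8-bit code in [0, 255]."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offsets):
        nb = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

# On a flat patch every neighbor equals the center, so all bits are set.
codes = lbp8(np.zeros((5, 5)))
```

The scene descriptor is then a histogram of such codes, computed over several scales and resolutions.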

Andreea Griparis;Daniela Faur;Mihai Datcu; "Quantitative Evaluation of the Feature Space Transformation Methods Used for Applications of Visual Semantic Clustering of EO Images," vol.10(6), pp.2902-2909, June 2017. Data visualization guides the process of indexing and retrieval, strengthening the link between low-level image features and high-level human understanding of image content. In this regard, we have described the semantic content of a multidimensional dataset using its descriptors to derive high-dimensional feature spaces. The dimensionality of these spaces is further reduced to three in order to provide a three-dimensional (3-D) representation of the dataset items. Our main challenge was to identify the transformation that projects the high-dimensional feature set into a 3-D space while preserving its semantic content. To this end, we have compared the efficiency of 11 feature space transformations: one feature selection algorithm and ten dimensionality reduction methods. Because the dataset properties may differ during mapping depending on the chosen algorithm, comparing the performance of multiple algorithms is a difficult task. Therefore, three quantitative measures have been used: “Trustworthiness,” “Continuity,” and “<inline-formula><tex-math notation="LaTeX">$Q_{NX}$</tex-math></inline-formula>,” the number of points preserved in data neighborhoods over projection. The mapping algorithms have been applied to three remote sensing datasets acquired from different sensors, including LANDSAT 7 ETM+ and WorldView-3.

Shaohui Mei;Qianqian Bi;Jingyu Ji;Junhui Hou;Qian Du; "Hyperspectral Image Classification by Exploring Low-Rank Property in Spectral or/and Spatial Domain," vol.10(6), pp.2910-2921, June 2017. Within-class spectral variation, which is caused by varied imaging conditions, such as changes in illumination, environmental, atmospheric, and temporal conditions, significantly degrades the performance of hyperspectral image classification. Recent studies have shown that such spectral variation can be alleviated by exploring the low-rank property in the spectral domain, especially based on the low-rank subspace assumption. In this paper, the low-rank subspace assumption is approached by exploring the low-rank property in the local spectral domain. In addition, the low-rank property in the spatial domain is also explored to alleviate spectral variation. As a result, two novel spectral-spatial low-rank (SSLR) strategies are designed to alleviate spectral variation by exploring the low-rank property in both spectral and spatial domains. Experimental results on two benchmark hyperspectral datasets demonstrate that exploring the low-rank property in the local spectral domain helps alleviate spectral variation and noticeably improves classification performance for all tested data, while exploring the low-rank property in the spatial domain is more effective for images with large homogeneous areas.
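The basic low-rank operation behind such subspace-based spectral-variation suppression is a truncated SVD of the pixels-by-bands matrix. A global illustrative sketch (the paper applies low-rank modeling in local spectral and spatial neighborhoods, not globally as here):

```python
import numpy as np

def spectral_lowrank(cube, rank):
    """Best rank-r approximation of a hyperspectral cube
    (rows x cols x bands) in the spectral domain via truncated SVD."""
    r, c, b = cube.shape
    X = cube.reshape(r * c, b)                 # pixels x bands matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # keep the top-r components
    return Xr.reshape(r, c, b)

# A cube whose spectra are all scalings of one signature is exactly rank 1,
# so the rank-1 approximation recovers it without error.
cube = np.outer(np.arange(1.0, 13.0), np.array([1.0, 2.0, 3.0])).reshape(3, 4, 3)
approx = spectral_lowrank(cube, 1)
```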

Bushra Naz Soomro;Liang Xiao;Mohsen Molaei;Lili Huang;Zhichao Lian;Shahzad Hyder Soomro; "Local and Nonlocal Context-Aware Elastic Net Representation-Based Classification for Hyperspectral Images," vol.10(6), pp.2922-2939, June 2017. By representing a query sample as a linear combination of all labeled samples and then classifying it by evaluating which class leads to the minimal representation error, representation-based classification methods have been successfully used for the classification of hyperspectral images (HSI). Depending on the norm used, sparse representation-based classification (SRC) and collaborative representation-based classification (CRC) methods have been presented as two different paradigms. SRC promotes the use of few labeled samples, while CRC encourages the use of all labeled samples from all classes to collaboratively represent the query sample. However, when the limited labeled samples of different classes are unbalanced, the learnt representation hardly reflects the particular characteristics of each class. To overcome this problem, this paper presents a novel graph-based context-aware elastic net (ELN) model for HSI classification. Under a generalized ELN framework, the proposed model is able to take full advantage of SRC and CRC. Specifically, by evaluating the spectral and spatial self-similarity of local and nonlocal neighbors, an ELN-coding neighborhood graph is constructed with an image patch distance. Owing to the exploitation of the spectral-spatial context, a centralized sparsity norm is integrated into the optimization model, which promotes local and global consistency preservation. Finally, an efficient solver for the proposed model is developed using the well-known alternating direction method of multipliers. Experiments on several real datasets validated that the proposed method can outperform state-of-the-art algorithms in terms of classification accuracy. Furthermore, the proposed method remains robust even with limited, unbalanced labeled samples.

Fatemeh Kowkabi;Hassan Ghassemian;Ahmad Keshavarz; "Hybrid Preprocessing Algorithm for Endmember Extraction Using Clustering, Over-Segmentation, and Local Entropy Criterion," vol.10(6), pp.2940-2949, June 2017. Most spectral mixture analyses in the literature overlook the spatial correlation of neighboring pixels. The main contribution of this paper is to consider both spatial and spectral information prior to endmember (EM) extraction. Hence, we take advantage of a top-down over-segmentation algorithm in combination with fuzzy c-means (FCM) clustering to identify spatially homogeneous over-segments with minimum spectral variability and high spatial correlation. FCM provides a soft segmentation, and its partial membership matrix is exploited to calculate a novel local entropy criterion (LEC) at pixels located in homogeneous over-segments. Afterwards, by applying an adaptive threshold per homogeneous over-segment, pixels with high LEC values, which are highly certain to be associated with only one class, are selected as pure ones. The LEC calculation preserves the unmixing accuracy while speeding up EM extraction. This is important for large images, particularly under real-time constraints. In experiments on synthetic and AVIRIS hyperspectral images, the clustering, over-segmentation, and entropy preprocessing offers a simple and fast framework while relatively outperforming state-of-the-art procedures in terms of extraction accuracy and computing time.

Peng Wang;Liguo Wang;Mauro Dalla Mura;Jocelyn Chanussot; "Using Multiple Subpixel Shifted Images With Spatial–Spectral Information in Soft-Then-Hard Subpixel Mapping," vol.10(6), pp.2950-2959, June 2017. Multiple subpixel shifted images (MSIs) from the same area can be incorporated to improve the accuracy of soft-then-hard subpixel mapping (STHSPM). In this paper, a novel method that derives higher resolution MSIs with more spatial–spectral information (MSI-SS) is proposed. First, the coarse MSIs produce two kinds of high-resolution MSIs for each class via two parallel paths. The spatial path produces high-resolution MSIs by soft classification followed by interpolation, while the spectral path derives the other high-resolution MSIs by interpolation followed by soft classification. Then, higher resolution MSIs with more spatial–spectral information are derived for each class by integrating the two kinds of high-resolution MSIs with an appropriate weight. Finally, the integrated higher resolution MSIs for each class are used to allocate hard class labels to subpixels. The proposed method is fast and takes more of the spatial–spectral information of the original MSIs into account. Experiments on three real hyperspectral remote sensing images show that the proposed method produces higher SPM accuracy.
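The fuse-then-allocate step can be sketched in a few lines. A simplified illustration, assuming a single scalar weight and plain per-subpixel argmax allocation (the paper chooses the weight appropriately per class and allocates hard labels under class-proportion constraints):

```python
import numpy as np

def sthspm_fuse(soft_spatial, soft_spectral, w=0.5):
    """Blend the spatial-path and spectral-path soft (fraction) maps,
    each of shape (classes, subpixels), with weight w, then give each
    subpixel the class with the largest fused soft value."""
    fused = w * soft_spatial + (1.0 - w) * soft_spectral
    return fused.argmax(axis=0)  # hard class label per subpixel

# Two classes, two subpixels: both paths favor class 0 for the first
# subpixel and class 1 for the second, so the fused labels follow suit.
labels = sthspm_fuse(np.array([[0.9, 0.2], [0.1, 0.8]]),
                     np.array([[0.8, 0.3], [0.2, 0.7]]), w=0.5)
```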

Danfeng Hong;Naoto Yokoya;Xiao Xiang Zhu; "Learning a Robust Local Manifold Representation for Hyperspectral Dimensionality Reduction," vol.10(6), pp.2960-2975, June 2017. Local manifold learning has been successfully applied to hyperspectral dimensionality reduction in order to embed nonlinear and nonconvex manifolds in the data. Local manifold learning is mainly characterized by affinity matrix construction, which is composed of two steps: neighbor selection and computation of affinity weights. There is a challenge in each step: First, the neighbor selection is sensitive to complex spectral variability due to nonuniform data distribution, illumination variations, and sensor noise; second, the computation of affinity weights is challenging due to highly correlated spectral signatures in the neighborhood. To address these two issues, a novel manifold learning methodology based on locally linear embedding is proposed through learning a robust local manifold representation. More specifically, a hierarchical neighbor selection is designed to progressively eliminate the effects of complex spectral variability using joint normalization and to robustly compute affinity (or reconstruction) weights, reducing multicollinearity via the refined neighbor selection. Additionally, an idea that combines spatial–spectral information is introduced into the proposed manifold learning methodology to further improve the robustness of affinity calculations. Classification is explored as a potential application for validating the proposed algorithm. The classification accuracy obtained with different dimensionality reduction methods is evaluated and compared, with two strategies for selecting the training and test samples: random sampling and region-based sampling. Experimental results show that the classification accuracy obtained by the proposed method is superior to that of state-of-the-art dimensionality reduction methods.
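The affinity (reconstruction) weights in locally linear embedding, which the method above builds on, are the solution of a small constrained least-squares problem per pixel. A sketch, using ridge regularization of the local Gram matrix as a simple stand-in for the paper's multicollinearity handling:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """LLE-style reconstruction weights for one sample.

    x: (d,) spectrum; neighbors: (k, d) neighbor spectra. Solves
    min ||x - w @ neighbors||^2 subject to sum(w) = 1 via the local
    Gram matrix. The ridge term `reg` damps multicollinearity among
    highly correlated spectra (a simpler device than the paper's
    refined neighbor selection).
    """
    Z = neighbors - x                              # center neighbors on x
    G = Z @ Z.T                                    # local Gram matrix (k, k)
    G = G + reg * np.trace(G) * np.eye(len(G))     # regularize
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                             # enforce sum-to-one

x = np.array([1.0, 0.0])
nb = np.array([[2.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
w = lle_weights(x, nb)
print(w, w.sum())   # weights sum to 1 and reconstruct x closely
```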

Bo-Hui Tang;Chuan Zhan;Zhao-Liang Li;Hua Wu;Ronglin Tang; "Estimation of Land Surface Temperature From MODIS Data for the Atmosphere With Air Temperature Inversion Profile," vol.10(6), pp.2976-2983, June 2017. Air temperature inversion (ATI), which usually occurs in the near-surface boundary layer of the atmosphere, influences the atmospheric upwelling and downwelling radiances along the thermal path. Because it is difficult to determine the occurrence of temperature inversion from satellite data, the influence of ATI on the retrieval of land surface temperature (LST) has not been considered in the development of LST retrieval algorithms. This paper aims to analyze and reduce the influence of ATI on LST retrieval in the generalized split-window (GSW) algorithm. The GSW coefficients are reparameterized using the ATI profiles from the thermodynamic initial guess retrieval cloud-free database. Comparison of the root-mean-square errors calculated before and after using the reparameterized coefficients in the GSW algorithm in numerical simulations showed that the LST retrieval accuracy could be improved by 0.71 K for a viewing zenith angle of 60°. Intercomparisons using the moderate resolution imaging spectroradiometer products MOD11_L2/MYD11_L2 and in situ LST measured at the Hailar field site in northeastern Inner Mongolia, China, showed that the LST retrieval accuracy could be improved by 0.4 K using the reparameterized coefficients in the GSW algorithm when ATI occurs in the atmosphere.
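The GSW algorithm referenced above combines the two thermal-channel brightness temperatures with emissivity terms. A sketch of the standard GSW functional form; the coefficient values below are purely illustrative and are not the paper's reparameterized, ATI-dependent values:

```python
def gsw_lst(t11, t12, eps_mean, eps_diff, coeffs):
    """Generalized split-window (GSW) LST estimate.

    t11, t12: brightness temperatures (K) in the two split-window
    channels; eps_mean, eps_diff: mean and difference of the channel
    emissivities. coeffs = (C, A1, A2, A3, B1, B2, B3) -- in the paper
    these are reparameterized per ATI profile; the values used in the
    example are illustrative only.
    """
    C, A1, A2, A3, B1, B2, B3 = coeffs
    f1 = A1 + A2 * (1 - eps_mean) / eps_mean + A3 * eps_diff / eps_mean**2
    f2 = B1 + B2 * (1 - eps_mean) / eps_mean + B3 * eps_diff / eps_mean**2
    return C + f1 * (t11 + t12) / 2.0 + f2 * (t11 - t12) / 2.0

# Illustrative coefficients only (not fitted values from the paper).
coeffs = (-0.3, 1.0, 0.2, -0.3, 4.0, 6.0, -25.0)
print(gsw_lst(290.0, 288.5, 0.97, 0.005, coeffs))  # a plausible LST in K
```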

Zhiguo Meng;Rui Zhao;Zhanchuan Cai;Jinsong Ping;Zesheng Tang;Si Chen; "Microwave Thermal Emission at Tycho Area and Its Geological Significance," vol.10(6), pp.2984-2990, June 2017. Tycho crater is the most prominent crater of the Copernican era, and the study of its microwave thermal emission (MTE) can help us better understand the internal structures of the lunar surface. In this paper, the spatial and temporal features of the MTE at the Tycho area are studied using Chang'E Lunar Microwave Sounder (CELMS) data combined with TiO2 abundance, surface slope and roughness, and rock abundance data of the lunar regolith. The results indicate that Tycho crater exhibits not only a hot spot at the southern crater wall but also a cold spot at the northern crater floor. Moreover, the surface slope and its orientation are the dominant factors influencing the thermal anomalies at Tycho crater, whereas the surface roughness, rock abundance, and TiO2 abundance have a lesser effect. Furthermore, the brightness temperature difference data indicate variations between the surface and substrate structure, demonstrating the potential of the CELMS data for geological studies of the lunar surface.

Zhiguo Meng;Jidong Zhang;Zhanchuan Cai;Jinsong Ping;Ze-Sheng Tang; "Microwave Thermal Emission Features of Mare Orientale Revealed by CELMS Data," vol.10(6), pp.2991-2998, June 2017. Mare Orientale is the youngest and best-preserved multiring impact basin, and research into its microwave thermal emission (MTE) has great significance for studying the internal thermal features and thermal structures of the lunar surface. In this paper, the microwave sounder (Chang'E Lunar Microwave Sounder, CELMS) data from the Chang'E-2 satellite were used to study the MTE features of Mare Orientale. The results indicate that the MTE features of the lunar regolith differ considerably between the superficial layer and the substrate layer. Also, the substrate temperature in the Orientale Basin is likely much higher than previously thought. In addition, a new perspective is given on the Hevelius Formation, which is divided into northeastern, northwestern, and southern parts and has several potential linear structures. The results are of substantial geologic importance for studying the thermal evolution of the Moon.

Xiaoqiong Qin;Mingsheng Liao;Lu Zhang;Mengshi Yang; "Structural Health and Stability Assessment of High-Speed Railways via Thermal Dilation Mapping With Time-Series InSAR Analysis," vol.10(6), pp.2999-3010, June 2017. Thermal dilation is a vital component of deformation along the extensive railway network infrastructure. To monitor subtle deformation, the synthetic aperture radar interferometry (InSAR) technique has been adopted as a space-borne geodetic tool. However, InSAR applications in railway stability surveillance have been largely limited by the sparseness of detectable point-like targets (PTs). Moreover, only one-dimensional linear displacements in the radar line-of-sight direction can be measured by a single data stack. To address these issues, we developed an improved persistent scatterers InSAR approach that can retrieve thermal dilation effects with an increased number of PTs along the railways. This strategy effectively combines SAR amplitude, interferometric phase, and the spatial information of railway structures to maximize the number of PTs. A least-squares fit of the residual phase, obtained by iterative spatial-temporal filtering, against temperature difference is used to estimate the thermal dilation of metal and concrete-asphalt materials. To validate the effectiveness of this approach, case studies using ENVISAT ASAR (ASAR) and TerraSAR-X (TSX) datasets were carried out on the railways of Beijing–Tianjin, Beijing–Shanghai, and Shanghai–Hangzhou. Subsidence velocity, gradient, and thermal dilation were used to identify hazard grades along each railway. Furthermore, linear deformation rates in two dimensions, i.e., vertical and west-east directions, along the Shanghai–Hangzhou Railway were inverted from ascending ASAR and descending TSX observations to reveal track conditions at a high level of detail.
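The least-squares step described above, fitting residual phase against temperature difference, can be sketched as follows. The lambda/(4*pi) phase-to-displacement conversion is standard InSAR practice; the wavelength and synthetic values are illustrative:

```python
import numpy as np

def thermal_dilation_rate(residual_phase, delta_temp, wavelength_m):
    """Per-PT thermal dilation rate from InSAR residual phase.

    Fits residual_phase ~ k * delta_temp by least squares (slope
    through the origin) and converts k (rad/K) to a line-of-sight
    displacement rate in mm/K via lambda / (4*pi). Assumes the
    residual phase is already unwrapped and filtered.
    """
    dt = np.asarray(delta_temp, float)
    ph = np.asarray(residual_phase, float)
    k = np.sum(dt * ph) / np.sum(dt * dt)          # LS slope through origin
    return k * wavelength_m / (4 * np.pi) * 1e3    # mm per kelvin

# Synthetic check: a target dilating 0.5 mm/K at an X-band wavelength.
lam = 0.031                                        # m, approximate X-band
dT = np.array([-10.0, 0.0, 5.0, 15.0])
phase = 0.5e-3 * 4 * np.pi / lam * dT              # forward model, no noise
print(thermal_dilation_rate(phase, dT, lam))       # recovers 0.5 mm/K
```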

Pedram Ghamisi;Bernhard Höfle;Xiao Xiang Zhu; "Hyperspectral and LiDAR Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network," vol.10(6), pp.3011-3024, June 2017. This paper proposes a novel framework for the fusion of hyperspectral and light detection and ranging-derived rasterized data using extinction profiles (EPs) and deep learning. In order to extract spatial and elevation information from both sources, EPs that include different attributes (e.g., height, area, volume, diagonal of the bounding box, and standard deviation) are taken into account. Then, the derived features are fused via either feature stacking or graph-based feature fusion. Finally, the fused features are fed to a deep learning-based classifier (convolutional neural network with logistic regression) to produce the classification map. The proposed approach is applied to two datasets acquired in Houston, TX, USA, and Trento, Italy. Results indicate that the proposed approach achieves more accurate classification results than competing approaches. It should be noted that, in this paper, the concept of deep learning has been used for the first time to fuse LiDAR and hyperspectral features, which provides new opportunities for further research.

* "Call For Papers," vol.10(6), pp.3025-3025, June 2017.*

* "Proceedings of the IEEE," vol.10(6), pp.3026-3026, June 2017.*

* "Become a published author in 4 to 6 weeks," vol.10(6), pp.3027-3027, June 2017.*

* "Introducing IEEE collabratec," vol.10(6), pp.3028-3028, June 2017.*

* "Information for Authors," vol.10(6), pp.C3-C3, June 2017.*

* "Institutional Listings," vol.10(6), pp.C4-C4, June 2017.*

IEEE Geoscience and Remote Sensing Magazine - new TOC (2017 July 20) [Website]

* "Front Cover," vol.5(2), pp.C1-C1, June 2017.* Presents the front cover for this issue of the publication.

* "CALL FOR PAPERS IEEE Geoscience and Remote Sensing Magazine," vol.5(2), pp.C2-C2, June 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Table of Contents," vol.5(2), pp.1-2, June 2017.* Presents the table of contents for this issue of the publication.

* "Staff Listing," vol.5(2), pp.2-2, June 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Lorenzo Bruzzone; "Welcome to the June 2017 Issue [From the Editor]," vol.5(2), pp.3-3, June 2017. Presents the introductory editorial for this issue of the publication.

Adriano Camps; "IEEE GRSS Continues Its Mission [President's Message]," vol.5(2), pp.4-6, June 2017. Presents the President's message for this issue of the publication.

* "PIERS 2017," vol.5(2), pp.6-6, June 2017.* Presents information on the PIERS 2017.

* "ARSI-KEO," vol.5(2), pp.6-6, June 2017.* Presents information on the upcoming ARSI-KEO conference.

* "GRSM-HSI," vol.5(2), pp.7-7, June 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

Wenzhi Liao;Jocelyn Chanussot;Mauro Dalla Mura;Xin Huang;Rik Bellens;Sidharta Gautama;Wilfried Philips; "Taking Optimal Advantage of Fine Spatial Resolution: Promoting partial image reconstruction for the morphological analysis of very-high-resolution images," vol.5(2), pp.8-28, June 2017. Diverse sensor technologies have allowed us to measure different aspects of objects on Earth's surface [such as spectral characteristics in hyperspectral images and height in light detection and ranging (LiDAR) data] with increasing spectral and spatial resolutions. Remote-sensing images of very high geometrical resolution can provide a precise and detailed representation of the monitored scene; thus, the spatial information is fundamental for many applications. Morphological profiles (MPs) and attribute profiles (APs) have been widely used to model the spatial information of very-high-resolution (VHR) remote-sensing images. MPs are obtained by computing a sequence of morphological operators based on geodesic reconstruction. However, both morphological operators based on geodesic reconstruction and attribute filters (AFs) are connected filters and, hence, suffer from the problem of leakage (i.e., regions related to different structures in the image that happen to be connected by spurious links are treated as a single object). Objects expected to disappear at a given stage remain present when they connect with other objects in the image. Consequently, the attributes of small objects are mixed with those of their larger connected objects, leading to poor performance in subsequent applications (e.g., classification).

Naoto Yokoya;Claas Grohnfeldt;Jocelyn Chanussot; "Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature," vol.5(2), pp.29-56, June 2017. In recent years, enormous efforts have been made to design image-processing algorithms to enhance the spatial resolution of hyperspectral (HS) imagery. One of the most commonly addressed problems is the fusion of HS data with higher spatial resolution multispectral (MS) data. Various techniques have been proposed to solve this data-fusion problem based on different theories, including component substitution (CS), multiresolution analysis (MRA), spectral unmixing, and Bayesian probability. This article presents a comparative review of those HS-MS fusion techniques with extensive experiments. Ten state-of-the-art HS-MS fusion methods are compared by assessing their fusion performance both quantitatively and visually. Eight data sets featuring different geographical and sensor characteristics are used in the experiments to evaluate the generalizability and versatility of the fusion algorithms. To maximize the fairness and transparency of this comparison, publicly available source codes are used, and parameters are individually tuned for maximum performance.
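Quantitative comparisons of fusion results such as those reviewed above commonly include the spectral angle mapper (SAM) among their scores; whether SAM is among the article's exact metrics is an assumption here. A minimal sketch:

```python
import numpy as np

def sam_degrees(ref, est):
    """Mean spectral angle (degrees) between reference and fused cubes.

    ref, est: arrays of shape (bands, H, W). SAM measures per-pixel
    spectral-shape agreement; 0 degrees means identical spectral
    shapes (it is invariant to per-pixel scaling).
    """
    r = ref.reshape(ref.shape[0], -1)
    e = est.reshape(est.shape[0], -1)
    num = np.sum(r * e, axis=0)
    den = np.linalg.norm(r, axis=0) * np.linalg.norm(e, axis=0)
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.degrees(ang).mean()

ref = np.random.rand(4, 8, 8) + 0.1
print(sam_degrees(ref, ref))        # close to 0 for a perfect result
print(sam_degrees(ref, 2.0 * ref))  # scaling leaves the angle near 0
```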

Jie-Bang Yan;Sivaprasad Gogineni;Fernando Rodriguez-Morales;Daniel Gomez-Garcia;John Paden;JiLu Li;Carlton J. Leuschen;David A Braaten;Jacqueline A Richter-Menge;Sinead Louise Farrell;John Brozena;Richard D. Hale; "Airborne Measurements of Snow Thickness: Using ultrawide-band frequency-modulated-continuous-wave radars," vol.5(2), pp.57-76, June 2017. Frequency-modulated-continuous-wave (FM-CW) radar has been used extensively for the airborne measurement of snow thickness over sea ice and the mapping of annual accumulation over land ice. In contrast to conventional in situ measurements, FM-CW radar, when operated onboard an airborne platform, can be a useful tool for wide-area surveys of snow deposition. Since the early 2000s, the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas (KU) has designed, developed, and deployed airborne ultrawide-band (UWB) FM-CW radars, called Snow Radars, on National Science Foundation (NSF)-, NASA-, Naval Research Laboratory (NRL)-, and Alfred Wegener Institute (AWI)-provided aircraft in both Arctic and Antarctic regions and generated a large volume of snow data products. In addition to the significant standalone value of the snow-thickness measurements, these data are being used in estimating Arctic sea ice thickness, which is a key variable in the study of atmosphere-ocean-ice interactions. This article provides a review of snow remote sensing techniques with airborne FM-CW radars to document the operating principle, design, and evolution of CReSIS' UWB FM-CW radars and to discuss the advantages and limitations associated with these systems.
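The FM-CW operating principle reviewed above maps round-trip delay to a beat frequency. A sketch of the standard range equation and a two-interface snow-thickness estimate; the function names, sweep parameters, and the assumed snow refractive index are illustrative, not the article's system values:

```python
def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s, c=3e8):
    """Range to a reflecting interface from an FM-CW beat frequency.

    Standard FM-CW relation R = c * f_b * T / (2 * B): the beat
    frequency f_b is proportional to the round-trip delay for a
    chirp of bandwidth B swept over duration T.
    """
    return c * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

def snow_thickness(fb_air_snow, fb_snow_ice, sweep_bw_hz, sweep_time_s,
                   n_snow=1.3):
    """Snow thickness from the air-snow and snow-ice returns.

    The two interfaces appear at different beat frequencies; their
    apparent range separation is divided by the snow refractive index
    (n_snow ~ 1.3 is an assumed typical value) to correct for the
    slower in-snow propagation.
    """
    dr = fmcw_range(fb_snow_ice - fb_air_snow, sweep_bw_hz, sweep_time_s)
    return dr / n_snow

# Illustrative sweep: 6 GHz of bandwidth over 250 microseconds.
B, T = 6e9, 250e-6
print(fmcw_range(1e6, B, T))  # 6.25 m to the first interface
```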

Jitao Yang;Guoqing Li; "The China GEO Data Center: Bringing order to open Earth-observation data," vol.5(2), pp.77-85, June 2017. Numerous remote-sensing devices observe the Earth every day, generating large amounts of remote-sensing images and metadata. The extensive Earth observation (EO) data made available on the Internet by different EO agencies in multiple countries pose challenges for integration, storage, access, and collaborative applications. The capacity to explore and utilize all these heterogeneous EO data from across the world is very limited. In this article, we introduce the China Group on Earth Observations (GEO) Data Center, which is devoted to the efficient integration of EO data on the Internet and to opening China's EO data to the world. We describe the infrastructure of the China GEO Data Center and discuss the design and implementation of the China Global Earth Observation System of Systems (GEOSS). We demonstrate the use of the GEO discovery and access broker (DAB) to harvest the large amounts of open EO data brokered by the GEO from different agencies around the world. We describe a data model we have designed to express the relations among EO data and capture the implicit semantic knowledge in EO data through rules (a model implemented using the Apache Jena framework). Finally, we describe the mechanisms for opening China's EO data to the world.

Pierre-Philippe Mathieu;Maurice Borgeaud;Yves-Louis Desnos;Michael Rast;Carsten Brockmann;Linda See;Ravi Kapur;Miguel Mahecha;Ursula Benz;Steffen Fritz; "The ESA's Earth Observation Open Science Program [Space Agencies[Name:_blank]]," vol.5(2), pp.86-96, June 2017. The world of Earth observation (EO) data is rapidly changing, driven by exponential advances in sensor and digital technologies. Recent decades have seen the development of extraordinary new ways of collecting, storing, manipulating, and transmitting data that are radically transforming the way we conduct and organize science. This convergence of technologies creates new challenges for EO scientists and data and software providers to fully exploit large amounts of multivariate data from diverse sources. At the same time, these technological trends also generate huge opportunities to better understand our planet and turn big data into new types of information services. This article briefly describes some of the elements of the European Space Agency's (ESA) EO Open Science program, which aims to enable the digital transformation of the EO community and make the most of the large, complex, and diverse data delivered by the new generation of EO missions, such as the Copernicus Sentinels.

Fawwaz Ulaby; "Remote Sensing Code Library [Software and Data Sets]," vol.5(2), pp.95-96, June 2017. Forward progress in remote sensing science and technology relies on the free exchange of results and ideas as well as the dissemination of remotely sensed data acquired by ground-based, airborne, and spaceborne sensors. An important ingredient is the family of computer codes and algorithms used by scientists and engineers to analyze data, model and calibrate sensors, and associate sensor output with the physical parameters of the observed scenes. Scientific journals and meetings—such as IEEE Transactions on Geoscience and Remote Sensing and the IEEE International Geoscience and Remote Sensing symposia—play an important role in facilitating the exchange of scientific information, but their domains do not encompass computer codes. For the most part, remote sensing computer codes remain hidden from public view. The Remote Sensing Code Library (RSCL) is an IEEE Geoscience and Remote Sensing Society initiative to establish a large family of remote sensing computer codes—contributed by members of the remote sensing community and their host institutions—for use by other members of the community, not only to verify and extend published results but also to reduce the duplication of time and effort invested in the development of these codes.

Animesh Maitra; "A Profile of Remote-Sensing Activities in the Kolkata Chapter 2015-2016 [Chapters]," vol.5(2), pp.97-100, June 2017. Presents GRSS events from the Kolkata Chapter.

Dinesh Sathyamoorthy;Biwajeet Pradhan;Koo Voon Chet;Hean Teik Chuah; "IEEE GRSS Malaysia Chapter [Chapters]," vol.5(2), pp.100-102, June 2017. Presents GRSS chapter events from Malaysia.

Lori Mann Bruce; "Promoting the Success of Women in Engineering Through Affinity Groups [Women in GRS]," vol.5(2), pp.103-105, June 2017. Presents information on the Women in Engineering (WIE) Affinity Groups, whose purpose is to promote and encourage women engineers in their careers.

Linda Hayden; "Ground Station Operations Training Hosted by Eastern North Carolina Chapter [Education]," vol.5(2), pp.106-106, June 2017. Presents GRSS chapter events from the Eastern North Carolina Chapter.

* "GRSS Members Elevated to Senior Member in February 2017 [GRSS Member Highlights]," vol.5(2), pp.107-107, June 2017.* Lists the GRSS members who were elevated to the status of Senior Member.

* "IEEE Geoscience and Remote Sensing Society's 2016 Best Reviewers [GRSS Member Highlights]," vol.5(2), pp.107-108, June 2017.* Lists the best reviewers of GRSS society publications in 2016.

* "[Calendar]," vol.5(2), pp.C3-C3, June 2017.* Presents upcoming events and meetings of interest to GRSS society members.

* "CPGIS 2017," vol.5(2), pp.C3-C3, June 2017.* Presents information on the CPGIS 2017 conference.

Topic revision: r6 - 22 May 2015, AndreaVaccari
 
©2017 University of Virginia. Privacy Statement
Virginia Image and Video Analysis - School of Engineering and Applied Science - University of Virginia
P.O. Box 400743 - Charlottesville, VA - 22904 - E-Mail viva.uva@gmailREMOVEME.com