Relevant TOCs

IEEE Transactions on Image Processing - new TOC (2018 November 15) [Website]

Donghyeon Cho;Yu-Wing Tai;In So Kweon; "Deep Convolutional Neural Network for Natural Image Matting Using Initial Alpha Mattes," vol.28(3), pp.1054-1067, March 2019. We propose a deep convolutional neural network (CNN) method for natural image matting. Our method takes multiple initial alpha mattes from previous methods and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs and reconstructed alpha mattes. Among the various existing methods, we focus on using two simple methods as initial alpha mattes: closed-form matting and KNN matting. They are complementary to each other in terms of local and nonlocal principles. A major benefit of our method is that it can “recognize” different local image structures and then combine the results of local (closed-form) and nonlocal (KNN) matting effectively to achieve higher quality alpha mattes than either of the inputs. Furthermore, we verify the extensibility of the proposed network to different combinations of initial alpha mattes from more advanced techniques such as KL divergence matting and information-flow matting. On top of the deep CNN matting, we build an RGB-guided JPEG artifacts removal network to handle JPEG block artifacts in alpha matting. Extensive experiments demonstrate that the proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. We perform further experiments, including studies evaluating the importance of balanced training data and measuring the effects of the initial alpha mattes, and analyze variant versions of the proposed network. In addition, our method achieved a high ranking on the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared error, and gradient error. Our RGB-guided JPEG artifacts removal network also restores alpha mattes damaged by JPEG compression.
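The network's input described above is simply the normalized RGB image stacked with the initial alpha mattes. A rough sketch of assembling such an input tensor (the function name, channel order, and normalization rule are assumptions for illustration, not from the paper):

```python
import numpy as np

def build_matting_input(rgb, alpha_cf, alpha_knn):
    """Stack a normalized RGB image with two initial alpha mattes
    (e.g. closed-form and KNN matting) into one 5-channel input.

    rgb:       (H, W, 3) float array in [0, 255] or [0, 1]
    alpha_cf:  (H, W) initial alpha matte from a local method
    alpha_knn: (H, W) initial alpha matte from a nonlocal method
    """
    rgb = rgb.astype(np.float64)
    # normalize to [0, 1] if given in [0, 255] (an assumed convention)
    rgb = rgb / rgb.max() if rgb.max() > 1.0 else rgb
    return np.dstack([rgb, alpha_cf[..., None], alpha_knn[..., None]])
```

The actual network would consume batches of such tensors; the sketch only shows the channel layout.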

Xianming Liu;Deming Zhai;Rong Chen;Xiangyang Ji;Debin Zhao;Wen Gao; "Depth Restoration From RGB-D Data via Joint Adaptive Regularization and Thresholding on Manifolds," vol.28(3), pp.1068-1079, March 2019. In this paper, we propose a novel depth restoration algorithm for RGB-D data that combines characteristics of local and non-local manifolds, which provide low-dimensional parameterizations of the local and non-local geometry of depth maps. Specifically, on the one hand, a local manifold model is defined to favor the local neighboring relationship of pixels in depth; manifold regularization is accordingly introduced to promote smoothing along the manifold structure. On the other hand, the non-local characteristics of the patch-based manifold can be used to build highly data-adaptive orthogonal bases that extract elongated image patterns, accounting for self-similar structures in the manifold. We further define a manifold thresholding operator in 3D adaptive orthogonal spectral bases (eigenvectors of the discrete Laplacian of the local and non-local manifolds) to retain only low graph frequencies for depth map restoration. Finally, we propose a unified alternating direction method of multipliers optimization framework, which elegantly casts the adaptive manifold regularization and thresholding jointly to regularize the inverse problem of depth map recovery. Experimental results demonstrate that our method achieves superior performance compared with state-of-the-art methods in both objective and subjective quality evaluations.
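The "retain only low graph frequencies" operation can be illustrated with a plain graph-Laplacian projection; this is a generic sketch of spectral low-pass filtering on a graph, not the paper's joint adaptive scheme:

```python
import numpy as np

def graph_lowpass(W, x, k):
    """Project a graph signal x onto the k lowest-frequency eigenvectors of
    the combinatorial graph Laplacian L = D - W, discarding high graph
    frequencies. W is a symmetric affinity matrix; x is a signal with one
    value per node."""
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    U = evecs[:, :k]                   # k lowest graph frequencies
    return U @ (U.T @ x)
```

On a connected graph, keeping only the zero-frequency eigenvector reproduces the mean-like smooth component; keeping all frequencies returns the signal unchanged.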

Di Hu;Feiping Nie;Xuelong Li; "Discrete Spectral Hashing for Efficient Similarity Retrieval," vol.28(3), pp.1080-1091, March 2019. To meet the huge demands of data analysis, organization, and storage, hashing has attracted considerable attention, as it aims to learn an efficient binary representation of the original high-dimensional data. In this paper, we focus on unsupervised spectral hashing due to its effective manifold embedding. Existing spectral hashing methods mainly suffer from two problems: the inefficient spectral candidate and the intractable binary constraint in spectral analysis. To overcome these problems, we propose to employ spectral rotation to seek a better spectral solution and to adopt the alternating projection algorithm to handle the complex code constraints; the resulting methods are named Spectral Hashing with Spectral Rotation and Alternating Discrete Spectral Hashing, respectively. To enjoy the merits of both, the spectral rotation technique is finally combined with the original spectral objective, which aims to simultaneously learn a better spectral solution and more efficient discrete codes and is called Discrete Spectral Hashing. Furthermore, efficient optimization algorithms are provided, with time complexity comparable to existing hashing methods. To evaluate the three proposed methods, extensive comparison experiments and studies are conducted on four large-scale data sets for the image retrieval task, and the results show noticeable performance gains over several state-of-the-art spectral hashing methods on different evaluation metrics.
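Spectral rotation in the spirit described above can be sketched with an ITQ-style orthogonal-Procrustes alternation: rotate the real-valued spectral embedding so that its sign pattern matches it as closely as possible. This illustrates the idea, not the paper's exact objective:

```python
import numpy as np

def spectral_rotation(Y, n_iter=50, seed=0):
    """Alternately fit binary codes B = sign(Y @ R) and an orthogonal
    rotation R (orthogonal Procrustes step) so that the rotated embedding
    agrees with its binarization.

    Y: (n, c) real-valued spectral embedding, one row per data point.
    Returns codes B in {-1, +1} and the rotation R.
    """
    rng = np.random.default_rng(seed)
    # random orthogonal initialization via QR
    R, _ = np.linalg.qr(rng.standard_normal((Y.shape[1], Y.shape[1])))
    for _ in range(n_iter):
        B = np.sign(Y @ R)
        B[B == 0] = 1
        # best orthogonal R for fixed B: SVD of Y^T B (Procrustes)
        U, _, Vt = np.linalg.svd(Y.T @ B)
        R = U @ Vt
    return B, R
```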

Yue Li;Dong Liu;Houqiang Li;Li Li;Zhu Li;Feng Wu; "Learning a Convolutional Neural Network for Image Compact-Resolution," vol.28(3), pp.1092-1107, March 2019. We study the dual problem of image super-resolution (SR), which we term image compact-resolution (CR). Opposite to image SR, which hallucinates a visually plausible high-resolution image given a low-resolution input, image CR provides a low-resolution version of a high-resolution image, such that the low-resolution version is both visually pleasing and as informative as possible compared to the high-resolution image. We propose a convolutional neural network (CNN) for image CR, namely, CNN-CR, inspired by the great success of CNNs for image SR. Specifically, we translate the requirements of image CR into operable optimization targets for training CNN-CR: the visual quality of the compact-resolved image is ensured by constraining its difference from a naively downsampled version, and the information loss of image CR is measured by upsampling/super-resolving the compact-resolved image and comparing that to the original image. Accordingly, CNN-CR can be trained either separately or jointly with a CNN for image SR. We explore different training strategies as well as different network structures for CNN-CR. Our experimental results show that the proposed CNN-CR clearly outperforms simple bicubic downsampling, achieving on average a 2.25 dB improvement in reconstruction quality on a large collection of natural images. We further investigate two applications of image CR, i.e., low-bit-rate image compression and image retargeting. Experimental results show that the proposed CNN-CR achieves significant bit savings over High Efficiency Video Coding when applied to image compression and produces visually pleasing results when applied to image retargeting.
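The two training targets translate into a simple combined loss. The sketch below assumes hypothetical `downsample` and `upsample` operators standing in for naive (e.g. bicubic) downsampling and the SR network; the weighting `lam` is likewise an assumption:

```python
import numpy as np

def cr_loss(c, x, downsample, upsample, lam=1.0):
    """Sketch of the two CNN-CR training targets:
    (1) visual quality: keep the compact image c close to a naively
        downsampled version of the high-resolution image x;
    (2) information: upsample/super-resolve c and compare to x.
    """
    quality = np.mean((c - downsample(x)) ** 2)
    information = np.mean((upsample(c) - x) ** 2)
    return quality + lam * information
```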

Kuo-Liang Chung;Yan-Cheng Liang;Ching-Sheng Wang; "Effective Content-Aware Chroma Reconstruction Method for Screen Content Images," vol.28(3), pp.1108-1117, March 2019. In this paper, we propose a novel and effective content-aware chroma reconstruction (CACR) method for screen content images (SCIs). After receiving the decoded downsampled YUV image on the client side, our fast chroma-copy approach reconstructs the missing chroma pixels in the flat regions of the SCI. Then, for non-flat regions, a non-flat region-based winner-first voting (NRWV) strategy is proposed to identify the chroma subsampling scheme used on the server side prior to compression. Next, an effective adaptive hybrid approach is proposed to reconstruct each missing chroma pixel in the non-flat regions by fusing two reconstructed results: one from our modified NRWV-based chroma subsampling-binding and luma-guided chroma reconstruction scheme, which favors the sharp edges in SCIs, and the other from the bicubic interpolation scheme, which favors blurred and continuous-tone textures. Further, based on the identified chroma subsampling scheme, a geometry alignment-based error compensation approach is proposed to enhance the reconstructed chroma image. Based on typical test SCIs and JCT-VC screen content videos, comprehensive experiments carried out in HEVC-16.17 demonstrate that, in terms of quality, visual effect, and quality-bitrate tradeoff of the reconstructed SCIs, our CACR method significantly outperforms the existing state-of-the-art methods.

Armin Mustafa;Hansung Kim;Adrian Hilton; "MSFD: Multi-Scale Segmentation-Based Feature Detection for Wide-Baseline Scene Reconstruction," vol.28(3), pp.1118-1132, March 2019. A common problem in wide-baseline matching is the sparse and non-uniform distribution of correspondences when using conventional detectors, such as SIFT, SURF, FAST, A-KAZE, and MSER. In this paper, we introduce a novel segmentation-based feature detector (SFD) that produces an increased number of accurate features for wide-baseline matching. A multi-scale SFD is proposed using bilateral image decomposition to produce a large number of scale-invariant features for wide-baseline reconstruction. All input images are over-segmented into regions using any existing segmentation technique, such as watershed, mean-shift, or simple linear iterative clustering (SLIC). Feature points are then detected at the intersection of the boundaries of three or more regions. The detected feature points are local maxima of the image function. The key advantage of feature detection based on segmentation is that it does not require global threshold setting and can, therefore, detect features throughout the image. A comprehensive evaluation demonstrates that SFD gives an increased number of features that are accurately localized and matched between wide-baseline camera views; the number of features for a given matching error increases by a factor of 3–5 compared with SIFT; feature detection and matching performance are maintained with increasing baseline between views; multi-scale SFD improves matching performance at varying scales. Application of SFD to sparse multi-view wide-baseline reconstruction demonstrates a factor of 10 increase in the number of reconstructed points with improved scene coverage compared with SIFT/MSER/A-KAZE. Evaluation against ground-truth shows that SFD produces an increased number of wide-baseline matches with a reduced error.
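Detecting features where three or more region boundaries meet can be sketched directly on a segmentation label map: the 2x2-block test below is an illustrative simplification of the junction criterion, not the paper's full detector:

```python
import numpy as np

def junction_points(labels):
    """Return (row, col) positions where >= 3 segmentation regions meet.

    labels: (H, W) integer array of region labels from any over-segmentation
    (watershed, mean-shift, SLIC, ...). For every 2x2 block of pixels we
    count the distinct labels; three or more marks a boundary junction.
    """
    pts = []
    H, W = labels.shape
    for i in range(H - 1):
        for j in range(W - 1):
            block = {labels[i, j], labels[i, j + 1],
                     labels[i + 1, j], labels[i + 1, j + 1]}
            if len(block) >= 3:
                pts.append((i, j))
    return pts
```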

Zheheng Jiang;Danny Crookes;Brian D. Green;Yunfeng Zhao;Haiping Ma;Ling Li;Shengping Zhang;Dacheng Tao;Huiyu Zhou; "Context-Aware Mouse Behavior Recognition Using Hidden Markov Models," vol.28(3), pp.1133-1148, March 2019. Automated recognition of mouse behaviors is crucial in studying psychiatric and neurologic diseases. To achieve this objective, it is very important to analyze the temporal dynamics of mouse behaviors; in particular, transitions between neighboring actions can occur swiftly within a short period. In this paper, we develop and implement a novel hidden Markov model (HMM) algorithm to describe the temporal characteristics of mouse behaviors. Specifically, we propose a hybrid deep learning architecture, where the first unsupervised layer relies on an advanced spatial-temporal segment Fisher vector encoding both visual and contextual features. Subsequent supervised layers based on our segment aggregate network are trained to estimate the state-dependent observation probabilities of the HMM. The proposed architecture discriminates between visually similar behaviors and achieves high recognition rates, with the added strength of handling imbalanced mouse behavior datasets. Finally, we evaluate our approach using JHuang’s and our own datasets, and the results show that our method outperforms other state-of-the-art approaches.
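Once a network supplies state-dependent observation probabilities, the HMM scores a behavior sequence with the standard forward algorithm. A log-domain sketch of that generic HMM machinery (not the paper's architecture):

```python
import numpy as np

def hmm_forward(log_pi, log_A, log_obs):
    """Forward algorithm (log domain) for an HMM: combines per-frame
    state-dependent observation log-probabilities (here they would come
    from the trained network) with the transition model.

    log_pi:  (S,) initial state log-probabilities
    log_A:   (S, S) transition log-probabilities
    log_obs: (T, S) observation log-probabilities per frame and state
    Returns the total log-likelihood of the sequence.
    """
    alpha = log_pi + log_obs[0]
    for t in range(1, log_obs.shape[0]):
        # log-sum-exp over previous states, shifted by max for stability
        m = alpha.max()
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_A)) + log_obs[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```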

Jianqing Liang;Qinghua Hu;Chuangyin Dang;Wangmeng Zuo; "Weighted Graph Embedding-Based Metric Learning for Kinship Verification," vol.28(3), pp.1149-1162, March 2019. Given a group photograph, it is interesting and useful to judge whether the people in it share a specific kinship relation, such as father–daughter, father–son, mother–daughter, or mother–son. Recently, facial image-based kinship verification has attracted wide attention in computer vision. Some metric learning algorithms have been developed to improve kinship verification. However, most existing algorithms neither fuse multiple feature representations nor utilize kernel techniques. In this paper, we develop a novel weighted graph embedding-based metric learning (WGEML) framework for kinship verification. Inspired by the fact that family members usually show high similarity in facial features such as eyes, noses, and mouths, despite their diversity, we jointly learn multiple metrics by constructing an intrinsic graph and two penalty graphs to characterize the intraclass compactness and interclass separability for each feature representation, respectively, so that both the consistency and complementarity among multiple features can be fully exploited. Meanwhile, combination weights are determined through a weighted graph embedding framework. Furthermore, we present a kernelized version of WGEML to tackle nonlinear problems. Experimental results demonstrate both the effectiveness and efficiency of our proposed methods.

Xinfeng Zhang;Weisi Lin;Shiqi Wang;Jiaying Liu;Siwei Ma;Wen Gao; "Fine-Grained Quality Assessment for Compressed Images," vol.28(3), pp.1163-1175, March 2019. Image quality assessment (IQA) has attracted increasing attention due to the urgent demand in image services. Perceptual image compression is one of the most prominent applications that require IQA metrics highly correlated with human vision. To explore IQA algorithms that are more consistent with human vision, several calibrated databases have been constructed. However, the distorted images in existing databases are usually generated by corrupting pristine images with various distortions at coarse levels, so IQA algorithms validated on them may be inefficient for optimizing perceptual image compression with fine-grained quality differences. In this paper, we construct a large-scale image database for fine-grained quality assessment of compressed images. In the proposed database, reference images are compressed at constant bitrate levels by JPEG encoders with different optimization methods. To distinguish subtle differences, the pair-wise comparison method is utilized to rank them in subjective experiments. We select 100 reference images, and each image is compressed to three target bitrates by four different JPEG optimization methods, yielding 1200 distorted images in total. Sixteen well-known IQA algorithms are evaluated and analyzed on the proposed database. With the devised fine-grained IQA database, we expect to further promote image quality assessment by shifting it from a coarse-grained stage to a fine-grained stage. The database is available at:
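Turning pair-wise subjective comparisons into a global ranking is commonly done with the Bradley–Terry model; a minimal sketch of its minorize-maximize updates (a standard approach, not necessarily the one used for this database):

```python
import numpy as np

def bradley_terry(wins, n_iter=100):
    """Infer global quality scores from pairwise-comparison counts with the
    Bradley-Terry model. wins[i, j] = number of times item i was preferred
    over item j. Returns normalized strengths; sorting them gives a ranking.
    """
    n = wins.shape[0]
    p = np.ones(n)
    W = wins.sum(axis=1)        # total wins per item
    games = wins + wins.T       # comparisons per pair (diagonal stays 0)
    for _ in range(n_iter):
        # MM update: p_i = W_i / sum_j games_ij / (p_i + p_j)
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = W / denom
        p = p / p.sum()         # fix the arbitrary scale
    return p
```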

Zhun Zhong;Liang Zheng;Zhedong Zheng;Shaozi Li;Yi Yang; "CamStyle: A Novel Data Augmentation Method for Person Re-Identification," vol.28(3), pp.1176-1190, March 2019. Person re-identification (re-ID) is a cross-camera retrieval task that suffers from image style variations caused by different cameras. Prior art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle). CamStyle can serve as a data augmentation approach that reduces the risk of deep network overfitting and smooths the camera style disparities. Specifically, with a style transfer model, labeled training images can be style-transferred to each camera and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against overfitting, also incurs a considerable level of noise. To alleviate the impact of this noise, label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems, in which overfitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of overfitting. We also report competitive accuracy compared with the state of the art on Market-1501 and DukeMTMC-reID. Importantly, CamStyle can be applied to the challenging problems of one-view learning and unsupervised domain adaptation (UDA) in person re-ID, both of which have critical research and application significance. The former has labeled data in only one camera view, and the latter has labeled data in only the source domain. Experimental results show that CamStyle significantly improves the baseline in both problems. Specifically, for UDA, CamStyle achieves state-of-the-art accuracy based on a baseline deep re-ID model on Market-1501 and DukeMTMC-reID. Our code is available at:
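Label smooth regularization softens the one-hot target so the network is less confident on noisy (e.g. style-transferred) samples. A minimal sketch of LSR cross-entropy for a single example:

```python
import numpy as np

def lsr_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy with label smoothing regularization (LSR):
    the one-hot target is softened to (1 - eps) on the true class plus
    eps / K spread uniformly over all K classes.

    logits: (K,) unnormalized class scores; target: true class index.
    """
    K = logits.shape[0]
    q = np.full(K, eps / K)
    q[target] += 1.0 - eps
    # numerically stable log-softmax
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return -(q * log_p).sum()
```

With `eps = 0` this reduces to ordinary cross-entropy; larger `eps` penalizes over-confident predictions more.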

Zhimin Gao;Lei Wang;Luping Zhou; "A Probabilistic Approach to Cross-Region Matching-Based Image Retrieval," vol.28(3), pp.1191-1204, March 2019. With deep convolutional features, cross-region matching (CRM) has recently shown superior performance on image retrieval. It evaluates image similarity by comparing image regions at different locations and scales, and is, therefore, more robust to geometric variance of objects. This paper first scrutinizes CRM-based image retrieval to provide a rigorous probabilistic interpretation by following the probability ranking principle. In addition to manifesting the assumptions implicitly taken by CRM, our interpretation highlights a fundamental issue hindering the performance of CRM—when comparing two image regions, CRM ignores modeling the distribution of the visual concept class associated with an image region, making the similarity comparison less precise. Taking advantage of the unprecedented representation capability of deep convolutional features, this paper proposes one approach to tackle that issue. It treats locally clustered image regions as a pseudo-labeled class sharing the same visual concept and utilizes them to model the distribution of the visual concept class associated with an image region. Both non-parametric and parametric methods are developed for this purpose, with careful probabilistic justification. Extensive experimental study on multiple benchmark data sets demonstrates the superior performance of the proposed pseudo-label approach to CRM and other comparable methods, with the maximum improvement of more than 10 percentage points over CRM.

Xianming Liu;Gene Cheung;Xiangyang Ji;Debin Zhao;Wen Gao; "Graph-Based Joint Dequantization and Contrast Enhancement of Poorly Lit JPEG Images," vol.28(3), pp.1205-1219, March 2019. JPEG images captured in poor lighting conditions suffer from both low luminance contrast and coarse quantization artifacts due to lossy compression. Performing dequantization and contrast enhancement in separate back-to-back steps would amplify the residual compression artifacts, resulting in low visual quality. Leveraging recent developments in graph signal processing (GSP), we propose to jointly dequantize and contrast-enhance such images in a single graph-signal restoration framework. Specifically, we separate each observed pixel patch into illumination and reflectance via Retinex theory, where we define a generalized smoothness prior and a signed graph smoothness prior according to their respective unique signal characteristics. Given only a transform-coded image patch, we compute robust edge weights for each graph via low-pass filtering in the dual graph domain. We compute the illumination and reflectance components for each patch alternately, adopting accelerated proximal gradient (APG) algorithms in the transform domain, with backtracking line search for further speedup. Experimental results show that our generated images noticeably outperform state-of-the-art schemes in subjective quality evaluation.
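The Retinex separation used above models an observed patch as illumination times reflectance. A toy log-domain sketch, with a box filter standing in for the illumination smoothness prior (the paper uses graph-based priors, not this filter):

```python
import numpy as np

def box3(x):
    """3x3 mean filter with edge padding (toy smoothness stand-in)."""
    p = np.pad(x, 1, mode='edge')
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def retinex_split(patch, eps=1e-6):
    """Retinex-style decomposition: in the log domain the product model
    I = L * R becomes a sum, so a smooth estimate of log I gives the
    illumination L and the residual gives the reflectance R."""
    log_i = np.log(patch + eps)
    log_l = box3(log_i)          # smooth illumination estimate
    log_r = log_i - log_l        # reflectance residual
    return np.exp(log_l), np.exp(log_r)
```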

Tom Tirer;Raja Giryes; "Image Restoration by Iterative Denoising and Backward Projections," vol.28(3), pp.1220-1234, March 2019. Inverse problems appear in many applications, such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this paper, we propose an alternative method for solving inverse problems using off-the-shelf denoisers, which requires less parameter tuning. First, we transform a typical cost function, composed of fidelity and prior terms, into a closely related, novel optimization problem. Then, we propose an efficient minimization scheme with a P&P property, i.e., the prior term is handled solely by a denoising operation. Finally, we present an automatic tuning mechanism to set the method’s parameters. We provide a theoretical analysis of the method and empirically demonstrate its competitiveness with task-specific techniques and the P&P approach for image inpainting and deblurring.
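The P&P property (the prior handled solely by a denoising operation) can be sketched as a generic fidelity-step/denoise iteration. The box filter below is a toy stand-in for an off-the-shelf denoiser, and this is the general P&P pattern rather than the paper's specific algorithm:

```python
import numpy as np

def box_denoise(x):
    """Toy stand-in for an off-the-shelf denoiser: 3x3 mean filter."""
    p = np.pad(x, 1, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out += p[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / 9.0

def pnp_restore(y, A, At, n_iter=20, step=1.0):
    """Generic plug-and-play iteration: a gradient step on the fidelity
    term ||A x - y||^2, then a denoising step acting as the prior.
    A / At are the forward operator and its adjoint (hypothetical)."""
    x = At(y)
    for _ in range(n_iter):
        x = x - step * At(A(x) - y)   # data-fidelity (projection-like) step
        x = box_denoise(x)            # prior handled solely by the denoiser
    return x
```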

Meng Liu;Liqiang Nie;Xiang Wang;Qi Tian;Baoquan Chen; "Online Data Organizer: Micro-Video Categorization by Structure-Guided Multimodal Dictionary Learning," vol.28(3), pp.1235-1247, March 2019. Micro-videos have rapidly become one of the most dominant trends in the era of social media, so how to organize them draws our attention. Distinct from traditional long videos, which may span multiple scenes and tolerate delay, a micro-video: 1) usually records content at one specific venue within a few seconds, and the venues are structured hierarchically by category granularity, which motivates us to organize micro-videos via their venue structure; and 2) circulates over social networks in a timely manner, so its timeliness calls for effective online processing. However, only 1.22% of micro-videos are labeled with venue information when uploaded from the mobile end. To address this problem, we present a framework to organize micro-videos online. In particular, we first build a structure-guided multi-modal dictionary learning model to learn a concept-level micro-video representation by jointly considering venue structure and modality relatedness. We then develop an online learning algorithm to incrementally and efficiently strengthen our model, and to categorize micro-videos into a tree structure. Extensive experiments on a real-world data set validate our model. In addition, we have released the code to facilitate research in the community.

Asha Das;Madhu S. Nair;S. David Peter; "Sparse Representation Over Learned Dictionaries on the Riemannian Manifold for Automated Grading of Nuclear Pleomorphism in Breast Cancer," vol.28(3), pp.1248-1260, March 2019. Breast cancer is the most pervasive type of cancer among women. Computer-aided detection and diagnosis of cancer at the initial stages can increase the chances of recovery and thus reduce the mortality rate through timely prognosis and adequate treatment planning. Nuclear atypia scoring, or histopathological breast tumor grading, remains a challenging problem due to the various artifacts and variabilities introduced during slide preparation and the complexity of the underlying tissue patterns. Inspired by the success of symmetric positive definite (SPD) matrices in many challenging machine learning and computer vision tasks, sparse coding and dictionary learning on SPD matrices are proposed in this paper for breast tumor grading. The proposed covariance-based SPD matrices form a Riemannian manifold and are represented as sparse combinations of Riemannian dictionary atoms. The non-linearity of the SPD manifold is tackled by embedding into a reproducing kernel Hilbert space using kernels derived from the log-Euclidean metric and the Jeffrey and Stein divergences, compared against the non-kernel-based affine-invariant Riemannian metric. The novelty of the work lies in exploiting the kernel approach for the Hilbert-space embedding of the Riemannian manifold, which achieves better discrimination of breast cancer tissues through sparse representation over learned dictionaries; the method thus outperforms many state-of-the-art algorithms in breast cancer grading in terms of both quantitative and qualitative analysis.
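The log-Euclidean view mentioned above maps SPD matrices to a flat space via the matrix logarithm, where standard (Euclidean) sparse coding applies. A minimal sketch of that embedding:

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix via
    eigendecomposition: the log-Euclidean embedding that flattens the
    SPD manifold so ordinary vector-space tools can be used."""
    w, V = np.linalg.eigh(S)       # real eigenvalues, orthonormal V
    return (V * np.log(w)) @ V.T   # V diag(log w) V^T
```

In practice `S` would be a regularized covariance descriptor of image-region features (e.g. `np.cov(features) + 1e-6 * np.eye(d)`); the kernels in the paper are built on distances in this embedded space.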

Kun Zhan;Feiping Nie;Jing Wang;Yi Yang; "Multiview Consensus Graph Clustering," vol.28(3), pp.1261-1270, March 2019. A graph is usually formed to reveal the relationship between data points, and the graph structure is encoded by the affinity matrix. Most graph-based multiview clustering methods use predefined affinity matrices, so the clustering performance highly depends on the quality of the graph. We learn a consensus graph by minimizing disagreement between different views and constraining the rank of the Laplacian matrix. Since diverse views admit the same underlying cluster structure across multiple views, we use a new disagreement cost function for regularizing graphs from different views toward a common consensus. Simultaneously, we impose a rank constraint on the Laplacian matrix to learn the consensus graph with exactly $k$ connected components, where $k$ is the number of clusters, which differs from using fixed affinity matrices as in most existing graph-based methods. With the learned consensus graph, we can directly obtain the cluster labels without any post-processing, such as the $k$-means step in spectral clustering-based methods. A multiview consensus clustering method is proposed to learn such a graph. An efficient iterative updating algorithm is derived to optimize the proposed challenging optimization problem. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed method in terms of seven metrics.
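When the Laplacian rank constraint holds, each connected component of the learned graph is one cluster, so labels follow from a plain connected-components pass over the affinity matrix, with no k-means needed:

```python
import numpy as np
from collections import deque

def component_labels(W, tol=1e-10):
    """Read cluster labels directly off a graph: every connected component
    of the affinity matrix W (entries > tol treated as edges) is one
    cluster. BFS over the graph assigns a label per component."""
    n = W.shape[0]
    labels = -np.ones(n, dtype=int)
    cur = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        q = deque([s])
        labels[s] = cur
        while q:
            u = q.popleft()
            for v in np.nonzero(W[u] > tol)[0]:
                if labels[v] == -1:
                    labels[v] = cur
                    q.append(v)
        cur += 1
    return labels
```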

Da Chen;Jiong Zhang;Laurent D. Cohen; "Minimal Paths for Tubular Structure Segmentation With Coherence Penalty and Adaptive Anisotropy," vol.28(3), pp.1271-1284, March 2019. The minimal path method has proven to be particularly useful and efficient in tubular structure segmentation applications. In this paper, we propose a new minimal path model associated with a dynamic Riemannian metric embedded with an appearance feature coherence penalty and an adaptive anisotropy enhancement term. The features that characterize the appearance and anisotropy properties of a tubular structure are extracted through the associated orientation score. The proposed dynamic Riemannian metric is updated in the course of the geodesic distance computation carried out by the efficient single-pass fast marching method. Compared to state-of-the-art minimal path models, the proposed model is able to extract the desired tubular structures from a complicated vessel tree structure. In addition, we propose an efficient prior path-based method to search for the vessel radius value at each centerline position of the target. Finally, we perform numerical experiments on both synthetic and real images. The quantitative validation is carried out on retinal vessel images. The results indicate that the proposed model indeed achieves a promising performance.
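The geodesic distance map at the heart of minimal-path methods can be approximated on a pixel grid by Dijkstra's algorithm over a cost map; a discrete sketch of the idea (not an isotropic or anisotropic fast-marching solver):

```python
import heapq
import numpy as np

def grid_geodesic(cost, seed):
    """Dijkstra approximation of a geodesic distance map: shortest-path
    distance from `seed` over a 4-connected grid, where stepping into a
    pixel pays that pixel's `cost` (low cost along vessels would attract
    minimal paths)."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                nd = d + cost[ni, nj]
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist
```

Backtracking the distance map from an endpoint toward the seed recovers the minimal path itself.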

Yupei Wang;Xin Zhao;Yin Li;Kaiqi Huang; "Deep Crisp Boundaries: From Boundaries to Higher-Level Tasks," vol.28(3), pp.1285-1298, March 2019. Edge detection has made significant progress with the help of deep convolutional networks (ConvNets). These ConvNet-based edge detectors have approached human-level performance on standard benchmarks. We provide a systematic study of these detectors’ outputs and show that the detection results do not accurately localize edge pixels, which can be detrimental for tasks that require crisp edge inputs. As a remedy, we propose a novel refinement architecture to address the challenging problem of learning a crisp edge detector using a ConvNet. Our method leverages a top-down backward refinement pathway and progressively increases the resolution of feature maps to generate crisp edges. Our results achieve superior performance, surpassing human accuracy when using standard criteria on BSDS500 and largely outperforming state-of-the-art methods when using stricter criteria. More importantly, we demonstrate the benefit of crisp edge maps for several important applications in computer vision, including optical flow estimation, object proposal generation, and semantic segmentation.

Sunok Kim;Dongbo Min;Seungryong Kim;Kwanghoon Sohn; "Unified Confidence Estimation Networks for Robust Stereo Matching," vol.28(3), pp.1299-1313, March 2019. We present a deep architecture that estimates a stereo confidence, which is essential for improving the accuracy of stereo matching algorithms. In contrast to existing methods based on deep convolutional neural networks (CNNs) that rely on only one of the matching cost volume or the estimated disparity map, our network estimates the stereo confidence by using the two heterogeneous inputs simultaneously. Specifically, the matching probability volume is first computed from the matching cost volume with residual networks and a pooling module in a manner that yields greater robustness. The confidence is then estimated through a unified deep network that combines confidence features extracted both from the matching probability volume and its corresponding disparity. In addition, our method extracts the confidence features of the disparity map by applying multiple convolutional filters with varying sizes to an input disparity map. To learn our networks in a semi-supervised manner, we propose a novel loss function that uses confident points to compute the image reconstruction loss. To validate the effectiveness of our method in a disparity post-processing step, we employ three post-processing approaches: cost modulation, ground control points-based propagation, and aggregated ground control points-based propagation. Experimental results demonstrate that our method outperforms state-of-the-art confidence estimation methods on various benchmarks.

IEEE Transactions on Medical Imaging - new TOC (2018 November 15) [Website]

"Table of contents," vol.37(11), pp.C1-C4, Nov. 2018. Presents the table of contents for this issue of this publication.

"IEEE Transactions on Medical Imaging publication information," vol.37(11), pp.C2-C2, Nov. 2018. Provides a listing of current staff, committee members and society officers.

Dulal Bhaumik;Fei Jie;Rachel Nordgren;Runa Bhaumik;Bikas K. Sinha; "A Mixed-Effects Model for Detecting Disrupted Connectivities in Heterogeneous Data," vol.37(11), pp.2381-2389, Nov. 2018. The human brain is an amazingly complex network. Aberrant activities in this network can lead to various neurological disorders such as multiple sclerosis, Parkinson’s disease, Alzheimer’s disease, and autism. Functional magnetic resonance imaging has emerged as an important tool for delineating the neural networks affected by such diseases, particularly autism. In this paper, we propose a special type of mixed-effects model, together with an appropriate procedure for controlling false discoveries, to detect disrupted connectivities in whole-brain network studies. Results are illustrated with a large data set known as the Autism Brain Imaging Data Exchange, which includes 361 subjects from eight medical centers.
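A standard way to control false discoveries across many connectivity tests is the Benjamini-Hochberg step-up procedure; a sketch of it (the paper's exact procedure may differ):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: with m p-values sorted
    ascending, find the largest k such that p_(k) <= k * q / m and reject
    the k hypotheses with the smallest p-values. Controls the false
    discovery rate at level q (under independence or positive dependence).
    Returns a boolean rejection mask in the original order."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index meeting the bound
        reject[order[:k + 1]] = True
    return reject
```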

Rodrigo A. Lobos;Tae Hyung Kim;W. Scott Hoge;Justin P. Haldar; "Navigator-Free EPI Ghost Correction With Structured Low-Rank Matrix Models: New Theory and Methods," vol.37(11), pp.2390-2402, Nov. 2018. Structured low-rank matrix models have previously been introduced to enable calibrationless MR image reconstruction from sub-Nyquist data, and such ideas have recently been extended to enable navigator-free echo-planar imaging (EPI) ghost correction. This paper presents a novel theoretical analysis which shows that, because of uniform subsampling, the structured low-rank matrix optimization problems for EPI data will always have either undesirable or non-unique solutions in the absence of additional constraints. This theory leads us to recommend and investigate problem formulations for navigator-free EPI that incorporate side information from either image-domain or k-space domain parallel imaging methods. The importance of using nonconvex low-rank matrix regularization is also identified. We demonstrate using phantom and in vivo data that the proposed methods are able to eliminate ghost artifacts for several navigator-free EPI acquisition schemes, obtaining better performance in comparison with the state-of-the-art methods across a range of different scenarios. Results are shown for both single-channel acquisition and highly accelerated multi-channel acquisition.
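
The structured low-rank idea behind such methods can be illustrated with a 1-D toy: sliding windows of a signal stacked into a Hankel-structured matrix have low rank when the signal has few harmonic components. This sketch shows only the matrix construction and the rank property, not the paper's multi-channel EPI formulation:

```python
import numpy as np

def hankel_from_signal(x, w):
    """Stack length-w sliding windows of x as rows of a Hankel-structured matrix."""
    return np.array([x[i:i + w] for i in range(len(x) - w + 1)])

# A single complex exponential yields a rank-1 Hankel matrix; each additional
# harmonic component raises the rank by one, which is the redundancy that
# low-rank matrix regularization exploits.
H = hankel_from_signal(np.exp(1j * 0.3 * np.arange(32)), 8)
```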

Fouad Hadj-Selem;Tommy Löfstedt;Elvis Dohmatob;Vincent Frouin;Mathieu Dubois;Vincent Guillemot;Edouard Duchesnay; "Continuation of Nesterov’s Smoothing for Regression With Structured Sparsity in High-Dimensional Neuroimaging," vol.37(11), pp.2403-2413, Nov. 2018. Predictive models can be used on high-dimensional brain images to decode cognitive states or diagnosis/prognosis of a clinical condition/evolution. Spatial regularization through structured sparsity offers new perspectives in this context and reduces the risk of overfitting the model while providing interpretable neuroimaging signatures by forcing the solution to adhere to domain-specific constraints. Total variation (TV) is a promising candidate for structured penalization: it enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known) and a wide range of possible complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are either limited in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov’s smoothing technique can be used to minimize a large number of non-smooth convex structured penalties. However, reasonable precision requires a small smoothing parameter, which slows down the convergence speed to unacceptable levels. To benefit from the versatility of Nesterov’s smoothing technique, we propose a first order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed toward any globally desired precision. Our main contributions are: to propose an expression of the duality gap to probe the current distance to the global optimum in order to adapt the smoothing parameter and the convergence speed. This expression is applicable to many penalties and can be used with other solvers than CONESTA. We also propose an expression for the particular smoothing parameter that minimizes the number of iterations required to reach a given precision. Furthermore, we provide a convergence proof and its rate, which is an improvement over classical proximal gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers in regard to convergence speed and precision.
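
For reference, the Nesterov smoothing the paper builds on applies to any penalty expressible as a support function (TV and overlapping group Lasso both are):

$$\Omega(x)=\max_{\alpha\in\mathcal{K}}\langle\alpha,\,Ax\rangle, \qquad \Omega_{\mu}(x)=\max_{\alpha\in\mathcal{K}}\Big\{\langle\alpha,\,Ax\rangle-\frac{\mu}{2}\|\alpha\|_{2}^{2}\Big\},$$

where $\mathcal{K}$ is a compact convex set. $\Omega_{\mu}$ is differentiable with $\nabla\Omega_{\mu}(x)=A^{\top}\alpha^{*}_{\mu}(x)$ (the maximizer), its gradient is Lipschitz with constant $\|A\|_{2}^{2}/\mu$, and

$$\Omega_{\mu}(x)\;\le\;\Omega(x)\;\le\;\Omega_{\mu}(x)+\frac{\mu}{2}\max_{\alpha\in\mathcal{K}}\|\alpha\|_{2}^{2},$$

so a small $\mu$ means high accuracy but a slow gradient step; this is exactly the trade-off CONESTA navigates with its decreasing sequence of smoothing parameters.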

Gabriel Ramos-Llordén;Gonzalo Vegas-Sánchez-Ferrero;Marcus Björk;Floris Vanhevel;Paul M. Parizel;Raúl San José Estépar;Arnold J. den Dekker;Jan Sijbers; "NOVIFAST: A Fast Algorithm for Accurate and Precise VFA MRI <inline-formula> <tex-math notation="LaTeX">${T}_{1}$ </tex-math></inline-formula> Mapping," vol.37(11), pp.2414-2427, Nov. 2018. In quantitative magnetic resonance <inline-formula> <tex-math notation="LaTeX">${T}_{\textsf{1}}$ </tex-math></inline-formula> mapping, the variable flip angle (VFA) steady state spoiled gradient recalled echo (SPGR) imaging technique is popular as it provides a series of high resolution <inline-formula> <tex-math notation="LaTeX">${T}_{\textsf{1}}$ </tex-math></inline-formula> weighted images in a clinically feasible time. Fast, linear methods that estimate <inline-formula> <tex-math notation="LaTeX">${T}_{\textsf{1}}$ </tex-math></inline-formula> maps from these weighted images have been proposed, such as DESPOT1 and iterative re-weighted linear least squares. More accurate, non-linear least squares (NLLS) estimators are also available, but these are generally much slower and require careful initialization. In this paper, we present NOVIFAST, a novel NLLS-based algorithm specifically tailored to VFA SPGR <inline-formula> <tex-math notation="LaTeX">${T}_{\textsf{1}}$ </tex-math></inline-formula> mapping. By exploiting the particular structure of the SPGR model, a computationally efficient, yet accurate and precise <inline-formula> <tex-math notation="LaTeX">${T}_{\textsf{1}}$ </tex-math></inline-formula> map estimator is derived. Simulation and in vivo human brain experiments demonstrate a twenty-fold speed gain of NOVIFAST compared with conventional gradient-based NLLS estimators while maintaining a high precision and accuracy. Moreover, NOVIFAST is eight times faster than the efficient implementations of the variable projection (VARPRO) method. Furthermore, NOVIFAST is shown to be robust against initialization.
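
DESPOT1, one of the fast linear baselines named above, rewrites the SPGR signal model S(α) = M0·sin α·(1 − E1)/(1 − E1·cos α), with E1 = exp(−TR/T1), as the line S/sin α = E1·(S/tan α) + M0(1 − E1), so a linear fit over flip angles recovers T1 from the slope. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def spgr_signal(alpha, T1, M0=1.0, TR=15.0):
    """Ideal SPGR steady-state signal; TR and T1 in ms, alpha in radians."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

def despot1_t1(signals, alphas, TR=15.0):
    """DESPOT1 linear T1 estimate: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1)."""
    y = signals / np.sin(alphas)
    x = signals / np.tan(alphas)
    slope, _ = np.polyfit(x, y, 1)   # slope is E1
    return -TR / np.log(slope)

alphas = np.deg2rad([3.0, 8.0, 13.0, 18.0])
s = spgr_signal(alphas, T1=1000.0)
t1_hat = despot1_t1(s, alphas)       # recovers T1 exactly for noiseless data
```

The linearization is exact for noiseless data; with noise it becomes biased, which is why NLLS estimators such as NOVIFAST are preferred when accuracy matters.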

Pietro Nardelli;Daniel Jimenez-Carretero;David Bermejo-Pelaez;George R. Washko;Farbod N. Rahaghi;Maria J. Ledesma-Carbayo;Raúl San José Estépar; "Pulmonary Artery–Vein Classification in CT Images Using Deep Learning," vol.37(11), pp.2428-2440, Nov. 2018. Recent studies show that pulmonary vascular diseases may specifically affect arteries or veins through different physiologic mechanisms. To detect changes in the two vascular trees, physicians manually analyze the chest computed tomography (CT) image of the patients in search of abnormalities. This process is time consuming, difficult to standardize, and thus not feasible for large clinical studies or useful in real-world clinical decision making. Therefore, automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians to accurately diagnose pathological conditions. In this paper, we present a novel, fully automatic approach to classify vessels from chest CT images into arteries and veins. The algorithm follows three main steps: first, a scale-space particles segmentation to isolate vessels; then a 3-D convolutional neural network (CNN) to obtain a first classification of vessels; finally, graph-cuts’ optimization to refine the results. To justify the usage of the proposed CNN architecture, we compared different 2-D and 3-D CNNs that may use local information from bronchus- and vessel-enhanced images provided to the network with different strategies. We also compared the proposed CNN approach with a random forests (RFs) classifier. The methodology was trained and evaluated on the superior and inferior lobes of the right lung of 18 clinical cases with noncontrast chest CT scans, in comparison with manual classification. The proposed algorithm achieves an overall accuracy of 94%, which is higher than the accuracy obtained using other CNN architectures and RF. 
Our method was also validated with contrast-enhanced CT scans of patients with chronic thromboembolic pulmonary hypertension to demonstrate that our model generalizes well to contrast-enhanced modalities. The proposed method outperforms state-of-the-art methods, paving the way for future use of 3-D CNNs for artery/vein classification in CT images.

Siqi Liu;Donghao Zhang;Yang Song;Hanchuan Peng;Weidong Cai; "Automated 3-D Neuron Tracing With Precise Branch Erasing and Confidence Controlled Back Tracking," vol.37(11), pp.2441-2452, Nov. 2018. The automatic reconstruction of single neurons from microscopic images is essential to enable large-scale data-driven investigations in neuron morphology research. However, few previous methods were able to generate satisfactory results automatically from 3-D microscopic images without human intervention. In this paper, we developed a new algorithm for automatic 3-D neuron reconstruction. The main idea of the proposed algorithm is to iteratively track backward from the potential neuronal termini to the soma centre. An online confidence score is computed to decide if a tracing iteration should be stopped and discarded from the final reconstruction. The performance improvements compared with previous methods are mainly introduced by a more accurate estimation of the traced area and the confidence controlled back-tracking algorithm. The proposed algorithm supports large-scale batch-processing by requiring only one user-specified parameter for background segmentation. We bench-tested the proposed algorithm on the images obtained from both the DIADEM challenge and the BigNeuron challenge. Our proposed algorithm achieved the state-of-the-art results.

Liang Chen;Paul Bentley;Kensaku Mori;Kazunari Misawa;Michitaka Fujiwara;Daniel Rueckert; "DRINet for Medical Image Segmentation," vol.37(11), pp.2453-2462, Nov. 2018. Convolutional neural networks (CNNs) have revolutionized medical image analysis over the past few years. The U-Net architecture is one of the most well-known CNN architectures for semantic segmentation and has achieved remarkable successes in many different medical image segmentation applications. The U-Net architecture consists of standard convolution layers, pooling layers, and upsampling layers. These convolution layers learn representative features of input images and construct segmentations based on the features. However, the features learned by standard convolution layers are not distinctive when the differences among different categories are subtle in terms of intensity, location, shape, and size. In this paper, we propose a novel CNN architecture, called Dense-Res-Inception Net (DRINet), which addresses this challenging problem. The proposed DRINet consists of three blocks, namely a convolutional block with dense connections, a deconvolutional block with residual inception modules, and an unpooling block. Our proposed architecture outperforms the U-Net in three different challenging applications, namely multi-class segmentation of cerebrospinal fluid on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class brain tumor segmentation on MR images.

Farzana Nasrin;Ram V. Iyer;Steven M. Mathews; "Simultaneous Estimation of Corneal Topography, Pachymetry, and Curvature," vol.37(11), pp.2463-2473, Nov. 2018. Identification of objective criteria to correctly diagnose ectatic diseases of the cornea or to detect early stages of corneal ectasia is of great interest in ophthalmology and optometry. Metrics for diagnosis typically employed are curvature maps (axial/sagittal, tangential); elevation map of the anterior surface of the cornea with respect to a reference sphere; and pachymetry (thickness) map of the cornea. We present evidence that currently used curvature maps do not represent the actual curvatures (principal or mean) in a human cornea. A novel contribution of this paper is the computation of the true mean curvature over every point of a central region of the cornea. We show that the true mean curvature can accurately identify the location of the ectasia. We present a quartic smoothing spline algorithm for the simultaneous computation of elevation maps for anterior and posterior corneal surfaces, pachymetry, and true mean curvature. The input to the algorithm is data from a single measurement from imaging devices such as an anterior segment optical coherence tomographer or a Scheimpflug imager. We show that a different combination of metrics is useful for the diagnosis of existing ectasia (true mean curvature and anterior elevation map) as opposed to subclinical ectasia (pachymetry and posterior elevation map). We compare our results with existing algorithms, and present applications to a normal cornea, a forme fruste keratoconic cornea, and an advanced keratoconic cornea.
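
For a corneal surface represented locally as a graph $z = f(x, y)$, the true mean curvature the authors compute is given by the standard differential-geometry formula

$$H=\frac{\left(1+f_{y}^{2}\right)f_{xx}-2\,f_{x}f_{y}f_{xy}+\left(1+f_{x}^{2}\right)f_{yy}}{2\left(1+f_{x}^{2}+f_{y}^{2}\right)^{3/2}},$$

evaluated pointwise from the fitted quartic spline. By contrast, axial/sagittal and tangential curvature maps are computed under rotational-symmetry assumptions along meridians and therefore need not equal the principal or mean curvatures of the actual surface, which is the discrepancy the abstract highlights.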

Jiayang Guo;Kun Yang;Hongyi Liu;Chunli Yin;Jing Xiang;Hailong Li;Rongrong Ji;Yue Gao; "A Stacked Sparse Autoencoder-Based Detector for Automatic Identification of Neuromagnetic High Frequency Oscillations in Epilepsy," vol.37(11), pp.2474-2482, Nov. 2018. High-frequency oscillations (HFOs) are spontaneous magnetoencephalography (MEG) patterns that have been acknowledged as a putative biomarker to identify epileptic foci. Correct detection of HFOs in the MEG signals is crucial for the accurate and timely clinical evaluation. Since the visual examination of HFOs is time-consuming, error-prone, and with poor inter-reviewer reliability, an automatic HFOs detector is highly desirable in clinical practice. However, the existing approaches for HFOs detection may not be applicable for MEG signals with noisy background activity. Therefore, we employ the stacked sparse autoencoder (SSAE) and propose an SSAE-based MEG HFOs (SMO) detector to facilitate the clinical detection of HFOs. To the best of our knowledge, this is the first attempt to conduct HFOs detection in MEG using deep learning methods. After configuration optimization, our proposed SMO detector outperformed other classic peer models, achieving 89.9% accuracy, 88.2% sensitivity, and 91.6% specificity. Furthermore, we have tested the performance consistency of our model using various validation schemes. The distribution of performance metrics demonstrates that our model can achieve steady performance.
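
The reported accuracy, sensitivity, and specificity are standard confusion-matrix metrics; a minimal sketch in plain Python:

```python
def sens_spec(pred, truth):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting sensitivity and specificity separately matters here because HFO events are rare relative to background windows, so accuracy alone can look good for a detector that misses most events.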

Md Foiez Ahmed;Selcuk Yasar;Sang Hyun Cho; "A Monte Carlo Model of a Benchtop X-Ray Fluorescence Computed Tomography System and Its Application to Validate a Deconvolution-Based X-Ray Fluorescence Signal Extraction Method," vol.37(11), pp.2483-2492, Nov. 2018. In this study, we developed and validated a Geant4-based Monte Carlo (MC) model of an experimental benchtop X-ray fluorescence (XRF) computed tomography (XFCT) system for quantitative imaging of metallic nanoparticles such as gold nanoparticles (GNPs) injected into small animals for preclinical testing of various NP-based diagnostic and therapeutic approaches. Detailed hardware components of the current benchtop XFCT system, including the X-ray source, excitation beam collimation and filtration, custom imaging phantoms with GNP solutions, and single/ring/linear array detectors with custom collimation, were incorporated into the MC model. In conjunction with a known CdTe detector response function, a deconvolution-based XRF signal extraction method was also developed in this study, which enabled complete separation of gold K-shell XRF peaks even when they almost overlapped and facilitated extraction of XRF signals from a broadband Compton scattered photon background. The extracted signal-to-background ratios were comparable with those expected using an ideal detector with high enough energy resolution (e.g., 0.1 keV full-width at half-maximum). Once convoluted with the CdTe detector response function, the MC-calculated spectra for excitation beams or emitted photons and XFCT image spatial resolutions agreed well with those measured experimentally. Thus, the current MC model can be used to optimize the beam/imaging parameters (e.g., beam geometry, excitation X-ray beam energy, and X-ray filter material) as well as the design of critical hardware components (e.g., detector collimators) within the current benchtop XFCT system. 
Also, the current XRF signal extraction method can relax the usual stringent requirement of detector energy resolution while not degrading the sensitivity of benchtop XFCT.

Huazhu Fu;Jun Cheng;Yanwu Xu;Changqing Zhang;Damon Wing Kee Wong;Jiang Liu;Xiaochun Cao; "Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image," vol.37(11), pp.2493-2501, Nov. 2018. Glaucoma is a chronic eye disease that leads to irreversible vision loss. Most of the existing automatic screening methods first segment the main structure and subsequently calculate the clinical measurement for the detection and screening of glaucoma. However, these measurement-based methods rely heavily on the segmentation accuracy and ignore various visual features. In this paper, we introduce a deep learning technique to gain additional image-relevant information and screen glaucoma from the fundus image directly. Specifically, a novel disc-aware ensemble network for automatic glaucoma screening is proposed, which integrates the deep hierarchical context of the global fundus image and the local optic disc region. Four deep streams on different levels and modules are, respectively, considered as global image stream, segmentation-guided network, local disc region stream, and disc polar transformation stream. Finally, the output probabilities of different streams are fused as the final screening result. The experiments on two glaucoma data sets (SCES and new SINDI data sets) show that our method outperforms other state-of-the-art algorithms.

Corin F. Otesteanu;Sergio J. Sanabria;Orcun Goksel; "Robust Reconstruction of Elasticity Using Ultrasound Imaging and Multi-Frequency Excitations," vol.37(11), pp.2502-2513, Nov. 2018. Biomedical parameters of tissue can be important indicators for clinical diagnosis. One such parameter that reflects tissue stiffness is elasticity, the imaging of which is called elastography. In this paper, we use displacements from harmonic excitations to solve the inverse problem of elasticity based on a finite-element method (FEM) formulation. This leads to iterative solution of nonlinear and nonconvex problems. In this paper, we show the importance and selection of viable initializations in numerical simulation studies and propose techniques for the fusion of multiple initializations for ideal reconstructions of unknown tissue as well as combining information from excitations at multiple frequencies. Results show that our method leads to up to a 76% decrease in root-mean-squared error (RMSE) and a 9.9 dB increase in contrast-to-noise ratio (CNR) in simulations with noise, when compared to conventional iterative FEM without multiple initializations and frequencies. As the wave patterns in individually selected frequencies may introduce artifacts, a joint inverse-problem solution of multi-frequency excitations is introduced as a robust solution, where CNR improvements of up to 11.9 dB are observed. We also present the methods on a tissue-mimicking gelatin phantom study using mechanical excitation and ultrafast plane-wave ultrasound imaging, where the RMSE was improved by up to 51%. An experiment of ablation via heating an ex-vivo bovine liver shows that reconstruction artifacts are reduced with our proposed method.

Olivier Bernard;Alain Lalande;Clement Zotti;Frederick Cervenansky;Xin Yang;Pheng-Ann Heng;Irem Cetin;Karim Lekadir;Oscar Camara;Miguel Angel Gonzalez Ballester;Gerard Sanroma;Sandy Napel;Steffen Petersen;Georgios Tziritas;Elias Grinias;Mahendra Khened;Varghese Alex Kollerathu;Ganapathy Krishnamurthi;Marc-Michel Rohé;Xavier Pennec;Maxime Sermesant;Fabian Isensee;Paul Jäger;Klaus H. Maier-Hein;Peter M. Full;Ivo Wolf;Sandy Engelhardt;Christian F. Baumgartner;Lisa M. Koch;Jelmer M. Wolterink;Ivana Išgum;Yeonggul Jang;Yoonmi Hong;Jay Patravali;Shubham Jain;Olivier Humbert;Pierre-Marc Jodoin; "Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved?," vol.37(11), pp.2514-2525, Nov. 2018. Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task.
Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of CMRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
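
Segmentation challenges of this kind are conventionally scored with overlap measures such as the Dice coefficient (this abstract only quotes the clinical-index correlation and the diagnosis accuracy, so Dice here is illustrative context rather than a reported number). A minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0                      # both masks empty: perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```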

G. Korcyl;P. Białas;C. Curceanu;E. Czerwiński;K. Dulski;B. Flak;A. Gajos;B. Głowacz;M. Gorgol;B. C. Hiesmayr;B. Jasińska;K. Kacprzak;M. Kajetanowicz;D. Kisielewska;P. Kowalski;T. Kozik;N. Krawczyk;W. Krzemień;E. Kubicz;M. Mohammed;Sz. Niedźwiecki;M. Pawlik-Niedźwiecka;M. Pałka;L. Raczyński;P. Rajda;Z. Rudy;P. Salabura;N. G. Sharma;S. Sharma;R. Y. Shopa;M. Skurzok;M. Silarski;P. Strzempek;A. Wieczorek;W. Wiślicki;R. Zaleski;B. Zgardzińska;M. Zieliński;P. Moskal; "Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA SoC Devices," vol.37(11), pp.2526-2535, Nov. 2018. A novel approach to tomographic data processing has been developed and evaluated using the Jagiellonian positron emission tomography scanner as an example. We propose a system in which there is no need for a powerful processing facility, local to the scanner, capable of reconstructing images on the fly. Instead, we introduce a field programmable gate array system-on-chip platform connected directly to data streams coming from the scanner, which can perform event building, filtering, coincidence search, and region-of-response reconstruction by the programmable logic and visualization by the integrated processors. The platform significantly reduces data volume converting raw data to a list-mode representation, while generating visualization on the fly.
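
The coincidence-search stage that the FPGA logic performs can be illustrated in software as a greedy time-window scan over sorted single-event timestamps. This is a simplification (the real system works on streaming multi-channel data with event building and filtering), but it shows the core idea of pairing singles into coincidences:

```python
def find_coincidences(timestamps, window):
    """Pair up sorted single-event timestamps that fall within a time window.

    Greedy scan: each accepted coincidence consumes both singles, a
    simplified software analogue of the FPGA coincidence search.
    """
    pairs, i = [], 0
    while i + 1 < len(timestamps):
        if timestamps[i + 1] - timestamps[i] <= window:
            pairs.append((timestamps[i], timestamps[i + 1]))
            i += 2          # both singles consumed by this coincidence
        else:
            i += 1          # single too isolated; move on
    return pairs
```

Keeping only coincidence pairs (list-mode data) instead of raw waveforms is what produces the large data-volume reduction the abstract mentions.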

Jun Cheng;Zhengguo Li;Zaiwang Gu;Huazhu Fu;Damon Wing Kee Wong;Jiang Liu; "Structure-Preserving Guided Retinal Image Filtering and Its Application for Optic Disk Analysis," vol.37(11), pp.2536-2546, Nov. 2018. Retinal fundus photographs have been used in the diagnosis of many ocular diseases such as glaucoma, pathological myopia, age-related macular degeneration, and diabetic retinopathy. With the development of computer science, computer aided diagnosis has been developed to process and analyze the retinal images automatically. One of the challenges in the analysis is that the quality of the retinal image is often degraded. For example, a cataract in human lens will attenuate the retinal image, just as a cloudy camera lens reduces the quality of a photograph. It often obscures the details in the retinal images and poses challenges in retinal image processing and analysis tasks. In this paper, we approximate the degradation of the retinal images as a combination of human-lens attenuation and scattering. A novel structure-preserving guided retinal image filtering (SGRIF) is then proposed to restore images based on the attenuation and scattering model. The proposed SGRIF consists of a step of global structure transferring and a step of global edge-preserving smoothing. Our results show that the proposed SGRIF method is able to improve the contrast of retinal images, measured by histogram flatness measure, histogram spread, and variability of local luminosity. In addition, we further explored the benefits of SGRIF for subsequent retinal image processing and analysis tasks. In the two applications of deep learning-based optic cup segmentation and sparse learning-based cup-to-disk ratio (CDR) computation, our results show that we are able to achieve more accurate optic cup segmentation and CDR measurements from images processed by SGRIF.
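
SGRIF itself is only described at a high level here; for orientation, this is a minimal sketch of the classic gray-scale guided filter that the guided-image-filtering family builds on, with a naive `box_mean` helper added for illustration (not the SGRIF implementation):

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window with edge padding (naive, for clarity)."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """Gray-scale guided filter: output q = a*I + b with locally linear a, b."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)          # edge-aware gain: small in flat regions
    b = mp - a * mI                     # local offset
    return box_mean(a, r) * I + box_mean(b, r)
```

The locally linear model q = a·I + b is what transfers structure from the guidance image while smoothing, the same structure-transfer idea the SGRIF pipeline extends.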

Sandro Queirós;Pedro Morais;Daniel Barbosa;Jaime C. Fonseca;João L. Vilaça;Jan D’Hooge; "MITT: Medical Image Tracking Toolbox," vol.37(11), pp.2547-2557, Nov. 2018. Over the years, medical image tracking has gained considerable attention from both medical and research communities due to its widespread utility in a multitude of clinical applications, from functional assessment during diagnosis and therapy planning to structure tracking or image fusion during image-guided interventions. Despite the ever-increasing number of image tracking methods available, most still consist of independent implementations with specific target applications, lacking the versatility to deal with distinct end-goals without the need for methodological tailoring and/or exhaustive tuning of numerous parameters. With this in mind, we have developed the medical image tracking toolbox (MITT)—a software package designed to ease customization of image tracking solutions in the medical field. While its workflow principles make it suitable to work with 2-D or 3-D image sequences, its modules offer versatility to set up computationally efficient tracking solutions, even for users with limited programming skills. MITT is implemented in both C/C++ and MATLAB, including several variants of an object-based image tracking algorithm and allowing to track multiple types of objects (i.e., contours, multi-contours, surfaces, and multi-surfaces) with several customization features. In this paper, the toolbox is presented, its features discussed, and illustrative examples of its usage in the cardiology field provided, demonstrating its versatility, simplicity, and time efficiency.

* "9th International EMBS Micro and Nanotechnology in Medicine Conference," vol.37(11), pp.2558-2558, Nov. 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "EMBS Micro and Nanotechnology in Medicine Conference," vol.37(11), pp.2559-2559, Nov. 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE Life Sciences Conference," vol.37(11), pp.2560-2560, Nov. 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE Transactions on Medical Imaging information for authors," vol.37(11), pp.C3-C3, Nov. 2018.* Provides instructions and guidelines to prospective authors who wish to submit manuscripts.

IET Image Processing - new TOC (2018 November 15) [Website]

Madhuri Yadav;Ravindra Kumar Purwar;Mamta Mittal; "Handwritten Hindi character recognition: a review," vol.12(11), pp.1919-1933, 11 2018. As the years passed by, computers became more powerful and automation became the need of generation. Humans tried to automate their work and replace themselves with machines. This effort of transition from manual to automatic gave rise to various research fields, and document character recognition is one such field. Over the last few years, researchers have made substantial contributions to the development of optical character recognition systems for various scripts and languages. As a result of intensive research and development, there has been a significant improvement in handwritten Devanagari text recognition. The main focus of this study is a detailed survey of existing techniques for recognition of offline handwritten Hindi characters. It addresses all the aspects of Hindi character recognition starting from databases to the various phases of character recognition. The most relevant techniques of preprocessing, feature extraction and classification are discussed in various sections of this study. Moreover, this study is a digest of work accepted and published by the research community in recent years. This study benefits its readers by discussing limitations of existing techniques and by providing beneficial directions of research in this field.

Branka Stojanović;Oge Marques;Aleksandar Nešković; "Deep learning-based approach to latent overlapped fingerprints mask segmentation," vol.12(11), pp.1934-1942, 11 2018. Overlapped fingerprints can be potentially present in several civil applications and criminal investigations. Segmentation of overlapped fingerprints is a required step in the process of fingerprint separation and subsequent verification. Overlapped fingerprint segmentation is performed manually (and the resulting manually drawn masks are a required additional input) in all of the overlapped latent fingerprint separation approaches in the literature, which makes them only semi-automatic. This study proposes a novel overlapped fingerprint mask segmentation approach, thereby filling that gap in the development of fully automated fingerprint separation solutions. The proposed method uses convolutional neural networks to classify image blocks into three classes: background, single region, and overlapped region. The proposed approach shows satisfactory performance on three different datasets and opens the door for full automation of fingerprint separation algorithms, which is a very promising research area.

Neda Noormohamadi;Peyman Adibi;Sayyed Mohammad Saeed Ehsani; "Semantic image segmentation using an improved hierarchical graphical model," vol.12(11), pp.1943-1950, 11 2018. Hierarchical graphical models can jointly incorporate several tasks in a unified framework, where information exchange among tasks improves the results. A hierarchical conditional random field (CRF) is proposed here to improve semantic image segmentation. Although the newly proposed model uses the information of several tasks, its run time is comparable to that of contemporary approaches. The method is evaluated on the MSRC dataset and shows similar or better segmentation accuracy in comparison with models where CRFs or hierarchical models are adopted.

Taihao Li;Cuifen Du;Tuya Naren;Zhiqiang Chen;Shupeng Liu;Jianshe Zhou;Xiaoyin Xu; "Using feature points and angles between them to recognise facial expression by a neural network approach," vol.12(11), pp.1951-1955, 11 2018. In this study, the authors propose a neural network (NN) method that uses feature points and the angles formed between the points to recognise facial expressions. Accurate facial expression recognition is an important part of affective computing with many practical applications. Yet, achieving acceptable levels of facial recognition accuracy has proven difficult. Feature points and the distances between the points are used to model basic expressions in NN-based approaches, but, in some cases, they cannot generate satisfactory performance. The authors expand on the characterisation of facial expression by considering the angles formed between feature points to augment the amount of information that is sent to the NNs. Furthermore, to circumvent a common challenge in facial expression recognition, which is the difficulty of differentiating among several expressions, they designed a post-processing step to assess the output of the NN against a threshold. The whole method makes a decision only when the output of the NN exceeds the threshold; otherwise, the frame under consideration is assigned to a 'no decision' class. They tested the method on the widely used CK+ facial expression database and found that it can achieve good accuracy.
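
The angle feature the authors add is the standard vertex angle between the two rays joining a feature point to two of its neighbours; a minimal sketch:

```python
import numpy as np

def angle_at(b, a, c):
    """Angle (radians) at feature point b formed by rays b->a and b->c."""
    b = np.asarray(b, dtype=float)
    u = np.asarray(a, dtype=float) - b
    v = np.asarray(c, dtype=float) - b
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_ang, -1.0, 1.0))  # clip guards rounding error
```

Unlike raw distances, such angles are invariant to uniform scaling of the face in the image, which is one reason they add complementary information to the distance features.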

Sudeshna Sil Kar;Santi P. Maity; "Gradation of diabetic retinopathy on reconstructed image using compressed sensing," vol.12(11), pp.1956-1963, 11 2018. This study explores neovascularisation and lesion detection in an integrated framework for gradation in diabetic retinopathy (DR). Imaging is assumed to be done from sub-sample measurements following compressed sensing. Blind estimation of the scale of the matched filter (MF) followed by fuzzy entropy maximisation is done for extraction and classification of the thick and the thin vessels. Mutual information (MI) between vessel density and tortuosity of the thin vessel class is maximised in two dimensions (2D) for neovascularisation detection. For lesion detection, MI between the maximum MF response and the maximum Laplacian of Gaussian filter response is jointly maximised in 2D. The outcomes are then combined in a common platform for gradation in DR. Simulation results demonstrate that 95% of the images in each of the DRIVE, STARE and DIARETDB1 databases and 94% of the images in the MESSIDOR database are correctly graded by the proposed method when 80% of the measurement space is considered.

Adel Kermi;Khaled Andjouh;Ferhat Zidane; "Fully automated brain tumour segmentation system in 3D-MRI using symmetry analysis of brain and level sets," vol.12(11), pp.1964-1971, 11 2018. This study presents a new fully automated, fast, and accurate brain tumour segmentation method which automatically detects and extracts whole tumours from 3D-MRI. The proposed method is based on a hybrid approach that combines a brain symmetry analysis method with region-based and boundary-based segmentation methods. The segmentation process consists of three main stages. In the first, image pre-processing is applied to remove noise and to extract the brain from the head image. In the second stage, automated tumour detection is performed, based essentially on the FBB method using brain symmetry. The obtained result constitutes the automatic initialisation of a deformable model, removing the need for the user to select an initial region of interest. Finally, the third stage applies region growing combined with a 3D deformable model based on geodesic level sets to detect the tumour boundaries containing the previously computed initial region, regardless of its shape and size. The proposed segmentation system has been tested and evaluated on 3D-MRIs of 285 subjects with different tumour types and shapes from the BraTS'2017 dataset. The obtained results are promising and objective, as well as close to the ground-truth data.

Rihab Lajili;Karim Kalti;Asma Touil;Basel Solaiman;Najoua Essoukri Ben Amara; "Two-step evidential fusion approach for accurate breast region segmentation in mammograms," vol.12(11), pp.1972-1982, 11 2018. In mammograms, the breast skin line often appears ambiguous and poorly defined. This is mainly due to the compression of the breast during image acquisition, along with the inherently low density of the tissue in that area. Accurate delimitation of the breast region thus becomes a challenging task for conventional segmentation techniques. In this study, the authors propose a new segmentation approach that overcomes this challenge. This approach is based on the application of two complementary segmentation techniques exploring the grey-scale intensity and local-homogeneity domains, respectively. The knowledge resulting from each segmentation technique is considered a knowledge source and is modelled using the belief-functions formalism. The two knowledge sources are then fused using an iterative process. The obtained results show the efficiency of the proposed evidential approach, especially in terms of ambiguity removal and decision-quality improvement for accurate breast border delimitation (which is often under-segmented and assimilated to the background by most existing segmentation techniques).

Hussein Al-Bandawi;Guang Deng; "Blind image quality assessment based on Benford's law," vol.12(11), pp.1983-1993, 11 2018. The goal of blind image quality assessment (IQA) is to predict the quality of an image as perceived by human observers without using a reference image. The authors explore a new approach which predicts image quality based on the conformity of the first-digit distribution (FDD) of natural images in the transform domain with Benford's law (BL). The conformity is measured by the symmetric Kullback-Leibler divergence. They first show that while the FDD of a natural image in the transform domain conforms with BL well, the FDD of a distorted natural image violates this conformity. They then train a non-linear regression model which maps features derived from the FDD to the subjective evaluation score of an image. The non-linear mapping is trained using Gaussian process regression with a rational quadratic kernel. The selection of this particular non-linear regression tool is based on extensive experiments and evaluations of many regression tools. They conduct experiments to test the proposed technique using five databases. Results demonstrate that its performance is competitive with state-of-the-art blind IQA algorithms. In particular, the overall performance of the proposed technique is among the best of all algorithms tested.
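The two building blocks of this approach, an empirical first-digit distribution and its symmetric KL divergence from Benford's law, are simple to state in code. A minimal sketch (function names are ours, not the authors'; this is not their implementation):

```python
import math

# Benford's law: P(d) = log10(1 + 1/d) for leading digit d in 1..9
BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit_distribution(coeffs):
    """Empirical first-digit distribution of nonzero coefficient magnitudes."""
    counts = [0] * 9
    for c in coeffs:
        m = abs(c)
        if m == 0:
            continue
        # shift the magnitude into [1, 10) to read off the leading digit
        while m < 1:
            m *= 10
        while m >= 10:
            m /= 10
        counts[int(m) - 1] += 1
    total = sum(counts)
    return [n / total for n in counts]

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two distributions."""
    def kl(a, b):
        return sum(x * math.log((x + eps) / (y + eps)) for x, y in zip(a, b))
    return kl(p, q) + kl(q, p)
```

A distorted image's transform coefficients would yield a larger `symmetric_kl(fdd, BENFORD)` than a natural one's, which is the quantity fed to the regression model.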

Hukum Singh; "Watermarking image encryption using deterministic phase mask and singular value decomposition in fractional Mellin transform domain," vol.12(11), pp.1994-2001, 11 2018. The aim of this study is watermarking image encryption based on the fractional Mellin transform (FrMT) and singular value decomposition (SVD) using deterministic phase masks (DPMs). DPMs are used in the input as well as in the frequency planes of double random phase encoding. The use of a DPM structured phase mask provides the advantage of extra encryption parameters, besides overcoming the problem of axis alignment associated with an optical setup. The encrypted image resulting from the application of the FrMT is attenuated by a factor and then combined with a host image to provide a watermarked image. Afterwards, SVD is performed to obtain three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The decryption process is the reverse of encryption. Digital implementation of the proposed scheme has been performed using MATLAB R2014a. The watermark image is retrieved by using the corresponding FrMT orders and the conjugates of the DPMs. Use of the FrMT provides enhanced security due to the non-linear nature of the transform. The effect of noise on the watermarked image has also been investigated. The mean square error between the output and input watermarks shows the accuracy of the proposed scheme.

Srinivasan Ramakrishnan;Sivasamy Nithya; "Two improved extension of local binary pattern descriptors using wavelet transform for texture classification," vol.12(11), pp.2002-2010, 11 2018. Texture image analysis plays a pivotal role in pattern recognition and image retrieval. In this study, two improved local binary pattern descriptors are proposed using the wavelet transform for texture analysis. The two proposed methods, namely wavelet domain local statistical binary pattern (WLSBP) and directional WLSBP (dWLSBPα), both consist of three stages. In the first stage, discrete wavelet decomposition is applied to decompose the image. In the second stage, the proposed statistical parameters are computed from the decomposed image, resulting in a binary value of 0 or 1; the binary values are then transformed into WLSBP/dWLSBPα labels. In the third stage, a histogram is built from the WLSBP/dWLSBPα labels. WLSBP and dWLSBPα differ in how they consider the neighbours: WLSBP considers the neighbours circularly, whereas dWLSBPα considers the neighbours in the same orientation through the central wavelet coefficient. The proposed approaches have also been applied to copy-move forgery detection. Experiments show that the proposed methods achieve improved retrieval rates compared with existing methods on both the Brodatz and Outex databases.
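The first stage of both descriptors is a discrete wavelet decomposition. A one-level 2D Haar transform illustrates the idea of splitting an image into one approximation and three detail subbands (a generic sketch only; the paper does not specify this wavelet or normalisation):

```python
def haar2d_level(img):
    """One level of an (unnormalised) 2D Haar DWT on a list-of-lists image.
    Returns the LL (approximation), LH, HL and HH (detail) subbands."""
    h, w = len(img), len(img[0])
    half_h, half_w = h // 2, w // 2
    LL = [[0.0] * half_w for _ in range(half_h)]
    LH = [[0.0] * half_w for _ in range(half_h)]
    HL = [[0.0] * half_w for _ in range(half_h)]
    HH = [[0.0] * half_w for _ in range(half_h)]
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4  # local average
            LH[i // 2][j // 2] = (a + b - c - d) / 4  # row difference
            HL[i // 2][j // 2] = (a - b + c - d) / 4  # column difference
            HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal difference
    return LL, LH, HL, HH
```

The WLSBP/dWLSBPα statistics would then be computed over coefficients of these subbands rather than over raw pixels.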

Safia Raslain;Fella Hachouf;Soumia Kharfouchi; "Using a generalised method of moment approach and 2D-generalised autoregressive conditional heteroscedasticity modelling for denoising ultrasound images," vol.12(11), pp.2011-2022, 11 2018. This study presents a novel approach for denoising ultrasound (US) images. It concerns a class of generalised method of moments estimators, with attractive asymptotic properties, for 2D generalised autoregressive conditional heteroscedasticity modelling of wavelet coefficients. These estimators can then be used to remove noise from US images. Indeed, a minimum mean-square error method is applied to estimate the clean wavelet image coefficients. To judge the quality of the denoising procedure, a link between the efficiency of the denoising procedure and a proposed asymmetry measure is established. Several tests have been carried out to demonstrate the performance of the proposed approach. The obtained results are compared with those of contemporary image denoising methods using common image quality assessment metrics and two proposed no-reference quality metrics.

Mingjie Liu;Cheng-Bin Jin;Bin Yang;Xuenan Cui;Hakil Kim; "Occlusion-robust object tracking based on the confidence of online selected hierarchical features," vol.12(11), pp.2023-2029, 11 2018. In recent years, convolutional neural networks (CNNs) have been widely used for visual object tracking, especially in combination with correlation filters (CFs). However, increasingly complex CNN models introduce more useless information, which may decrease tracking performance. This study proposes an online feature map selection method to remove noisy and irrelevant feature maps from different convolutional layers of a CNN, which can reduce computational redundancy and improve tracking accuracy. Furthermore, a novel appearance model update strategy, which exploits feedback from the peak value of the response maps, is developed to avoid model corruption. Finally, an extensive evaluation of the proposed method was conducted over the OTB-2013 and OTB-2015 datasets and compared with different kinds of trackers, including deep learning-based trackers and CF-based trackers. The results demonstrate that the proposed method achieves highly satisfactory performance.

Lin Cong;Shifei Ding;Lijuan Wang;Aijuan Zhang;Weikuan Jia; "Image segmentation algorithm based on superpixel clustering," vol.12(11), pp.2030-2035, 11 2018. The main task of image segmentation is to partition an image into disjoint sets of pixels called clusters. Spectral clustering has developed rapidly in recent years and is widely used in image segmentation. However, the traditional spectral clustering algorithm requires a huge amount of computation to process high-resolution colour images. One possible solution is to reduce the image resolution, but this loses image information and degrades segmentation performance. To overcome this problem, an image segmentation algorithm based on superpixel clustering is proposed. Firstly, the algorithm uses superpixel preprocessing to quickly divide the image into a certain number of superpixel regions carrying specific information. Then, a similarity matrix provides the input to the spectral clustering algorithm, which clusters the superpixel regions to obtain the final image segmentation. The experimental results show that the proposed algorithm effectively improves image segmentation performance compared with the traditional spectral clustering algorithm, with substantial improvements in computational complexity, processing time and overall segmentation quality.
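The computational saving comes from building the similarity matrix over K superpixels instead of N pixels, shrinking it from N×N to K×K. A minimal sketch of that step, using a Gaussian affinity over superpixel mean colours (our illustration; the paper's exact similarity measure and superpixel method are not specified here):

```python
import math

def superpixel_affinity(superpixel_means, sigma=10.0):
    """Gaussian affinity matrix between the mean colours of superpixel regions.
    With K superpixels this matrix is K x K, versus N x N for pixel-level
    spectral clustering -- the source of the speed-up described above."""
    k = len(superpixel_means)
    w = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            # squared Euclidean distance between mean colour vectors
            d2 = sum((a - b) ** 2 for a, b in zip(superpixel_means[i],
                                                  superpixel_means[j]))
            w[i][j] = math.exp(-d2 / (2 * sigma ** 2))
    return w
```

This matrix would then be handed to a standard spectral clustering routine (e.g. eigendecomposition of the graph Laplacian) to group the superpixels.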

Fen Xiao;Wenzheng Deng;Liangchan Peng;Chunhong Cao;Kai Hu;Xieping Gao; "Multi-scale deep neural network for salient object detection," vol.12(11), pp.2036-2041, 11 2018. Salient object detection is a fundamental problem that has received a great deal of attention in computer vision. Recently, deep learning models have become powerful tools for image feature extraction. In this study, the authors propose a multi-scale deep neural network (MSDNN) for salient object detection. The proposed model first extracts global high-level features and context information over the whole source image with a recurrent convolutional neural network. Then, several stacked deconvolutional layers are adopted to obtain the multi-scale feature representation and a series of saliency maps. Finally, the authors investigate a fusion convolution module to build a final pixel-level saliency map. The proposed model is extensively evaluated on six salient object detection benchmark datasets. Results show that the authors' deep model significantly outperforms 12 other state-of-the-art approaches.

Jing-Hua Zhang;Yan Zhang;Zhi-Guang Shi; "Enhancement of dim targets in a sea background based on long-wave infrared polarisation features," vol.12(11), pp.2042-2050, 11 2018. According to Fresnel's formula and the energy conservation law, a model combining the infrared reflected effect and emitted effect is developed to calculate the polarisation degree. With this model, the polarisation-degree difference between the sea surface and a ship target in the long-wave infrared is simulated. To solve the problem of dim target detection in a sea background, based on the polarisation difference between the sea surface and ship targets, a method based on the non-subsampled shearlet transform is proposed to fuse the intensity image and the polarisation image. An algorithm of distribution coefficients is applied to improve the contrast ratio between target and background in the low-frequency subbands. An adaptive-threshold denoising scheme is adopted to suppress noise, and the concepts of local direction contrast and region gradient are used as a selection scheme to preserve the features and edges of images in the high-frequency subbands. Image evaluation indices of target contrast with the background and local signal-to-noise ratio are used to evaluate the enhancement effect of the fused images. Results show that the evaluation indices of fused images with polarisation features are significantly improved, and comparisons with existing methods demonstrate the effectiveness and reliability of the proposed method.

Neha Gupta;Gargi V. Pillai;Samit Ari; "Change detection in Landsat images based on local neighbourhood information," vol.12(11), pp.2051-2058, 11 2018. In this study, a novel technique is proposed to detect changes in bitemporal multispectral images. Utilising local neighbourhood information in an image processing task can provide good noise immunity and reduce false alarms. Motivated by this, an approach based on Otsu's thresholding of local information is proposed in this work. It performs effectively in change detection on bitemporal Landsat images, which suffer from differing atmospheric and sunlight conditions. To obtain the local information around each pixel, both bitemporal images are partitioned into overlapping image blocks. For each pixel position, every block of the first image is concatenated with the corresponding block of the second image; the information in the concatenated block is considered inter-block information. Otsu's method is then applied to the concatenated block to calculate a threshold, from which binary values are generated. Finally, the binary values of both images for all bands are compared by an XOR operation to label each pixel as background (unchanged) or foreground (changed), and the binary change map is generated on the basis of the majority class in the XOR output. Experiments conducted on Landsat images show that the proposed method provides better performance than reported techniques.
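The per-block thresholding and XOR comparison described above can be sketched for a single band (a simplified illustration with our own function names, not the authors' code):

```python
def otsu_threshold(pixels):
    """Otsu's method: return the grey level maximising between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]          # background weight up to level t
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b       # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def block_change_map(block1, block2):
    """Threshold the concatenated block with Otsu, binarise each half,
    then XOR: 1 marks a changed pixel, 0 an unchanged one."""
    t = otsu_threshold(block1 + block2)
    b1 = [int(p > t) for p in block1]
    b2 = [int(p > t) for p in block2]
    return [a ^ b for a, b in zip(b1, b2)]
```

In the full method this runs per overlapping block and per band, and the final change map takes the majority class of the XOR outputs covering each pixel.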

Lin Peng;Jun Liu; "Detection and analysis of large-scale WT blade surface cracks based on UAV-taken images," vol.12(11), pp.2059-2064, 11 2018. To address the high cost and harsh working environment of detecting cracks on large-scale wind turbines (WTs), an analytic detection method based on blade images taken by unmanned aerial vehicles (UAVs) is proposed in this study. To account for the characteristics of UAV imaging and the location of the WT, pre-processing comprising motion-blur removal, image noise reduction and image enhancement is used to make the target area and crack details clearer and more complete. Then, a crack analysis method based on grey-scale values is proposed, taking into account the distribution, severity and development trend of the cracks, so that blind spots in the daily inspection of the WT are reduced, subsequent maintenance of the WT blade is made more accurate, and operation and maintenance costs are reduced considerably.

Joydeb Kumar Sana;Md. Monirul Islam; "PLT-based spectral features for texture image retrieval," vol.12(11), pp.2065-2074, 11 2018. An effective texture feature is an essential component of any content-based image retrieval system. In this study, new texture features based on an image enhancement technique are presented. The authors effectively exploit the power-law transform (PLT) to extract new spectral texture features called PLT-based spectral features. Extensive experiments on the Brodatz texture database and the Salzburg Textures image database prove the effectiveness of the proposed techniques and show that the proposed features significantly outperform the widely used Gabor and curvelet features. The proposed features are also compared with recently published Gaussian copula models of Gabor features and local tetra patterns (LTrPs). The experimental results confirm that the proposed features are more tolerant to scale, orientation and illumination distortion than the state-of-the-art Gabor, curvelet, Gaussian-copula Gabor and LTrP features.
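The enhancement step underlying these features, the power-law transform, is the classic gamma mapping s = c·r^γ. A minimal sketch on 8-bit intensities (an illustration of the transform itself, not of how the paper derives spectral features from it):

```python
def power_law_transform(pixels, gamma, c=1.0):
    """Power-law (gamma) transform s = c * r**gamma, applied to intensities
    normalised to [0, 1] and rescaled back to the 8-bit range.
    gamma < 1 brightens mid-tones; gamma > 1 darkens them."""
    return [round(255 * c * (p / 255.0) ** gamma) for p in pixels]
```

The paper's features are then computed in the spectral domain of the PLT-enhanced image.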

Vallikutti Sathananthavathi;Ganesan Indumathi; "BAT algorithm inspired retinal blood vessel segmentation," vol.12(11), pp.2075-2083, 11 2018. The automated extraction of retinal blood vessels is a key step in the medical analysis of retinal diseases. The proposed methodology for retinal vessel segmentation is based on the BAT algorithm and a random forest classifier. A 40-dimensional feature vector comprising local, phase and morphological features is extracted, and the feature subset that minimises the classifier error is identified by the BAT algorithm. The selected features are also identified as the dominant features in the classification. The performance of the proposed method is analysed on the publicly available databases Digital Retinal Images for Vessel Extraction and Structured Analysis of the Retina. The proposed method is highly sensitive, reflecting its ability to identify the blood vessels correctly. The BAT-algorithm-based method achieves high sensitivity and accuracy of about 82.85 and 95.34%, respectively.

Johannes H. Uhl;Stefan Leyk;Yao-Yi Chiang;Weiwei Duan;Craig A. Knoblock; "Spatialising uncertainty in image segmentation using weakly supervised convolutional neural networks: a case study from historical map processing," vol.12(11), pp.2084-2091, 11 2018. Convolutional neural networks (CNNs) such as encoder-decoder CNNs have increasingly been employed for semantic image segmentation at the pixel level, which requires pixel-level training labels that are rarely available in real-world scenarios. In practice, weakly annotated training data at the image-patch level are often used for pixel-level segmentation tasks, requiring further processing to obtain accurate results, mainly because the translation invariance of CNN-based inference can become an impeding property, leading to segmentation results of coarser spatial granularity than the original image. However, the inherent uncertainty in the segmented image and its relationships to translation invariance, CNN architecture, and classification scheme have never been analysed from an explicitly spatial perspective. Therefore, the authors propose measures to spatially visualise and assess class decision confidence based on spatially dense CNN predictions, resulting in continuous decision-confidence surfaces. They find that such a visual-analytical method contributes to a better understanding of the spatial variability of class-score confidence derived from weakly supervised CNN-based classifiers. They exemplify this approach by incorporating decision-confidence surfaces into a processing chain for the extraction of human settlement features from historical map documents based on weakly annotated training data using different CNN architectures and classification schemes.

Badal Soni;Pradip K. Das;Dalton Meitei Thounaojam; "Keypoints based enhanced multiple copy-move forgeries detection system using density-based spatial clustering of application with noise clustering algorithm," vol.12(11), pp.2092-2099, 11 2018. In this study, the problem of detecting whether an image has been tampered with is investigated; in particular, attention is paid to the case in which a portion of an image is copied and then pasted onto another region to create a duplication or to hide some important part of the image. The proposed copy-move forgery detection system is based on scale-invariant feature transform (SIFT) feature extraction and a density-based clustering algorithm. The extracted SIFT features are matched using the generalised two nearest neighbours (2NN) procedure. Thereafter, the density-based clustering algorithm is utilised to improve the detection results. The proposed system is tested using the MICC-F220, MICC-F2000 and MICC-F8multi datasets. Due to the generalised 2NN matching procedure, the proposed system is able to detect multiple forgeries present in the image. Experimental results show that the performance of the system is satisfactory in terms of computational time as well as detection accuracy.
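The matching step builds on the plain 2NN (Lowe) ratio test, which a few lines make concrete; the generalised 2NN used in the paper extends this by iterating the ratio over successive neighbours so one keypoint can match several copies. A sketch of the plain test only (our naming, not the paper's code):

```python
def two_nn_match(dists, ratio=0.5):
    """Plain 2NN ratio test: accept the nearest neighbour as a match only
    when it is markedly closer than the second nearest. Otherwise the
    match is considered ambiguous and rejected."""
    d = sorted(dists)  # distances from one keypoint's descriptor to all others
    return len(d) >= 2 and d[0] < ratio * d[1]
```

In a copy-move forgery, the descriptor of a duplicated keypoint has one very close neighbour (its copy) and only distant unrelated ones, so the test fires; clustering the accepted matches (DBSCAN in this system) then isolates the duplicated regions.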

Renoh Johnson Chalakkal;Waleed Habib Abdulla;Sinumol Sukumaran Thulaseedharan; "Automatic detection and segmentation of optic disc and fovea in retinal images," vol.12(11), pp.2100-2110, 11 2018. Feature extraction from retinal images is gaining popularity worldwide, as many pathologies are proven to have connections with these features. Automatic detection of these features makes it easier for specialist ophthalmologists to analyse them without spending exhaustive time segmenting them manually. The proposed method automatically detects the optic disc (OD) using histogram-based template matching combined with the maximum sum of vessel information in the retinal image. The OD region is segmented using the circular Hough transform. For detecting the fovea, the retinal image is uniformly divided into three horizontal strips and the strip including the detected OD is selected. The contrast of the horizontal strip containing the OD region is then enhanced using a series of image processing steps. The macula region is first detected in the OD strip using various morphological operations and connected component analysis; the fovea is located inside this detected macular region. The proposed method achieves an OD detection accuracy of over 95% upon testing on seven public databases and on a locally developed database, the University of Auckland Diabetic Retinopathy (UoA-DR) database. The average OD boundary segmentation overlap score, sensitivity and fovea detection accuracy achieved are 0.86, 0.968 and 97.26%, respectively.

Long Vuong Tung;Minh Le Dinh;Xiem HoangVan;Trieu Duong Dinh;Tien Huu Vu;Ha Thanh Le; "View synthesis method for 3D video coding based on temporal and inter view correlation," vol.12(11), pp.2111-2118, 11 2018. Recently, in three-dimensional (3D) television, the temporal correlation between consecutive frames of the intermediate view has been used together with the inter-view correlation to improve the quality of the synthesised view. However, most temporal methods are based on motion vector fields (MVFs) calculated by optical flow or block-based motion estimation, which have very high computational complexity. To alleviate this issue, the authors propose a temporal-disparity-based view synthesis (TDVS) method, which uses the MVFs extracted from the bitstreams of the side views and a motion-warping technique to create the temporal correlation between views at the intermediate position. A motion compensation technique is then used to create a temporal-based view. Finally, the temporal-based view is fused with a disparity-based view, generated by a traditional depth image-based rendering technique, to create the final synthesised view. The fusion of these views is performed based on side information that is determined and encoded at the sender side of the 3D video system using a dynamic programming algorithm and a rate-distortion optimisation scheme. Experimental results show that the proposed method achieves appreciable improvements in the synthesised view in comparison with the view synthesis reference software 1D fast (VSRS-1D Fast) for several test sequences.

Sumit Datta;Bhabesh Deka; "Efficient interpolated compressed sensing reconstruction scheme for 3D MRI," vol.12(11), pp.2119-2127, 11 2018. 3D magnetic resonance imaging (3D MRI) is one of the most preferred medical imaging modalities for the analysis of anatomical structures, where acquisition of multiple slices along the slice-select gradient direction is very common. In 2D multi-slice acquisition, adjacent slices are highly correlated because of very narrow inter-slice gaps. Application of compressed sensing (CS) to MRI significantly reduces traditional MRI scan time through random undersampling. The authors first propose a fast interpolation technique to estimate missing samples in the k-space of a highly undersampled slice (H-slice) from the k-space(s) of neighbouring lightly undersampled slice(s) (L-slices). Subsequently, an efficient multislice CS-MRI reconstruction technique based on weighted wavelet forest sparsity and joint total variation regularisation norms is applied simultaneously to both the interpolated H-slices and the non-interpolated L-slices. Simulation results show that the proposed CS reconstruction for 3D MRI is not only computationally faster but also achieves significant improvements in visual quality and quantitative performance metrics compared to existing methods.

Neetu Singh;Abhinav Gupta;Roop C. Jain; "Statistical characterisation of block variance and AC DCT coefficients for power law enhanced images," vol.12(11), pp.2128-2137, 11 2018. The study of the statistical distributions of alternating current (AC) discrete cosine transform (DCT) coefficients is one of the key techniques for digital images. In this study, we analyse original and power law enhanced images in the logarithmic domain. The logarithmic domain linearises the otherwise nonlinear relation between original and power law enhanced images. We have experimentally shown that the Gamma distribution is the best distribution for the characterisation of block variance in terms of Jensen-Shannon divergence. Therefore, a composite Gaussian-Gamma distribution is employed for the characterisation of AC DCT coefficients. We have analytically derived and experimentally verified that the scale parameters of the power law enhanced image are proportional to the scale parameters of the original image, whereas the shape parameters remain unchanged. On further experimentation, it is established that the scale and shape parameters of the composite statistical distribution of AC DCT coefficients do not change if images are compressed in JPEG format after power law enhancement. Furthermore, a novel feature set of scale parameters is constructed and applied to train a decision tree to classify original, brightened, and darkened images. The comparison of the achieved classification results with the state-of-the-art shows the efficacy of the proposed analysis.
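The model-selection criterion used here, Jensen-Shannon divergence between an empirical histogram and a candidate distribution, is compact enough to sketch (a generic illustration in bits; function names are ours):

```python
import math

def kl_bits(p, q, eps=1e-12):
    """Kullback-Leibler divergence in bits (base-2 log), with a small
    epsilon to keep zero bins finite."""
    return sum(a * math.log2((a + eps) / (b + eps)) for a, b in zip(p, q))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: symmetrised, bounded KL.
    In bits it lies in [0, 1]; 0 means identical distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * kl_bits(p, m) + 0.5 * kl_bits(q, m)
```

The best-fitting parametric model (Gamma for block variance, per the paper) is the one minimising this divergence from the observed histogram.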

IEEE Transactions on Signal Processing - new TOC (2018 November 15) [Website]

Seyyed Hamed Fouladi;Sung-En Chiu;Bhaskar D. Rao;Ilangko Balasingham; "Recovery of Independent Sparse Sources From Linear Mixtures Using Sparse Bayesian Learning," vol.66(24), pp.6332-6346, Dec. 15, 2018. Classical algorithms for the multiple measurement vector (MMV) problem assume either independent columns for the solution matrix or certain models of correlation among the columns. The correlation structure in the previous MMV formulation does not capture the signals well for some applications, such as photoplethysmography (PPG) signal extraction, where the signals are independent and linearly mixed in a certain manner. In practice, the mixtures of these signals are observed through different channels. In order to capture this structure, we decompose the solution matrix into the product of a sparse matrix composed of independent components and a linear mixing matrix. We derive a new condition that guarantees a unique solution for this linear mixing MMV problem. The condition can be much less restrictive than the conditions for the typical MMV problem in previous works. We also propose two novel sparse Bayesian learning (SBL) algorithms, independent component analysis sparse Bayesian learning (ICASBL) and fast independent component sparse Bayesian learning, which capture the linear mixture structure. Analysis of the global and local minima of the ICASBL cost function is also provided, and, similar to the typical SBL cost function, it is shown that the local minima are sparse and that the global minima have maximum sparsity. Experimental results show that the proposed algorithms outperform traditional approaches and can recover the signal with fewer measurements in the linear mixing MMV setting.

Muhammad Asad Lodhi;Waheed U. Bajwa; "Detection Theory for Union of Subspaces," vol.66(24), pp.6347-6362, Dec. 15, 2018. The focus of this paper is on detection theory for union of subspaces (UoS). To this end, generalized likelihood ratio tests (GLRTs) are presented for detection of signals conforming to the UoS model and detection of the corresponding “active” subspace. One of the main contributions of this paper is bounds on the performances of these GLRTs in terms of geometry of subspaces under various assumptions on the observation noise. The insights obtained through geometrical interpretation of the GLRTs are also validated through extensive numerical experiments on both synthetic and real-world data.

Hiroki Kuroda;Masao Yamagishi;Isao Yamada; "Exploiting Sparsity in Tight-Dimensional Spaces for Piecewise Continuous Signal Recovery," vol.66(24), pp.6363-6376, Dec. 15, 2018. Recovery of certain piecewise continuous signals from noisy observations has been a major challenge in sciences and engineering. In this paper, in a tight-dimensional representation space, we exploit sparsity hidden in a class of possibly discontinuous signals named finite-dimensional piecewise continuous (FPC) signals. More precisely, we propose a tight-dimensional linear transformation which reveals a certain sparsity in discrete samples of the FPC signals. This transformation is designed by exploiting the fact that most of the consecutive samples are contained in special subspaces. Numerical experiments on recovery of piecewise polynomial signals and piecewise sinusoidal signals show the effectiveness of the revealed sparsity.

Yixian Liu;Yingbin Liang;Shuguang Cui; "Data-Driven Nonparametric Existence and Association Problems," vol.66(24), pp.6377-6389, Dec. 15, 2018. We investigate two closely related nonparametric hypothesis testing problems. In the first problem (i.e., the existence problem), we test whether a testing data stream is generated by one of a set of composite distributions. In the second problem (i.e., the association problem), we test which one of the multiple distributions generates a testing data stream. We assume that some distributions in the set are unknown, and instead, only training sequences generated by the corresponding distributions are available. For both problems, we construct the generalized likelihood tests and characterize the error exponents of the maximum error probabilities. For the existence problem, we show that the error exponent is mainly captured by the Chernoff information between the set of composite distributions and alternative distributions. For the association problem, we show that the error exponent is captured by the minimum Chernoff information between each pair of distributions as well as the Kullback-Leibler divergences between the approximated distributions (via training sequences) and the true distributions. We also show that the ratio between the lengths of training and testing sequences plays an important role in determining the error decay rate.
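The Chernoff information that governs these error exponents has a direct numerical form: C(P, Q) = -min over λ in (0, 1) of log Σ_x p(x)^λ q(x)^(1-λ). A grid-search sketch for finite alphabets (our illustration; not tied to the paper's composite-distribution setting):

```python
import math

def chernoff_information(p, q, grid=1000):
    """Chernoff information between two finite distributions:
    C(P, Q) = -min_{0 < lam < 1} log sum_x p(x)^lam * q(x)^(1 - lam),
    approximated by a grid search over lam."""
    best = float('inf')
    for i in range(1, grid):
        lam = i / grid
        s = sum((a ** lam) * (b ** (1 - lam)) for a, b in zip(p, q))
        best = min(best, math.log(s))
    return -best
```

It is zero only when P = Q and grows as the distributions separate, matching its role as the best achievable exponent of the error probability.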

Cheng Qian;Xiao Fu;Nicholas D. Sidiropoulos;Ye Yang; "Tensor-Based Channel Estimation for Dual-Polarized Massive MIMO Systems," vol.66(24), pp.6390-6403, Dec. 15, 2018. The 3GPP suggests to combine dual polarized (DP) antenna arrays with the double directional (DD) channel model for downlink channel estimation. This combination strikes a good balance between high-capacity communications and parsimonious channel modeling, and also brings limited feedback schemes for downlink channel state information within reach—since such a channel can be fully characterized by several key parameters. However, most existing channel estimation work under the DD model has not yet considered DP arrays, perhaps because of the complex array manifold and the resulting difficulty in algorithm design. In this paper, we first reveal that the DD channel with DP arrays at the transmitter and receiver can be naturally modeled as a low-rank tensor, and thus the key parameters of the channel can be effectively estimated via tensor decomposition algorithms. On the theory side, we show that the DD–DP parameters are identifiable under mild conditions, by leveraging identifiability of low-rank tensors. Furthermore, a compressed tensor decomposition algorithm is developed for alleviating the downlink training overhead. We show that, by using a judiciously designed pilot structure, the channel parameters are still guaranteed to be identified via the compressed tensor decomposition formulation even when the size of the pilot sequence is much smaller than what is needed for conventional channel identification methods, such as linear least squares and matched filtering. Extensive simulations are employed to showcase the effectiveness of the proposed method.
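The key step such methods build on is fitting a low-rank CP model to the channel tensor. A minimal, uncompressed CP decomposition by alternating least squares can be sketched in numpy as follows; `khatri_rao` and `cp_als` are illustrative helpers, not the paper's compressed algorithm:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product; row index is j*K + k."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def cp_als(T, rank, num_iters=20, seed=0):
    """Plain CP decomposition T[i,j,k] ~ sum_r A[i,r]B[j,r]C[k,r]
    by alternating least squares."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, J * K)                     # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-2 unfolding
    for _ in range(num_iters):
        A = np.linalg.lstsq(khatri_rao(B, C), T0.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T1.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T2.T, rcond=None)[0].T
    return A, B, C
```

On a noiseless low-rank tensor this recovers the factors up to the usual scaling/permutation ambiguities, which is the identifiability property the paper leverages.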

Jun Shi;Xiaoping Liu;Xiaojie Fang;Xuejun Sha;Wei Xiang;Qinyu Zhang; "Linear Canonical Matched Filter: Theory, Design, and Applications," vol.66(24), pp.6404-6417, Dec. 15, 2018. The linear canonical transform (LCT) is a multiparameter unitary transform that generalizes a large number of classical transforms with application to signal processing and optics. Many of its fundamental properties are already known; however, little attention has been paid to the design and implementation of matched filters in the LCT domain. The objective of this paper is to design this type of filter, which maximizes the output signal-to-noise ratio, dubbed the linear canonical matched filter (LCMF). We first derive some facts of the LCT spectral analysis for random signals. Then, by applying the derived results, the LCMF design theory associated with the LCT is developed. Moreover, the implementation and basic properties of the LCMF are presented. The introduction of the LCMF invites a new interpretation of the ambiguity function and the correlation function of the LCT. Finally, we provide several applications of the theoretical derivations.

Dionysios S. Kalogerias;Athina P. Petropulu; "Spatially Controlled Relay Beamforming," vol.66(24), pp.6418-6433, Dec. 15, 2018. We consider the problem of enhancing Quality-of-Service (QoS) in mobile relay beamforming networks, by optimally controlling relay motion, in the presence of a dynamic channel. We assume a time-slotted system, where the relays update their positions before the beginning of each slot. Modeling the wireless channel as a Gaussian spatiotemporal field, we propose a novel 2-stage stochastic programming approach for optimally specifying relay positions and beamforming weights, such that the expected QoS of the network is maximized, based on causal channel state information and under a total relay power budget. This results in a scheme where, at each time slot, apart from optimally beamforming to the destination, the relays also optimally decide their positions at the next slot, based on causal experience. The stochastic program considered is shown to be equivalent to a set of simple subproblems, which may be solved in a naturally distributed fashion, one at each relay. However, exact evaluation of the objective of each subproblem is impossible. To mitigate this issue, we propose three efficient, theoretically grounded surrogates to the original subproblems, which rely on the Sample Average Approximation method, the Gauss-Hermite Quadrature, and the Method of Statistical Differentials, respectively. The efficacy and several properties of the proposed approach are demonstrated via simulations. In particular, we report a substantial improvement of about <inline-formula> <tex-math notation="LaTeX">$80\%$</tex-math></inline-formula> on the average network QoS at steady state, compared to randomized relay motion. This shows that strategic relay motion control can result in substantial performance gains, as far as QoS maximization is concerned.
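One of the three surrogates named above, Gauss-Hermite quadrature, replaces an intractable Gaussian expectation by a weighted sum at a handful of nodes. A generic one-dimensional sketch (the paper's actual objective is more involved); the helper name is illustrative:

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, order=20):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature:
    E[f(X)] ~ (1/sqrt(pi)) * sum_i w_i f(mu + sqrt(2)*sigma*x_i)."""
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    return float(weights @ f(mu + np.sqrt(2.0) * sigma * nodes) / np.sqrt(np.pi))
```

With 20 nodes the rule is exact for polynomials up to degree 39 and extremely accurate for smooth integrands, which is why a few nodes suffice inside an optimization loop.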

Sofia Suvorova;Andrew Melatos;Rob J. Evans;William Moran;Patrick Clearwater;Ling Sun; "Phase-Continuous Frequency Line Track-Before-Detect of a Tone With Slow Frequency Variation," vol.66(24), pp.6434-6442, Dec. 15, 2018. We consider optimal Bayesian detection of a slowly varying tone of unknown amplitude in situations characterized by very low signal-to-noise ratio (SNR) and a large number of measurements, as found in certain gravitational wave and passive sonar problems. We use a hidden Markov model (HMM) framework but, unlike typical HMM-based frequency line tracking methods, we develop a true track-before-detect algorithm, which does not threshold the blocked Fourier data and only considers frequency trails that have phase continuity across all HMM steps. We model the frequency and phase evolution as a phase-wrapped Ornstein–Uhlenbeck process. The resulting optimal detector is computationally efficient. The detectability improvement arising from phase continuity is characterized via comparative simulation for a mock, simplified gravitational wave search problem.
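The HMM machinery underneath such trackers is a Viterbi pass over a time-frequency log-likelihood array. The sketch below shows the generic, phase-agnostic version; the letter's contribution, phase continuity across HMM steps, is deliberately not modeled here:

```python
import numpy as np

def viterbi_track(log_lik, log_trans):
    """Most likely frequency-bin path through a (time x bins) log-likelihood
    array, given a (bins x bins) log transition matrix."""
    T, K = log_lik.shape
    score = log_lik[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # cand[prev_bin, cur_bin]
        back[t] = np.argmax(cand, axis=0)          # best predecessor per bin
        score = cand[back[t], np.arange(K)] + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):                  # backtrack
        path[t - 1] = back[t, path[t]]
    return path
```

Restricting `log_trans` to adjacent bins encodes the "slow frequency variation" prior; no per-frame thresholding is applied, which is the track-before-detect idea.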

Ryo Hayakawa;Kazunori Hayashi; "Discreteness-Aware Approximate Message Passing for Discrete-Valued Vector Reconstruction," vol.66(24), pp.6443-6457, Dec. 15, 2018. This paper considers the reconstruction of a discrete-valued random vector from possibly underdetermined linear measurements using sum-of-absolute-value (SOAV) optimization. The proposed algorithm, referred to as discreteness-aware approximate message passing (DAMP), is based on the idea of approximate message passing (AMP), which has been originally proposed for compressed sensing. The DAMP algorithm has low computational complexity and its performance in the large system limit can be predicted analytically via the state evolution framework, where we provide a condition for the exact reconstruction with DAMP in the noise-free case. From the analysis, we also propose a method to determine the parameters of the SOAV optimization. Moreover, based on the state evolution, we provide Bayes optimal DAMP, which has the minimum mean-square error at each iteration of the algorithm. Simulation results show that the DAMP algorithms can reconstruct the discrete-valued vector from underdetermined linear measurements and the empirical performance agrees with our theoretical results in large-scale systems. When the problem size is not large enough, the SOAV optimization with the proposed parameters can achieve better performance than the DAMP algorithms for high signal-to-noise ratio.
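The AMP idea that DAMP builds on can be illustrated with the plain soft-thresholding variant for sparse (rather than discrete-valued) vectors; the threshold schedule and the `alpha` parameter below are common heuristics, not the paper's SOAV-derived denoiser:

```python
import numpy as np

def amp_sparse_recovery(A, y, num_iters=50, alpha=2.0):
    """Soft-thresholding AMP sketch for y = A x with sparse x (noise-free).
    The Onsager correction term distinguishes AMP from plain iterative
    thresholding and is what makes state evolution hold."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(num_iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)   # threshold from residual level
        x_new = soft(x + A.T @ z, theta)
        onsager = z * (np.count_nonzero(x_new) / m)      # Onsager correction
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

Swapping the soft threshold for a denoiser matched to a discrete prior is, loosely, the step from AMP to a discreteness-aware variant.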

Bracha Laufer-Goldshtein;Ronen Talmon;Sharon Gannot; "Source Counting and Separation Based on Simplex Analysis," vol.66(24), pp.6458-6473, Dec. 15, 2018. Blind source separation is addressed, using a novel data-driven approach, based on a well-established probabilistic model. The proposed method is specifically designed for separation of multichannel audio mixtures. The algorithm relies on spectral decomposition of the correlation matrix between different time frames. The probabilistic model implies that the column space of the correlation matrix is spanned by the probabilities of the various speakers across time. The number of speakers is recovered by the eigenvalue decay, and the eigenvectors form a simplex of the speakers’ probabilities. Time frames dominated by each of the speakers are identified exploiting convex geometry tools on the recovered simplex. The mixing acoustic channels are estimated utilizing the identified sets of frames, and a linear unmixing is performed to extract the individual speakers. The derived simplexes are visually demonstrated for mixtures of two, three, and four speakers. We also conduct a comprehensive experimental study, showing high separation capabilities in various reverberation conditions.
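Counting sources by the eigenvalue decay of a correlation matrix can be sketched as picking the largest drop in the log-eigenvalue profile; the log-gap rule below is one simple heuristic for illustration, not necessarily the paper's criterion:

```python
import numpy as np

def count_sources(R, max_sources=None):
    """Estimate the number of sources from the largest drop in the
    log-eigenvalue decay of a symmetric correlation matrix R."""
    w = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    w = np.clip(w, 1e-300, None)               # guard the logarithm
    if max_sources is None:
        max_sources = len(w) - 1
    drops = np.diff(np.log(w[:max_sources + 1]))
    return int(np.argmin(drops)) + 1           # index of the steepest drop
```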

Enrica d’Afflisio;Paolo Braca;Leonardo M. Millefiori;Peter Willett; "Detecting Anomalous Deviations From Standard Maritime Routes Using the Ornstein–Uhlenbeck Process," vol.66(24), pp.6474-6487, Dec. 15, 2018. A novel anomaly detection procedure based on the Ornstein–Uhlenbeck (OU) mean-reverting stochastic process is presented. The considered anomaly is a vessel that deviates from a planned route, changing its nominal velocity <inline-formula><tex-math notation="LaTeX">$\boldsymbol{v}_0$</tex-math></inline-formula>. In order to hide this behavior, the vessel switches off its automatic identification system (AIS) device for a time <inline-formula> <tex-math notation="LaTeX">$T$</tex-math></inline-formula> and then tries to revert to the previous nominal velocity <inline-formula><tex-math notation="LaTeX">$\boldsymbol{v}_0$</tex-math></inline-formula>. The decision that has to be made is declaring that a deviation either happened or not, relying only upon two consecutive AIS contacts. Furthermore, the extension to the scenario in which multiple contacts (e.g., radar) are available during the time period <inline-formula><tex-math notation="LaTeX">$T$</tex-math></inline-formula> is also considered. A proper statistical hypothesis testing procedure that builds on the changes in the OU process long-term velocity parameter <inline-formula><tex-math notation="LaTeX">$\boldsymbol{v}_0$</tex-math></inline-formula> of the vessel is the core of the proposed approach and enables the solution of the anomaly detection problem. Closed analytical forms are provided for the detection and false alarm probabilities of the hypothesis test.
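The OU process that models vessel velocity admits exact discrete-time sampling, which is handy for simulating nominal and anomalous tracks. A one-dimensional sketch (the paper works with two-dimensional velocities; names and parameters here are illustrative):

```python
import numpy as np

def simulate_ou(v0, theta, sigma, v_init, dt, num_steps, rng):
    """Exact sampling of a 1-D Ornstein-Uhlenbeck velocity reverting to v0:
    v_{t+dt} = v0 + (v_t - v0) e^{-theta dt} + noise, with the exact
    transition variance sigma^2 (1 - e^{-2 theta dt}) / (2 theta)."""
    v = np.empty(num_steps + 1)
    v[0] = v_init
    decay = np.exp(-theta * dt)
    noise_std = sigma * np.sqrt((1.0 - decay ** 2) / (2.0 * theta))
    for k in range(num_steps):
        v[k + 1] = v0 + (v[k] - v0) * decay + noise_std * rng.standard_normal()
    return v
```

Because the discretization is exact, the simulated track has the correct stationary mean `v0` and variance `sigma**2 / (2 * theta)` regardless of the step size.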

Amr Elnakeeb;Urbashi Mitra; "Line Constrained Estimation With Applications to Target Tracking: Exploiting Sparsity and Low-Rank," vol.66(24), pp.6488-6502, Dec. 15, 2018. Trajectory estimation of moving targets is examined; in particular, quasi-linear trajectories are considered. Background subtraction methods, which exploit low-rank backgrounds and sparse features of interest, are extended to incorporate linear constraints. The line constraint is enforced via a rotation that yields an additional low-rank condition. The proposed method is applied to single object tracking in video, wherein the trajectory can be parameterized as a line. The optimization is solved via the augmented Lagrange multiplier method. An average performance improvement of 4 dB is observed over previous background subtraction methods for estimating the position and velocity of the target. Furthermore, about a 6.2 dB gain is seen over previous target tracking methods that do not exploit the linear nature of the trajectory. The Cramér–Rao bound (CRB) for background subtraction with a linear constraint is derived and numerical results show that the proposed method achieves near optimal performance via comparison to the CRB. An aggregated error is shown to converge to zero and a boundedness analysis is conducted which suggests that the iterative algorithm is convergent as confirmed by simulations. Finally, the proposed technique is applied to real video data and is shown to be effective in estimating quasi-linear trajectories.
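Without the line constraint, the low-rank-plus-sparse background subtraction that the paper extends reduces to robust PCA, commonly solved with an inexact augmented-Lagrange-multiplier scheme. A minimal sketch, with default parameters taken from the standard RPCA literature rather than from this paper:

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, max_iter=200, tol=1e-7):
    """Decompose M into low-rank L plus sparse S by inexact ALM:
    min ||L||_* + lam ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    Y = np.zeros_like(M)                      # dual variable
    S = np.zeros_like(M)
    mu = 1.25 / np.linalg.norm(M, 2)
    rho = 1.5
    for _ in range(max_iter):
        U, sv, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sv - 1.0 / mu, 0.0)) @ Vt   # singular value thresholding
        S = soft(M - L + Y / mu, lam / mu)
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z) < tol * norm_M:
            break
    return L, S
```

The paper's rotation-based line constraint would add a second nuclear-norm term to this objective; the ALM mechanics stay the same.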

Charilaos I. Kanatsoulis;Xiao Fu;Nicholas D. Sidiropoulos;Wing-Kin Ma; "Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach," vol.66(24), pp.6503-6517, Dec. 15, 2018. Hyperspectral super-resolution refers to the problem of fusing a hyperspectral image (HSI) and a multispectral image (MSI) to produce a super-resolution image (SRI) that admits fine spatial and spectral resolutions. State-of-the-art methods approach the problem via low-rank matrix approximations to the matricized HSI and MSI. These methods are effective to some extent, but a number of challenges remain. First, HSIs and MSIs are naturally third-order tensors (data “cubes”) and thus matricization is prone to a loss of structural information, which could degrade performance. Second, it is unclear whether these low-rank matrix-based fusion strategies can guarantee the identifiability of the SRI under realistic assumptions. However, identifiability plays a pivotal role in estimation problems and usually has a significant impact on practical performance. Third, a majority of the existing methods assume known (or easily estimated) degradation operators from the SRI to the corresponding HSI and MSI, which is hardly the case in practice. In this paper, we propose to tackle the super-resolution problem from a tensor perspective. Specifically, we utilize the multidimensional structure of the HSI and MSI to propose a coupled tensor factorization framework that can effectively overcome the aforementioned issues. The proposed approach guarantees the identifiability of the SRI under mild and realistic conditions. Furthermore, it works with little knowledge about the degradation operators, which is clearly a favorable feature in practice. Semi-real scenarios are simulated to showcase the effectiveness of the proposed approach.

Augusto Aubry;Antonio De Maio;Alessio Zappone;Meisam Razaviyayn;Zhi-Quan Luo; "A New Sequential Optimization Procedure and Its Applications to Resource Allocation for Wireless Systems," vol.66(24), pp.6518-6533, Dec. 15, 2018. A novel optimization framework for resource allocation in wireless networks and radar systems is proposed, which merges the methods of maximum block improvement (MBI) and of sequential optimization. A detailed convergence proof is provided, showing that the proposed algorithm is able to monotonically increase the objective value while ensuring that every limit point of the generated variable sequence fulfills the problem first-order optimality conditions under very mild hypotheses. These results extend available convergence results on MBI and sequential optimization, significantly widening the range of applications that can be handled by the proposed framework compared to available approaches. This point is illustrated in detail by presenting relevant applications from both the cellular and radar context, which fall under the umbrella of the developed optimization method.

Jun Shi;Xiaoping Liu;Yanan Zhao;Shuo Shi;Xuejun Sha;Qinyu Zhang; "Filter Design for Constrained Signal Reconstruction in Linear Canonical Transform Domain," vol.66(24), pp.6534-6548, Dec. 15, 2018. The linear canonical transform (LCT), which includes many classical transforms, has increasingly emerged as a powerful tool for optics and signal processing. Signal reconstruction associated with the LCT has blossomed in recent years. However, many existing reconstruction algorithms for the LCT can only handle noise-free measurements, and when noise is present, they become ill-posed. In this paper, we address the problem of reconstructing an analog signal from noise-corrupted measurements in the LCT domain. A general methodology is proposed to solve this problem in which the analog signal is recovered from ideal samples of its filtered version in a unified way. The proposed methodology allows for arbitrary measurement and reconstruction schemes in the LCT domain. We formulate signal reconstruction in an LCT-based function space, which is the span of integer translates and chirp-modulation of a generating function, with coefficients derived from digitally filtering noise-corrupted measurements in the LCT domain. Several alternative methods for designing digital filters in the LCT domain are also suggested using different criteria. The validity of the theoretical derivations is demonstrated via numerical simulation.

Jun Liu;Jinwang Han;Zi-Jing Zhang;Jian Li; "Bayesian Detection for MIMO Radar in Gaussian Clutter," vol.66(24), pp.6549-6559, Dec. 15, 2018. For colocated multiple-input multiple-output (MIMO) radar, we investigate the target detection problem in Gaussian clutter whose unknown covariance matrix has a known inverse complex Wishart distribution as its prior probability density function. We propose three detectors in the Bayesian framework according to the criteria of the generalized likelihood ratio test, Rao test, and Wald test. The two main advantages of the proposed Bayesian detectors are as follows: first, no training data are required; and second, a priori knowledge about the clutter is incorporated in the decision rules to achieve detection performance gains. Numerical simulations show that the proposed Bayesian detectors outperform their non-Bayesian counterparts, especially when the sample number of the transmitted waveform is small. In addition, the proposed Bayesian Wald test is the most robust against mismatch in the receive steering vector, and the proposed Bayesian Rao test exhibits the strongest rejection capability of mismatched signals.

* "List of Reviewers," vol.66(24), pp.6560-6569, Dec.15, 2018.*

IEEE Signal Processing Letters - new TOC (2018 November 15) [Website]

* "Table of Contents," vol.25(12), pp.1761-1762, Dec. 2018.* Presents the table of contents for this issue of the publication.

* "Table of Contents [Edics[Name:_blank]]," vol.25(12), pp.1763-1764, Dec. 2018.* Presents the table of contents for this issue of the publication.

Ahmed I. Zayed; "Sampling of Signals Bandlimited to a Disc in the Linear Canonical Transform Domain," vol.25(12), pp.1765-1769, Dec. 2018. We derive a sampling formula for signals that are bandlimited to a disc of radius R in the linear canonical transform (LCT) domain. By bandlimitedness in a disc D in the LCT domain, we mean that the LCT F (ω) of a signal f(t) vanishes outside the disc D. We first express the signal in polar coordinates and then obtain the sampling formula. The samples of the angle θ are taken at 2N + 1 uniformly distributed points on the unit circle while the samples of the radial distance r are taken at the zeros of the Bessel function. As a special case, we obtain a sampling formula for signals that are bandlimited to a disc in the fractional Fourier transform domain.

Le Gao;Xifeng Li;Dongjie Bi;Yongle Xie; "A <inline-formula><tex-math notation="LaTeX">$q$</tex-math></inline-formula>-Gaussian Maximum Correntropy Adaptive Filtering Algorithm for Robust Sparse Recovery in Impulsive Noise," vol.25(12), pp.1770-1774, Dec. 2018. This letter proposes a robust formulation for sparse signal reconstruction from compressed measurements corrupted by impulsive noise, which exploits the q-Gaussian generalized correntropy (1 < q < 3) as the loss function for the residual error and utilizes an ℓ0-norm penalty term to induce sparsity. To solve this formulation efficiently, we develop a gradient-based adaptive filtering algorithm which incorporates a zero-attracting regularization term into the framework of adaptive filtering. By blending the advantages of adaptive filtering and q-Gaussian generalized correntropy, the proposed algorithm obtains accurate reconstruction and satisfactory robustness with a proper shape parameter q. Numerical experiments on both synthetic sparse signals and natural images are conducted to illustrate the superior recovery performance of the proposed algorithm over state-of-the-art robust sparse signal reconstruction algorithms.
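The correntropy-induced weighting that makes such filters robust can be seen in a plain LMS sketch: each update is scaled by a kernel of the error, so impulsive samples contribute almost nothing. The sketch below uses a Gaussian kernel and an ℓ1-style zero attractor; the letter's q-Gaussian kernel and ℓ0 penalty differ in detail, and every parameter value here is illustrative:

```python
import numpy as np

def mcc_lms_sparse(x, d, num_taps, mu=0.05, kernel_sigma=1.0, zeta=1e-4):
    """Correntropy-weighted LMS with a zero-attracting term: the update
    is scaled by exp(-e^2 / (2 sigma^2)), so large (impulsive) errors
    are effectively ignored; -zeta*sign(w) nudges small taps to zero."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-M+1]]
        e = d[n] - w @ u
        gain = np.exp(-e * e / (2.0 * kernel_sigma ** 2))
        w += mu * gain * e * u - zeta * np.sign(w)
    return w
```

With a plain squared-error LMS, a 5% rate of large impulses would badly perturb the weights; the kernel gain suppresses exactly those updates.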

Jianbo Ma;Vidhyasaharan Sethu;Eliathamby Ambikairajah;Kong Aik Lee; "Generalized Variability Model for Speaker Verification," vol.25(12), pp.1775-1779, Dec. 2018. In this letter, we propose a generalized variability model as an extension to the total variability model. While the total variability model employs a standard normal prior distribution in its typical setup, the proposed generalized variability model relaxes this assumption and allows the latent variable distribution to be a mixture of Gaussians. The conventional total variability model can then be viewed as a special case of this generalized version where the number of mixture components is constrained to one. This proposed model is validated in the context of speaker verification tasks on both the standard and extended NIST SRE 2010 datasets. Experimental results show that modeling the distribution of the latent variables as a mixture of Gaussians leads to a better performance under all conditions and a greater gain can be expected for speaker verification using short utterances.

Vishnu Palakkal;C. S. Ramalingam; "Improving the Estimation of Sinusoidal Frequencies and Direction-of-Arrival Using Line Spectral Frequencies," vol.25(12), pp.1780-1784, Dec. 2018. In this letter, we address the classic problem of estimating the frequencies of multiple sinusoids in the presence of noise. We use line spectral frequencies (LSFs) and introduce the idea of “closeness of LSFs” and propose two methods based on it. The minimum phase annihilating filter A(z) is estimated using MUSIC. The corresponding LSFs are then processed to estimate the frequencies, as opposed to estimating them from the roots that are closest to the unit circle. The closeness of LSFs criterion leads to a lower threshold when compared to MUSIC, with improvements ranging from 0 dB to 2 dB for the well-known two-sinusoids example; with pseudo-noise resampling the improvement is up to 4 dB. When applied to direction of arrival estimation, for the example considered by Shaghaghi and Vorobyov, the threshold improves by 6 dB and matches that given by the stochastic ML method.
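Line spectral frequencies come from the symmetric/antisymmetric decomposition of A(z), whose roots lie on the unit circle when A(z) is minimum phase. A minimal numpy sketch of the LSF computation (the letter's "closeness of LSFs" criterion itself is not implemented here):

```python
import numpy as np

def line_spectral_frequencies(a):
    """LSFs of a minimum-phase A(z) = sum_k a[k] z^-k of degree m:
    the angles in (0, pi) of the roots of the palindromic/antipalindromic pair
    P(z) = A(z) + z^-(m+1) A(1/z)  and  Q(z) = A(z) - z^-(m+1) A(1/z)."""
    a = np.asarray(a, dtype=float)
    p = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    angles = np.angle(np.concatenate([np.roots(p), np.roots(q)]))
    # drop the trivial roots at z = 1 and z = -1 and the conjugate duplicates
    angles = angles[(angles > 1e-9) & (angles < np.pi - 1e-9)]
    return np.sort(angles)
```

For a degree-m minimum-phase A(z) this yields m interlaced frequencies in (0, π), which is the representation the letter's criterion operates on.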

Le Xiao;Hongbin Li;Yimin Liu;Xiqin Wang; "Distributed Target Detection Based on the Volume Cross-Correlation Function," vol.25(12), pp.1785-1789, Dec. 2018. This letter addresses the detection of a subspace distributed target signal obscured by disturbance. The disturbance consists of a clutter component with an unknown subspace structure and a white noise component with unknown noise power. A detection strategy is proposed based on the volume cross-correlation function, which provides a metric that measures the linear (in)dependence between two subspaces. Simulation results indicate that the proposed detector can achieve better performance than several peer methods, without resorting to secondary data and a priori knowledge about the clutter subspace including its rank.
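A volume-type cross-correlation between two subspaces can be computed from the singular values of the product of their orthonormal bases (the cosines of the principal angles); a sketch, noting that the letter's exact definition and normalization may differ:

```python
import numpy as np

def volume_cross_correlation(X, Y):
    """Product of the cosines of the principal angles between the column
    spaces of X and Y: 1 for identical subspaces, 0 whenever some direction
    of one subspace is orthogonal to all of the other."""
    Qx, _ = np.linalg.qr(X)   # orthonormal basis of span(X)
    Qy, _ = np.linalg.qr(Y)   # orthonormal basis of span(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.prod(s))
```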

Behzad Kamgar-Parsi;Behrooz Kamgar-Parsi;Kian Kamgar-Parsi; "On Computing Gradient of Products in Discretized Spaces and Its Effects in PDE Image Processing," vol.25(12), pp.1790-1794, Dec. 2018. Many digital signal and image processing methods involve computing the gradient of products of functions. However, the product rule for derivatives in continuous spaces, <inline-formula><tex-math notation="LaTeX">$\partial (fg)=g\,\partial f + f\,\partial g$</tex-math></inline-formula>, does not generally hold in discretized spaces. Hence, computing the gradient of products becomes ambiguous as the results depend on whether to treat the product <inline-formula><tex-math notation="LaTeX">$fg$</tex-math></inline-formula> as a single function or treat <inline-formula><tex-math notation="LaTeX">$f$</tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">$g$</tex-math></inline-formula> as two separate functions and use the product rule. The two alternatives lead to different results, particularly for iterative solutions of differential equations since these small differences compound. Although this ambiguity is well-known, thus far a resolution has not been proposed in the literature. In this letter, we propose a mathematically rigorous procedure that selects the better alternative in the sense of yielding approximations that are closer to the continuous space results. As an example, we discuss the Perona-Malik anisotropic diffusion equation used in image processing.
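The ambiguity is easy to reproduce numerically: under central differences, differentiating the product fg directly and applying the product rule give different answers, with the gap shrinking only as the grid is refined. A small demonstration (functions and grid sizes are arbitrary choices):

```python
import numpy as np

def discrete_product_rule_gap(num_points):
    """Compare D(fg) with g*Df + f*Dg under finite differences.
    Both are consistent discretizations, yet they differ on any finite grid."""
    x = np.linspace(0.0, 1.0, num_points)
    dx = x[1] - x[0]
    f = np.sin(2.0 * np.pi * x)
    g = np.exp(x)
    lhs = np.gradient(f * g, dx)                           # product as one function
    rhs = g * np.gradient(f, dx) + f * np.gradient(g, dx)  # discrete product rule
    return float(np.max(np.abs(lhs - rhs)))
```

In an iterative PDE solver this per-step discrepancy compounds, which is exactly why the letter argues the choice between the two discretizations matters.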

Sophie M. Fosson; "A Biconvex Analysis for Lasso <inline-formula><tex-math notation="LaTeX">$\ell _1$</tex-math> </inline-formula> Reweighting," vol.25(12), pp.1795-1799, Dec. 2018. Iterative ℓ1 reweighting algorithms are very popular in sparse signal recovery and compressed sensing, since in practice they have been observed to outperform classical ℓ1 methods. Nevertheless, the theoretical analysis of their convergence is a critical point, and is generally limited to the convergence of the functional to a local minimum or to subsequence convergence. In this letter, we propose a new convergence analysis of a Lasso ℓ1 reweighting method, based on the observation that the algorithm is an alternated convex search for a biconvex problem. Based on that, we are able to prove the numerical convergence of the sequence of the iterates generated by the algorithm. Furthermore, we propose an alternative iterative soft thresholding procedure, which is faster than the main algorithm.
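The alternated convex search analyzed in the letter alternates a weighted Lasso solve with a weight update of the form w_i = 1/(|x_i| + ε). A sketch using plain ISTA for the inner solve; all parameter values and the function name are illustrative, not taken from the letter:

```python
import numpy as np

def reweighted_lasso(A, y, lam=0.01, eps=0.1, outer=4, inner=300):
    """Alternated convex search: ISTA on a weighted Lasso
    min 0.5||y - Ax||^2 + lam * sum_i w_i |x_i|, then reweight
    w_i = 1 / (|x_i| + eps). Jointly biconvex in (x, w)."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    w = np.ones(n)
    for _ in range(outer):
        for _ in range(inner):             # inner weighted-Lasso solve (ISTA)
            v = x - A.T @ (A @ x - y) / L
            thresh = lam * w / L
            x = np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)
        w = 1.0 / (np.abs(x) + eps)        # reweighting step
    return x
```

Large weights on (near-)zero coordinates sharpen the sparsity-promoting effect relative to a single Lasso pass, which is the empirical advantage the letter starts from.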

Yunzhi Zhuge;Gang Yang;Pingping Zhang;Huchuan Lu; "Boundary-Guided Feature Aggregation Network for Salient Object Detection," vol.25(12), pp.1800-1804, Dec. 2018. Fully convolutional networks (FCNs) have significantly improved the performance of many pixel-labeling tasks, such as semantic segmentation and depth estimation. However, it still remains nontrivial to thoroughly utilize the multilevel convolutional feature maps and boundary information for salient object detection. In this letter, we propose a novel FCN framework to integrate multilevel convolutional features recurrently with the guidance of object boundary information. First, a deep convolutional network is used to extract multilevel feature maps and separately aggregate them into multiple resolutions, which can be used to generate coarse saliency maps. Meanwhile, another boundary information extraction branch is proposed to generate boundary features. Finally, an attention-based feature fusion module is designed to fuse boundary information into salient regions to achieve accurate boundary inference and semantic enhancement. The final saliency maps are the combination of the predicted boundary maps and integrated saliency maps, which are closer to the ground truth. Experiments and analysis on four large-scale benchmarks verify that our framework achieves new state-of-the-art results.

Seong-Eun Kim;Demba Ba;Emery N. Brown; "A Multitaper Frequency-Domain Bootstrap Method," vol.25(12), pp.1805-1809, Dec. 2018. Spectral properties of the electroencephalogram (EEG) are commonly analyzed to characterize the brain's oscillatory properties in basic science and clinical neuroscience studies. The spectrum is a function that describes power as a function of frequency. To date, inference procedures for spectra have focused on constructing confidence intervals at single frequencies using large sample-based analytic procedures or jackknife techniques. These procedures perform well when the frequencies of interest are chosen before the analysis. When these frequencies are chosen after some of the data have been analyzed, the validity of these conditional inferences is not addressed. If power at more than one frequency is investigated, corrections for multiple comparisons must also be incorporated. To develop a statistical inference approach that considers the spectrum as a function defined across frequencies, we combine multitaper spectral methods with a frequency-domain bootstrap (FDB) procedure. The multitaper method is optimal for minimizing the bias-variance tradeoff in spectral estimation. The FDB makes it possible to conduct Monte Carlo based inferences for any part of the spectrum by drawing random samples that respect the dependence structure in the EEG time series. We show that our multitaper FDB procedure performs well in simulation studies and in analyses comparing EEG recordings of children from two different age groups receiving general anesthesia.
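The frequency-domain bootstrap rests on the fact that periodogram ordinates are asymptotically the true spectrum scaled by independent Exp(1) variables. A minimal single-taper sketch (a multitaper version would draw scaled chi-squared variables with 2K degrees of freedom instead; the function name is illustrative):

```python
import numpy as np

def frequency_domain_bootstrap(spectrum_hat, num_boot, rng):
    """Draw bootstrap periodogram replicates: at each frequency the
    periodogram is approximately S(f) times an Exp(1) variable, so
    replicates are S_hat(f) * Exp(1), independent across frequencies."""
    draws = rng.exponential(1.0, size=(num_boot, len(spectrum_hat)))
    return spectrum_hat[None, :] * draws
```

Percentiles of the replicate ensemble then give Monte Carlo confidence bands over the whole spectrum at once, rather than at one preselected frequency.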

Andre Saito Guerreiro;Gustavo Fraidenraich; "Phase Corrected ICA for Uplink Massive MIMO," vol.25(12), pp.1810-1814, Dec. 2018. Channel state information (CSI) is fundamental to the performance of massive multiple-input multiple-output systems. However, systems that rely on pilots for the CSI estimation are susceptible to pilot contamination, as neighboring cells may reuse the same pilot sequences. Independent component analysis (ICA) has been widely used in recent studies to perform blind decoding, in order to circumvent the effects of pilot contamination. In this letter, we introduce a modified version of fastICA, the phase-corrected ICA, which reduces phase rotation on the estimated constellations and thus results in better performance of blind receivers. Through simulations, we show that our method significantly reduces symbol error rates compared to standard ICA based methods. Furthermore, we show that the increase in complexity is small at high signal-to-noise ratio.
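The phase-rotation ambiguity that the letter corrects inside ICA can be illustrated standalone with the classic M-th power method for PSK constellations; this is a generic sketch, not the letter's algorithm, and the helper name is invented for illustration:

```python
import numpy as np

def blind_phase_estimate(symbols, order=2):
    """Estimate a common phase rotation of an order-M PSK constellation:
    raising to the M-th power strips the data symbols, leaving M times
    the rotation (valid for |phase| < pi/M)."""
    return np.angle(np.mean(symbols ** order)) / order
```

Multiplying the received symbols by `exp(-1j * estimate)` derotates the constellation, which is the effect the letter's phase correction aims for within the fastICA iterations.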

Dongdong Li;Gongjian Wen;Yangliu Kuai;Fatih Porikli; "End-to-End Feature Integration for Correlation Filter Tracking With Channel Attention," vol.25(12), pp.1815-1819, Dec. 2018. Recently, the performance advancement of discriminative correlation filter (DCF) based trackers has been predominantly driven by the use of deep convolutional features. As convolutional features from multiple layers capture different target information, existing works integrate hierarchical convolutional features to enhance target representation. However, these works separate feature integration from DCF learning and hardly benefit from end-to-end training. In this letter, we incorporate feature integration and DCF learning in a unified convolutional neural network. This network reformulates feature integration as a differentiable module that concatenates features from the shallow and deep layers. A channel attention mechanism is introduced to adaptively impose channel-wise weights on the integrated features. Experimental results on OTB100 and UAV123 demonstrate that our method achieves significant performance improvement while running in real-time.

Robin Rajamäki;Visa Koivunen; "Sparse Active Rectangular Array With Few Closely Spaced Elements," vol.25(12), pp.1820-1824, Dec. 2018. Sparse sensor arrays offer a cost-effective alternative to uniform arrays. By utilizing the co-array, a sparse array can match the performance of a filled array, despite having significantly fewer sensors. However, even sparse arrays can have many closely spaced elements, which may deteriorate the array performance in the presence of mutual coupling. This letter proposes a novel sparse planar array configuration with few unit inter-element spacings. This concentric rectangular array (CRA) is designed for active sensing tasks, such as microwave or ultrasound imaging, in which the same elements are used for both transmission and reception. The properties of the CRA are compared to two well-known sparse geometries: the boundary array and the minimum-redundancy array (MRA). Numerical searches reveal that the CRA is the MRA with the fewest unit element displacements for certain array dimensions.
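Co-array reasoning is easy to check numerically: the difference co-array of the classic minimum-redundancy array [0, 1, 4, 6] is hole-free over its aperture, while for active sensing (same elements transmit and receive) the sum co-array also matters. A small sketch with illustrative helper names:

```python
import numpy as np

def difference_coarray(positions):
    """Unique pairwise position differences achievable by the array
    (the virtual array relevant for passive direction finding)."""
    p = np.asarray(positions)
    return np.unique((p[:, None] - p[None, :]).ravel())

def sum_coarray(positions):
    """Unique transmit-receive position sums
    (the virtual array relevant for active sensing)."""
    p = np.asarray(positions)
    return np.unique((p[:, None] + p[None, :]).ravel())
```

The same physical array can have a complete difference co-array yet a sum co-array with holes, which is one reason active-sensing designs such as the CRA are optimized separately from passive MRAs.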

Akshay Malhotra;Ioannis D. Schizas; "MILP-Based Unsupervised Clustering," vol.25(12), pp.1825-1829, Dec. 2018. In this letter, we discuss the problem of unsupervised clustering of sensor signals based on their information content. In the past, the problem has been formulated as a matrix factorization problem and has been solved with different variants of gradient descent. We reformulate the nonconvex cost function as a mixed integer linear programming problem with explicit clustering constraints and solve it with branch and bound, while introducing a scalable variant to reduce the computational time. The proposed method is applied to clustering problems in hyperspectral imaging and multiview image clustering, and extensive results are presented demonstrating the superiority of the novel framework over existing alternatives.

Guobing Qian;Shiyuan Wang;Lidan Wang;Shukai Duan; "Convergence Analysis of a Fixed Point Algorithm Under Maximum Complex Correntropy Criterion," vol.25(12), pp.1830-1834, Dec. 2018. With the emergence of complex correntropy, the maximum complex correntropy criterion (MCCC) has been applied to the complex-domain adaptive filtering. The MCCC uses the fixed point method to find the optimal solution, which provides good robustness in the non-Gaussian noise environment, especially for the impulse noise. However, the convergence analysis for the fixed point method is limited to the real-domain filtering. In this letter, we provide the convergence analysis of fixed point based MCCC algorithm in complex-domain filtering. First, by using the matrix inversion lemma, we rewrite the MCCC algorithm to a gradient-like version. In addition, we provide two computationally efficient versions of MCCC. Then, we provide the stability analysis and obtain the excess mean square error for MCCC. Finally, simulation results confirm the correctness of the convergence analysis in this letter.

Qiuyun Zou;Haochuan Zhang;Chao-Kai Wen;Shi Jin;Rong Yu; "Concise Derivation for Generalized Approximate Message Passing Using Expectation Propagation," vol.25(12), pp.1835-1839, Dec. 2018. Generalized approximate message passing (GAMP) is an efficient algorithm for the estimation of independent identically distributed random signals under generalized linear model. The sum-product GAMP has long been recognized as an approximate implementation of the sum-product loopy belief propagation. In this letter, we propose to view the message passing in a new perspective of expectation propagation (EP). Comparing with the previous methods that were based on Taylor expansions, the proposed EP method could unify the derivations for the real and the complex GAMP, with a difference only in the setup of Gaussian densities.

Jason Zalev;Michael C. Kolios; "Image Reconstruction Combined With Interference Removal Using a Mixed-Domain Proximal Operator," vol.25(12), pp.1840-1844, Dec. 2018. In certain imaging systems, frames of acquired raw data are preprocessed with a filtering stage before being processed with an image reconstruction stage. During these sequential stages, distortion may arise if valid signal cannot adequately be distinguished from interference. To avoid distortion, a mixed-domain proximal operator is mathematically formulated, which permits interference to be separated from data concurrently during a combined processing stage. The technique is demonstrated with an application involving optoacoustic imaging. Reconstruction is performed with an accelerated proximal gradient method. Total-variation minimization is used to promote smoothness in the image domain. In the data domain, interference is modeled as a low-rank matrix, which corresponds to a few time dependent interference components being coupled to each channel by determined amounts. Results are presented that demonstrate the ability to separate interference on a digital phantom used to simulate optoacoustic signals.
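The accelerated proximal gradient machinery the abstract relies on can be sketched generically. Below is a plain FISTA-style iteration on a lasso toy problem, with the l1 proximal operator (soft-thresholding) standing in for the paper's mixed-domain proximal operator; the matrix and parameters are illustrative assumptions, not the optoacoustic model:

```python
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [(abs(x) - t) * (1.0 if x > 0 else -1.0) if abs(x) > t else 0.0
            for x in v]

def fista(A, y, lam, step, iters=200):
    """Accelerated proximal gradient for 0.5*||Ax - y||^2 + lam*||x||_1."""
    n = len(A[0])
    x = [0.0] * n
    z = [0.0] * n
    t = 1.0
    for _ in range(iters):
        # Gradient of the smooth term at the extrapolated point z: A^T(Az - y)
        Az = [sum(a * b for a, b in zip(row, z)) for row in A]
        resid = [az - yi for az, yi in zip(Az, y)]
        grad = [sum(A[i][j] * resid[i] for i in range(len(A))) for j in range(n)]
        x_new = soft_threshold([zj - step * g for zj, g in zip(z, grad)],
                               step * lam)
        # Nesterov momentum update
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        z = [xn + ((t - 1) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Toy check: with A = I the lasso minimizer is exactly soft_threshold(y, lam)
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [2.0, 0.2, -1.0]
x_hat = fista(A, y, lam=0.5, step=1.0)
```

In the paper's setting, the prox instead acts jointly in the image domain (total-variation smoothness) and the data domain (low-rank interference), which is what "mixed-domain" refers to.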

Sinan Gezici;Pramod K. Varshney; "On the Optimality of Likelihood Ratio Test for Prospect Theory-Based Binary Hypothesis Testing," vol.25(12), pp.1845-1849, Dec. 2018. In this letter, the optimality of the likelihood ratio test (LRT) is investigated for binary hypothesis testing problems in the presence of a behavioral decision-maker. By utilizing prospect theory, a behavioral decision-maker is modeled to cognitively distort probabilities and costs based on some weight and value functions, respectively. It is proved that the LRT may or may not be an optimal decision rule for prospect theory-based binary hypothesis testing, and conditions are derived to specify different scenarios. In addition, it is shown that when the LRT is an optimal decision rule, it corresponds to a randomized decision rule in some cases; i.e., nonrandomized LRTs may not be optimal. This is unlike Bayesian binary hypothesis testing, in which the optimal decision rule can always be expressed in the form of a nonrandomized LRT. Finally, it is proved that the optimal decision rule for prospect theory-based binary hypothesis testing can always be represented by a decision rule that randomizes at most two LRTs. Two examples are presented to corroborate the theoretical results.
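The probability distortion at the heart of the letter can be illustrated with the Tversky-Kahneman weight function w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma), a standard prospect-theory choice; the letter's specific weight and value functions may differ:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting: a behavioral decision-maker
    acts on w(p) rather than p, overweighting small probabilities and
    underweighting large ones."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)
```

Because the decision-maker effectively optimizes against w(P) instead of P, the usual Bayes-risk argument that yields a nonrandomized LRT no longer goes through, which is why randomization between LRTs can become necessary.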

Canqun Xiang;Lu Zhang;Yi Tang;Wenbin Zou;Chen Xu; "MS-CapsNet: A Novel Multi-Scale Capsule Network," vol.25(12), pp.1850-1854, Dec. 2018. Capsule network is a novel architecture to encode the properties and spatial relationships of the feature in an image, which shows encouraging results on image classification. However, the original capsule network is not suitable for some classification tasks, where the target objects are complex internal representations. Hence, we propose a multi-scale capsule network that is more robust and efficient for feature representation in image classification. The proposed multi-scale capsule network consists of two stages. In the first stage, structural and semantic information are obtained by multi-scale feature extraction. In the second stage, the hierarchy of features is encoded to multi-dimensional primary capsules. Moreover, we propose an improved dropout to enhance the robustness of the capsule network. Experimental results show that our method has a competitive performance on FashionMNIST and CIFAR10 datasets.

Shun Liu;Xu Sun;Weichao Xu;Yun Zhang;Jisheng Dai; "Null Distribution of Volume Under Ordered Three-Class ROC Surface (VUS) With Continuous Measurements," vol.25(12), pp.1855-1859, Dec. 2018. Receiver operating characteristic (ROC) analysis has become an indispensable tool in medical care, with a major application to characterizing the performance of binary diagnostic tests in clinical practice. In many circumstances, however, the diagnostic test has three outcomes, that is, the abnormalities are two sided. To deal with this scenario, this letter develops a recursive algorithm for computing the exact null distribution of the volume under the ordered three-class ROC surface (VUS) for samples following continuous distributions. Based on the asymptotic normality, an approximately normal distribution with exact mean and variance is also proposed, which is hoped to be useful for large-sample scenarios. Moreover, an efficient rank-based formula, in linearithmic time, is established for nonparametric estimation of VUS. Monte Carlo simulations verify the usefulness of the theoretical and algorithmic findings in this letter.
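The statistic whose null distribution is studied can be written down directly: for ordered classes with measurements X, Y, Z, VUS = P(X < Y < Z), and the definitional nonparametric estimate counts ordered triples. A brute-force O(n1*n2*n3) sketch; the letter's contribution is precisely a faster linearithmic rank-based formula and the exact null law of this statistic:

```python
def vus_naive(x, y, z):
    """Definitional estimate of VUS = P(X < Y < Z): the fraction of
    cross-class triples that are correctly ordered."""
    hits = sum(1 for a in x for b in y for c in z if a < b < c)
    return hits / (len(x) * len(y) * len(z))
```

Under the null hypothesis (all three samples drawn from one continuous distribution) every ordering of a triple is equally likely, so the estimate has mean 1/6; a perfectly separating test attains 1.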

Eepuri Kiran Kumar;P. V. V. Kishore;Maddala Teja Kiran Kumar;Dande Anil Kumar;A. S. C. S. Sastry; "Three-Dimensional Sign Language Recognition With Angular Velocity Maps and Connived Feature ResNet," vol.25(12), pp.1860-1864, Dec. 2018. The objective of this letter is to design a unique spatio-temporal feature map characterization for three-dimensional (3-D) sign (or action) data. Current maps characterize geometric features, such as joint distances and angles or both, which cannot accurately model the relative joint variations in a 3-D sign (or action) location data. Therefore, we propose a new color-coded feature map called joint angular velocity maps to accurately model the 3-D joint motions. Instead of using traditional convolutional neural networks (CNNs), we propose to develop a new ResNet architecture called connived feature ResNet, which has a CNN layer in the feedforward loop of the densely connected standard ResNet architecture. We show that this architecture avoids using dropout in the last layers and achieves the desired goal in fewer iterations compared to other ResNet and CNN based architectures used for sign (action) classification. To test our proposed model, we use our own motion-captured 3-D sign language data (BVC3DSL) and other publicly available skeletal action data: CMU, HDM05, and NTU RGBD.

Karin Schnass; "Average Performance of Orthogonal Matching Pursuit (OMP) for Sparse Approximation," vol.25(12), pp.1865-1869, Dec. 2018. We present a theoretical analysis of the average performance of orthogonal matching pursuit (OMP) for sparse approximation. For signals that are generated from a dictionary with <inline-formula><tex-math notation="LaTeX">$K$</tex-math></inline-formula> atoms and coherence <inline-formula><tex-math notation="LaTeX">$\mu$</tex-math></inline-formula> and coefficients corresponding to a geometric sequence with parameter <inline-formula><tex-math notation="LaTeX">$\alpha < 1$</tex-math></inline-formula>, we show that OMP is successful with high probability as long as the sparsity level <inline-formula><tex-math notation="LaTeX">$S$</tex-math></inline-formula> scales as <inline-formula><tex-math notation="LaTeX">$S\mu^2 \log K \lesssim 1-\alpha$</tex-math></inline-formula>. This improves by an order of magnitude over worst case results and shows that OMP and its famous competitor Basis Pursuit outperform each other depending on the setting.
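For readers who want to exercise such coherence-based guarantees, here is a definitional OMP in stdlib Python (the toy dictionary and signal are hypothetical; real use would rely on an optimized linear-algebra library):

```python
def solve(M, b):
    """Solve M c = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (A[i][n] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

def omp(D, y, s):
    """Orthogonal matching pursuit: greedily select s atoms (columns of D) for y."""
    m, K = len(y), len(D[0])
    residual = y[:]
    support, coeffs = [], []
    for _ in range(s):
        # Pick the unused atom most correlated with the current residual
        corrs = [abs(sum(D[i][j] * residual[i] for i in range(m))) for j in range(K)]
        j = max((k for k in range(K) if k not in support), key=lambda k: corrs[k])
        support.append(j)
        # Least-squares fit of y on the selected atoms (normal equations)
        G = [[sum(D[i][a] * D[i][b] for i in range(m)) for b in support]
             for a in support]
        rhs = [sum(D[i][a] * y[i] for i in range(m)) for a in support]
        coeffs = solve(G, rhs)
        approx = [sum(coeffs[t] * D[i][support[t]] for t in range(len(support)))
                  for i in range(m)]
        residual = [y[i] - approx[i] for i in range(m)]
    return support, coeffs
```

The coherence in the abstract's condition is the largest absolute inner product between distinct (normalized) atoms; the smaller it is, the deeper the sparsity level at which the greedy selection above keeps picking correct atoms.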

Aziz Koçanaoğulları;Yeganeh M. Marghi;Murat Akçakaya;Deniz Erdoğmuş; "Optimal Query Selection Using Multi-Armed Bandits," vol.25(12), pp.1870-1874, Dec. 2018. Query selection for latent variable estimation is conventionally performed by opting for observations with low noise or optimizing information-theoretic objectives related to reducing the level of estimated uncertainty based on the current best estimate. In these approaches, typically, the system makes a decision by leveraging the current available information about the state. However, trusting the current best estimate results in poor query selection when truth is far from the current estimate, and this negatively impacts the speed and accuracy of the latent variable estimation procedure. We introduce a novel sequential adaptive action value function for query selection using the multi-armed bandit framework, which allows us to find a tractable solution. For this adaptive-sequential query selection method, we analytically show: 1) performance improvement in the query selection for a dynamical system; and 2) the conditions where the model outperforms competitors. We also present favorable empirical assessments of the performance for this method, compared to alternative methods, both using Monte Carlo simulations and human-in-the-loop experiments with a brain–computer interface typing system, where the language model provides the prior information.

Zhiguo Ding;Derrick Wing Kwan Ng;Robert Schober;H. Vincent Poor; "Delay Minimization for NOMA-MEC Offloading," vol.25(12), pp.1875-1879, Dec. 2018. This letter considers the minimization of the offloading delay for nonorthogonal multiple access assisted mobile edge computing (NOMA-MEC). By transforming the delay minimization problem into a form of fractional programming, two iterative algorithms based on, respectively, Dinkelbach's method and Newton's method are proposed. The optimality of both methods is proved and their convergence is compared. Furthermore, criteria for choosing between three possible modes, namely orthogonal multiple access, pure NOMA, and hybrid NOMA, for MEC offloading are established.
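Dinkelbach's method, one of the two iterative algorithms above, applies to any ratio objective: to minimize N(x)/D(x) with D > 0, repeatedly solve the parametric problem min over x of N(x) - lam*D(x), and update lam to the current ratio; the optimum is reached when the parametric minimum hits zero. A scalar toy instance with grid search as the inner solver (illustrative only, not the paper's NOMA-MEC delay objective):

```python
def dinkelbach(N, D, candidates, tol=1e-9, max_iter=100):
    """Minimize N(x)/D(x) over a finite candidate set via Dinkelbach iterations."""
    x = candidates[0]
    lam = N(x) / D(x)
    for _ in range(max_iter):
        # Inner (parametric) problem: minimize N(x) - lam * D(x)
        x = min(candidates, key=lambda c: N(c) - lam * D(c))
        f = N(x) - lam * D(x)
        if abs(f) < tol:          # F(lam) = 0  <=>  lam is the optimal ratio
            break
        lam = N(x) / D(x)
    return x, lam

# Toy: minimize (x^2 + 4) / x over a grid in (0, 10]; optimum x = 2, ratio 4
xs = [k / 100 for k in range(1, 1001)]
x_star, ratio = dinkelbach(lambda x: x * x + 4, lambda x: x, xs)
```

The same outer loop works with any inner solver, which is why the letter can pair it with the structure of the offloading problem; Newton's method is an alternative way of driving F(lam) to zero.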

Hui Yu;Kai Wang;Yan Li; "Multiscale Representations Fusion With Joint Multiple Reconstructions Autoencoder for Intelligent Fault Diagnosis," vol.25(12), pp.1880-1884, Dec. 2018. Existing intelligent fault diagnosis methods depend mostly on single-scale vibration signals, which not only ignore the latent useful information of other different scales, but also underestimate the complementary benefits across scales. In this work, we show the advantage of learning the combination of multiscale information by aiming to automatically capture complementary and discriminative feature representations from different scales of vibration signals. Specifically, we combine the merit of different activation functions in a joint learning fashion and propose a novel joint multiple reconstructions autoencoder (JMRAE), whose training objective is to jointly optimize multiple reconstruction losses. The JMRAE aims to jointly learn discriminative and robust scale-specific feature representations. In addition, we design a new multiscale representations fusion network (MRFN) model to effectively fuse multiscale feature representations learned concurrently by per-scale JMRAE model for maximizing the discriminative capability of the scale-fused features. The objective of MRFN is to benefit all different scales from each other for improving the classification performance. Extensive experimental results demonstrate the effectiveness of the proposed method on two rolling bearing datasets. The code of the proposed method is available at

Yi Yang;Wei Cao;Shiqian Wu;Zhengguo Li; "Multi-Scale Fusion of Two Large-Exposure-Ratio Images," vol.25(12), pp.1885-1889, Dec. 2018. Existing multiscale exposure fusion (MEF) algorithms cannot preserve relative brightness in an image fused from two large-exposure-ratio images if high-light regions in the dark image are darker than shadow regions in the bright image. In this letter, a strategy by synthesizing a virtual image with a medium exposure is presented to brighten the high-light regions in the dark image and to darken the darkest regions in the bright image. The virtual image is generated via intensity mapping functions. In order to avoid possible color distortion in the virtual image due to one-to-many mapping, two intermediate virtual images with the same exposure time are generated by the two input images, and then merged together to produce the desired virtual image using properly defined weights. The final image is obtained by fusing the original two input images and the virtual image via a state-of-the-art MEF algorithm. Experimental results show that the relative brightness is preserved much better and the MEF-SSIM is significantly improved by the proposed algorithm.

Kaiwen Jiang;Feng Qian;Ce Song;Bao Zhang; "An Approach to Overcome Occlusions in Visual Tracking: By Occlusion Estimating Agency and Self-Adapting Learning Rate for Filter's Training," vol.25(12), pp.1890-1894, Dec. 2018. Visual tracking methods have been successful in recent years. Correlation filter (CF) based methods significantly advanced state-of-the-art tracking. The advancement in CF tracking performance is predominantly attributed to powerful features and sophisticated online learning formulations. However, trouble arises if the tracker learns from samples indiscriminately. Particularly, when the target is severely occluded or out-of-view, the tracker will continuously learn the wrong information, resulting in target loss in the following frames. In this study, aiming to avoid incorrect training when occlusions occur, we propose a regional color histogram-based occlusion estimating agency (RCHBOEA), which estimates the occlusion level and then instructs, based on the result, the tracker to work in one of two modes: normal or lost. In the normal mode, an occlusion level-based self-adapting learning rate is used for tracker training. In the lost mode, the tracker pauses its training and conducts a search and recapture strategy on a wider searching area. Our method can easily complement CF-based trackers. In our experiments, we employed four CF-based trackers as a baseline: discriminative CFs (DCF), kernelized CFs (KCF), background-aware CFs (BACF), and efficient convolution operators for tracking: hand-crafted feature version (ECO_HC). We performed extensive experiments on the standard benchmarks: VIVID, OTB50, and OTB100. The results demonstrated that combined with RCHBOEA, the trackers achieved a remarkable improvement.

* "List of Reviewers," vol.25(12), pp.1895-1904, Dec. 2018.* Presents the list of reviewers who contributed to this publication in 2018.

IEEE Journal of Selected Topics in Signal Processing - new TOC (2018 November 15) [Website]

* "Table of Contents," vol.12(5), pp.819-820, Oct. 2018.* Presents the table of contents for this issue of the publication.

M. Rodrigues;H. Bolcskei;S. Draper;Y. Eldar;V. Tan; "Introduction to the Issue on Information-Theoretic Methods in Data Acquisition, Analysis, and Processing," vol.12(5), pp.821-824, Oct. 2018. The twenty papers that are included in this special section explore applications of information theoretic methods to emerging data science problems. In particular, the papers cover a wide range of topics that can broadly be organized into four themes: (1) data acquisition, (2) data analysis and processing, (3) statistics and machine learning, and (4) privacy and fairness.

Or Ordentlich;Gizem Tabak;Pavan Kumar Hanumolu;Andrew C. Singer;Gregory W. Wornell; "A Modulo-Based Architecture for Analog-to-Digital Conversion," vol.12(5), pp.825-840, Oct. 2018. Systems that capture and process analog signals must first acquire them through an analog-to-digital converter. While subsequent digital processing can remove statistical correlations present in the acquired data, the dynamic range of the converter is typically scaled to match that of the input analog signal. The present paper develops an approach for analog-to-digital conversion that aims at minimizing the number of bits per sample at the output of the converter. This is attained by reducing the dynamic range of the analog signal by performing a modulo operation on its amplitude, and then quantizing the result. While the converter itself is universal and agnostic of the statistics of the signal, the decoder operation on the output of the quantizer can exploit the statistical structure in order to unwrap the modulo folding. The performance of this method is shown to approach information theoretical limits, as captured by the rate-distortion function, in various settings. An architecture for modulo analog-to-digital conversion via ring oscillators is suggested, and its merits are numerically demonstrated.
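The fold-then-unwrap idea can be made concrete: the converter keeps only x mod delta per sample, and a decoder that exploits the signal's predictability (here, a toy assumption of small sample-to-sample increments) unwraps the folding. A hedged stdlib sketch of this principle, not the paper's ring-oscillator architecture:

```python
def fold(x, delta):
    """Modulo reduction into [0, delta): all the 'converter' retains per sample."""
    return x % delta

def unwrap(folded, delta):
    """Recover the signal, assuming the first sample lies in [0, delta) and
    successive increments have magnitude below delta/2."""
    out = [folded[0]]
    for f in folded[1:]:
        prev = out[-1]
        # Wrap the candidate increment into (-delta/2, delta/2]
        step = (f - prev) % delta
        if step > delta / 2:
            step -= delta
        out.append(prev + step)
    return out
```

The point the paper makes in rate-distortion terms is visible even here: the quantizer only ever sees the reduced range [0, delta), so it spends bits on the innovation rather than on the full dynamic range, while the decoder's statistical knowledge resolves the modulo ambiguity.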

Photios A. Stavrou;Jan Østergaard;Charalambos D. Charalambous; "Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources," vol.12(5), pp.841-856, Oct. 2018. We deal with zero-delay source coding of a vector-valued Gauss-Markov source subject to a mean-squared error (MSE) fidelity criterion characterized by the operational zero-delay vector-valued Gaussian rate distortion function (RDF). We address this problem by considering the nonanticipative RDF (NRDF), which is a lower bound to the causal optimal performance theoretically attainable function (or simply causal RDF) and operational zero-delay RDF. We recall the realization that corresponds to the optimal “test-channel” of the Gaussian NRDF, when considering a vector Gauss-Markov source subject to a MSE distortion in the finite time horizon. Then, we introduce sufficient conditions to show existence of solution for this problem in the infinite time horizon (or asymptotic regime). For the asymptotic regime, we use the asymptotic characterization of the Gaussian NRDF to provide a new equivalent realization scheme with feedback, which is characterized by a resource allocation (reverse-waterfilling) problem across the dimension of the vector source. We leverage the new realization to derive a predictive coding scheme via lattice quantization with subtractive dither and joint memoryless entropy coding. This coding scheme offers an upper bound to the operational zero-delay vector-valued Gaussian RDF. When we use scalar quantization, then for r active dimensions of the vector Gauss-Markov source the gap between the obtained lower and theoretical upper bounds is less than or equal to 0.254r + 1 bits/vector. However, we further show that it is possible when we use vector quantization, and assume infinite dimensional Gauss-Markov sources to make the previous gap to be negligible, i.e., Gaussian NRDF approximates the operational zero-delay Gaussian RDF. We also extend our results to vector-valued Gaussian sources of any finite memory under mild conditions. Our theoretical framework is demonstrated with illustrative numerical experiments.

Gabor Hannak;Alessandro Perelli;Norbert Goertz;Gerald Matz;Mike E. Davies; "Performance Analysis of Approximate Message Passing for Distributed Compressed Sensing," vol.12(5), pp.857-870, Oct. 2018. Bayesian approximate message passing (BAMP) is an efficient method in compressed sensing that is nearly optimal in the minimum mean squared error (MMSE) sense. Multiple measurement vector (MMV)-BAMP performs joint recovery of multiple vectors with identical support and accounts for correlations in the signal of interest and in the noise. In this paper, we show how to reduce the complexity of vector BAMP via a simple joint decorrelation (diagonalization) transform of the signal and noise vectors, which also facilitates the subsequent performance analysis. We prove that the corresponding state evolution is equivariant with respect to the joint decorrelation transform and preserves diagonality of the residual noise covariance for the Bernoulli-Gauss prior. We use these results to analyze the dynamics and the mean squared error (MSE) performance of BAMP via the replica method, and thereby understand the impact of signal correlation and number of jointly sparse signals. Finally, we evaluate an application of MMV-BAMP for single-pixel imaging with correlated color channels and thereby explore the performance gain of joint recovery compared to conventional BAMP reconstruction as well as group lasso.

Anusha Lalitha;Nancy Ronquillo;Tara Javidi; "Improved Target Acquisition Rates With Feedback Codes," vol.12(5), pp.871-885, Oct. 2018. This paper considers the problem of acquiring an unknown target location (among a finite number of locations) via a sequence of measurements, where each measurement consists of simultaneously probing a group of locations. The resulting observation consists of a sum of an indicator of the target's presence in the probed region, and a zero mean Gaussian noise term whose variance is a function of the measurement vector. An equivalence between the target acquisition problem and channel coding over a binary input additive white Gaussian noise (BAWGN) channel with state and feedback is established. Utilizing this information theoretic perspective, a two-stage adaptive target search strategy based on the sorted Posterior Matching channel coding strategy is proposed. Furthermore, using information theoretic converses, the fundamental limits on the target acquisition rate for adaptive and non-adaptive strategies are characterized. As a corollary to the non-asymptotic upper bound of the expected number of measurements under the proposed two-stage strategy, and to the non-asymptotic lower bound of the expected number of measurements for the optimal non-adaptive search strategy, a lower bound on the adaptivity gain is obtained. The adaptivity gain is further investigated in different asymptotic regimes of interest.
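In the noiseless limit, the adaptive strategy degenerates to bisection with group probes: each measurement asks whether the target lies in half of the remaining candidates, so n locations are resolved in ceil(log2 n) measurements instead of up to n - 1 single-location probes. A toy sketch of that limiting case only; the paper's contribution is the noisy, posterior-matching version:

```python
def acquire(target, n):
    """Locate `target` in range(n) by probing halves of the candidate set.
    Each (noiseless) measurement returns 1 iff the target is in the probed group."""
    candidates = list(range(n))
    measurements = 0
    while len(candidates) > 1:
        probe = set(candidates[: len(candidates) // 2])
        measurements += 1
        hit = target in probe            # noiseless group observation
        candidates = [c for c in candidates if (c in probe) == hit]
    return candidates[0], measurements
```

With Gaussian observation noise, the hard "keep consistent candidates" step is replaced by a posterior update, and sorted posterior matching chooses the probed group so that its posterior mass is close to one half.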

Simon Mak;Yao Xie; "Maximum Entropy Low-Rank Matrix Recovery," vol.12(5), pp.886-901, Oct. 2018. We propose a novel, information-theoretic method, called MaxEnt, for efficient data acquisition for low-rank matrix recovery. This proposed method has important applications to a wide range of problems, including image processing and text document indexing. Fundamental to our design approach is the so-called maximum entropy principle, which states that the measurement masks that maximize the entropy of observations, also maximize the information gain on the unknown matrix X. Coupled with a low-rank stochastic model for X, such a principle 1) reveals novel connections between information-theoretic sampling and subspace packings, and 2) yields efficient mask construction algorithms for matrix recovery, which significantly outperform random measurements. We illustrate the effectiveness of MaxEnt in simulation experiments, and demonstrate its usefulness in two real-world applications on image recovery and text document indexing.

Jonathan Scarlett;Volkan Cevher; "Near-Optimal Noisy Group Testing via Separate Decoding of Items," vol.12(5), pp.902-915, Oct. 2018. The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of tests, and is relevant in applications such as medical testing, communication protocols, pattern matching, and more. In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the special cases of no noise and symmetric noise, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low-sparsity levels, and that with a small fraction of allowed incorrectly decoded items, this guarantee extends to all sublinear sparsity levels. In addition, we provide a converse bound showing that if one tries to move slightly beyond our low-sparsity achievability threshold using separate decoding of items and independent identically distributed randomized testing, the average number of items decoded incorrectly approaches that of a trivial decoder.
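In the noiseless case, separate decoding of items has a particularly simple form (often called COMP): declare an item defective iff no negative test contains it, so each item is decoded from its own tests with no joint inference across items. A simulation sketch with hypothetical test-design parameters (pool sizes, counts, and the seed are illustrative):

```python
import random

def comp_decode(tests, outcomes, n):
    """Separate decoding, noiseless: any negative test clears all its members."""
    defective = set(range(n))
    for pool, positive in zip(tests, outcomes):
        if not positive:
            defective -= pool
    return defective

random.seed(1)
n, k, T = 60, 3, 40                      # items, defectives, tests
truth = set(random.sample(range(n), k))
tests = [set(random.sample(range(n), n // 6)) for _ in range(T)]
outcomes = [bool(pool & truth) for pool in tests]
decoded = comp_decode(tests, outcomes, n)
```

By construction this decoder never misses a defective in the noiseless setting; its errors are false positives, which shrink as the number of tests grows, matching the abstract's framing in which a small fraction of incorrectly decoded items is tolerated.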

Haifeng Li;Jian Wang;Xin Yuan; "On the Fundamental Limit of Multipath Matching Pursuit," vol.12(5), pp.916-927, Oct. 2018. Multipath matching pursuit (MMP) is a recent extension of the orthogonal matching pursuit algorithm that recovers sparse signals with a tree-searching strategy. In this paper, we present a new analysis for the MMP algorithm using the restricted isometry property (RIP). Our result shows that if the sampling matrix A ∈ R^{m×n} satisfies the RIP of order K + L with isometry constant δ_{K+L} < √(L/(K+L)), then the MMP accurately recovers any K-sparse signal x ∈ R^n from the samples y = Ax, where L is the number of child paths for each candidate of the algorithm. Moreover, through a counterexample, we show that the proposed bound is optimal in that the MMP algorithm may fail when δ_{K+L} = √(L/(K+L)).

Ravi Kiran Raman;Lav R. Varshney; "Universal Joint Image Clustering and Registration Using Multivariate Information Measures," vol.12(5), pp.928-943, Oct. 2018. We consider the problem of universal joint clustering and registration of images. Image clustering focuses on grouping similar images, while image registration refers to the task of aligning copies of an image that have been subject to rigid-body transformations, such as rotations and translations. We first study registering two images using maximum mutual information and prove its asymptotic optimality. We then show the shortcomings of pairwise registration in multi-image registration, and design an asymptotically optimal algorithm based on multi-information. Further, we define a novel multivariate information functional to perform joint clustering and registration of images, and prove consistency of the algorithm. Finally, we consider registration and clustering of numerous limited-resolution images, defining algorithms that are order-optimal in scaling of number of pixels in each image with the number of images.
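The maximum-mutual-information registration step can be sketched exactly for the special case of 90-degree rotations (a small subgroup of the rigid-body transformations considered): compute MI from the joint histogram of pixel values and pick the rotation that maximizes it. The tiny images below are hypothetical examples:

```python
import math
from collections import Counter

def rot90(img, k):
    """Rotate a 2-D list image by k*90 degrees counterclockwise."""
    for _ in range(k % 4):
        img = [list(row) for row in zip(*img)][::-1]
    return img

def mutual_info(a, b):
    """Mutual information (nats) of the joint pixel-value histogram."""
    pairs = [(x, y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    n = len(pairs)
    pj = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pj.items())

def register(ref, moved):
    """Return the number of 90-degree rotations of `moved` maximizing MI with `ref`."""
    return max(range(4), key=lambda k: mutual_info(ref, rot90(moved, k)))
```

At the correct alignment the joint histogram collapses onto the diagonal and MI reaches the image entropy, which is the intuition behind the asymptotic optimality result for two images; the paper's multi-information functional generalizes this score to many images and to joint clustering.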

Hussein Saad;Aria Nosratinia; "Community Detection With Side Information: Exact Recovery Under the Stochastic Block Model," vol.12(5), pp.944-958, Oct. 2018. The community detection problem involves making inferences about node labels in a graph, based on observing the graph edges. This paper studies the effect of additional, nongraphical side information on the phase transition of exact recovery in the binary stochastic block model with n nodes. When side information consists of noisy labels with error probability α, it is shown that phase transition is improved if and only if log ((1-α)/α) = Ω (log (n)). When side information consists of revealing a fraction 1 - ε of the labels, it is shown that phase transition is improved if and only if log (1/ε) = Ω (log (n)). For a more general side information consisting of K features, two scenarios are studied, first, K is fixed while the likelihood of each feature with respect to corresponding node label evolves with n, and, second, the number of features K varies with n but the likelihood of each feature is fixed. In each case, we find when side information improves the exact recovery phase transition and by how much. In the process of deriving inner bounds, a variation of an efficient algorithm is proposed for community detection with side information that uses a partial recovery algorithm combined with a local improvement procedure.

Kwangjun Ahn;Kangwook Lee;Changho Suh; "Hypergraph Spectral Clustering in the Weighted Stochastic Block Model," vol.12(5), pp.959-974, Oct. 2018. Spectral clustering is a celebrated algorithm that partitions the objects based on pairwise similarity information. While this approach has been successfully applied to a variety of domains, it comes with limitations. The reason is that there are many other applications in which only multiway similarity measures are available. This motivates us to explore the multiway measurement setting. In this paper, we develop two algorithms intended for such a setting: hypergraph spectral clustering (HSC) and hypergraph spectral clustering with local refinement (HSCLR). Our main contribution lies in performance analysis of the polytime algorithms under a random hypergraph model, which we name the weighted stochastic block model, in which objects and multiway measures are modeled as nodes and weights of hyperedges, respectively. Denoting by n the number of nodes, our analysis reveals the following: 1) HSC outputs a partition which is better than a random guess if the sum of edge weights (to be explained later) is Ω(n); 2) HSC outputs a partition which coincides with the hidden partition except for a vanishing fraction of nodes if the sum of edge weights is ω(n); and 3) HSCLR exactly recovers the hidden partition if the sum of edge weights is on the order of n log n. Our results improve upon the state of the art recently established under the model, and they are the first to settle the order-wise optimal results for the binary edge weight case. Moreover, we show that our results lead to efficient sketching algorithms for subspace clustering, a computer vision application. Finally, we show that HSCLR achieves the information-theoretic limits for a special yet practically relevant model, thereby showing no computational barrier for the case.
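A minimal sketch of the hypergraph-spectral-clustering idea for two clusters; the details here (clique expansion of hyperedges, a power-iteration Fiedler vector, sign-based splitting) are illustrative simplifications and are not claimed to match the paper's HSC/HSCLR algorithms:

```python
def hsc_sketch(n, hyperedges, iters=500):
    """Cluster nodes 0..n-1 from weighted hyperedges (nodes, weight):
    clique-expand to a weighted graph, then split by the sign of an
    approximate Fiedler vector of the graph Laplacian."""
    # Clique expansion: each hyperedge adds its weight to every node pair
    W = [[0.0] * n for _ in range(n)]
    for nodes, w in hyperedges:
        for i in nodes:
            for j in nodes:
                if i != j:
                    W[i][j] += w
    deg = [sum(row) for row in W]
    # Power iteration on (c*I - L), L = D - W, deflating the all-ones eigenvector,
    # converges to the Fiedler (second-smallest-eigenvalue) direction of L
    c = 2 * max(deg) + 1
    v = [(-1.0) ** i for i in range(n)]
    for _ in range(iters):
        u = [c * v[i] - (deg[i] * v[i] - sum(W[i][j] * v[j] for j in range(n)))
             for i in range(n)]
        mean = sum(u) / n
        u = [x - mean for x in u]
        norm = sum(x * x for x in u) ** 0.5
        v = [x / norm for x in u]
    return [0 if x < 0 else 1 for x in v]
```

On a toy hypergraph with two dense groups joined by one weak pair, the sign pattern of the Fiedler vector recovers the hidden partition, which is the behavior the paper quantifies under the weighted stochastic block model.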

Mine Alsan;Ranjitha Prasad;Vincent Y. F. Tan; "Lower Bounds on the Bayes Risk of the Bayesian BTL Model With Applications to Comparison Graphs," vol.12(5), pp.975-988, Oct. 2018. We consider the problem of aggregating pairwise comparisons to obtain a consensus ranking order over a collection of objects. We use the popular Bradley-Terry-Luce (BTL) model, which allows us to probabilistically describe pairwise comparisons between objects. In particular, we employ the Bayesian BTL model, which allows for meaningful prior assumptions and to cope with situations where the number of objects is large and the number of comparisons between some objects is small or even zero. For the conventional Bayesian BTL model, we derive information-theoretic lower bounds on the Bayes risk of estimators for norm-based distortion functions. We compare the information-theoretic lower bound with the Bayesian Cramér-Rao lower bound we derive for the case when the Bayes risk is the mean squared error. We illustrate the utility of the bounds through simulations by comparing them with the error performance of an expectation-maximization-based inference algorithm proposed for the Bayesian BTL model. We draw parallels between pairwise comparisons in the BTL model and interplayer games represented as edges in an Erdös-Rényi graph and analyze the effect of various graph structures on the lower bounds. We also extend the information-theoretic and Bayesian Cramér-Rao lower bounds to the more general Bayesian BTL model, which takes into account home-field advantage.

Minje Jang;Sunghyun Kim;Changho Suh; "Top-<inline-formula><tex-math notation="LaTeX">$K$</tex-math></inline-formula> Rank Aggregation From <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>-Wise Comparisons," vol.12(5), pp.989-1004, Oct. 2018. Suppose one aims to identify only the top-K among a large collection of n items provided M-wise comparison data, where a set of M items in each data sample are ranked in order of individual preference. Natural questions that arise are as follows: 1) how one can reliably achieve the top-K rank aggregation task; and 2) how many M-wise samples one needs to achieve it. In this paper, we answer these two questions. First, we devise an algorithm that effectively converts M-wise samples into pairwise ones and employs a spectral method using the refined data. Second, we consider the Plackett-Luce (PL) model, a well-established statistical model, and characterize the minimal number of M-wise samples (i.e., sample complexity) required for reliable top-K ranking. It turns out to be inversely proportional to M. To characterize it, we derive a lower bound on the sample complexity and prove that our algorithm achieves the bound. Moreover, we conduct extensive numerical experiments to demonstrate that our algorithm not only attains the fundamental limit under the PL model but also provides robust ranking performance for a variety of applications that may not precisely fit the model. We corroborate our theoretical result using synthetic datasets, confirming that the sample complexity decreases at the rate of 1/M. Also, we verify the applicability of our algorithm in practice, showing that it performs well on various real-world datasets.
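The first step of the algorithm, converting M-wise samples into pairwise ones (rank breaking), is easy to sketch; here a simple win-fraction score stands in for the paper's spectral method, and the samples are hypothetical:

```python
from itertools import combinations
from collections import defaultdict

def break_rankings(samples):
    """Convert M-wise ordered samples into pairwise comparison counts.
    Each sample lists M items from most to least preferred."""
    wins = defaultdict(int)
    for ranking in samples:
        for a, b in combinations(ranking, 2):   # a precedes b => a beats b
            wins[(a, b)] += 1
    return wins

def top_k(samples, k, n):
    """Rank items by pairwise win fraction and return the top k (toy scoring)."""
    wins = break_rankings(samples)
    score = [0.0] * n
    games = [0] * n
    for (a, b), c in wins.items():
        score[a] += c
        games[a] += c
        games[b] += c
    frac = [score[i] / games[i] if games[i] else 0.0 for i in range(n)]
    return sorted(range(n), key=lambda i: -frac[i])[:k]
```

Each M-wise sample yields M(M-1)/2 pairwise comparisons, which is the combinatorial reason the required number of samples can scale like 1/M.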

Qunwei Li;Tiexing Wang;Donald J. Bucci;Yingbin Liang;Biao Chen;Pramod K. Varshney; "Nonparametric Composite Hypothesis Testing in an Asymptotic Regime," vol.12(5), pp.1005-1014, Oct. 2018. We investigate the nonparametric, composite hypothesis testing problem for arbitrary unknown distributions in the asymptotic regime where both the sample size and the number of hypotheses grow exponentially large. Such asymptotic analysis is important in many practical problems, where the number of variations that can exist within a family of distributions can be countably infinite. We introduce the notion of discrimination capacity, which captures the largest exponential growth rate of the number of hypotheses relative to the sample size so that there exists a test with asymptotically vanishing probability of error. Our approach is based on various distributional distance metrics in order to incorporate the generative model of the data. We provide analyses of the error exponent using the maximum mean discrepancy and Kolmogorov-Smirnov distance and characterize the corresponding discrimination rates, i.e., lower bounds on the discrimination capacity, for these tests. Finally, an upper bound on the discrimination capacity based on Fano's inequality is developed. Numerical results are presented to validate the theoretical results.
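As a concrete example of the distributional distances these tests build on, here is the standard biased maximum mean discrepancy estimate with a Gaussian kernel; the bandwidth is an arbitrary illustrative choice.

```python
import numpy as np

# Biased maximum mean discrepancy (MMD^2) estimate between two 1-D
# samples with a Gaussian kernel; sigma is an arbitrary choice.
def mmd2(x, y, sigma=1.0):
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, 200), rng.normal(0, 1, 200))   # same law
diff = mmd2(rng.normal(0, 1, 200), rng.normal(3, 1, 200))   # shifted law
```

The biased estimate is a squared norm of mean-embedding differences, so it is nonnegative and grows when the two samples come from different distributions.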

Ishan Jindal;Matthew Nokleby; "Classification and Representation via Separable Subspaces: Performance Limits and Algorithms," vol.12(5), pp.1015-1030, Oct. 2018. We study the classification performance of Kronecker-structured (K-S) subspace models in two asymptotic regimes and develop an algorithm for fast and compact K-S subspace learning for better classification and representation of multidimensional signals by exploiting the structure in the signal. First, we study the classification performance in terms of diversity order and pairwise geometry of the subspaces. We derive an exact expression for the diversity order as a function of the signal and subspace dimensions of a K-S model. Next, we study the classification capacity, the maximum rate at which the number of classes can grow as the signal dimension goes to infinity. Then, we describe a fast algorithm for Kronecker-structured learning of discriminative dictionaries (K-SLD2). Finally, we evaluate the empirical classification performance of K-S models on synthetic data, showing that the results agree with the diversity order analysis. We also evaluate the performance of K-SLD2 on synthetic and real-world datasets, showing that K-SLD2 balances compact signal representation and good classification performance.

Ahmed M. Alaa;Mihaela van der Schaar; "Bayesian Nonparametric Causal Inference: Information Rates and Learning Algorithms," vol.12(5), pp.1031-1046, Oct. 2018. We investigate the problem of estimating the causal effect of a treatment on individual subjects from observational data; this is a central problem in various application domains, including healthcare, social sciences, and online advertising. Within the Neyman-Rubin potential outcomes model, we use the Kullback-Leibler (KL) divergence between the estimated and true distributions as a measure of accuracy of the estimate, and we define the information rate of the Bayesian causal inference procedure as the (asymptotic equivalence class of the) expected value of the KL divergence between the estimated and true distributions as a function of the number of samples. Using Fano's method, we establish a fundamental limit on the information rate that can be achieved by any Bayesian estimator, and show that this fundamental limit is independent of the selection bias in the observational data. We characterize the Bayesian priors on the potential (factual and counterfactual) outcomes that achieve the optimal information rate. We go on to propose a prior adaptation procedure (which we call the information-based empirical Bayes procedure) that optimizes the Bayesian prior by maximizing an information-theoretic criterion on the recovered causal effects rather than maximizing the marginal likelihood of the observed (factual) data. Building on our analysis, we construct an information-optimal Bayesian causal inference algorithm. This algorithm embeds the potential outcomes in a vector-valued reproducing Kernel Hilbert space, and uses a multitask Gaussian process prior over that space to infer the individualized causal effects. 
We show that for such a prior, the proposed information-based empirical Bayes method adapts the smoothness of the multitask Gaussian process to the true smoothness of the causal effect function by balancing a tradeoff between the factual bias and the counterfactual variance. We conduct experiments on a well-known real-world dataset and show that our model significantly outperforms the state-of-the-art causal inference models.

Zahra Shakeri;Anand D. Sarwate;Waheed U. Bajwa; "Identifiability of Kronecker-Structured Dictionaries for Tensor Data," vol.12(5), pp.1047-1062, Oct. 2018. This paper derives sufficient conditions for local recovery of coordinate dictionaries comprising a Kronecker-structured dictionary that is used for representing Kth-order tensor data. Tensor observations are assumed to be generated from a Kronecker-structured dictionary multiplied by sparse coefficient tensors that follow the separable sparsity model. This paper provides sufficient conditions on the underlying coordinate dictionaries, coefficient and noise distributions, and number of samples that guarantee recovery of the individual coordinate dictionaries up to a specified error, as a local minimum of the objective function, with high probability. In particular, the sample complexity to recover K coordinate dictionaries with dimensions m_k × p_k up to estimation error ε_k is shown to be max_{k∈[K]} O(m_k p_k^3 ε_k^{-2}).
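The separable model behind these results rests on a standard Kronecker identity, which also explains why the full dictionary never needs to be formed explicitly; a quick numerical check with arbitrary dimensions:

```python
import numpy as np

# Separable (Kronecker-structured) dictionary model for 2nd-order
# tensor data: vec(Y) = (D2 ⊗ D1) vec(X) is the same computation as
# Y = D1 @ X @ D2.T, with vec taken column-major.
rng = np.random.default_rng(1)
D1, D2 = rng.normal(size=(8, 5)), rng.normal(size=(6, 4))
X = rng.normal(size=(5, 4))                  # sparse in the real model

Y = D1 @ X @ D2.T                            # small matrix products
y_vec = np.kron(D2, D1) @ X.flatten(order="F")   # explicit big dictionary
```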

Matias Vera;Leonardo Rey Vega;Pablo Piantanida; "Compression-Based Regularization With an Application to Multitask Learning," vol.12(5), pp.1063-1076, Oct. 2018. This paper investigates, from information theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of a relevant content (multiple labels). We begin studying a multitask learning (MTL) problem from the average (over the tasks) of misclassification probability point of view and linking it with the popular cross-entropy criterion. Our approach allows an information theoretic formulation of an MTL problem as a supervised learning framework, in which the prediction models for several related tasks are learned jointly from common representations to achieve better generalization performance. More precisely, our formulation of the MTL problem can be interpreted as an information bottleneck problem with side information at the decoder. Based on that, we present an iterative algorithm for computing the optimal tradeoffs and some of its convergence properties are studied. An important feature of this algorithm is to provide a natural safeguard against overfitting, because it minimizes the average risk taking into account a penalization induced by the model complexity. Remarkably, empirical results illustrate that there exists an optimal information rate minimizing the excess risk, which depends on the nature and the amount of available training data. Applications to hierarchical text categorization and distributional word clusters are also investigated, extending previous works.

Dragana Bajović;Kanghang He;Lina Stanković;Dejan Vukobratović;Vladimir Stanković; "Optimal Detection and Error Exponents for Hidden Semi-Markov Models," vol.12(5), pp.1077-1092, Oct. 2018. We study detection of random signals corrupted by noise that over time switch their values (states) between a finite set of possible values, where the switchings occur at unknown points in time. We model such signals as hidden semi-Markov signals, which generalize classical Markov chains by introducing explicit (possibly nongeometric) distribution for the time spent in each state. Assuming two possible signal states and Gaussian noise, we derive the optimal likelihood ratio test and show that it has a computationally tractable form of a matrix product, with the number of matrices involved in the product being the number of process observations. The product matrices are independent and identically distributed, constructed by a simple measurement modulation of the sparse semi-Markov model transition matrix that we define in the paper. Using this result, we show that the Neyman-Pearson error exponent is equal to the top Lyapunov exponent for the corresponding random matrices. Using the theory of large deviations, we derive a lower bound on the error exponent. Finally, we show that this bound is tight by means of numerical simulations.
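The matrix-product form of the likelihood can be illustrated with a plain two-state hidden Markov chain (i.e., geometric dwell times, a simplification of the paper's semi-Markov model); all constants are illustrative.

```python
import numpy as np

# Two-state hidden Markov signal in Gaussian noise: the observation
# likelihood is a running product of measurement-modulated transition
# matrices. (Plain Markov dwell times; the paper generalizes these.)
def gauss(y, mu, sigma):
    return np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                 # illustrative transition matrix
means, sigma = np.array([0.0, 1.0]), 0.5   # state levels and noise std

def likelihood(ys):
    alpha = np.array([0.5, 0.5])           # uniform initial state
    for y in ys:
        alpha = (alpha @ A) * gauss(y, means, sigma)   # modulated step
    return alpha.sum()

ys = np.array([0.0, 0.1, 1.0, 0.9])
l_match = likelihood(ys)            # data consistent with the two levels
l_mismatch = likelihood(ys + 10.0)  # grossly mismatched data
```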

Matthew P. Johnson;Liang Zhao;Supriyo Chakraborty; "Achieving Pareto-Optimal MI-Based Privacy-Utility Tradeoffs Under Full Data," vol.12(5), pp.1093-1105, Oct. 2018. We study a fine-grained model in which a perturbed version of some data (D) is to be disclosed, with the aims of permitting the receiver to accurately infer some useful aspects (X = f(D)) of it, while preventing her from inferring other private aspects (Y = g(D)). Correlation between the bases for these inferences necessitates compromise between these goals. Determining exactly how the disclosure (M) will be probabilistically generated (from D), somehow trading off between making I(M; X) large and I(M; Y) small, is cast as an algorithmic optimization problem. In 2013, Chakraborty et al. provided optimal solutions for the two extreme points on these objectives' Pareto frontier: maximizing I(M; X) s.t. I(M; Y) = 0 (“perfect privacy,” via linear programming (LP)) and minimizing I(M; Y) s.t. H(X|M) = 0 (“perfect utility,” for which the trivial solution M = X is optimal). We show that when minimizing I(M; Y) - βI(M; X), we can restrict ourselves w.l.o.g. to solutions satisfying several normal-form conditions, which leads to 1) an alternative convex programming formulation when β ∈ [0, 1], for which we provide a practical optimal algorithm, and 2) proof that M = X is actually optimal for all β ≥ 1. This solves the primary open problem posed by Chakraborty et al. (It also provides a faster solution than Chakraborty et al.'s LP for the “perfect privacy” special case.)
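The objectives I(M; X) and I(M; Y) are ordinary mutual informations; for toy joint distributions they can be evaluated directly, e.g. with this small helper (names are ours, not the paper's):

```python
import numpy as np

# Mutual information I(A; B) in bits from a joint probability table,
# handy for evaluating a candidate disclosure channel on toy examples.
def mutual_info(p_joint):
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (pa @ pb)[mask])).sum())

indep = np.full((2, 2), 0.25)              # independent fair bits: I = 0
copy = np.array([[0.5, 0.0],
                 [0.0, 0.5]])              # M = X exactly: I = 1 bit
```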

Flavio du Pin Calmon;Dennis Wei;Bhanukiran Vinzamuri;Karthikeyan Natesan Ramamurthy;Kush R. Varshney; "Data Pre-Processing for Discrimination Prevention: Information-Theoretic Optimization and Analysis," vol.12(5), pp.1106-1119, Oct. 2018. Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling group discrimination, limiting distortion in individual data samples, and preserving utility. Several theoretical properties are established, including conditions for convexity, a characterization of the impact of limited sample size on discrimination and utility guarantees, and a connection between discrimination and estimation. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy and with precise control of individual distortion.

IEEE Signal Processing Magazine - new TOC (2018 November 15) [Website]

* "Front Cover," vol.35(6), pp.C1-C1, Nov. 2018.*

* "Table of Contents," vol.35(6), pp.1-2, Nov. 2018.*

* "Masthead," vol.35(6), pp.2-2, Nov. 2018.*

Ali H. Sayed; "Twinkle, Twinkle, Little Star [President's Message]," vol.35(6), pp.5-7, Nov. 2018. Presents the President’s message for this issue of the publication.

John Edwards; "Something to Talk About: Signal Processing in Speech and Audiology Research: Promising Investigations Explore New Opportunities in Human Communication [Special Reports[Name:_blank]]," vol.35(6), pp.8-12, Nov. 2018. Speech, the expression of thoughts and feelings by articulating sounds, is an ability so taken for granted that few people bother to think about how complex and nuanced the process actually is. Yet, as more devices gain the ability to listen to and interpret what speakers are saying, speech and audiology technologies are attracting the interest of a growing number of academic researchers. Signal processing is now playing a critical role in making speech detection and recognition more accurate, flexible, and reliable for use in a wide range of research and everyday applications. Singing mice Vocalization plays a critical role in social communication across many species. Male mice, for example, generate ultrasonic vocalizations (USVs) in the presence of females. Both male and female mice “sing” during friendly social encounters.

John Edwards; "Signal Processing Leads to New Clinical Medicine Approaches: Innovative Methods Promise Improved Patient Diagnoses and Treatments [Special Reports[Name:_blank]]," vol.35(6), pp.12-15, Nov. 2018. Popular consumer and business technologies, such as smartphones, tablets, wearable devices, and sophisticated photoimaging-all driven or supported by signal processing-are leading to a generation of powerful new diagnostic tools designed to help physicians working in clinical medicine. In Rochester, New York, for instance, a team of engineers and clinicians at the Rochester Institute of Technology (RIT) and the University of Rochester Medical Center (URMC) is developing a video-based smartphone/tablet-based health app (Figure 1) that is designed to serve as a clinical tool to assess atrial fibrillation (AF), a heartrhythm disorder that afflicts millions of people worldwide. Co-project leaders are Gill Tsouri, an associate professor of electrical engineering in RIT's Kate Gleason College of Engineering, who is developing both the app and its video system algorithm, and Jean Philippe Couderc, a biomedical engineer and assistant director of the University of Rochester Heart Research Follow-Up Program Lab, who will head the clinical trials.

Jie Ding;Vahid Tarokh;Yuhong Yang; "Model Selection Techniques: An Overview," vol.35(6), pp.16-34, Nov. 2018. In the era of big data, analysts usually explore various statistical models or machine-learning methods for observed data to facilitate scientific discoveries or gain predictive power. Whatever data and fitting procedures are employed, a crucial step is to select the most appropriate model or method from a set of candidates. Model selection is a key ingredient in data analysis for reliable and reproducible statistical inference or prediction, and thus it is central to scientific studies in such fields as ecology, economics, engineering, finance, political science, biology, and epidemiology. There has been a long history of model selection techniques that arise from research in statistics, information theory, and signal processing. A considerable number of methods have been proposed, following different philosophies and exhibiting varying performances. The purpose of this article is to provide a comprehensive overview of them, in terms of their motivation, large sample performance, and applicability. We provide integrated and practically relevant discussions on theoretical properties of state-of-the-art model selection approaches. We also share our thoughts on some controversial views on the practice of model selection.
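As a minimal, self-contained example of the kind of criterion-based selection the overview covers, here is BIC applied to choosing a polynomial regression degree (the data and constants are illustrative, not from the article):

```python
import numpy as np

# BIC-based model selection for polynomial regression: fit each
# candidate degree by least squares and pick the smallest
# BIC = n*log(RSS/n) + k*log(n), where k counts fitted coefficients.
rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * x + 3.0 * x ** 2 + rng.normal(0, 0.1, x.size)  # true degree 2

def bic(degree):
    coef = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coef, x) - y) ** 2)
    return x.size * np.log(rss / x.size) + (degree + 1) * np.log(x.size)

best = min(range(6), key=bic)   # typically recovers the true degree
```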

Deborah Cohen;Yonina C. Eldar; "Sub-Nyquist Radar Systems: Temporal, Spectral, and Spatial Compression," vol.35(6), pp.35-58, Nov. 2018. Radar is an acronym for "radio detection and ranging." However, the functions of today's radar systems, both in civilian and military applications, go beyond simple target detection and localization; they extend to tracking, imaging, classification, and more and involve different types of radar systems, such as through-the-wall [1], ground-penetration [2], automotive [3], and weather [4]. Although radar technology has been well established for decades, a new line of compressed radars has recently emerged. These aim at reducing the complexity of classic radar systems by exploiting inherent prior information on the structure of the received signal from the targets. The goal of this article is to review these novel sub-Nyquist radars and their potential applications.

Giulio Giaconi;Deniz Gunduz;H. Vincent Poor; "Privacy-Aware Smart Metering: Progress and Challenges," vol.35(6), pp.59-78, Nov. 2018. The next-generation energy network, the so-called smart grid (SG), promises tremendous increases in efficiency, safety, and flexibility in managing the electricity grid as compared to the legacy energy network. This is needed today more than ever, as global energy consumption is growing at an unprecedented rate and renewable energy sources (RESs) must be seamlessly integrated into the grid to ensure sustainable human development.

Rafael C. Gonzalez; "Deep Convolutional Neural Networks [Lecture Notes]," vol.35(6), pp.79-87, Nov. 2018. Neural networks are a subset of the field of artificial intelligence (AI). The predominant types of neural networks used for multidimensional signal processing are deep convolutional neural networks (CNNs). The term deep refers generically to networks having from a "few" to several dozen or more convolution layers, and deep learning refers to methodologies for training these systems to automatically learn their functional parameters using data representative of a specific problem domain of interest. CNNs are currently being used in a broad spectrum of application areas, all of which share the common objective of being able to automatically learn features from (typically massive) data bases and to generalize their responses to circumstances not encountered during the learning phase. Ultimately, the learned features can be used for tasks such as classifying the types of signals the CNN is expected to process. The purpose of this "Lecture Notes" article is twofold: 1) to introduce the fundamental architecture of CNNs and 2) to illustrate, via a computational example, how CNNs are trained and used in practice to solve a specific class of problems.

Zafar Rafii; "Sliding Discrete Fourier Transform with Kernel Windowing [Lecture Notes]," vol.35(6), pp.88-92, Nov. 2018. The sliding discrete Fourier transform (SDFT) is an efficient method for computing the <italic>N</italic>-point DFT of a given signal starting at a given sample from the <italic>N</italic>-point DFT of the same signal starting at the previous sample [1]. However, the SDFT does not allow the use of a window function, generally incorporated in the computation of the DFT to reduce spectral leakage, as it would break its sliding property. This article will show how windowing can be included in the SDFT by using a kernel derived from the window function, while keeping the process computationally efficient. In addition, this approach allows for turning other transforms, such as the modified discrete cosine transform (MDCT), into efficient sliding versions of themselves.
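For context, the unwindowed SDFT recurrence that the article extends is X_k(n) = (X_k(n-1) - x(n-N) + x(n)) e^{j2πk/N}; a quick numerical check against a direct DFT of the shifted window:

```python
import numpy as np

# One sliding-DFT update: the new bin-k value of an N-point window is
# obtained from the previous one, the incoming sample, and the sample
# that just left the window.
def sdft_step(X_prev, x_new, x_old, k, N):
    return (X_prev + x_new - x_old) * np.exp(2j * np.pi * k / N)

N, k = 16, 3
x = np.random.default_rng(3).normal(size=N + 1)

X = np.fft.fft(x[:N])[k]               # bin k of the first window
X = sdft_step(X, x[N], x[0], k, N)     # slide the window by one sample
direct = np.fft.fft(x[1:N + 1])[k]     # direct DFT of the new window
```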

* "Join the IEEE Signal Processing Cup 2019," vol.35(6), pp.92-92, Nov. 2018.*

Alexander Bertrand; "Utility Metrics for Assessment and Subset Selection of Input Variables for Linear Estimation [Tips & Tricks]," vol.35(6), pp.93-99, Nov. 2018. This tutorial article introduces the utility metric and its generalizations, which allow for a quick-and-dirty quantitative assessment of the relative importance of the different input variables in a linear estimation model. In particular, we show how these metrics can be cheaply calculated, thereby making them very attractive for model interpretation, online signal quality assessment, or greedy variable selection. The main goal of this article is to provide a transparent and consistent framework that consolidates, unifies, and extends the existing results in this area. In particular, we 1) introduce the basic utility metric and show how it can be calculated at virtually no cost, 2) generalize it toward group-utility and noise-impact metrics, and 3) further extend it to cope with linearly dependent inputs and minimum norm requirements.
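The utility metric can be defined by brute force as the growth in least-squares error when an input is removed and the model refit; the article's point is that this quantity can be computed far more cheaply than the naive refit sketched below (the data here are an illustration):

```python
import numpy as np

# Brute-force utility of each input in a linear estimation model:
# utility[k] = (LS error without input k, after refitting) - (full LS error).
def ls_error(A, d):
    w, *_ = np.linalg.lstsq(A, d, rcond=None)
    return np.sum((A @ w - d) ** 2)

rng = np.random.default_rng(4)
A = rng.normal(size=(500, 3))
d = 2.0 * A[:, 0] + 0.1 * A[:, 1] + rng.normal(0, 0.1, 500)  # input 2 useless

base = ls_error(A, d)
utility = [ls_error(np.delete(A, k, axis=1), d) - base for k in range(3)]
```

Removing a column can never decrease the optimal error, so utilities are nonnegative, and the strongly contributing input has the largest utility.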

Zsolt Kollar;Ferenc Plesznik;Simon Trumpf; "Observer-Based Recursive Sliding Discrete Fourier Transform [Tips & Tricks]," vol.35(6), pp.100-106, Nov. 2018. In the field of digital signal analysis and processing, the ubiquitous domain transformation is the discrete Fourier transform (DFT), which converts the signal of interest within a limited time window from discrete time to the discrete frequency domain. The active use in real-time or quasi-real-time applications has been made possible by a family of fast implementations of the DFT, called fast Fourier transform (FFT) algorithms.

* "Dates Ahead," vol.35(6), pp.107-107, Nov. 2018.*

* "2018 Index IEEE Signal Processing Magazine Vol. 35," vol.35(6), pp.108-122, Nov. 2018.*

Michiel Bacchiani;Eric Fosler-Lussier; "An Overview of the IEEE SPS Speech and Language Technical Committee [In the Spotlight]," vol.35(6), pp.125-126, Nov. 2018. As part of the IEEE Signal Processing Society (SPS), the Speech and Language Technical Committee (SLTC) promotes research and development activities for technologies that are used to process speech and natural language. Much of the SLTC's efforts are devoted to the annual IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), where the SLTC manages the review of papers covering speech and language and organizes conference sessions, special sessions, and tutorials. In addition, it promotes and supports various workshops, most prominently the Automatic Speech Recognition and Understanding (ASRU) and the Spoken Language Technology (SLT) workshops.

Erik Meijering;Arrate Munoz-Barrutia; "Spotlight on Bioimaging and Signal Processing [In the Spotlight]," vol.35(6), pp.128-125, Nov. 2018. The Bio-Imaging and Signal Processing Technical Committee (BISP-TC) of the IEEE Signal Processing Society (SPS) promotes activities in the broad technical areas of computerized image and signal processing with a clear focus on applications in biology and medicine. Specific topics of interest include image reconstruction, compressed sensing, superresolution, image restoration, registration and segmentation, pattern recognition, object detection, localization, tracking, quantification and classification, machine learning, multimodal image and signal fusion, analytics, visualization, and statistical modeling. Application areas covered by the TC include biomedical imaging from nano to macroscale, encompassing all modalities of molecular imaging and microscopy, anatomical imaging, and functional imaging, as well as genomic signal processing, computational biology, and bioinformatics, with the ultimate overarching aim of enabling precision medicine.

IET Signal Processing - new TOC (2018 November 15) [Website]

Xiaolong Hu;Hongbing Ji;Mingjie Wang; "CBMeMBer filter with adaptive target birth intensity," vol.12(8), pp.937-948, 10 2018. Appropriately modelling target-birth intensity is a significant but challenging issue in multi-target tracking systems. In existing cardinality-balanced multi-target multi-Bernoulli (CBMeMBer) filters, a priori knowledge about the locations where targets appear is required to model the target-birth intensity. Since the newborn targets can appear anywhere within the observation region, it is impractical to obtain such prior information. In this study, a novel CBMeMBer filter with adaptive target-birth intensity is presented, considering the newborn and surviving targets separately. The target-birth function of the target-birth intensity is modelled using current measurements rather than the known birth locations, and the target-birth magnitude is assigned by an allocation function rather than equally assigned. The new CBMeMBer filter can remove the restriction on the requirement of prior birth location information and can adapt well after continuous missing detection occurs. Simulations of the sequential Monte Carlo and Gaussian mixture implementations demonstrate the effectiveness of the proposed filter.

Marcos A. A Pinheiro;Antonio Petraglia;Mariane R. Petraglia; "Performance of analogue, digital and hybrid finite impulse response filter banks affected by realisation errors and noise," vol.12(8), pp.949-956, 10 2018. In this study, the performance of analogue, digital and hybrid finite impulse response filter banks affected by coefficient errors, mismatches among sub-processors, and noise is investigated. A simple yet important class, the so-called time-interleaved filter bank, is also considered. Analytic models are derived to properly describe the impact of these imperfections on the filter bank performance. An estimator for the signal-to-noise ratio of these filter banks is advanced, from which comparisons are made among a large variety of structures. Computer simulations are carried out to verify the theory.

Ali Moussa;Mathieu Pouliquen;Miloud Frikel;Sayda Bedoui;Kamel Abderrahim;Mohammed M'Saad; "Blind equalisation in the presence of bounded noise," vol.12(8), pp.957-965, 10 2018. This study addresses the blind equalisation problem in the presence of bounded noise using an optimal bounding ellipsoid algorithm. This provides an adequate blind equalisation algorithm with an accurate parameter estimation. A fundamental analysis of the involved equaliser is performed to emphasise its underlying properties. This fundamental result is corroborated by promising simulation results.

HongZhong Tang;Xiao Li;Xiang Wang;Lizhen Mao;Ling Zhu; "Unsupervised feature learning via prior information-based similarity metric learning for face verification," vol.12(8), pp.966-974, 10 2018. Here, an efficient framework is developed to address the problem of unconstrained face verification. In particular, an unsupervised feature learning method for face image representation and a novel similarity metric model are discussed. First, the authors propose an unsupervised feature learning method with sparse auto-encoder (SAE) based on local descriptor (SAELD). A set of filter operators are learned based on SAE model from local patches, and face descriptors are extracted by applying the set of filter operators to convolve images. This can address the face discriminative representation issue of unconstrained face verification. Then pairwise SAELD descriptors are projected into the weighted subspace. Furthermore, a prior information-based similarity metric learning model is presented, in which the metric matrix is learned by enforcing a regularisation term based on the prior similar and discriminative information. This idea can improve the robustness to intra-personal variations and discrimination to inter-personal variations. Experimental results show that the proposed method has competitive performance compared with several state-of-the-art methods on the challenging Labelled Faces in the Wild dataset.

Jun Tae Kim;Sung Hoon Jung;Kwang-Hyun Cho; "Efficient harmonic peak detection of vowel sounds for enhanced voice activity detection," vol.12(8), pp.975-982, 10 2018. Voice activity detection (VAD) involves discriminating speech segments from background noise and is a critical step in numerous speech-related applications. However, distinguishing speech from noise based on the properties of noise is fallible, because it is difficult to predict and characterise the noise occurring in real life. In this study, the authors instead focus on the intrinsic characteristics of speech. The harmonic peaks of vowel sounds have higher energies than the other spectral components of speech and are the speech features most likely to survive in most cases of severe noise. Therefore, the energy differences between harmonic peaks and other spectral features show promise for enabling robust VAD. To exploit this feature, the harmonic peaks must be accurately located. For this purpose, this study proposes an efficient harmonic peak location detection (HPD) method. Based on extensive experiments conducted in the presence of various noise types and signal-to-noise ratios, we found that VAD with the proposed HPD approach outperforms existing VAD methods and does so with reasonable computational cost and higher robustness.
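A toy version of the harmonic-energy cue described above (not the authors' HPD algorithm; the frame synthesis, candidate grid, and all sizes are arbitrary illustrations):

```python
import numpy as np

# Synthesize a noisy "voiced" frame as a harmonic series, then scan
# candidate fundamentals for the one whose harmonic bins carry the
# most spectral energy.
fs, f0, n = 8000, 200, 1024
t = np.arange(n) / fs
frame = sum(np.sin(2 * np.pi * f0 * h * t) / h for h in range(1, 5))
frame += np.random.default_rng(5).normal(0, 0.5, n)   # additive noise

spec = np.abs(np.fft.rfft(frame * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def harmonic_energy(cand):
    bins = [np.argmin(np.abs(freqs - cand * h)) for h in range(1, 5)]
    return spec[bins].sum()

candidates = np.arange(80, 400, 5)
detected = candidates[np.argmax([harmonic_energy(c) for c in candidates])]
```

The candidate collecting energy at all four harmonics dominates both the noise and the octave-error candidates, which only hit a subset of the peaks.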

Chenguang Shi;Fei Wang;Mathini Sellathurai;Jianjiang Zhou; "Non-cooperative game-theoretic distributed power control technique for radar network based on low probability of intercept," vol.12(8), pp.983-991, 10 2018. Here, the problem of non-cooperative game-theoretic distributed power control is studied in a radar network system based on low probability of intercept (LPI) subject to the signal-to-interference-plus-noise ratio (SINR) constraint and the transmit power constraint of each radar, where all the radars in the network share the same frequency band. The objective is to improve the LPI performance by reducing the transmit power of radars whose SINRs exceed the specified threshold. First, a novel LPI performance-oriented utility function is defined as a metric to evaluate power control. Then, considering that the radars in the network are self-interested and maximise their own utilities, the distributed power control problem is formulated as a non-cooperative game, and an iterative power control algorithm is proposed that converges quickly to the Nash equilibrium (NE) of the non-cooperative game. Finally, the existence and uniqueness of NE are proved analytically. Numerical simulation results are provided to demonstrate that, compared with other methods, the presented algorithm not only guarantees the minimum SINR requirements of all radars but also improves the LPI performance of the radar network.
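Iterative best-response power control of this general flavour can be sketched with a standard interference-function iteration; the gains, SINR targets, and noise power below are illustrative numbers, and this is not the paper's LPI utility:

```python
import numpy as np

# Best-response power control: each node sets its power to exactly meet
# its SINR target given the interference from the others; for feasible
# targets the iteration converges to the equilibrium power vector.
G = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])                   # channel gains G[i, j]
gamma, noise = np.array([3.0, 3.0, 3.0]), 0.01    # SINR targets, noise power

p = np.ones(3)
for _ in range(100):
    interference = G @ p - np.diag(G) * p + noise  # others' power + noise
    p = gamma * interference / np.diag(G)          # best response

sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
```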

Karthikeyan Elumalai;Shailesh Kumar;Brejesh Lall;Rakesh Kumar Patney; "Denoising of pre-stack seismic data using subspace estimation methods," vol.12(8), pp.992-999, 10 2018. Denoising is one of the core steps in the seismic data processing flow. The seismic gather consists of multiple traces captured at different receivers. A set of receivers observe waves which are reflected from the same reflection point. Those traces need to be grouped together as they contain the same information about the earth subsurface layers. This is done by finding a common mid-point (CMP) between the source and geophones. The time delays between CMP-gathered traces are corrected by the normal move-out correction method, but the individual traces are corrupted by noise. In this paper, we propose a method for denoising individual traces. The set of traces can be modelled as belonging to a low-dimensional subspace of an ambient signal space. This allows for construction of sparse representations of each trace in terms of other traces in the CMP gather. The resulting sparse representations are subsequently utilised to construct approximations of individual traces and thus, noise is suppressed. We construct the approximations using orthogonal matching pursuit. We applied the proposed method to synthetic and field seismic data; it performs better on established benchmarks while capturing the true locations of weak reflections and effectively attenuating the random noise.
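A bare-bones orthogonal matching pursuit applied in the same spirit, approximating one noisy trace from the others so that uncorrelated noise is suppressed; the synthetic "gather" below is an illustration, not the paper's data model:

```python
import numpy as np

# Tiny orthogonal matching pursuit: greedily pick the atom most
# correlated with the residual, refit by least squares, repeat.
def omp(D, y, n_atoms):
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return D[:, support] @ coef

rng = np.random.default_rng(6)
clean = np.sin(np.linspace(0, 6 * np.pi, 300))
traces = np.stack([clean + rng.normal(0, 0.3, 300) for _ in range(20)], axis=1)

target = traces[:, 0]
D = traces[:, 1:]                        # the other traces as a dictionary
denoised = omp(D, target, n_atoms=5)

err_noisy = np.linalg.norm(target - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Because the noise realizations across traces are independent, the sparse combination averages them down while preserving the shared signal.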

Yanhui Xi;Xiaodong Zhang;Zewen Li;Xiangjun Zeng;Xin Tang;Yonglin Cui;Hui Xiao; "Double-ended travelling-wave fault location based on residual analysis using an adaptive EKF," vol.12(8), pp.1000-1008, 10 2018. This paper presents an estimated-residual method for detecting the travelling wave front using an adaptive extended Kalman filter based on maximum likelihood (EKF-ML), which uses the maximum likelihood method to adaptively optimize the error covariance matrices and the initial conditions. In some situations, such as faults close to the bus or close to zero inception angle, or faults with high fault resistances, the transient waves can become weak and even become lost in the noise, which makes the discrimination of the travelling wave front more difficult. Aiming at this, residuals between the observed values and the estimated values using the adaptive EKF exhibit remarkable singularities, and can be used for exactly determining the wave front. Thus, the exact arrival time of the initial wave head can be determined and then the fault distance can be calculated precisely. The effectiveness of extracting the mutation feature using the proposed method has been demonstrated by a simulated instantaneous pulse. It has also been tested with different types of faults using ATP simulations. Simulation results verify that the estimated residuals are highly sensitive to the travelling wave front and less sensitive to modeling uncertainty (such as noise disturbance).
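The residual idea can be illustrated with a plain linear Kalman filter rather than the paper's adaptive EKF-ML: the innovation spikes exactly when the front arrives (all model constants below are illustrative):

```python
import numpy as np

# Scalar Kalman filter tracking a slowly varying signal: the innovation
# (residual) jumps at the arrival of the front, giving its sample index.
rng = np.random.default_rng(7)
n, front = 200, 120
z = rng.normal(0, 0.05, n)
z[front:] += 1.0                      # front arrival modelled as a step

x, P, q, r = 0.0, 1.0, 1e-2, 0.05 ** 2
residuals = np.empty(n)
for i, zi in enumerate(z):
    P += q                            # predict (random-walk state model)
    residuals[i] = zi - x             # innovation
    K = P / (P + r)
    x += K * residuals[i]             # update
    P *= 1 - K

detected = int(np.argmax(np.abs(residuals[1:])) + 1)   # skip start-up
```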

Luay Yassin Taha;Esam Abdel-Raheem; "Efficient blind source extraction of noisy mixture utilising a class of parallel linear predictor filters," vol.12(8), pp.1009-1016, Oct. 2018. This study presents a novel blind source extraction method for a noisy mixture using a class of parallel linear predictor filters. An analysis of the noisy mixture equation is carried out to derive a new autoregressive source-signal model based on the covariance matrix of the whitened data. A method of interchanging the roles of the filter inputs is proposed, such that this matrix becomes the filter input while the estimated source signals are treated as the parallel filter coefficients. As the matrix has unit norm and unit eigenvalues, the filter becomes independent of variations in the mixture signal's norm and eigenvalues, thus resolving the ambiguity that arises from the filter's dependence on the mixture power levels when the mixture itself is used as the filter input. Furthermore, the unit eigenvalues of the matrix result in very fast convergence, within two iterations. Simulation results show that the model is capable of extracting the unknown source signals and removing noise when the input signal-to-noise ratio is varied from -20 to 80 dB.

Xiaowei Hu;Yiduo Guo;Qichao Ge;Yutong Su; "Fast SL0 algorithm for 3D imaging using bistatic MIMO radar," vol.12(8), pp.1017-1022, Oct. 2018. Multiple-input-multiple-output (MIMO) radar is attractive for moving-target imaging, as it avoids the difficulty of complex motion compensation. In this article, a three-dimensional imaging method using bistatic MIMO radar is proposed. Compared with monostatic MIMO radar, the bistatic system provides complementary information, and the imaging process differs from the monostatic case. Furthermore, considering the image sparsity and the spatial limitation of radar targets, a fast smoothed L0 norm algorithm is proposed to achieve high resolution in the cross-range directions with limited antennas. The experimental results demonstrate the validity and efficiency of the proposed method.
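For reference, the baseline smoothed-L0 (SL0) iteration that the paper's fast variant builds on replaces the L0 norm with a Gaussian surrogate of decreasing width and projects each gradient step back onto the measurement constraint. A minimal sketch of standard SL0 (not the paper's accelerated bistatic-imaging version; the step size and annealing schedule are conventional illustrative choices):

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner=3):
    """Smoothed-L0: approximate the L0 norm by a sum of Gaussians of width
    sigma, anneal sigma downward, and project onto {x : Ax = y} each step."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-norm initial solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient step on the smoothed-L0 surrogate
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # project back onto the measurement constraint Ax = y
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decay
    return x
```

On a well-conditioned random Gaussian system with a sufficiently sparse target, this recovers the support of the true sparse solution while satisfying the constraint exactly.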

Xiao-Zhi Zhang;Bingo Wing-Kuen Ling;Ran Tao;Zhi-Jing Yang;Wai-Lok Woo;Saeid Sanei;Kok L. Teo; "Optimal design of orders of DFrFTs for sparse representations," vol.12(8), pp.1023-1033, Oct. 2018. This study proposes an optimal design of the orders of the discrete fractional Fourier transforms (DFrFTs) and constructs an overcomplete transform using the DFrFTs with these orders for performing sparse representations. The design problem is formulated as an optimisation problem with an L1-norm non-convex objective function. To prevent all the orders of the DFrFTs from being the same, the exclusive OR of two constraints is imposed. The constrained optimisation problem is further reformulated as an optimal frequency sampling problem. A method based on solving the roots of a set of harmonic functions is employed to find the optimal sampling frequencies. As the designed overcomplete transform can exploit the physical meaning of the signals by representing them as sums of components in the time-frequency plane, it can be used in many applications.

Jiayi Liu;Jianjun Wang;Feng Zhang; "Reconstruction analysis of block-sparse signal via truncated ℓ2/ℓ1-minimisation with redundant dictionaries," vol.12(8), pp.1034-1042, Oct. 2018. Here, the authors discuss the recovery of signals from under-sampled data, in which the signals are nearly block sparse, via a truncated ℓ2/ℓ1 method with redundant dictionaries. The authors show that the obtained results improve on the previous recovery results in the presence of noise. Furthermore, the authors apply an alternating direction method of multipliers algorithm to solve the signal recovery problem. Moreover, numerical experiments demonstrate the strong robustness and stability of the truncated ℓ2/ℓ1 method with redundant dictionaries (t-D-block-ℓ1) in the presence of noise.
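The workhorse of an ADMM solver for ℓ2/ℓ1-type problems is the block (group) soft-thresholding operator, the proximal map of the mixed ℓ2/ℓ1 norm. A minimal generic sketch (fixed equal block size and threshold `lam` are illustrative; the paper's truncated variant additionally exempts selected blocks from shrinkage):

```python
import numpy as np

def block_soft_threshold(x, block_size, lam):
    """Proximal operator of the mixed l2/l1 norm: shrink each block of x
    toward zero by lam in its l2 norm; blocks with norm below lam vanish."""
    out = np.zeros_like(x, dtype=float)
    for start in range(0, len(x), block_size):
        blk = x[start:start + block_size]
        nrm = np.linalg.norm(blk)
        if nrm > lam:
            # scale the whole block down, preserving its direction
            out[start:start + block_size] = (1 - lam / nrm) * blk
    return out
```

Inside an ADMM iteration this operator is applied to the analysis coefficients at each step, zeroing entire weak blocks at once rather than individual entries.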

Hamid Shiri;Mohammad Ali Tinati;Marian Codreanu;Ghanbar Azarnia; "Distributed sparse diffusion estimation with reduced communication cost," vol.12(8), pp.1043-1052, Oct. 2018. The issue considered in the current study is adaptive distributed estimation based on a diffusion strategy that can exploit sparsity to improve estimation error and reduce communications. It has been shown that distributed estimation performs well in terms of error value, convergence rate, and robustness against node and link failures in wireless sensor networks. However, many works in distributed estimation focus on convergence speed and estimation error, neglecting the fact that communication among the nodes requires a large number of transmissions. In this work, the focus is on a solution based on the sparse diffusion least mean squares (LMS) algorithm, and a new version of the sparse diffusion LMS algorithm is proposed which takes both communication and error cost into account. The computational complexity and communication cost for every node of the network, as well as a performance analysis of the proposed strategy, are also provided. The performance of the proposed method in comparison with existing methods is illustrated by means of simulations, in terms of computational and communication cost and adaptability to signal changes.
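A common way to combine sparsity with diffusion LMS is a zero-attracting (ℓ1-regularized) adapt step followed by a neighborhood combine step. The sketch below shows this generic adapt-then-combine form, not the paper's communication-reducing variant; the node data `U`, `d`, the neighbor lists, and the step sizes are all illustrative:

```python
import numpy as np

def za_diffusion_lms(U, d, neighbors, mu=0.05, rho=1e-4, iters=1000):
    """Zero-attracting diffusion LMS (adapt-then-combine sketch): each node
    takes an LMS step with an l1 zero-attractor, then averages the
    intermediate estimates of its neighborhood (uniform combine weights).
    U: (nodes, samples, taps) regressors; d: (nodes, samples) desired signals;
    neighbors[k]: index array of node k's neighborhood (including k)."""
    n_nodes, n_samples, m = U.shape
    w = np.zeros((n_nodes, m))
    for t in range(iters):
        i = t % n_samples
        psi = np.empty_like(w)
        for k in range(n_nodes):
            u = U[k, i]
            e = d[k, i] - u @ w[k]
            # adapt: LMS step plus sparsity-promoting sign term
            psi[k] = w[k] + mu * e * u - rho * np.sign(w[k])
        for k in range(n_nodes):
            # combine: average over the neighborhood's intermediate estimates
            w[k] = psi[neighbors[k]].mean(axis=0)
    return w
```

On noiseless data generated from a sparse weight vector, all nodes converge to a common estimate close to that vector; the zero-attractor contributes a small bias on the zero entries in exchange for faster shrinkage toward them.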

Seyedeh Nasim Adnani;Mohammad Ali Tinati;Ghanbar Azarnia;Tohid Yousefi Rezaii; "Energy-efficient data reconstruction algorithm for spatially- and temporally correlated data in wireless sensor networks," vol.12(8), pp.1053-1062, Oct. 2018. This study introduces a novel algorithm for the reconstruction of wireless sensor network data, which inherently have spatial and temporal correlations. The authors' algorithm is based on compressed sensing (CS) and benefits from sliding-window processing. The algorithm rearranges the data in the form of a cube and uses this representation to extract more information about the data. Two optimisation loops are solved simultaneously and periodically reconstruct one part of the whole signal from the measurements that arrive at the sink. In particular, the first reconstruction loop, which uses a modified version of the basis pursuit reconstruction algorithm, reconstructs the temporal data extracted from the data cube, while the second loop, which uses a modified version of the reweighted l1-norm algorithm, reconstructs the data windows. The authors use a special kind of binary sparse random measurement matrix for sampling, equipped with a condition that makes the samples as diverse as possible; this, in turn, balances the duty among sensors and provides more information from the field. Simulation results verify that the proposed algorithm achieves better reconstruction accuracy and lower energy consumption in comparison with state-of-the-art CS reconstruction methods.

Hamid Nouasria;Mohamed Et-tolba; "Sensing matrix based on Kasami codes for compressive sensing," vol.12(8), pp.1064-1072, Oct. 2018. Compressive sensing (CS) aims at acquiring sparse or compressible signals at a sampling rate much lower than the Nyquist frequency. It allows the original signal to be reconstructed from a small number of measurements. This involves an appropriate design of the sensing matrix to ensure signal recovery while reducing the number of measurements. In this study, the authors propose an improved deterministic Kasami sensing matrix whose columns have an enhanced orthogonality property. They demonstrate that the proposed matrix is suitable for existing recovery algorithms. Moreover, it outperforms most existing sensing matrices in terms of the rate of exact reconstruction. In addition, it is shown that using the new sensing matrix for CS significantly reduces the memory requirement and the computational complexity.
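Sensing-matrix designs such as this Kasami-based one are commonly compared through mutual coherence, the largest correlation between distinct normalized columns (lower coherence favors sparse recovery). A small generic helper to compute it, not tied to the paper's construction:

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence: the maximum absolute inner product between
    distinct l2-normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(A.T @ A)                 # absolute Gram matrix
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()
```

An orthonormal basis has coherence 0, while a matrix with two proportional columns attains the worst case of 1; deterministic code-based designs aim to push this value toward the lower (Welch) bound.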

IEEE Transactions on Geoscience and Remote Sensing - new TOC (2018 November 15) [Website]

* "Front cover," vol.56(11), pp.C1-C1, Nov. 2018.* Presents the front cover for this issue of the publication.

* "IEEE Transactions on Geoscience and Remote Sensing publication information," vol.56(11), pp.C2-C2, Nov. 2018.* Provides a listing of current staff, committee members and society officers.

* "Table of contents," vol.56(11), pp.6261-6868, Nov. 2018.* Presents the table of contents for this issue of this publication.

Faxian Cao;Zhijing Yang;Jinchang Ren;Wing-Kuen Ling;Huimin Zhao;Meijun Sun;Jón Atli Benediktsson; "Sparse Representation-Based Augmented Multinomial Logistic Extreme Learning Machine With Weighted Composite Features for Spectral–Spatial Classification of Hyperspectral Images," vol.56(11), pp.6263-6279, Nov. 2018. Although the extreme learning machine (ELM) has successfully been applied to a number of pattern recognition problems, the original ELM alone can hardly yield high accuracy for the classification of hyperspectral images (HSIs), due to two main drawbacks. The first is the randomly generated initial weights and bias, which cannot guarantee optimal ELM output. The second is the lack of spatial information in the classifier, as the conventional ELM utilizes only spectral information for HSI classification. To tackle these two problems, a new framework for ELM-based spectral–spatial classification of HSI is proposed, where probabilistic modeling with sparse representation and weighted composite features (WCFs) is employed to derive the optimized output weights and extract spatial features. First, ELM is represented as a concave logarithmic-likelihood function under statistical modeling using the maximum a posteriori estimator. Second, sparse representation is applied to the Laplacian prior to efficiently determine a logarithmic posterior with a unique maximum, in order to solve the ill-posed problem of ELM. Variable splitting and the augmented Lagrangian are subsequently used to further reduce the computational complexity of the proposed algorithm. Third, the spatial information is extracted using the WCFs to construct the spectral–spatial classification framework. In addition, the lower bound of the proposed method is derived by a rigorous mathematical proof. Experimental results on three publicly available HSI data sets demonstrate that the proposed methodology outperforms the ELM as well as a number of state-of-the-art approaches.
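For reference, the baseline ELM the paper builds on trains only the output layer: the hidden weights are random and fixed, and the output weights come from a least-squares (pseudoinverse) fit. A minimal sketch (the layer size and tanh activation are illustrative choices, and this omits the paper's probabilistic and spatial extensions):

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Basic extreme learning machine: random hidden layer, closed-form output."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # random nonlinear feature map
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

With enough hidden units the random feature map makes the training system full rank, so the pseudoinverse interpolates small training sets exactly; the paper's point is precisely that this randomness gives no optimality guarantee, motivating its MAP-based reweighting.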

Bo-Hui Tang; "Nonlinear Split-Window Algorithms for Estimating Land and Sea Surface Temperatures From Simulated Chinese Gaofen-5 Satellite Data," vol.56(11), pp.6280-6289, Nov. 2018. This paper proposes a different thermal channel combination split-window (DTCC-SW) method to estimate the land surface temperature (LST) and sea surface temperature (SST) from Chinese Gaofen-5 (GF-5) satellite thermal infrared (TIR) data. A nonlinear combination of two adjacent channels, CH8.20 (centered at 8.20 μm) and CH8.63 (centered at 8.63 μm), was proposed to estimate LST for low-emissivity surfaces. A nonlinear combination of two adjacent channels, CH10.80 (centered at 10.80 μm) and CH11.95 (centered at 11.95 μm), was developed to estimate LST and SST for high-emissivity surfaces under dry atmospheric conditions, and a nonlinear combination of two channels, CH8.63 and CH11.95, was used to estimate LST and SST for high-emissivity surfaces under wet atmospheric conditions. The numerical values of the DTCC-SW coefficients were obtained using a statistical regression method from synthetic data simulated with an accurate atmospheric radiative transfer model (MODTRAN 5) over a wide range of atmospheric and surface conditions. The LST (SST), mean emissivity, and atmospheric water vapor content were divided into several tractable subranges to improve the fitting accuracy. The experimental results and the preliminary evaluation showed that the root-mean-square error between the actual and estimated LSTs (SSTs) is less than 0.7 K (0.3 K), provided that the land surface emissivities are known, indicating that the proposed DTCC-SW method can accurately estimate LST and SST from GF-5 TIR data.
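Split-window methods of this family generally estimate surface temperature from a linear term in one brightness temperature plus nonlinear terms in the inter-channel difference. The sketch below uses a common generic form with made-up coefficients, not the paper's fitted DTCC-SW coefficients:

```python
def split_window_lst(t1, t2, a0, a1, a2, a3):
    """Generic nonlinear split-window estimate:
    LST = a0 + a1*T1 + a2*(T1 - T2) + a3*(T1 - T2)**2,
    where T1, T2 are brightness temperatures (K) of the two channels and
    a0..a3 are regression coefficients fitted per subrange of emissivity
    and atmospheric water vapor content (illustrative values here)."""
    d = t1 - t2
    return a0 + a1 * t1 + a2 * d + a3 * d * d
```

In the paper's scheme, separate coefficient sets would be fitted for each channel pair (CH8.20/CH8.63, CH10.80/CH11.95, CH8.63/CH11.95) and for each emissivity and water-vapor subrange.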

Maria Emmanuel;S. V. Sunilkumar;M. Muhsin;B. Suneel Kumar;N. Nagendra;P. R. Satheesh Chandran;Geetha Ramkumar;K. Rajeev; "Intercomparison of Cryogenic Frost-Point Hygrometer Observations With Radiosonde, SAPHIR, MLS, and Reanalysis Datasets Over Indian Peninsula," vol.56(11), pp.6290-6295, Nov. 2018. This paper assesses the performance of water vapor measurements by the Sondeur Atmosphérique du Profil d'Humidité Intertropicale par Radiométrie (SAPHIR), the microwave limb sounder (MLS), and global reanalysis water vapor data [Modern Era Retrospective analysis for Research and Applications (MERRA) and European Centre for Medium-Range Weather Forecasts reanalysis (ERA)-Interim] by comparing them with water vapor measured in situ by a cryogenic frost-point hygrometer (CFH) at two tropical stations, Trivandrum (8.5°N, 76.9°E) and Hyderabad (17.47°N, 78.58°E). Although the iMet-1 radiosonde overestimates the frost-point temperature (3%–5%) in the 4–10-km altitude region with respect to CFH, it agrees well with CFH observations in the upper and lower troposphere, with mean differences of <2% and ~1%, respectively. The SAPHIR–CFH intercomparison is done for the troposphere, and MLS, MERRA, and ERA-Interim are compared with CFH in the lower stratosphere. The SAPHIR and CFH comparison shows reasonable agreement between the datasets, with a relative humidity difference of about 15% in the lower and middle troposphere and a dry bias of ~40% in the upper troposphere. The intercomparison between CFH and MLS water vapor mixing ratios (WVMRs) shows a small wet bias (−10% to −20%) for the MLS in the lower stratospheric region between 100 and 50 hPa and a small dry bias (<10%) above that region. This paper shows that the MLS always underestimates when the CFH WVMR is greater than 6 ppmv and overestimates when the WVMR is less than 2 ppmv. The intercomparison of CFH with MERRA and ERA-Interim specific humidity shows results similar to those with MLS.

Susan Stillman;Xubin Zeng; "Evaluation of SMAP Soil Moisture Relative to Five Other Satellite Products Using the Climate Reference Network Measurements Over USA," vol.56(11), pp.6296-6305, Nov. 2018. Satellite platforms provide a unique opportunity to retrieve global soil moisture. The Level 3 soil moisture (L3SMP), enhanced Level 3 soil moisture (L3SMP-E), and Level 4 surface soil moisture (L4SM) products of the latest soil moisture satellite mission, Soil Moisture Active Passive (SMAP), are evaluated relative to five other satellite products using the Climate Reference Network (CRN), with more than 110 stations (each with three in situ probes) over the USA from 2009 to present. This large number of stations allows the categorization of SMAP performance by land cover type, complementing prior efforts based on the few core validation areas with many in situ observations within a single satellite pixel. The SMAP as well as the Aquarius products clearly outperform the other products. Over all land cover types, L3SMP, L3SMP-E, and L4SM are better in the summer than in the winter, and they perform best over short vegetation. L4SM has higher correlations with CRN than L3SMP over tall and short vegetation, whereas L3SMP has higher correlations over crops. On average, L3SMP-E performs as well as L3SMP. There is a mismatch between the point in situ measurements and satellite pixel retrievals, driven by subpixel precipitation variability, and its impact on these results is also assessed.

Shuaiyi Shi;Tianhai Cheng;Xingfa Gu;Hong Guo;Hao Chen;Ying Wang;Yu Wu; "Multisensor Data Synergy of Terra-MODIS, Aqua-MODIS, and Suomi NPP-VIIRS for the Retrieval of Aerosol Optical Depth and Land Surface Reflectance Properties," vol.56(11), pp.6306-6323, Nov. 2018. A novel multisensor synergy method, using data from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Terra and Aqua as well as the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership, is presented to retrieve optical atmosphere-surface properties. By adopting a three-sensor observation synergy, the proposed method can capture the multitemporal characteristics of aerosol optical depth (AOD) and the multidirectional characteristics of surface reflectance. In addition, the bidirectional reflectance distribution function (BRDF) can be derived at a daily scale by adopting the novel shape-function-constrained BRDF retrieval (SFCBR) method. The 550-nm AOD retrieval result of the proposed method is validated against AErosol RObotic NETwork (AERONET) measurements at the Beijing, XiangHe, Noto, and Gwangju_GIST sites, with R² equal to 0.78, 0.75, 0.70, and 0.75, respectively. Compared with the MODIS/VIIRS AOD official product, the proposed method shows a higher coverage rate (especially at the AERONET Beijing site, with an approximately 50% increase) with comparable accuracy. The expected error of the retrieved AOD from the proposed method is estimated as Δτ = ±0.05 ± 0.24τ. The correlation coefficients of the BRDF-derived albedo time series between the proposed method and the MODIS BRDF/Albedo product can reach up to 0.85, with an obvious improvement in temporal resolution from the SFCBR method. The average relative differences of the BRDF shape function between the retrieval result and the MODIS BRDF/Albedo product over all directions equal 0.026, 0.036, 0.037, and 0.016 at the AERONET Beijing, XiangHe, Noto, and Gwangju_GIST sites, respectively.

Ian Paynter;Daniel Genest;Edward Saenz;Francesco Peri;Zhan Li;Alan Strahler;Crystal Schaaf; "Quality Assessment of Terrestrial Laser Scanner Ecosystem Observations Using Pulse Trajectories," vol.56(11), pp.6324-6333, Nov. 2018. Considering the trajectories of pulses from terrestrial laser scanners (TLS) can provide refined models of occlusion and improve the assessment of observation quality in forests and other ecosystems. By considering the space traversed by light detection and ranging (lidar) pulses, we can separate empty regions of an ecosystem sample from unobserved regions of an ecosystem sample. We apply this method of TLS observation quality assessment, and analyze Compact Biomass Lidar 2 (CBL2) TLS observations of a single tree and of a deciduous forest stand. We show the contribution of information from each TLS scan to be inconsistent and the combination of multiple scans to have diminishing returns for new information, without guaranteeing complete coverage of a sample. We quantitatively investigate the effects of imposing information quality requirements on TLS sampling, for example, requiring minimum numbers of observations in each region or requiring regions to be observed from a minimum number of independent scans. We show empirically that rigid, predefined TLS sampling schemes, even with hypothetically dense coverage, cannot guarantee successful samples in geometrically complex systems such as forests. Through these methods, we lay the groundwork for on-the-fly assessment of observation quality according to several modeling-relevant metrics which enhance TLS ecosystem assessment. We also establish the value of flexible deployment options for TLS instruments, including the ability to deploy at a variety of heights.

Lior Gazit;Hagit Messer; "Sufficient Conditions for Reconstructing 2-D Rainfall Maps," vol.56(11), pp.6334-6343, Nov. 2018. The ground level rainfall at a given time is modeled as a 2-D spatial random process r(x, y), the rain field. Existing measurement equipment, such as rain gauges, weather stations, or recently proposed microwave links, samples r(x, y) spatially at specific points or along lines. Given these samples, our purpose is to reconstruct r(x, y). In this paper, we study the question: “under what conditions can a given topology of ground measurements guarantee reconstructability of the rain field?” Based on the assumption that rain fields are sparse, we present a statistical approach to this problem by first characterizing the statistics of the measurements, and then answering the question by applying methods from compressed sensing theory, in particular Donoho and Tanner’s phase transition diagram for sparse recovery. We conclude by suggesting a solution in the form of a simple diagram, allowing one to evaluate the potential reconstruction of r(x, y) at different resolutions without the need for computations.

Ruben Fernandez-Beltran;Antonio Plaza;Javier Plaza;Filiberto Pla; "Hyperspectral Unmixing Based on Dual-Depth Sparse Probabilistic Latent Semantic Analysis," vol.56(11), pp.6344-6360, Nov. 2018. This paper presents a novel approach for spectral unmixing of remotely sensed hyperspectral data. It exploits probabilistic latent topics in order to take advantage of the semantics pervading the latent topic space when identifying spectral signatures and estimating fractional abundances from hyperspectral images. Despite the demonstrated potential of topic models to uncover image semantics, they have merely been used in hyperspectral unmixing as a straightforward data decomposition process. This limits their actual capability to provide semantic representations of the spectral data. The proposed model, called dual-depth sparse probabilistic latent semantic analysis (DEpLSA), makes use of two different levels of topics to exploit the semantic patterns extracted from the initial spectral space in order to relieve the ill-posed nature of the unmixing problem. In other words, DEpLSA defines a first level of deep topics to capture the semantic representations of the spectra, and a second level of restricted topics to estimate endmembers and abundances over this semantic space. An experimental comparison is conducted using two standard topic models and seven state-of-the-art unmixing methods available in the literature. Our experiments, conducted using four different hyperspectral images, reveal that the proposed approach provides competitive advantages over available unmixing approaches.

John Ray Bergado;Claudio Persello;Alfred Stein; "Recurrent Multiresolution Convolutional Networks for VHR Image Classification," vol.56(11), pp.6361-6374, Nov. 2018. Classification of very high-resolution (VHR) satellite images has three major challenges: 1) inherent low intraclass and high interclass spectral similarities; 2) mismatching resolution of available bands; and 3) the need to regularize noisy classification maps. Conventional methods have addressed these challenges by adopting separate stages of image fusion, feature extraction, and postclassification map regularization. These processing stages, however, do not jointly optimize the classification task at hand. In this paper, we propose a single-stage framework embedding the processing stages in a recurrent multiresolution convolutional network trained in an end-to-end manner. The feedforward version of the network, called FuseNet, aims to match the resolution of the panchromatic and multispectral bands in a VHR image using convolutional layers with corresponding downsampling and upsampling operations. Contextual label information is incorporated into FuseNet by means of a recurrent version called ReuseNet. We compared FuseNet and ReuseNet against the use of separate processing steps for both image fusion (e.g., pansharpening and resampling through interpolation) and map regularization (such as conditional random fields). We carried out our experiments on a land-cover classification task using a Worldview-03 image of Quezon City, Philippines, and the International Society for Photogrammetry and Remote Sensing 2-D semantic labeling benchmark data set of Vaihingen, Germany. FuseNet and ReuseNet surpass the baseline approaches in both the quantitative and qualitative results.

Guangjian Yan;Yiyi Tong;Kai Yan;Xihan Mu;Qing Chu;Yingji Zhou;Yanan Liu;Jianbo Qi;Linyuan Li;Yelu Zeng;Hongmin Zhou;Donghui Xie;Wuming Zhang; "Temporal Extrapolation of Daily Downward Shortwave Radiation Over Cloud-Free Rugged Terrains. Part 1: Analysis of Topographic Effects," vol.56(11), pp.6375-6394, Nov. 2018. Estimation of daily downward shortwave radiation (DSR) is of great importance in global energy budget and climatic modeling. The combination of satellite-based instantaneous measurements and temporal extrapolation models is the most feasible way to capture daily radiation variations at large scales. However, previous studies have not paid enough attention to topographic effects, and simple temporal extrapolation methods have been applied directly to rugged terrains, which cover a large portion of the land surface. This work, divided into two parts, aims at analyzing the topographic uncertainties of existing models and proposing a better method, based on a mountain radiative transfer (MRT) model, to calculate daily DSR. As the first part, this paper analyzes the spatiotemporal variations of DSR influenced by topographic effects and checks the applicability of three temporal extrapolation methods on cloud-free days. Considering that clouds also have a strong influence on solar radiation, cloud-free days are chosen for a targeted analysis of topographic effects on DSR. Three indices, the coefficient of variation, the entropy-based dispersion coefficient (CH), and the sill of the semivariogram, are put forward to give a quantitative description of spatial heterogeneity. Our results show that topography can dramatically strengthen the spatial heterogeneity of DSR. The CH index is advantageous for quantifying spatial heterogeneity, as it offers a tradeoff between accuracy and efficiency. Spatial heterogeneity distorts the daily variation of DSR. Application of the extrapolation methods in rugged terrains leads to an overestimation of the daily average DSR of up to 60 W/m² and a maximum instantaneous DSR error of 200 W/m² on cloud-free days. This paper makes a quantitative analysis of topographic effects under different spatiotemporal conditions, which lays the foundation for developing a new extrapolation method.

Sai-Nan Shi;Peng-Lang Shui; "Sea-Surface Floating Small Target Detection by One-Class Classifier in Time-Frequency Feature Space," vol.56(11), pp.6395-6411, Nov. 2018. This paper presents a feature-based detector for sea-surface floating small targets. Over integration times of the order of seconds, target returns exhibit time-frequency (TF) characteristics different from sea clutter. The normalized smoothed pseudo-Wigner–Ville distribution (SPWVD) is proposed to enhance the TF characteristics of target returns; it is computed from the SPWVDs of the time series at the cell under test (CUT) and at reference cells around the CUT. The differences between target returns and the TF pattern of sea clutter are concentrated in the normalized SPWVD. From it, the ridge integration (RI) is computed, and the significant TF points of each time slice form a binary image. The number of connected regions and the maximum size of the connected regions in the binary image are extracted and combined with the RI into a 3-D feature vector. Owing to the unavailability of feature vector samples of radar returns with targets, a one-class classifier with a controllable false alarm rate is constructed from the feature vector samples of sea clutter by a fast convex hull learning algorithm. As a result, a new feature-based detector is designed. It is compared with the tri-feature-based detector using amplitude and Doppler features and the fractal-based detector using the Hurst exponent of the amplitude time series on the recognized IPIX radar database for floating small target detection. The results show that a significant improvement in detection performance is attained.

Lucilla Alfonsi;Gabriella Povero;Luca Spogli;Claudio Cesaroni;Biagio Forte;Cathryn N. Mitchell;Robert Burston;Sreeja Vadakke Veettil;Marcio Aquino;Virginia Klausner;Marcio T. A. H. Muella;Michael Pezzopane;Alessandra Giuntini;Ingrid Hunstad;Giorgiana De Franceschi;Elvira Musicò;Marco Pini;Vinh La The;Hieu Tran Trung;Asnawi Husin;Sri Ekawati;Charisma Victoria de la Cruz-Cayapan;Mardina Abdullah;Noridawaty Mat Daud;Le Huy Minh;Nicolas Floury; "Analysis of the Regional Ionosphere at Low Latitudes in Support of the Biomass ESA Mission," vol.56(11), pp.6412-6424, Nov. 2018. Biomass is a spaceborne polarimetric P-band (435 MHz) synthetic aperture radar (SAR) in a dawn–dusk low Earth orbit. Its principal objective is to measure biomass content and change in all the Earth’s forests. The ionosphere introduces Faraday rotation on every pulse emitted by a low-frequency SAR, and scintillations when the pulse traverses a region of plasma irregularities, consequently impacting the quality of the imaging. Some of these effects are due to the total electron content (TEC) and its gradients along the propagation path. Therefore, an accurate assessment of ionospheric morphology and dynamics is necessary to properly understand the impact on image quality, especially in the equatorial and tropical regions. To this end, we have conducted an in-depth investigation of the significant noise budget introduced by the two crests of the equatorial ionospheric anomaly (EIA) over Brazil and Southeast Asia. This paper is characterized by a novel approach to a SAR-oriented ionospheric assessment, aimed at detecting and identifying spatial and temporal TEC gradients, including scintillation effects and traveling ionospheric disturbances, by means of Global Navigation Satellite Systems ground-based monitoring stations.
The novelty of this approach resides in the customization of the information about the impact of the ionosphere on SAR imaging as derived by local dense networks of ground instruments operating during the passes of Biomass spacecraft. The results identify the EIA crests as the regions hosting the bulk of irregularities potentially causing degradation on SAR imaging. Interesting insights about the local characteristics of low-latitudes ionosphere are also highlighted.

Jifang Pei;Yulin Huang;Zhichao Sun;Yin Zhang;Jianyu Yang;Tat-Soon Yeo; "Multiview Synthetic Aperture Radar Automatic Target Recognition Optimization: Modeling and Implementation," vol.56(11), pp.6425-6439, Nov. 2018. Multiview synthetic aperture radar (SAR) images can provide much richer information for automatic target recognition (ATR) than a single-view image. It is desirable to find optimal SAR platform flight paths and acquire a sequence of SAR images from appropriate views, so that multiview SAR ATR can be carried out accurately and efficiently. In this paper, a novel optimization framework for multiview SAR ATR is proposed and implemented. The geometry of the multiview SAR ATR is modeled according to the recognition mission and flight environment. Then, the multiview SAR ATR is abstracted and transformed into a constrained multiobjective optimization problem, with objective functions considering the tradeoffs among recognition performance, efficiency, and security. A specific approach based on a convolutional neural network ensemble and the constrained nondominated sorting genetic algorithm II is employed to solve the multiobjective optimization, and optimal flight paths and corresponding imaging viewpoints are obtained. The SAR sensor can thus choose an applicable flight path to acquire the multiview SAR images from different tradeoff solutions according to application requirements. Finally, accurate recognition results can be obtained based on those multiview SAR images. Extensive experiments have shown the validity and superiority of the proposed optimization framework for multiview SAR ATR.

Juan Mario Haut;Mercedes E. Paoletti;Javier Plaza;Jun Li;Antonio Plaza; "Active Learning With Convolutional Neural Networks for Hyperspectral Image Classification Using a New Bayesian Approach," vol.56(11), pp.6440-6461, Nov. 2018. Hyperspectral imaging is a widely used technique in remote sensing in which an imaging spectrometer collects hundreds of images (at different wavelength channels) for the same area on the surface of the earth. In the last two decades, several methods (unsupervised, supervised, and semisupervised) have been proposed to deal with the hyperspectral image classification problem. Supervised techniques have been generally more popular, despite the fact that it is difficult to collect labeled samples in real scenarios. In particular, deep neural networks, such as convolutional neural networks (CNNs), have recently shown a great potential to yield high performance in the hyperspectral image classification. However, these techniques require sufficient labeled samples in order to perform properly and generalize well. Obtaining labeled data is expensive and time consuming, and the high dimensionality of hyperspectral data makes it difficult to design classifiers based on limited samples (for instance, CNNs overfit quickly with small training sets). Active learning (AL) can deal with this problem by training the model with a small set of labeled samples that is reinforced by the acquisition of new unlabeled samples. In this paper, we develop a new AL-guided classification model that exploits both the spectral information and the spatial-contextual information in the hyperspectral data. The proposed model makes use of recently developed Bayesian CNNs. Our newly developed technique provides robust classification results when compared with other state-of-the-art techniques for hyperspectral image classification.
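
The acquisition step of such a Bayesian-CNN active learner can be sketched with NumPy. The predictive-entropy criterion and the Monte Carlo dropout sampling shown here are a generic illustration of active learning with Bayesian networks, not necessarily the paper's exact acquisition function:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean class distribution over T Monte Carlo
    dropout passes; mc_probs has shape (T, N, C)."""
    mean_probs = mc_probs.mean(axis=0)                        # (N, C)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

def select_queries(mc_probs, k):
    """Indices of the k most uncertain samples to label next."""
    return np.argsort(predictive_entropy(mc_probs))[-k:][::-1]

# Toy example: 3 pixels, 2 classes, 4 stochastic forward passes.
mc = np.stack([np.array([[0.99, 0.01], [0.5, 0.5], [0.9, 0.1]])] * 4)
print(select_queries(mc, 1))  # the ambiguous pixel 1 is queried first
```

The labeled set is then augmented with the queried samples and the network is retrained, closing the active learning loop.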

Randhir Singh;Rajat Acharya; "Development of a New Global Model for Estimating One-Minute Rainfall Rate," vol.56(11), pp.6462-6468, Nov. 2018. One-minute rain rate exceeded for 0.01% of an average year (known as R0.01) is an important parameter required in rain fade models for planning both satellite and terrestrial links, especially in the microwave and millimeter-wave bands. This paper presents the development of a new global model for estimating high-resolution R0.01. The model is developed using the Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) data, ERA-Interim reanalyzed total precipitable water, and the cumulative distribution functions of the measured R0.01 via a genetic algorithm. The newly proposed model performs better than the widely used International Telecommunication Union Radiocommunication Sector (ITU-R) P.837-6 model, particularly over the tropical regions, where ITU-R P.837-6 strongly underestimates R0.01. Overall, when compared to ITU-R P.837-6, the R0.01 estimated by the proposed model is improved by ~15%. Furthermore, the proposed model can provide R0.01 with a much higher spatial resolution (~5 km) than ITU-R P.837-6 (~110 km). Another key advantage of the proposed model over ITU-R P.837-6 is that its inputs are readily available from meteorological databases as well as from space-based observing systems. This paper also investigated the long-term (1981–2016) trends in R0.01. The analysis revealed significant increasing trends in R0.01 over most parts of the globe, with India and the northern part of South America exhibiting the strongest (<inline-formula> <tex-math notation="LaTeX">$\sim 0.5~\text {mm}\cdot \text {h}^{-1}\cdot \text {year}^{-1}$ </tex-math></inline-formula>) increasing trends. It is anticipated that R0.01 obtained with the newly proposed model will have large implications in rain attenuation modeling for planning both terrestrial and earth–satellite microwave links.

Gerald Baier;Cristian Rossi;Marie Lachaise;Xiao Xiang Zhu;Richard Bamler; "A Nonlocal InSAR Filter for High-Resolution DEM Generation From TanDEM-X Interferograms," vol.56(11), pp.6469-6483, Nov. 2018. This paper presents a nonlocal interferometric synthetic aperture radar (InSAR) filter with the goal of generating digital elevation models (DEMs) of higher resolution and accuracy from bistatic TanDEM-X strip map interferograms than with the processing chain used in production. The currently employed boxcar multilooking filter naturally decreases the resolution and has inherent limitations on what level of noise reduction can be achieved. The proposed filter is specifically designed to account for the inherent diversity of natural terrain by setting several filtering parameters adaptively. In particular, it considers the local fringe frequency and scene heterogeneity, ensuring proper denoising of interferograms with considerable underlying topography as well as urban areas. A comparison using synthetic and TanDEM-X bistatic strip map data sets with existing InSAR filters shows the effectiveness of the proposed techniques, most of which could readily be integrated into existing nonlocal filters. The resulting DEMs outclass the ones produced with the existing global TanDEM-X DEM processing chain by effectively increasing the resolution from 12 to 6 m and lowering the noise level by roughly a factor of two.

Eric B. Putman;Sorin C. Popescu; "Automated Estimation of Standing Dead Tree Volume Using Voxelized Terrestrial Lidar Data," vol.56(11), pp.6484-6503, Nov. 2018. Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The specific objectives of this paper were to: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned with terrestrial lidar in dense forests; 2) assess the volume estimation algorithm’s accuracy with respect to large and small branches; and 3) characterize the impact of occlusion with regards to volume estimation accuracy and the ability of the algorithm to mitigate challenges posed by lower quality point clouds. A voxel-based volume estimation algorithm, “TreeVolX,” was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models. The concept of vertical point cloud resampling is introduced to facilitate the modeling of lower quality point clouds with a small voxel size. TreeVolX estimated large and small branch volume with a root-mean-square error of 7.3% and 13.8%, respectively, and the adaptive contour interpolation was shown to significantly reduce volume estimation errors in the case of significantly occluded point clouds.

Temesgen Gebrie Yitayew;Wolfgang Dierking;Dmitry V. Divine;Torbjørn Eltoft;Laurent Ferro-Famil;Anja Rösel;Jean Negrel; "Validation of Sea-Ice Topographic Heights Derived From TanDEM-X Interferometric SAR Data With Results From Laser Profiler and Photogrammetry," vol.56(11), pp.6504-6520, Nov. 2018. In this paper, the retrieval of sea-ice surface heights from the interferometric TanDEM-X data is investigated. The data were acquired over fast and drifting ice in Fram Strait located between Greenland and Svalbard. Additional measurements of the sea-ice surface topography were carried out using a stereo camera and a laser altimeter. The comparison of the surface elevation retrieved from TanDEM-X imagery with the results of the stereo camera measurements revealed that sea-ice ridges greater than 0.5 m can be estimated with a root-mean-square error of 0.3 m or less with the error decreasing as a function of ridge height. Although the helicopter-borne laser data are only available as 1-D profiles with a much higher across-track spatial resolution than the TanDEM-X data, they proved to be useful for the validation. The need for multilook averaging to reduce the phase noise is identified as the main challenge in achieving the spatial resolution necessary for retrieving sea-ice surface topography using synthetic aperture radar interferometry.

Yansheng Li;Yongjun Zhang;Xin Huang;Jiayi Ma; "Learning Source-Invariant Deep Hashing Convolutional Neural Networks for Cross-Source Remote Sensing Image Retrieval," vol.56(11), pp.6521-6536, Nov. 2018. Due to the urgent demand for remote sensing big data analysis, large-scale remote sensing image retrieval (LSRSIR) attracts increasing attention from researchers. Generally, LSRSIR can be divided into two categories as follows: uni-source LSRSIR (US-LSRSIR) and cross-source LSRSIR (CS-LSRSIR). More specifically, US-LSRSIR means the inquiry remote sensing image and images in the searching data set come from the same remote sensing data source, whereas CS-LSRSIR is designed to retrieve remote sensing images with a similar content to the inquiry remote sensing image that are from a different remote sensing data source. In the literature, US-LSRSIR has been widely exploited, but CS-LSRSIR is rarely discussed. In practical situations, remote sensing images from different kinds of remote sensing data sources are continually increasing, so there is a great motivation to exploit CS-LSRSIR. Therefore, this paper focuses on CS-LSRSIR. To cope with CS-LSRSIR, this paper proposes source-invariant deep hashing convolutional neural networks (SIDHCNNs), which can be optimized in an end-to-end manner using a series of well-designed optimization constraints. To quantitatively evaluate the proposed SIDHCNNs, we construct a dual-source remote sensing image data set that contains eight typical land-cover categories and 10 000 dual samples in each category. Extensive experiments show that the proposed SIDHCNNs can yield substantial improvements over several baselines involving the most recent techniques.

Yi Ren;Shi-Wei Zhao;Yongpin Chen;Decheng Hong;Qing Huo Liu; "Simulation of Low-Frequency Scattering From Penetrable Objects in Layered Medium by Current and Charge Integral Equations," vol.56(11), pp.6537-6546, Nov. 2018. This paper presents a novel, accurate, and stable current and charge integral equation (CCIE) solver for the low-frequency scattering of penetrable objects in layered medium (LM). To the best of our knowledge, this is the first time that CCIE has been extended to LM simulations at low frequency. In order to integrate the matrix-friendly LM Green’s functions (LMGFs) into CCIE, we have rederived them to define new quasi-vector/scalar potentials, which are able to annihilate the frequency singularity in the original LMGFs. Moreover, an effective preconditioner is adopted to improve the conditioning of the impedance matrices; in comparison with other preconditioners, it performs much better. The excellent performance of this new CCIE solver is then demonstrated by numerical experiments.

Zahra Sadeghi;Mohammad Javad Valadan Zoej;Andrew Hooper;Juan M. Lopez-Sanchez; "A New Polarimetric Persistent Scatterer Interferometry Method Using Temporal Coherence Optimization," vol.56(11), pp.6547-6555, Nov. 2018. While polarimetric persistent scatterer InSAR (PSI) is an effective technique for increasing the number and quality of selected PS pixels, existing methods are suboptimal; a polarimetric channel combination is selected for each pixel based either on amplitude, which works well only for high-amplitude scatterers such as man-made structures, or on the assumption that pixels in a surrounding window all have the same scattering mechanism. In this paper, we present a new polarimetric PSI method in which we use a phase-based criterion to select the optimal channel for each pixel, which can work well even in nonurban environments. This algorithm is based on polarimetric optimization of temporal coherence, as defined in the Stanford Method for PS (StaMPS), to identify the scatterers with stable phase characteristics. We form all possible copolar and cross-polar interferograms from the available polarimetric channels and find the optimum coefficients for each pixel using defined search spaces to optimize the temporal coherence. We apply our algorithm, PolStaMPS, to an area in the Tehran basin that is covered primarily by vegetation. Our results confirm that the algorithm substantially improves on StaMPS performance, increasing the number of PS pixels by 48%, 80%, and 82% with respect to HH+VV, VV, and HH channels, respectively, and increasing the signal-to-noise ratio of selected pixels.
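
The phase-based channel selection can be illustrated with a small grid search: for one pixel, combine two polarimetric channels with a real projection angle and keep the combination whose phase stack has the highest temporal coherence. The combination form and the search grid below are simplifications of PolStaMPS (which also optimizes over cross-polar terms and larger search spaces):

```python
import numpy as np

def temporal_coherence(phase_residuals):
    """StaMPS-style temporal coherence |<exp(j*phi)>| over the stack."""
    return np.abs(np.exp(1j * phase_residuals).mean())

def best_channel_combination(hh, vv, n_alpha=91):
    """Grid-search s = cos(a)*HH + sin(a)*VV for the projection that
    maximizes temporal coherence of one pixel's interferogram stack."""
    alphas = np.linspace(0.0, np.pi / 2, n_alpha)
    scores = [temporal_coherence(np.angle(np.cos(a) * hh + np.sin(a) * vv))
              for a in alphas]
    i = int(np.argmax(scores))
    return alphas[i], scores[i]

# A pixel whose VV phase is stable but whose HH phase is pure noise:
rng = np.random.default_rng(1)
vv = np.exp(1j * 0.1 * rng.standard_normal(60))
hh = np.exp(1j * rng.uniform(-np.pi, np.pi, 60))
alpha, gamma = best_channel_combination(hh, vv)
print(alpha > np.pi / 4, gamma > 0.9)  # the search favors the VV channel
```

Pixels whose optimized coherence exceeds a threshold are then kept as persistent scatterers.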

Wei Pu;Junjie Wu;Yulin Huang;Xiaodong Wang;Jianyu Yang;Wenchao Li;Haiguang Yang; "Nonsystematic Range Cell Migration Analysis and Autofocus Correction for Bistatic Forward-looking SAR," vol.56(11), pp.6556-6570, Nov. 2018. In general, autofocus methods integrated with frequency-domain imaging algorithms are instrumental to obtain a well-focused bistatic forward-looking synthetic aperture radar (BFSAR) image in the presence of motion errors. Nevertheless, before applying autofocus methods to correct the azimuth phase errors, range cell migration (RCM) should be eliminated by the RCM correction (RCMC) procedure in frequency-domain imaging algorithms. With motion errors being taken into account, there always exists some residual nonsystematic RCM (NsRCM), which refers to the residual migration components after the RCMC procedure. For the conventional side-looking SAR, NsRCM is caused by motion errors. For BFSAR, by contrast, the NsRCM originates both from motion errors before RCMC and from the NsRCM amplified by the RCMC procedure. In this paper, we analyze the different types of NsRCM in BFSAR imaging and their relationship. Based on the analyses, we propose an autofocus NsRCM correction scheme for BFSAR imagery using frequency-domain imaging algorithms that can eliminate the range-dependent NsRCM. The proposed scheme consists of three steps. First, for the BFSAR data after range compression and RCMC, a division procedure is carried out in the azimuth direction. The subblocks with the highest signal-to-clutter ratio along the range direction are selected after the azimuth segmentation procedure. Second, for the selected subblocks, the total NsRCM is estimated based on the minimum-entropy criterion. Based on the estimation results, different parts of the NsRCM are obtained by solving an ordinary differential equation. Third, a two-step compensation of the NsRCM is executed to reach the spatially variant correction. Simulations and experimental results are provided to demonstrate that our proposed scheme is effective for BFSAR imaging.

Mark S. Haynes;Elaine Chapin;Dustin M. Schroeder; "Geometric Power Fall-Off in Radar Sounding," vol.56(11), pp.6571-6585, Nov. 2018. This paper reports the analysis of the geometric power fall-off of Fresnel zone scattering in radar sounding. Radar sounders can take advantage of strong coherent scattering from Fresnel zones at nadir which grow with sensor altitude. The strength of the signal and the actual rate of power fall-off, however, depend heavily on the surface properties. We first use the radar equation to separate geometric versus scattering mechanisms driving <inline-formula> <tex-math notation="LaTeX">$R^{2}$ </tex-math></inline-formula>, <inline-formula> <tex-math notation="LaTeX">$R^{3}$ </tex-math></inline-formula>, or <inline-formula> <tex-math notation="LaTeX">$R^{4}$ </tex-math></inline-formula> fall-off. The Fresnel zone for planetary surfaces, where body curvature has an effect, is derived and its implications discussed. We show the impact of Gaussian and fractal surface roughness on the exponent of power fall-off and the transition from coherent to incoherent scattering. This is done in simulation and analytically to derive coherence loss functions. Finally, we study the effect of incoherent area fraction within the Fresnel zone. These results are intended to be used in radar link budgets, performance metrics, and scientific interpretation of sounding data.
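
For intuition, the flat-surface first Fresnel zone radius at nadir grows with the square root of altitude. A minimal sketch follows; the paper's curvature-corrected derivation for planetary bodies is not reproduced here, and the 9 MHz sounder parameters are illustrative:

```python
import math

def fresnel_zone_radius(wavelength_m, altitude_m):
    """First Fresnel zone radius over a flat surface at nadir:
    r_F = sqrt(lambda * R / 2)."""
    return math.sqrt(wavelength_m * altitude_m / 2.0)

# Illustrative: a 9 MHz sounder (wavelength ~33.3 m) at 500 km altitude.
wavelength = 3e8 / 9e6
print(round(fresnel_zone_radius(wavelength, 500e3)))  # ~2887 m
```

Because this coherent footprint grows with altitude while incoherently scattering area behaves differently, the dominant mechanism sets the exponent of the range fall-off.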

Andrea Marinoni;Javier Plaza;Antonio Plaza;Paolo Gamba; "Estimating Nonlinearities in p-Linear Hyperspectral Mixtures," vol.56(11), pp.6586-6595, Nov. 2018. Accurately estimating the elements in Earth observations is crucial when assessing specific features such as air quality index, water pollution, or urbanization process behavior. Moreover, physical–chemical composition can be retrieved from hyperspectral images when proper spectral unmixing architectures are employed. Specifically, when linear and nonlinear combinations of endmembers (pure spectral components) are accurately characterized, hyperspectral unmixing plays a key role in understanding and quantifying phenomena occurring over the instantaneous field-of-view. Thus, reliable detection of nonlinear reflectance behavior can play a key role in enhancing hyperspectral unmixing performance. In this paper, two new methods for adaptive design of mixture models for hyperspectral unmixing are introduced. One of the methods relies on exploiting geometrical features of hyperspectral signatures in terms of nonorthogonal projections onto the space induced by the endmembers’ spectra. Then, an iterative process aims at understanding the order of local nonlinearity that is displayed by each endmember over every pixel. An improved version of an artificial neural network-based approach for nonlinearity order information is also considered and compared. Experimental results show that the proposed approaches are actually able to retrieve thorough information on the nature of the nonlinear effects over the image, while providing excellent performance in reconstructing the given data sets.

Ismael Hernández-Carrasco;Véronique Garçon;Jöel Sudre;Christoph Garbe;Hussein Yahia; "Increasing the Resolution of Ocean pCO2 Maps in the South Eastern Atlantic Ocean Merging Multifractal Satellite-Derived Ocean Variables," vol.56(11), pp.6596-6610, Nov. 2018. A new methodology has been developed in order to improve the description of the spatial and temporal variability of poorly resolved oceanic variables from other well-observed high-resolution oceanic variables. The method is based on the cross-scale inference of information, incorporating the common features of different multifractal high-resolution variables into a coarser one. A validation exercise has been performed based on the outputs of the coupled physical-biogeochemical Regional Ocean Modeling System adapted to the eastern boundary upwelling systems at two spatial resolutions. Once the algorithm has been proved to be effective in increasing the spatial resolution of modeled partial pressure of CO2 at the surface ocean (pCO2), we have investigated the capability of our methodology when it is applied to remote sensing data, focusing on the improvement of the temporal description. In this regard, we have inferred daily pCO2 maps at high resolution (4 km, i.e., 1/24°) fusing monthly pCO2 data at low resolution (100 km, i.e., 1°) with the small-scale features contained in daily high-resolution maps of satellite sea surface temperature and Chlorophyll-a. The algorithm has been applied to the South Eastern Atlantic Ocean, opening the possibility of obtaining an accurate quantification of the CO2 fluxes in relevant coastal regions, such as the eastern boundary upwelling systems. Outputs of our algorithm have been compared with in situ measurements, showing that daily maps inferred from monthly products are on average 6 <inline-formula> <tex-math notation="LaTeX">$\mu $ </tex-math></inline-formula>atm closer to the in situ values than the original coarser monthly maps. Furthermore, values of pCO2 have been improved at points close to the coast with respect to the original input data.

Sheng Nie;Cheng Wang;Xiaohuan Xi;Guoyuan Li;Shezhou Luo;Xuebo Yang;Pu Wang;Xiaoxiao Zhu; "Exploring the Influence of Various Factors on Slope Estimation Using Large-Footprint LiDAR Data," vol.56(11), pp.6611-6621, Nov. 2018. The accurate estimation of within-footprint slope is very important for measuring earth’s surface characteristics using satellite light detection and ranging (LiDAR) data. Several models have previously been proposed for slope estimation; however, these models have limitations in either accuracy or applicability. Therefore, the main purpose of this paper is to explore the influence of various factors (e.g., ground vertical extent, footprint size, footprint shape, and footprint orientation) on slope estimation to better estimate the within-footprint slope using large-footprint waveform LiDAR data. The results indicated that the absolute slope error due to the coupling effect of ground vertical extent and footprint size increased with an increase in the ratio of ground vertical extent to footprint size, while the relative slope error showed the opposite trend. The slope error caused by footprint shape was relatively low when the footprint eccentricity was small. However, the slope error due to footprint shape grew rapidly when the footprint eccentricity became larger; thus, it is essential to fully take into account the influence of footprint shape on within-footprint slope estimation. In addition, the results suggest that the slope error changed regularly based on the intersection angle between footprint orientation and terrain aspect. This paper also provided guidance for the determination of an easy and practical model for within-footprint slope estimation. The determination of the best model depends on the value of the intersection angle. Once the intersection angle is given, the best model can easily be determined. Using the best model, the within-footprint terrain slope can be estimated with high accuracy.
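
The basic geometric relationship such factor analyses build on can be sketched as follows. This is the simplest slope model (vertical ground-return extent over footprint diameter, circular footprint, slope along the gradient), offered as an illustrative baseline rather than the paper's recommended formulation:

```python
import math

def slope_from_waveform(ground_extent_m, footprint_diameter_m):
    """Within-footprint slope from the vertical extent of the ground
    return: tan(slope) = extent / diameter (simplest circular-footprint
    model; footprint eccentricity and orientation are ignored here)."""
    return math.degrees(math.atan(ground_extent_m / footprint_diameter_m))

# A 7 m ground-return extent over a 70 m footprint:
print(round(slope_from_waveform(7.0, 70.0), 1))  # ~5.7 degrees
```

Elliptical footprints and their orientation relative to the terrain aspect change the effective diameter, which is where the errors studied in the paper enter.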

Chen Zhao;Zezong Chen;Chao He;Fei Xie;Xi Chen; "A Hybrid Beam-Forming and Direction-Finding Method for Wind Direction Sensing Based on HF Radar," vol.56(11), pp.6622-6629, Nov. 2018. Recent studies indicate that the poor antenna sidelobe level of phased-array high-frequency (HF) radars degrades the performance of Bragg ratio estimation for wind direction measurements. To explore this issue, this paper improves the previous model of wind direction estimation for phased-array HF radars, and the effect of unsatisfactory array patterns on Bragg ratio estimation is theoretically analyzed. Moreover, a hybrid beam-forming and direction-finding method, in which wind direction is measured by reducing the influence of unsatisfactory array patterns on Bragg ratio estimation, is proposed for multifrequency HF radars installed along the coast of Zhejiang Province in China. The procedure for implementing this scheme is also presented in detail. Wind directions derived from 8.05-, 10.7-, 16.8-, and 19.2-MHz radar data are compared with measurements collected with anemometers at three sampling locations to verify the proposed method, and the measurements with root-mean-square errors from 19.5° to 26.6° show that this strategy is effective for wind direction measurements. We recommend adopting the proposed method or using similar approaches to estimate wind direction based on HF radars, especially in situations where the array pattern is not sufficiently narrow.

Jing Liu;Qinhuo Liu;Hua Li;Yongming Du;Biao Cao; "An Improved Microwave Semiempirical Model for the Dielectric Behavior of Moist Soils," vol.56(11), pp.6630-6644, Nov. 2018. Soil semiempirical dielectric models (SEMs) are powerful, and they are generally considered a useful hybrid of both empirical and physical models. In this paper, the Wang–Schmugge dielectric model is improved to more accurately estimate the relative complex dielectric constants (CDCs) of moist soils. Instead of the Debye relaxation spectrum of liquid water located outside of the soil (i.e., free out-of-soil water) adopted in the Wang–Schmugge model, the Debye relaxation formula related to the free-water component inside the soil [i.e., free soil water (FSW)], which is correlated with the soil texture, is employed in the improved SEM. In addition, the effective conductivity loss term related to both soil texture and soil moisture is introduced to explain the ionic conductivity losses of FSW. Since the soil moisture influence is reduced at high frequencies, the effective conductivity loss term related to only the soil texture is also analyzed for 14–18 GHz. As in the Wang–Schmugge model, the relative CDC of bound soil water varies with the soil volumetric moisture content when the soil moisture is lower than the maximum bound water fraction in the new model, which takes a different approach than the Mironov mineralogy-based SEM. The proposed model obtains better fitting results than the three most widely employed SEMs. The improved model exhibits a significantly improved accuracy with a higher correlation coefficient (<inline-formula> <tex-math notation="LaTeX">$R^{2}$ </tex-math></inline-formula>), a closer 1:1 relationship, and a lower root-mean-square error, including in the L-band, and especially in the imaginary part of the L-band.
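
The Debye relaxation at the core of such models takes the generic form below. The parameter values and the way the effective conductivity enters are illustrative: the paper ties the static permittivity, relaxation frequency, and conductivity loss to soil texture and moisture, which this sketch does not model:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_static, eps_inf, relax_freq_hz, sigma_eff):
    """Relative complex permittivity of free water: Debye relaxation
    plus an effective-conductivity (ionic) loss term."""
    relaxation = (eps_static - eps_inf) / (1 + 1j * freq_hz / relax_freq_hz)
    conduction_loss = 1j * sigma_eff / (2 * math.pi * freq_hz * EPS0)
    return eps_inf + relaxation - conduction_loss

# Pure-water-like parameters evaluated at L-band (1.4 GHz):
eps = debye_permittivity(1.4e9, 80.0, 4.9, 17e9, 0.1)
print(round(eps.real, 1), round(-eps.imag, 1))  # real part ~79.5
```

The imaginary (loss) part grows with the conductivity term, which is why the conductivity losses of free soil water matter most in the L-band imaginary part, as the abstract notes.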

Risheng Huang;Xiaorun Li;Liaoying Zhao; "Hyperspectral Unmixing Based on Incremental Kernel Nonnegative Matrix Factorization," vol.56(11), pp.6645-6662, Nov. 2018. Kernel nonnegative matrix factorization (KNMF) is an extension of NMF designed to capture nonlinear dependence features in a data matrix through kernel functions. In KNMF, the size of the kernel matrices is closely tied to the input data matrix, and their calculation consumes a large amount of memory and computing resources. When applied to large-scale hyperspectral data, KNMF often hits a memory bottleneck and may run out of memory. When dealing with dynamically acquired data, KNMF requires recomputation over the whole data set whenever newly acquired data arrive, which imposes huge memory and computing requirements. To reduce memory usage and improve computational efficiency when applying KNMF to large-scale and dynamic hyperspectral data, we extend KNMF by introducing partition matrix theory and considering the relationships among the dividing blocks. The decomposition results of the hyperspectral data are derived incrementally from much smaller matrices containing the formerly achieved results and the newly arrived data blocks. In this paper, we propose an incremental KNMF (IKNMF) to reduce the computing requirements for large-scale data in hyperspectral unmixing. An improved IKNMF (IIKNMF) is also proposed to further improve the abundance results of IKNMF. Experiments are conducted on both synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed methods can effectively save memory resources without degrading the unmixing performance and that the proposed IIKNMF can achieve better abundance results than IKNMF and KNMF.
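
The block-partitioning idea behind such incremental variants can be sketched as follows: when a new block of pixels arrives, only the old-vs-new and new-vs-new kernel blocks are computed and stitched onto the stored kernel, instead of recomputing the kernel over the whole data set. The NMF update rules themselves are omitted, and the RBF kernel and data shapes are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian kernel matrix between the columns (pixels) of A and B."""
    d2 = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-gamma * d2)

def grow_kernel(K_old, X_old, X_new, gamma=0.5):
    """Extend K(X_old, X_old) to the full kernel matrix when a new
    block of pixels arrives, computing only the two new blocks."""
    K_on = rbf_kernel(X_old, X_new, gamma)   # old-vs-new block
    K_nn = rbf_kernel(X_new, X_new, gamma)   # new-vs-new block
    return np.block([[K_old, K_on], [K_on.T, K_nn]])

# The incrementally grown kernel matches a from-scratch computation.
rng = np.random.default_rng(0)
X_old, X_new = rng.random((6, 40)), rng.random((6, 10))
K = grow_kernel(rbf_kernel(X_old, X_old), X_old, X_new)
X_all = np.hstack([X_old, X_new])
print(np.allclose(K, rbf_kernel(X_all, X_all)))  # True
```

Only the stored 40×40 block is reused; the incremental cost is the two new blocks, which is the memory saving the abstract describes.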

Runhai Feng;Stefan M. Luthi;Dries Gisolf;Erika Angerer; "Reservoir Lithology Determination by Hidden Markov Random Fields Based on a Gaussian Mixture Model," vol.56(11), pp.6663-6673, Nov. 2018. In this paper, geological prior information is incorporated in the classification of reservoir lithologies after the adoption of Markov random fields (MRFs). The prediction of hidden lithologies is based on measured observations, such as seismic inversion results, which are associated with the latent categorical variables, based on the assumption of Gaussian distributions. Compared with other statistical methods, such as the Gaussian mixture model or <inline-formula> <tex-math notation="LaTeX">$k$ </tex-math></inline-formula>-Means, which do not take spatial relationships into account, the hidden MRFs approach can connect the same or similar lithologies horizontally while ensuring a geologically reasonable vertical ordering. It is, therefore, able to exclude randomly appearing lithologies caused by errors in the inversion. The prior information consists of a Gibbs distribution function and transition probability matrices. The Gibbs distribution connects the same or similar lithologies internally, which does not need a geological definition from the outside. The transition matrices provide preferential transitions between different lithologies, and an estimation of them implicitly depends on the depositional environments and juxtaposition rules between different lithologies. Analog cross sections from the subsurface or outcrop studies can contribute to the construction of these matrices by a simple counting procedure.
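
A minimal 1-D stand-in for the hidden-MRF step is iterated conditional modes (ICM): each sample takes the label minimizing its Gaussian negative log-likelihood plus a Potts penalty for disagreeing with its neighbors. The paper works on 2-D sections with transition probability matrices; this sketch only shows why the spatial prior removes isolated, randomly appearing lithologies:

```python
import numpy as np

def icm_1d(obs, means, stds, beta=1.0, n_iter=10):
    """Iterated conditional modes on a 1-D trace: Gaussian negative
    log-likelihood per class plus a Potts penalty (beta per disagreeing
    neighbor). Initialized from the pure Gaussian-mixture assignment."""
    labels = np.argmin([(obs - m) ** 2 / (2 * s ** 2)
                        for m, s in zip(means, stds)], axis=0)
    for _ in range(n_iter):
        for i in range(len(obs)):
            costs = []
            for k in range(len(means)):
                nll = ((obs[i] - means[k]) ** 2 / (2 * stds[k] ** 2)
                       + np.log(stds[k]))
                potts = sum(labels[j] != k
                            for j in (i - 1, i + 1) if 0 <= j < len(obs))
                costs.append(nll + beta * potts)
            labels[i] = int(np.argmin(costs))
    return labels

# Two lithologies (means 0 and 5); the 2.4 sample would be class 0 under
# a plain Gaussian mixture, but its neighbors pull it to class 1.
obs = np.array([0.1, -0.2, 0.0, 4.8, 5.2, 2.4, 5.1, 4.9])
print(icm_1d(obs, [0.0, 5.0], [1.0, 1.0]).tolist())  # [0, 0, 0, 1, 1, 1, 1, 1]
```

The Potts term plays the role of the Gibbs prior here; preferential transitions between specific lithologies would replace the symmetric penalty with a transition matrix.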

Dušan Gleich; "Optimal-Dual-Based <inline-formula> <tex-math notation="LaTeX">$l_1$ </tex-math></inline-formula> Analysis for Speckle Reduction of SAR Data," vol.56(11), pp.6674-6685, Nov. 2018. Synthetic aperture radar (SAR) images are affected by a speckle noise, which is a consequence of random fluctuations in the return signal from an object that is no bigger than a single image processing element and it is caused by coherent processing of backscattered signals from multiple distributed targets. Speckle within SAR images can be reduced using filtering methods. To preserve features within the SAR images, this paper proposes a noise removal based on scene and SAR data modeling. The proposed method is a model-based total variational optimization with the minimization of a cost function. The cost function consisted of energy and data fidelity terms. The energy term was modeled using optimal-dual-based <inline-formula> <tex-math notation="LaTeX">$l_{1}$ </tex-math></inline-formula> analysis. The data fidelity term modeled the amplitude of the SAR data, which was approximated using a Nakagami distribution. The minimization of the cost function was solved using a quasi-Newton approach. The experimental results showed good results in SAR feature preservation. The proposed method was evaluated numerically using quality metrics for synthetic generated data and real amplitude SAR data.

Mallik S. Mahmud;Torsten Geldsetzer;Stephen E. L. Howell;John J. Yackel;Vishnu Nandan;Randall K. Scharien; "Incidence Angle Dependence of HH-Polarized C- and L-Band Wintertime Backscatter Over Arctic Sea Ice," vol.56(11), pp.6686-6698, Nov. 2018. Synthetic aperture radar (SAR) incidence angle has a significant effect on the microwave backscatter from sea ice. This paper investigates the incidence angle dependence of C- and L-band HH-polarized microwave backscatter coefficient over Arctic first-year sea ice (FYI) and multiyear sea ice (MYI) in winter. Advanced Land Observation Satellite Phased Array type L-band SAR (L-band) and RADARSAT-2 (C-band) images are used to derive ice type-specific incidence angle dependencies calculated using linear regression models. For L-band, mean ice type-specific incidence angle dependencies for FYI and MYI are −0.21 and −0.30 dB/1°, respectively; and for C-band, they are −0.22 and −0.16 dB/1°, respectively. To validate our results, we calculated root-mean-square deviation (RMSD) by comparing the ice type-specific dependence from 2010 with individual dependencies from 2009 based on ice types and frequencies. The RMSD is found to be smaller than the standard deviation of ice type-specific dependencies for both frequencies. The RMSD values for the L-band incidence angle dependencies are 0.03 and 0.04 dB/1° for FYI and MYI, respectively. For C-band, the RMSD values for the FYI and MYI dependencies are 0.03 and 0.01 dB/1°, respectively. Subsequently, we demonstrate that after applying incidence angle normalization, the variability of C- and L-band SAR backscatter reduces and the separability of ice types increases substantially.
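
The normalization applied before comparing scenes is a simple linear projection to a common reference angle using regression slopes such as those above; the 30° reference angle and the example values here are illustrative:

```python
def normalize_backscatter(sigma0_db, incidence_deg, slope_db_per_deg,
                          ref_deg=30.0):
    """Project sigma0 (dB) to a common reference incidence angle using
    the ice-type-specific linear slope b (dB/deg):
    sigma0_ref = sigma0 - b * (theta - theta_ref)."""
    return sigma0_db - slope_db_per_deg * (incidence_deg - ref_deg)

# An L-band FYI pixel (slope -0.21 dB/deg) observed at 40 deg incidence:
print(round(normalize_backscatter(-18.0, 40.0, -0.21), 2))  # -15.9
```

Using the slope for the wrong ice type biases the normalized value, which is why the ice-type-specific dependencies are estimated separately.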

Lichao Mou;Xiao Xiang Zhu; "Vehicle Instance Segmentation From Aerial Image and Video Using a Multitask Learning Residual Fully Convolutional Network," vol.56(11), pp.6699-6711, Nov. 2018. Object detection and semantic segmentation are two main themes in object retrieval from high-resolution remote sensing images, which have recently achieved remarkable performance by surfing the wave of deep learning and, more notably, convolutional neural networks. In this paper, we are interested in a novel, more challenging problem of vehicle instance segmentation, which entails identifying, at a pixel level, where the vehicles appear as well as associating each pixel with a physical instance of a vehicle. In contrast, vehicle detection and semantic segmentation each only concern one of the two. We propose to tackle this problem with a semantic boundary-aware multitask learning network. More specifically, we utilize the philosophy of residual learning to construct a fully convolutional network that is capable of harnessing multilevel contextual feature representations learned from different residual blocks. We theoretically analyze and discuss why residual networks can produce better probability maps for pixelwise segmentation tasks. Then, based on this network architecture, we propose a unified multitask learning network that can simultaneously learn two complementary tasks, namely, segmenting vehicle regions and detecting semantic boundaries. The latter subproblem is helpful for differentiating “touching” vehicles that are usually not correctly separated into instances. Currently, the data sets with a pixelwise annotation for vehicle extraction are the ISPRS data set and the IEEE GRSS DFC2015 data set over Zeebrugge, which specialize in semantic segmentation. Therefore, we built a new, more challenging data set for vehicle instance segmentation, called the Busy Parking Lot Unmanned Aerial Vehicle Video data set, and we make our data set available so that it can be used to benchmark future vehicle instance segmentation algorithms.

Gong Cheng;Zhenpeng Li;Junwei Han;Xiwen Yao;Lei Guo; "Exploring Hierarchical Convolutional Features for Hyperspectral Image Classification," vol.56(11), pp.6712-6722, Nov. 2018. Hyperspectral image (HSI) classification is an active and important research task driven by many practical applications. To leverage deep learning models, especially convolutional neural networks (CNNs), for HSI classification, this paper proposes a simple yet effective method to extract hierarchical deep spatial features for HSI classification by exploring the power of off-the-shelf CNN models, without any additional retraining or fine-tuning on the target data set. To obtain better classification accuracy, we further propose a unified metric learning-based framework to alternately learn discriminative spectral–spatial features, which have better representation capability, and train support vector machine (SVM) classifiers. To this end, we design a new objective function that explicitly embeds a metric learning regularization term into SVM training. The metric learning regularization term is used to learn a powerful spectral–spatial feature representation by fusing spectral features and deep spatial features, which has small intraclass scatter but large interclass separation. By transforming HSI data into a new spectral–spatial feature space through CNN and metric learning, we can pull the pixels from the same class closer, while pushing pixels from different classes farther away. In the experiments, we comprehensively evaluate the proposed method on three commonly used HSI benchmark data sets. State-of-the-art results are achieved when compared with the existing HSI classification methods.

Hu Yang;Jun Zhou;Fuzhong Weng;Ninghai Sun;Kent Anderson;Quanhua Liu;Edward J. Kim; "Developing Vicarious Calibration for Microwave Sounding Instruments Using Lunar Radiation," vol.56(11), pp.6723-6733, Nov. 2018. Accurate global observations from space are critical for global climate change study. However, the atmospheric temperature trend derived from spaceborne microwave instruments remains a subject of debate, due mainly to the uncertainty in characterizing the long-term drift of instrument calibration. Thus, a highly stable target with well-known microwave radiation is required to evaluate long-term calibration stability. This paper develops a new model to simulate the lunar emission at microwave frequencies, and the model is then used for monitoring the stability of the Advanced Technology Microwave Sounder (ATMS) onboard the Suomi NPP satellite. It is shown that the ATMS cold space view of lunar radiation agrees well with the model simulation during the past five years, and that this instrument is capable of serving as the reference instrument for atmospheric temperature trend studies and of connecting the previous generation of microwave sounders from NOAA-15 to the future Joint Polar Satellite System Microwave Sounder onboard the NOAA-20 satellite.

Stephan Palm;Rainer Sommer;Uwe Stilla; "Mobile Radar Mapping—Subcentimeter SAR Imaging of Roads," vol.56(11), pp.6734-6746, Nov. 2018. In this paper, we present a strategy for focusing ultrahigh-resolution synthetic aperture radar (SAR) data for mobile radar mapping. We illustrate the related theoretical background and the required extensions to the imaging method based on backprojection techniques. The influence of potential errors in estimating a correct geometry with respect to the imaging quality is investigated in detail by point target simulations. As backprojection techniques require precise knowledge of the topography in close range, the new strategy instantly uses the GPS/INS data of the trajectory to define a suitable digital elevation model of the illuminated scene. We have tested the strategy by driving on conventional roads with an active frequency-modulated continuous wave radar system operating at 300 GHz. Different reference targets were placed in the scene, and the accuracy of the method was evaluated. The results experimentally reveal that the lower terahertz band is capable of subcentimeter SAR imaging in mobile mapping scenarios at very high quality. We have shown that narrow cracks in the asphalt of roads can be detected and fine-scale objects of millimeter size can be displayed. Geometric distortions in the SAR images are significantly reduced, allowing measurements of infrastructure. The output data can finally be transferred to conventional 3-D point cloud software for further processing.

Bin Yang;Bin Wang; "Band-Wise Nonlinear Unmixing for Hyperspectral Imagery Using an Extended Multilinear Mixing Model," vol.56(11), pp.6747-6762, Nov. 2018. Most nonlinear mixture models and unmixing methods in the literature assume implicitly that the degrees of multiple scatterings at each band are the same. However, this commonly contradicts the practical situation: spectral mixing is intrinsically wavelength dependent, and the nonlinear intensity varies across bands. In this paper, a band-wise nonlinear unmixing algorithm is proposed to circumvent this drawback. Pixel-dependent probability parameters of the recent multilinear mixing model that represent different orders of nonlinear contributions are vectorized. Therefore, each band gets a scalar probability parameter which explicitly corresponds to the nonlinear intensity at that band. Before solving the extended model, abundances’ sparsity and probability parameters’ smoothness are exploited to build two physical constraints. After incorporating them into the objective function as regularization terms, the issue of local minima can be well alleviated to produce better solutions. Finally, the alternating direction method of multipliers is applied to solve the constrained optimization problem and implement the nonlinear spectral unmixing. Experiments are further carried out with current model-based simulated data, physics-based synthetic data of virtual vegetated areas, and real hyperspectral remote sensing images, to provide a more reasonable validation for the developed model and algorithm. In comparison with state-of-the-art nonlinear unmixing methods, this method performs better in explaining the band-dependent nonlinear mixing effect for improving the unmixing accuracy.

Xiong Xu;Xiaohua Tong;Antonio Plaza;Jun Li;Yanfei Zhong;Huan Xie;Liangpei Zhang; "A New Spectral-Spatial Sub-Pixel Mapping Model for Remotely Sensed Hyperspectral Imagery," vol.56(11), pp.6763-6778, Nov. 2018. In this paper, a new joint spectral–spatial subpixel mapping model is proposed for hyperspectral remotely sensed imagery. Conventional approaches generally use an intermediate step based on the derivation of fractional abundance maps obtained after a spectral unmixing process, and thus the rich spectral information contained in the original hyperspectral data set may not be utilized fully. In this paper, the concept of a subpixel abundance map, which gives the abundance fraction of each subpixel belonging to a given class, is introduced. This allows us to directly connect the original (coarser) hyperspectral image with the final subpixel result. Furthermore, the proposed approach incorporates the spectral information contained in the original hyperspectral imagery and the concept of spatial dependence to generate a final subpixel mapping result. The proposed approach has been experimentally evaluated using both synthetic and real hyperspectral images, and the obtained results demonstrate that the method achieves better results than seven other subpixel mapping methods. The numerical comparisons are based on different indexes such as the overall accuracy and the CPU time. Moreover, the obtained results are statistically significant at 95% confidence.

Hossein Aghababaee;Gianfranco Fornaro;Gilda Schirinzi; "Phase Calibration Based on Phase Derivative Constrained Optimization in Multibaseline SAR Tomography," vol.56(11), pp.6779-6791, Nov. 2018. This paper deals with the compensation of phase miscalibration in the general context of tomographic synthetic aperture radar image focusing. Phase errors are typically independent from one acquisition to another, thus leading to a spreading and defocusing in the multidimensional (3-D, 4-D, and 5-D) imaging space. Coping with this problem in the presence of volumetric scattering is generally a complex issue. In this paper, we consider a phase calibration approach whose advantage, with respect to classical phase calibration algorithms, is that it requires neither the identification of a reference target, nor specific assumptions about the unknown phase function, nor a priori information about the terrain topography. The novelty of the proposed phase miscalibration estimation and compensation method is related to its ability to avoid unwanted and uncontrollable vertical shifts in the focused image. The estimation of the calibration phase is performed by optimizing the contrast or the entropy of the vertical profile with the constraint of a zero phase derivative. Such a constraint preserves the output height distribution. Experimental results on simulated and real data are included to demonstrate the effectiveness of the proposed method.

Juan Mario Haut;Ruben Fernandez-Beltran;Mercedes E. Paoletti;Javier Plaza;Antonio Plaza;Filiberto Pla; "A New Deep Generative Network for Unsupervised Remote Sensing Single-Image Super-Resolution," vol.56(11), pp.6792-6810, Nov. 2018. Super-resolution (SR) brings an excellent opportunity to improve a wide range of different remote sensing applications. SR techniques are concerned with increasing the image resolution while providing finer spatial details than those captured by the original acquisition instrument. Therefore, SR techniques are particularly useful to cope with the increasing demand of remote sensing imaging applications requiring fine spatial resolution. Even though different machine learning paradigms have been successfully applied in SR, more research is required to improve the SR process without the need for external high-resolution (HR) training examples. This paper proposes a new convolutional generator model to super-resolve low-resolution (LR) remote sensing data from an unsupervised perspective. That is, the proposed generative network is able to initially learn relationships between the LR and HR domains through several convolutional, downsampling, batch normalization, and activation layers. Then, the data are symmetrically projected to the target resolution while guaranteeing a reconstruction constraint over the LR input image. An experimental comparison is conducted using 12 different unsupervised SR methods over different test images. Our experiments reveal the potential of the proposed approach to improve the resolution of remote sensing imagery.

Andrea Virgilio Monti-Guarnieri;Maria Antonia Brovelli;Marco Manzoni;Mauro Mariotti d’Alessandro;Monia Elisa Molinari;Daniele Oxoli; "Coherent Change Detection for Multipass SAR," vol.56(11), pp.6811-6822, Nov. 2018. This paper focuses on the detection, from a stack of repeated-pass interferometric synthetic aperture radar (SAR) images, of changes that cause a target to completely lose correlation between one epoch and another. This can be the consequence of human activities, such as construction, destruction, and agricultural activities, or of hazards, such as earthquakes, landslides, or flooding, affecting buildings or terrain. The millimetric sensitivity of SAR makes it valuable for detecting such changes. This paper proposes two coherent change detection methods: a space coherent, time incoherent one and a full space and time coherent one, both based on the generalized likelihood ratio (LR) test. A preliminary validation of the method is provided by processing two Sentinel-1 data stacks of the 2016 Central Italy earthquake and by comparing the results with the map of damaged buildings in Amatrice and Accumoli made by the Copernicus Emergency Management Service.

Yunfeng Lv;Di Wu;Zhongqiu Sun; "Effect of Black Carbon Concentration on the Reflection Property of Snow: A Comparison With Model Results," vol.56(11), pp.6823-6840, Nov. 2018. Snow has a very high reflection property when compared with all other natural surfaces on Earth; thus, a small amount of contamination (30 ng/g) can dramatically reduce the reflectance of snow. To quantify the effect of black carbon (BC) concentrations on the directional reflectance factors of snow, we deposited BC concentrations onto snow surfaces at natural levels (98–4095 ng/g) and compared the measurement results with those from a theoretical model. It was found that increasing BC concentrations decreased the reflectance factor of snow and changed its distribution pattern. Moreover, our data provided valuable verification of the snow reflection model, which has previously been used to characterize the reflectance of snow. The model did not match our measurements well; for example, it predicted less anisotropy than was observed. Subsequently, a specular kernel combined with two free parameters was proposed to account for the forward and backward scattering of snow. The improved model successfully characterized the observed variability in the reflection measurements of snow with different BC concentration levels under field conditions, and its inverted parameter (M) had the potential to estimate BC concentrations. The improved model also had the ability to simulate the spectral reflectance factor of snow with low BC concentrations (i.e., smaller than 118 ng/g) over a wide range of viewing zenith angles. This paper provides an additional and effective method for studying the angular and spectral reflection properties of snow.

Prabu Dheenathayalan;David Small;Ramon F. Hanssen; "3-D Positioning and Target Association for Medium-Resolution SAR Sensors," vol.56(11), pp.6841-6853, Nov. 2018. Associating a radar scatterer to a physical object is crucial for the correct interpretation of interferometric synthetic aperture radar measurements. Yet, especially for medium-resolution imagery, this is notoriously difficult and dependent on the accurate 3-D positioning of the scatterers. Here, we investigate the 3-D positioning capabilities of ENVISAT medium-resolution data. We find that the data are perturbed by range-and-epoch-dependent timing errors and calibration offsets. Calibration offsets are estimated to be about 1.58 m in azimuth and 2.84 m in range and should be added to ASAR products to improve geometric calibration. The timing errors involve a bistatic offset, atmospheric path delay, solid earth tides, and local oscillator drift. This way, we achieve an unbiased positioning capability in 2-D, while in 3-D, a scatterer was located at a distance of 28 cm from the true location. 3-D precision is now expressed as an error ellipsoid in local coordinates. Using the Bhattacharyya metric, we associate radar scatterers to real-world objects. Interpreting deformation of individual infrastructure is shown to be feasible for this type of medium-resolution data.
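The Bhattacharyya metric mentioned above has a standard closed form when scatterer positions and object positions are modeled as Gaussians with error covariances (the 3-D error ellipsoids of the abstract). A minimal NumPy sketch of that standard formula; the function name and test values are illustrative, not the authors' implementation:

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian distributions
    N(mu1, cov1) and N(mu2, cov2); zero iff the distributions coincide."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)  # Mahalanobis-like part
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu, cov = np.zeros(3), np.eye(3)
print(bhattacharyya_distance(mu, cov, mu, cov))           # 0.0 for identical distributions
print(bhattacharyya_distance(mu, cov, np.ones(3), cov))   # 0.375 for a unit offset per axis
```

Associating a scatterer to the real-world object with the smallest such distance weighs the positional offset against both uncertainty ellipsoids, rather than using raw Euclidean distance.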

Yan Soldo;David M. Le Vine;Alexandra Bringer;Paolo de Matthaeis;Roger Oliva;Joel T. Johnson;Jeffrey R. Piepmeier; "Location of Radio-Frequency Interference Sources Using the SMAP L-Band Radiometer," vol.56(11), pp.6854-6866, Nov. 2018. The Soil Moisture Active/Passive (SMAP) satellite mission measures Earth’s radiation in the protected portion of the spectrum at 1.413 GHz (L-band) to retrieve geophysical quantities of the surface, such as soil moisture and the frozen/thawed state of the soil. The presence of radio-frequency interference (RFI) in this band is significant and impacts the quality of SMAP measurements. Knowing the location of the sources of RFI is important, because it can help to identify the source itself and can also be used to develop strategies to mitigate the impact of the RFI on the data. This paper presents an algorithm that takes advantage of the viewing geometry of SMAP to locate sources of RFI. The results are validated using known locations of RFI sources and by comparison with the measurements of Soil Moisture and Ocean Salinity (SMOS) and Aquarius, two other satellite missions with L-band microwave radiometers operating in the protected band. Comparison with RFI of known location suggests that the algorithm is accurate to 1–2 km. The median distance between the locations reported by SMOS and this algorithm is 2.27 km. A study of the relationship between the localization error and the number of observations of RFI sources shows that the median localization error is about 2 km with 12 observations and about 1 km with 30 observations.

* "IEEE Access," vol.56(11), pp.6867-6867, Nov. 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "IEEE Transactions on Geoscience and Remote Sensing information for authors," vol.56(11), pp.C3-C3, Nov. 2018.* Provides instructions and guidelines to prospective authors who wish to submit manuscripts.

* "IEEE Transactions on Geoscience and Remote Sensing institutional listings," vol.56(11), pp.C4-C4, Nov. 2018.* Advertisements.

IEEE Geoscience and Remote Sensing Letters - new TOC (2018 November 15) [Website]

* "Front Cover," vol.15(11), pp.C1-C1, Nov. 2018.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Letters publication information," vol.15(11), pp.C2-C2, Nov. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of contents," vol.15(11), pp.1635-1636, Nov. 2018.* Presents the table of contents for this issue of the publication.

Leonardo A. Agüero Guzmán;Elias M. Ovalle;Rodrigo A. Reeves; "Measurement of the Ionospheric Reflection Height of an HF Wave in Vertical Incidence With a Resolution of Minutes," vol.15(11), pp.1637-1641, Nov. 2018. In this letter, we present a prototype of an RF signal receiver operating in the HF band, whose design uses software-defined radio technology based on field-programmable gate array (FPGA) reconfigurable hardware and the open-source gnuradio software. The purpose of this letter is to improve the measurement rate of the reflection height at a fixed frequency, currently obtained at a rate of one measurement every 15 min using the IPS-42 ionosonde. The proposed method uses a pulse generated by the Canadian Advanced Digital Ionosonde as the transmitted signal. For the receiving section, the FPGA-based “Universal Software Radio Peripheral 1” was directly connected to a PC, where the return signals were analyzed by gnuradio. The measurements are taken at an approximately 1-min cadence and are validated by comparing them with 15-min measurements taken with a colocated IPS-42 ionosonde. An acquisition rate on the order of one measurement per minute is of interest for studying a number of physical processes, e.g., traveling ionospheric disturbances, disturbances generated by seismic events, and meteorological processes.

Haifeng Zhang;Alexander V. Babanin;Alexander Ignatov;Boris Petrenko; "Initial Evaluation of the Sensor-Specific Error Statistics in the NOAA Advanced Clear-Sky Processor for Oceans SST System: Diurnal Variation Signals Captured," vol.15(11), pp.1642-1646, Nov. 2018. The newly designed sensor-specific error statistics (SSES) for sea surface temperature (SST) in the National Oceanic and Atmospheric Administration (NOAA) Advanced Clear-Sky Processor for Oceans (ACSPO) system is evaluated using six months (September 15, 2017–March 14, 2018) of ACSPO SST derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) flown onboard the Suomi National Polar-orbiting Partnership satellite (SST-VIIRS). The SSES bias field is shown to generally capture the temperature differences between the surface SST-VIIRS and the in situ SSTs (0.2–1 m) in both daytime and nighttime, especially at low-mid latitudes. Daytime SSES can capture, in addition to several other physical factors, the diurnal variation differences between SST-VIIRS and in situ SST over most of the global ocean. The SSES biases also accurately pick up the cooling effect on satellite SST retrievals caused by the Saharan dust over the eastern tropical–subtropical Atlantic Ocean. However, at high latitudes, especially over the Southern Ocean where in situ SST measurements are sparse, and strong winds are persistent, the current ACSPO SSES biases may be suboptimal.

Anna Mateo-Sanchis;Jordi Muñoz-Marí;Adrián Pérez-Suay;Gustau Camps-Valls; "Warped Gaussian Processes in Remote Sensing Parameter Estimation and Causal Inference," vol.15(11), pp.1647-1651, Nov. 2018. This letter introduces warped Gaussian process (WGP) regression in remote sensing applications. WGP models output observations as a parametric nonlinear transformation of a GP. The parameters of such a prior model are then learned via standard maximum likelihood. We show the good performance of the proposed model for the estimation of oceanic chlorophyll content from multispectral data, vegetation parameters (chlorophyll, leaf area index, and fractional vegetation cover) from hyperspectral data, and in the detection of the causal direction in a collection of 28 bivariate geoscience and remote sensing causal problems. The model consistently performs better than the standard GP and the more advanced heteroscedastic GP model, both in terms of accuracy and of providing more sensible confidence intervals.

Dominik Rains;Gabrielle J. M. De Lannoy;Hans Lievens;Jeffrey P. Walker;Niko E. C. Verhoest; "SMOS and SMAP Brightness Temperature Assimilation Over the Murrumbidgee Basin," vol.15(11), pp.1652-1656, Nov. 2018. With the launch of the Soil Moisture and Ocean Salinity (SMOS) mission in 2009 and the Soil Moisture Active-Passive (SMAP) mission in 2015, a wealth of L-band brightness temperature (Tb) observations has become available. In this letter, SMOS and SMAP Tbs are assimilated separately into the Community Land Model over the Murrumbidgee basin in south-east Australia from April 2015 to August 2017. To overcome the seasonal Tb observation-minus-forecast biases, Tb anomalies from the seasonal climatology are assimilated. The use of climatologies derived from either SMOS or SMAP observations using either 2 years or 7 years of data yields nearly identical results, highlighting the limited sensitivity to the climatology computation and their interchangeability. The temporal correlation between soil moisture data assimilation results and in situ observations is slightly improved for top-layer soil moisture (+0.04) and for root-zone soil moisture (+0.05). The soil moisture anomaly correlation improves moderately for the top-layer soil moisture (+0.15), with a smaller positive impact on the root zone (+0.05).
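Assimilating Tb anomalies from a seasonal climatology, as described above, amounts to subtracting a day-of-year mean from each observation before assimilation. A minimal sketch of that anomaly computation; the array shapes and the flat climatology are illustrative, not the study's actual data:

```python
import numpy as np

def tb_anomaly(tb, day_of_year, climatology):
    """Subtract a day-of-year seasonal climatology (length-366 array, K)
    from brightness-temperature observations (K) to obtain anomalies."""
    return tb - climatology[day_of_year - 1]

clim = np.full(366, 250.0)  # flat 250 K climatology, for illustration only
print(tb_anomaly(np.array([252.0, 249.0]), np.array([10, 200]), clim))  # anomalies of +2 K and -1 K
```

Because only departures from the seasonal cycle are assimilated, the seasonal observation-minus-forecast bias cancels, which is why climatologies built from either sensor or record length behave interchangeably.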

Akhilesh S. Nair;J. Indu; "A Coupled Land Surface and Radiative Transfer Models Based on Relief Correction for a Reliable Land Data Assimilation Over Mountainous Terrain," vol.15(11), pp.1657-1661, Nov. 2018. This letter presents a new approach to incorporate topographic relief effects into a land data assimilation system over mountainous terrain. The conventional radiative transfer model (RTM) used for assimilation of microwave brightness temperature (Tb) is subject to systematic bias owing to its flat-earth model assumption. This is important for a direct Tb assimilation system, since a difference between simulated and observed Tb may degrade the assimilation. Here, we consider three crucial relief effects, namely: 1) change in local incidence angle; 2) rotation of the plane of polarization; and 3) effect of pixels in shadow not visible to the radiometer. Results indicate that the RTM formulation with relief effects (Topo) significantly reduces the bias as compared to the conventional RTM (Flat). The simulated Tb for the Topo case shows improved sensitivity toward soil moisture as compared to the Flat case. The proposed land surface model (LSM)–RTM assimilation framework has immense potential for the initialization of the LSM for seasonal climate prediction over mountainous terrain.

Yuanheng Sun;Huazhong Ren;Tianyuan Zhang;Chengye Zhang;Qiming Qin; "Crop Leaf Area Index Retrieval Based on Inverted Difference Vegetation Index and NDVI," vol.15(11), pp.1662-1666, Nov. 2018. Leaf area index (LAI), an important parameter describing a crop canopy structure and its growth status, can be estimated from remote sensing data by statistical methods involving vegetation indices (VIs). This letter reports the development of a new VI, the inverted difference vegetation index (IDVI), for crop LAI retrieval. The IDVI can overcome the saturation issue of the normalized difference vegetation index (NDVI) at high LAI values and exhibits robust insensitivity to crop leaf water and chlorophyll content. By combining the IDVI and NDVI with a scaling factor, we constructed a novel statistical regression model with parameters that can be calibrated to a specific region to estimate the LAI. Validations on simulated data and in situ observations show that the proposed retrieval method with the IDVI is stable for low and high LAIs and obtains better results than the empirical method involving the NDVI at the regional scale. Findings in this letter will benefit future agricultural applications.
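The NDVI against which the new index is combined above is a standard ratio of near-infrared and red reflectances; the IDVI formula itself is not given in the abstract, so only the NDVI baseline is sketched here, with illustrative reflectance values:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectances.
    Bounded in [-1, 1]; known to saturate over dense canopies (high LAI)."""
    return (nir - red) / (nir + red)

print(ndvi(0.45, 0.08))  # ~0.698 for dense green vegetation
print(ndvi(0.20, 0.15))  # ~0.143 for sparse cover
```

The saturation at high LAI noted in the abstract follows directly from this form: once NIR dominates, further canopy growth barely changes the ratio, which is the gap the IDVI is designed to fill.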

Shuangquan Chen;Qing Wei;Libin Liu;Xiang-Yang Li; "Data-Driven Attenuation Compensation Via a Shaping Regularization Scheme," vol.15(11), pp.1667-1671, Nov. 2018. Seismic prospecting is one of the most important geophysical survey methods for understanding the earth’s structure. Properly compensating the anelastic attenuation of seismic data helps to clearly characterize the subsurface structure. Inverse Q-filtering is one of the most efficient ways to compensate for anelastic attenuation caused by the earth’s structure. Stability is the most challenging task in inverse Q-filtering. In this letter, we introduce a shaping regularization method to stabilize the amplitude compensation operator in the inverse Q-filtering. The stabilization operator can be used simultaneously with a phase correction operator. To further improve the robustness of our method to noise, we adopt an adaptive tapering window to control the compensation frequency components according to the frequency bandwidth of the seismic data. These schemes can accentuate the attenuated amplitude through Q-filtering and improve the stability of our method to noise. To demonstrate the effectiveness of our method, we first apply it to synthetic examples and then to real seismic data. Both results illustrate that the proposed shaping regularization inverse Q-filtering is an efficient adaptive data-driven inverse Q-filtering method.
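Why stabilization is needed: the ideal inverse-Q gain grows exponentially with frequency, so it amplifies high-frequency noise without bound. A minimal sketch of one common stabilized operator, in which the ideal gain is replaced by a bounded rational form; this illustrates the generic stabilization idea, not the authors' shaping-regularization scheme, and the parameter values are illustrative:

```python
import numpy as np

def stabilized_gain(freq_hz, traveltime_s, q, sigma2=0.01):
    """Stabilized inverse-Q amplitude gain. The ideal gain exp(+pi*f*t/Q)
    is replaced by lam / (lam**2 + sigma2), with lam = exp(-pi*f*t/Q) the
    attenuation factor, capping amplification of noise-dominated bands
    at 1 / (2 * sqrt(sigma2))."""
    lam = np.exp(-np.pi * freq_hz * traveltime_s / q)
    return lam / (lam**2 + sigma2)

# Gain rises with frequency, then rolls off instead of exploding:
print(stabilized_gain(np.array([0.0, 30.0, 120.0]), 1.0, 50.0))
```

Multiplying the trace spectrum by this gain compensates attenuation at signal frequencies while the denominator term suppresses the unstable high-frequency tail.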

T. Scuccato;L. Carrer;F. Bovolo;L. Bruzzone; "Compensating Earth Ionosphere Phase Distortion in Spaceborne VHF Radar Sounders for Subsurface Investigations," vol.15(11), pp.1672-1676, Nov. 2018. Spaceborne low-frequency and wide bandwidth radar sounders are a promising technology to regularly investigate Earth’s icy and arid regions at global scale. However, the Earth ionosphere distorts the radar signal, impacting performance parameters of the radar system such as subsurface resolution. One of the most relevant distortions that a sounder signal in the lower part of the very high-frequency (VHF) band (e.g., 40–50 MHz) encounters is the distortion of the phase component, which could become mission critical if not properly compensated. Low-frequency and high fractional bandwidth radar systems are particularly affected by this issue. Previous works on radar sounder ionosphere phase distortion compensation addressed the Martian ionosphere and used techniques based on the Taylor series expansion. In this letter, we focus on the Earth ionosphere and we exploit a recently proposed ionosphere compensation technique based on the Legendre orthogonal polynomials expansion, which proved to be more accurate than the compensation based on Taylor expansion. Simulations show that the method allows a nominal compensation of the phase distortions under realistic ionosphere scenarios expected during the acquisitions. Furthermore, it proved to be accurate and robust for total electron content conditions expected during nighttime for all the geomagnetic latitudes. The results confirm that the method can accurately compensate the distorting effects on the phase component of a spaceborne VHF radar sounder.

Haitao Ma;Teng Wang;Yue Li;Yuqi Meng; "A Time Picking Method for Microseismic Data Based on LLE and Improved PSO Clustering Algorithm," vol.15(11), pp.1677-1681, Nov. 2018. Time picking is of great concern in the processing of microseismic data. However, traditional methods based on the time/frequency domain cannot pick the first arrival time accurately at low signal-to-noise ratio. Moreover, traditional time picking methods based on clustering are sensitive to the selection of initial clustering centers and tend to converge to local optima. To solve the above problems, we propose a time picking method for microseismic data based on locally linear embedding (LLE) and an improved particle swarm optimization (PSO) clustering algorithm. First, the LLE algorithm obtains the inherent characteristics and the rules hidden in high-dimensional data by calculating Euclidean distances and reconstruction weights between microseismic data points; the input is thereby represented in a low-dimensional form. Then, the improved PSO clustering algorithm is used to select the optimal clustering centers from the low-dimensional data through a global search method. After that, the low-dimensional data can be classified into a noise cluster and a signal cluster by the K-means algorithm. Finally, the initial time of the signal cluster can be considered as the first arrival time of the microseismic data. The experimental results show that the accuracy of the proposed method is higher than that of the improved PSO clustering algorithm, the Akaike information criterion method, and the short- and long-time window ratio method (short-time window averaging/long-time window averaging).

Sanyi Yuan;Shangxu Wang;Chunmei Luo;Tieyi Wang; "Inversion-Based 3-D Seismic Denoising for Exploring Spatial Edges and Spatio-Temporal Signal Redundancy," vol.15(11), pp.1682-1686, Nov. 2018. Seismic data of increasingly high quality are required as the degree of exploration continuously improves. From the viewpoint of inversion, the utilization of more information is an effective way to improve the signal-to-noise ratio of seismic data. In this letter, we adopt simultaneous sparsity constraints on the first-order differences of signals along the time direction and two spatial directions, described by minimizing the Cauchy function, as a combined constraint (or regularization) term imposed on the time-domain data misfit to propose an inversion-based 3-D seismic denoising method. In this way, the redundancies among time slices and seismic sections along two spatial directions are simultaneously considered, and the edges along the spatial directions can be preserved. Through analyzing the first-order derivative of the sum of the data misfit term and the designed combined regularization term (or the objective function), we derive that the relationship between data and desired signal samples in the range of the first-order neighborhood can be expressed as a linear system with seven data-dependent coefficients. Furthermore, it can be inferred that the sparsity constraints on signal differences along different dimensional directions of 3-D data have complementary functions of noise reduction and signal preservation. We use a 3-D synthetic data set, a 3-D real poststack data set, and a 3-D real prestack data set to demonstrate that the proposed method is an effective amplitude-preserving denoising tool with an acceptable computational cost.

Omar M. Saad;Koji Inoue;Ahmed Shalaby;Lotfy Samy;Mohammed S. Sayed; "Automatic Arrival Time Detection for Earthquakes Based on Stacked Denoising Autoencoder," vol.15(11), pp.1687-1691, Nov. 2018. The accurate detection of P-wave arrival time is imperative for determining the hypocenter location of an earthquake. However, precise detection of the onset time becomes more difficult when the signal-to-noise ratio (SNR) of the seismic data is low, such as during microearthquakes. In this letter, a stacked denoising autoencoder (SDAE) is proposed to smooth the background noise. The SDAE acts as a denoising filter for the seismic data. In the proposed algorithm, the SDAE is utilized to reduce background noise so that the onset time becomes clearer and sharper. Afterward, a hard decision with one threshold is used to detect the onset time of the event. The proposed algorithm is evaluated on both synthetic and field seismic data, and it outperforms the short-time average/long-time average and Akaike information criterion algorithms. It accurately picks the onset time for 94.1% of 407 field seismic waveforms, with a standard deviation error of 0.10 s. In addition, the results indicate that the proposed algorithm can pick arrival times accurately for low-SNR seismic data with SNR above −14 dB.
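The hard one-threshold decision described in this abstract can be sketched as follows; note that the SDAE denoiser is replaced here by simple moving-average smoothing of the squared trace, so this is only an illustrative stand-in for the paper's pipeline, with the threshold fraction and smoothing width as assumed values:

```python
import numpy as np

def onset_pick(trace, threshold_frac=0.2, smooth=25):
    """Hard one-threshold onset pick on a smoothed energy envelope.

    The envelope is the moving average of the squared trace; the pick is
    the first sample whose envelope exceeds a fixed fraction of the
    envelope maximum. Returns None if nothing crosses the threshold.
    """
    env = np.convolve(trace.astype(float) ** 2,
                      np.ones(smooth) / smooth, mode="same")
    thr = threshold_frac * env.max()
    above = np.nonzero(env > thr)[0]
    return int(above[0]) if above.size else None

# Synthetic trace: low background noise, a unit-amplitude event at sample 300.
rng = np.random.default_rng(1)
x = rng.normal(0, 0.05, 1000)
x[300:600] += 1.0
pick = onset_pick(x)
```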

Giampietro Casasanta;Igor Petenko;Giangiuseppe Mastrantonio;Simone Bucci;Alessandro Conidi;Andrea M. Di Lellis;Giulio Sfoglietti;Stefania Argentini; "Consumer Drones Targeting by Sodar (Acoustic Radar)," vol.15(11), pp.1692-1694, Nov. 2018. Consumer drones have recently become a threat in many real scenarios, since they are difficult to detect and track, and can be easily used to perform criminal activities such as smuggling of illicit materials, surveillance operations or network hacking, and stealing data. Existing technologies are either not suitable to detect an object that can be as small as 10 cm, or quite expensive and complicated to deploy. We carried out a field experiment to explore the possibility of tuning a well-known, low-cost acoustic radar (sodar), commonly used in atmospheric physics, to detect and track consumer drones. The vertical position of a small drone retrieved by a single-sodar antenna turned out to be in good agreement with that measured by its onboard GPS (correlation coefficient 0.93), and no significant bias was observed. Despite being preliminary, these results support the use of such a technique to retrieve reliably the position of small unmanned aerial vehicles.

Jue Wang;Wenchao Liu;Long Ma;He Chen;Liang Chen; "IORN: An Effective Remote Sensing Image Scene Classification Framework," vol.15(11), pp.1695-1699, Nov. 2018. In recent times, many efforts have been made to improve remote sensing image scene classification, especially using popular deep convolutional neural networks. However, most of these methods do not consider the specific scene orientation of the remote sensing images. In this letter, we propose the improved oriented response network (IORN), which is based on the ORN, to handle the orientation problem in remote sensing image scene classification. We propose average active rotating filters (A-ARFs) in the IORN. While IORNs are being trained, A-ARFs are updated by a method that is different from the ARFs of the ORN, without additional computations. This change helps IORN improve its ability to encode orientation information and speeds up optimization during training. We also propose Squeeze-ORAlign (S-ORAlign) by adding a squeeze layer to ORAlign of ORN. With the squeeze layer, S-ORAlign can address large-scale images, unlike ORAlign. An ablation study and comparison experiments are designed on a public remote sensing image scene classification data set. The experimental results demonstrate the effectiveness and better performance of the proposed model over that of other state-of-the-art models.

Thayananthan Thayaparan;Yousef Ibrahim;John Polak;Ryan Riddolls; "High-Frequency Over-the-Horizon Radar in Canada," vol.15(11), pp.1700-1704, Nov. 2018. Over-the-horizon radars (OTHRs) have recently been making a comeback in Canada. As the need for accurate long-range tracking becomes more important, less-expensive ground-based radars are once again being considered for more effective long-range surveillance of Canadian airspace. Ray tracing is a powerful tool and is, especially, useful in applications requiring a detailed knowledge of radio wave propagation through the ionosphere. In this letter, new methods are developed to determine the feasible radar parameters such as operating frequencies, elevation angles, and absorption for OTHR operation using a 3-D ray tracing technique and up-to-date ionospheric, magnetic, and absorption models. The results of these simulations can be used for frequency monitoring systems and other OTHR applications in Canada.

Wei Zhou;Junhao Xie;Baiqiang Zhang;Gaopeng Li; "Maximum Likelihood Detector in Gamma-Distributed Sea Clutter," vol.15(11), pp.1705-1709, Nov. 2018. Constant false alarm rate (CFAR) is the desired property for automatic target detection in unknown and nonstationary backgrounds. In this letter, an analysis of experimental data shows that the gamma (GM) distribution is a promising model for sea clutter. Furthermore, a modified cell-averaging (CA) detector for GM-distributed clutter is proposed by using the maximum likelihood estimation method. Theoretical analysis demonstrates that the proposed detector maintains the CFAR property with respect to the scale parameter of the GM-distributed background. The proposed detector is verified to be optimal in homogeneous GM-distributed clutter with a known shape parameter when compared with the CA, greatest-of selection, ordered statistic (OS), and weighted amplitude iteration (WAI) detectors. At clutter edges, the proposed method attains false alarm rate control similar to that of the CA, OS, and WAI detectors. In multiple-target scenarios, the proposed method works effectively and robustly, whereas the competitors suffer performance degradation to varying degrees. Simulation and experimental results demonstrate the superiority and generality of the proposed method.
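For orientation, the classical cell-averaging CFAR scheme that the letter modifies looks like the sketch below; this uses a fixed scale factor rather than the authors' maximum-likelihood threshold for gamma clutter, and the guard/training window sizes are assumed values:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: flag cells whose power exceeds `scale` times
    the mean of the surrounding training cells (guard cells excluded)."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()  # local noise estimate
        hits[i] = power[i] > scale * noise
    return hits

# Gamma-distributed clutter with one strong target planted at cell 100.
rng = np.random.default_rng(2)
clutter = rng.gamma(2.0, 1.0, 200)
clutter[100] = 60.0
hits = ca_cfar(clutter)
```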

Jiangyuan Zeng;Kun-Shan Chen; "Theoretical Study of Global Sensitivity Analysis of L-Band Radar Bistatic Scattering for Soil Moisture Retrieval," vol.15(11), pp.1710-1714, Nov. 2018. This letter explores the optimal bistatic radar configurations for bare soil moisture retrieval at L-band using a global sensitivity analysis method, the extended Fourier amplitude sensitivity test (EFAST) algorithm. Complete sets of bistatic scattering, covering a wide range of geometric parameters and ground surface conditions, are simulated by the well-established advanced integral equation model. The sensitivity of radar bistatic signals to soil moisture and surface roughness, and the interactions among the parameters are quantified using the EFAST algorithm. The results show that in bistatic scattering, VV polarization has notably higher sensitivity to soil moisture than HH polarization, particularly at large incident angles. For VV polarization, as incident angle increases, the sensitivity zone of soil moisture expands and shifts toward the forward direction, specifically at small azimuth scattering angles and large scattering angles, thereby becoming promising configurations for soil moisture retrieval. For HH polarization, in contrast, the sensitive zone gradually moves to the backward direction as incident angle increases, and an intermediate incident angle (e.g., 40°) is recommended for retrieving soil moisture by considering both sensitivity strength and parameter interaction effects.

Hongyang An;Junjie Wu;Zhichao Sun;Jianyu Yang;Yulin Huang;Haiguang Yang; "Topology Design for Geosynchronous Spaceborne–Airborne Multistatic SAR," vol.15(11), pp.1715-1719, Nov. 2018. Geosynchronous (GEO) spaceborne–airborne multistatic synthetic aperture radar (GEO MulSAR) is more flexible and accessible in remote sensing applications because of the high-altitude illuminator and the separation of the receivers and transmitter. In addition, the information obtained by the multiple airborne receivers can be fused to enhance the spatial resolution. However, the fused spatial resolution severely depends on the applied multistatic topology. To achieve the optimal fused spatial resolution by properly adjusting the imaging topology, a topology design method is proposed in this letter. First, the spatial resolution model of GEO MulSAR is given, and the dependence of the spatial resolution on the multistatic topology is analyzed in detail. Then, a topology design method is proposed to obtain the best multistatic topology that simultaneously optimizes the resolution cell area and resolution disequilibrium factor. Finally, the simulation results validate the effectiveness of the proposed method, and some insights into designing the multistatic topology are given.

Tushar Gadhiya;Anil K. Roy; "Optimized Wishart Network for an Efficient Classification of Multifrequency PolSAR Data," vol.15(11), pp.1720-1724, Nov. 2018. High-resolution wide-area images are required in diverse fields of research ranging from urban planning and disaster prediction to agriculture and geology. Sometimes the image must be taken under harsh weather conditions or at night. Current optical remote sensing technology cannot acquire images in such conditions. Synthetic aperture radar (SAR) uses microwave signals, whose long-range propagation characteristics allow images to be captured in difficult weather conditions. In addition, some polarimetric SAR (PolSAR) systems are capable of capturing images in multiple frequency bands simultaneously, yielding far more information than optical images. In this letter, we propose a single-hidden-layer optimized Wishart network (OWN) and an extended OWN for classification of single-frequency and multifrequency PolSAR data, respectively. Performance evaluation is conducted on single-frequency as well as multifrequency SAR data obtained by the Airborne Synthetic Aperture Radar. We observed that, for combining multiple-band information, the proposed single-hidden-layer network outperforms deep learning-based architectures involving multiple hidden layers.

Dario Ligori;Simon Wagner;Luca Fabbrini;Mario Greco;Tanja Bieker;Gianpaolo Pinelli;Stefan Brüggenwirth; "Nonparametric ISAR Autofocusing Via Entropy-Based Doppler Centroid Search," vol.15(11), pp.1725-1729, Nov. 2018. A novel approach to nonparametric entropy-based autofocusing of inverse synthetic aperture radar images is proposed. Kinematics considerations show that the range cell corresponding to the target rotation center has a purely translational Doppler phase. This range cell can be selected by means of entropy criteria, and its phase component used as a measurement of the translational Doppler phase. Compared with existing techniques, the new one is robust to noise and low target reflectivity, while providing higher-quality output images at low computational load. The quality of the output is further confirmed by resorting to a recently published automatic target recognition algorithm, obtaining a very high success rate on a wide and variegated data set acquired by the tracking and imaging radar system of the Fraunhofer FHR Institute.

Gustavo Martín del Campo;Matteo Nannini;Andreas Reigber; "Towards Feature Enhanced SAR Tomography: A Maximum-Likelihood Inspired Approach," vol.11(10), pp.1730-1734, Nov. 2018. One of the main objectives of the upcoming space missions, such as Tandem-L and BIOMASS, is to map, on a global scale, the forest structure by means of synthetic aperture radar (SAR) tomography (TomoSAR). On one hand, the number of baselines is constrained by the revisit time that avoids temporal decorrelation issues. On the other hand, enhanced resolution is desired, since the forest structure is characterized from the vegetation layers that compose it, reflected in the tomographic profiles as local maxima. The TomoSAR nonlinear ill-conditioned inverse problem is conventionally tackled within the direction-of-arrival (DOA) estimation framework. The DOA-inspired nonparametric techniques are well suited to cope with distributed targets; nonetheless, the achievable resolution highly depends on the span of the tomographic aperture. Alternatively, superresolved parametric approaches have the main drawback of the white noise model assumption that guarantees the separation of the signal and noise subspaces. Overcoming the disadvantages of the aforementioned techniques, in this letter, we address a novel maximum-likelihood (ML) inspired adaptive robust iterative approach (MARIA) for feature-enhanced TomoSAR reconstruction. MARIA performs resolution enhancement, with suppression of artifacts and reduced ambiguity levels, on an initial estimate of the continuous power spectrum pattern. After convergence, an accurate location of the closely spaced phase centers is achieved, easing the characterization of the forest structure. The feature-enhancing capabilities of the proposed approach are corroborated using airborne F-SAR data of the German Aerospace Center (DLR).

G. P. Ortiz;J. A. Lorenzzetti; "Observing Multimodal Ocean Wave Systems by a Multiscale Analysis of Polarimetric SAR Imagery," vol.15(11), pp.1735-1739, Nov. 2018. A new methodology is here presented for the assessment of deep water multimodal ocean wave systems using polarimetric Radarsat-2 imagery. To separate wave systems in multimodal cases and evaluate their statistics individually, the new approach uses a multiscale processing scheme together with the mean square slope statistics theory of ocean waves, which has been originally developed from optical and radar altimetry sensors. Moreover, it was possible to classify the wave systems as wind driven or “pure” swell. The results show a reasonable agreement with data by a nearby metocean buoy. More C-band polarimetric SAR imagery is required to fully validate the methodology with well collocated wave buoys. We reckon that this new approach, being a fast and independent satellite algorithm to assess multimodal ocean wave systems, could contribute to several coastal/oceanic engineering applications and monitoring systems.

Jiaqi Zhang;Dan Zhang;Wenping Ma;Licheng Jiao; "Deep Self-Paced Residual Network for Multispectral Images Classification Based on Feature-Level Fusion," vol.15(11), pp.1740-1744, Nov. 2018. Classification methods based on fusion techniques for multisource multispectral (MS) images have been studied for a long time. However, it can be difficult to classify these data at the feature level while avoiding the data inconsistency caused by multiple sources and multiple regions or cities. In this letter, we propose a deep learning structure called 2-branch SPL-ResNet, which combines self-paced learning with a deep residual network to classify multisource MS data based on feature-level fusion. First, a 2-D discrete wavelet is used to obtain multiscale features and a sparse representation of the MS data. Then, a 2-branch SPL-ResNet is established to extract the respective characteristics of the two satellites. Finally, we implement the feature-level fusion by cascading the two feature vectors and then classify the integrated feature vector. We conduct experiments on Landsat_8 and Sentinel_2 MS images. Compared with commonly used classification methods such as support vector machines and convolutional neural networks, our proposed 2-branch SPL-ResNet framework has higher accuracy and is more robust.

Zenghui Zhang;Weiwei Guo;Shengnan Zhu;Wenxian Yu; "Toward Arbitrary-Oriented Ship Detection With Rotated Region Proposal and Discrimination Networks," vol.15(11), pp.1745-1749, Nov. 2018. Ship detection from remote sensing images can provide important information for maritime reconnaissance and surveillance and is also a challenging task. Although previous detection methods, including some advanced ones based on deep convolutional neural networks, excel at detecting horizontal or nearly horizontal targets, they cannot give satisfying detection results for arbitrary-oriented ships. In this letter, we introduce a novel ship detection system that can detect arbitrary-oriented ships. In this method, a rotated region proposal network (R2PN) is proposed to generate multioriented proposals with ship orientation angle information. In R2PN, the orientation angles of bounding boxes are also regressed so that inclined ship region proposals are generated more accurately. For ship discrimination, a rotated region of interest pooling layer is adopted in the following classification subnetwork to extract discriminative features from such inclined candidate regions. The whole ship detection system can be trained end to end. Experimental results conducted on our rotated ship data set and the HRSD2016 benchmark demonstrate that our proposed method outperforms state-of-the-art approaches for the arbitrary-oriented ship detection task.

Wenqiang Zhang;Xiaorun Li;Liaoying Zhao; "A Fast Hyperspectral Feature Selection Method Based on Band Correlation Analysis," vol.15(11), pp.1750-1754, Nov. 2018. Band selection (BS) tries to find a few useful bands to represent the whole hyperspectral image cube. This letter proposes a novel unsupervised BS method based on band correlation analysis (BCA). The BCA method tries to find a subset of bands that can well represent the whole image data set. To avoid an exhaustive search, the BCA method iteratively adds the band with good representative ability and low redundancy into the selected band set, until a sufficient number of bands has been obtained. The redundancy and the representative ability of a band are computed from its correlation with the currently selected bands and with the remaining unselected bands, respectively. By constructing a correlation matrix of all bands, the BCA method can find bands with large amounts of information and low redundancy, which ensures that the selected bands are useful for further applications such as pixel classification. Experimental results on three different data sets demonstrate that the proposed method is very effective and achieves the best performance among the competitors.
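A toy version of correlation-based greedy band selection in the spirit of BCA is sketched below; the exact scoring (mean absolute correlation to unselected bands minus mean absolute correlation to already-selected ones) is an illustrative choice, not the letter's formula:

```python
import numpy as np

def select_bands(cube, k):
    """Greedy unsupervised band selection on a (pixels, bands) matrix.

    Each step adds the band with the highest mean |correlation| to the
    still-unselected bands (representativeness) minus its mean
    |correlation| to the already-selected ones (redundancy).
    """
    corr = np.abs(np.corrcoef(cube.T))   # band-by-band correlation matrix
    bands = list(range(cube.shape[1]))
    selected = []
    while len(selected) < k:
        best, best_score = None, -np.inf
        for b in bands:
            rest = [j for j in bands if j != b]
            rep = corr[b, rest].mean() if rest else 0.0
            red = corr[b, selected].mean() if selected else 0.0
            score = rep - red
            if score > best_score:
                best, best_score = b, score
        selected.append(best)
        bands.remove(best)
    return selected

# Two groups of near-duplicate bands: 0-2 follow signal a, 3-5 follow b.
rng = np.random.default_rng(3)
a, b = rng.normal(size=500), rng.normal(size=500)
cube = np.stack([a, a, a, b, b, b], axis=1) + 0.05 * rng.normal(size=(500, 6))
picked = select_bands(cube, 2)
```

With two redundant groups of bands, the greedy rule picks one representative from each group rather than two near-duplicates.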

Xiangrong Zhang;Yujia Sun;Jingyan Zhang;Peng Wu;Licheng Jiao; "Hyperspectral Unmixing via Deep Convolutional Neural Networks," vol.15(11), pp.1755-1759, Nov. 2018. Hyperspectral unmixing (HU) estimates the fractional abundances corresponding to endmembers in each of the mixed pixels of a hyperspectral remote sensing image. In recent times, deep learning has been recognized as an effective technique for hyperspectral image classification. In this letter, an end-to-end HU method is proposed based on the convolutional neural network (CNN). The proposed method uses a CNN architecture that consists of two stages: the first stage extracts features, and the second stage maps the extracted features to the abundance percentages. Furthermore, a pixel-based CNN and a cube-based CNN, which can improve the accuracy of HU, are presented in this letter. More importantly, we also use dropout to avoid overfitting. The complete performance evaluation is carried out on two hyperspectral data sets: Jasper Ridge and Urban. Compared with existing methods, our results show significantly higher accuracy.

Chunhui Zhao;Yao Xi-Feng; "Fast Real-Time Kernel RX Algorithm Based on Cholesky Decomposition," vol.15(11), pp.1760-1764, Nov. 2018. Real-time processing has attracted wide attention in hyperspectral anomaly detection. The traditional local real-time kernel RX detector (LRT-KRXD) still has computational limitations, which lower the processing speed and can even degrade detection because of matrix singularity. In this letter, we present LRT-KRXD based on Cholesky decomposition (LRT-KRXD-CD). First, the derivation of kernel covariance matrices is computationally expensive in KRXD, while every two adjacent matrices contain almost identical content. To remove the repeated computation, a recursive strategy for these kernel covariance matrices is used. Second, the kernel covariance matrix becomes symmetric positive definite after adding a small-scale diagonal matrix. With this property, Cholesky decomposition and linear system solving can be used to avoid explicit matrix inversion. In this way, the detection of LRT-KRXD-CD becomes robust and its processing speed is improved as well. Experimental results on two hyperspectral images substantiate the effectiveness of LRT-KRXD-CD.
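The core numerical trick (solving the Mahalanobis-distance linear system through a Cholesky factor with diagonal loading instead of forming an explicit inverse) can be sketched for a plain global RX detector, which is a simplification of the letter's local kernel variant:

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly scores via Cholesky factorization.

    pixels: (n_pixels, n_bands) array. Solves C y = (x - mu) through the
    triangular factor instead of forming C^{-1}; a small diagonal loading
    keeps the covariance positive definite.
    """
    mu = pixels.mean(axis=0)
    d = pixels - mu
    cov = d.T @ d / len(pixels) + 1e-6 * np.eye(pixels.shape[1])
    L = np.linalg.cholesky(cov)
    # Mahalanobis distance: solve L z = d^T; score per pixel is ||z||^2,
    # since z^T z = d C^{-1} d^T column by column.
    z = np.linalg.solve(L, d.T)
    return (z ** 2).sum(axis=0)

# Gaussian background with one anomalous pixel planted at index 123.
rng = np.random.default_rng(4)
bg = rng.normal(0, 1, (500, 10))
bg[123] += 8.0
scores = rx_scores(bg)
```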

Maofeng Tang;Bing Zhang;Andrea Marinoni;Lianru Gao;Paolo Gamba; "Multiharmonic Postnonlinear Mixing Model for Hyperspectral Nonlinear Unmixing," vol.15(11), pp.1765-1769, Nov. 2018. In this letter, a new method for higher order nonlinear hyperspectral unmixing is introduced. The proposed scheme relies on a harmonic description of the endmembers' contributions to characterize the interactions among the materials present in the given scenes. Moreover, it aims at directly estimating the probability of occurrence of each material in the images, so as to provide an accurate quantification of the endmembers in complex scenarios as well. Experimental results carried out on synthetic and real data sets show that the proposed method is able to obtain good unmixing performance when compared to other state-of-the-art architectures.

Christofer Schwartz;Cristiano Torezzan;Leonardo Tomazeli Duarte; "A UEP Method for Imaging Low-Orbit Satellites Based on CCSDS Recommendations," vol.15(11), pp.1770-1774, Nov. 2018. Remote sensing satellites allow continuous information acquisition from large areas of the earth and have been intensively applied in a number of applications, from agriculture to defense. A major challenge in remote sensing is that satellite communication systems present bandwidth restrictions and several issues typical of time-variant channels, which justifies the need for signal coding techniques. In that sense, this letter proposes an unequal error protection method for aerospace applications using the recommendations for source and channel coding created by the Consultative Committee for Space Data Systems (CCSDS) as frameworks. The proposed method makes use of the CCSDS-recommended convolutional code to keep the channel coding step as low in complexity as possible, which allows implementation on a wide range of embedded platforms. This letter exploits the natural data division delivered by the compressor to protect the information unequally. The proposed method, which relies on a multiobjective optimization problem, allows one to find rate arrangements that minimize the distortion of the received image for a given value of an average coding rate within a granular range. The system performance is evaluated over an additive white Gaussian noise channel model. The obtained results show that the proposed method presents several advantages over an equal error protection strategy and paves the way for scenarios with stringent energy and bandwidth constraints.

Xuebin Qin;Shida He;Xiucheng Yang;Masood Dehghan;Qiming Qin;Jagersand Martin; "Accurate Outline Extraction of Individual Building From Very High-Resolution Optical Images," vol.15(11), pp.1775-1779, Nov. 2018. This letter presents a novel approach for extracting accurate outlines of individual buildings from very high-resolution (0.1–0.4 m) optical images. Building outlines are defined as polygons here. Our approach operates on a set of straight line segments that are detected by a line detector. It groups a subset of detected line segments and connects them to form a closed polygon. Particularly, a new grouping cost is defined first. Second, a weighted undirected graph G(V, E) is constructed based on the endpoints of those extracted line segments. The building outline extraction is then formulated as a problem of searching for a graph cycle with the minimal grouping cost. To solve the graph cycle searching problem, the bidirectional shortest path method is utilized. Our method is validated on a newly created data set that contains 123 images of various building roofs with different shapes, sizes, and intensities. The experimental results with an average intersection-over-union of 90.56% and an average alignment error of 6.56 pixels demonstrate that our approach is robust to different shapes of building roofs and outperforms the state-of-the-art method.
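The cycle-search formulation can be illustrated with a simplified reduction: the cheapest cycle through a fixed edge equals the edge weight plus the shortest remaining path between its endpoints. The sketch below uses plain Dijkstra with generic edge weights standing in for the authors' grouping cost; their bidirectional search is an optimization of the same idea:

```python
import heapq

def shortest_path_cost(adj, src, dst):
    """Dijkstra over an adjacency dict {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def min_cycle_through_edge(adj, u, v, w_uv):
    """Cheapest cycle that uses edge (u, v): remove that edge, then add
    the shortest remaining u-v path back to the edge weight."""
    trimmed = {a: [(b, w) for b, w in nbrs if {a, b} != {u, v}]
               for a, nbrs in adj.items()}
    return w_uv + shortest_path_cost(trimmed, u, v)

# Unit-weight square 0-1-2-3 plus a 1.5-weight diagonal 0-2.
adj = {0: [(1, 1.0), (3, 1.0), (2, 1.5)],
       1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0), (0, 1.5)],
       3: [(0, 1.0), (2, 1.0)]}
best = min_cycle_through_edge(adj, 0, 1, 1.0)  # triangle 0-1-2-0
```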

Jie Liu;Ziqing He;Zuolong Chen;Lei Shao; "Tiny and Dim Infrared Target Detection Based on Weighted Local Contrast," vol.15(11), pp.1780-1784, Nov. 2018. Robust detection of infrared (IR) tiny and dim targets in a single frame remains a hot and difficult problem in military fields. In this letter, we introduce a method for IR tiny and dim target detection based on a new weighted local contrast measure. Our method simultaneously exploits the local contrast of target, the consistency of image background, and the imaging characteristics of the background edges. The proposed method is simple to implement and computationally efficient. We compared our algorithm with six state-of-the-art methods on four real-world videos with different targets and backgrounds. Our method outperforms all the compared algorithms on the ground-truth evaluation with both higher detection rate and lower false alarm rate.
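A bare-bones local-contrast map in the spirit of such measures might look like the sketch below; the patch size and the ratio-of-center-max-to-neighboring-means score are illustrative choices, not the letter's exact weighting:

```python
import numpy as np

def local_contrast(img, cell=3):
    """Simple local-contrast map for small-target detection.

    For each interior pixel, score = max of its cell x cell patch divided
    by the largest mean of the eight surrounding patches; a small, bright
    target raises the center max without raising the neighborhood means.
    """
    h, w = img.shape
    r = cell // 2
    out = np.zeros((h, w))
    for i in range(cell + r, h - cell - r):
        for j in range(cell + r, w - cell - r):
            center = img[i - r:i + r + 1, j - r:j + r + 1]
            ring = []
            for di in (-cell, 0, cell):
                for dj in (-cell, 0, cell):
                    if di == 0 and dj == 0:
                        continue
                    ring.append(img[i + di - r:i + di + r + 1,
                                    j + dj - r:j + dj + r + 1].mean())
            out[i, j] = center.max() / (max(ring) + 1e-9)
    return out

# Flat noisy background with a 2x2 bright target at (20, 20).
rng = np.random.default_rng(5)
img = 0.1 + 0.01 * rng.random((40, 40))
img[20:22, 20:22] = 1.0
cmap = local_contrast(img)
peak = np.unravel_index(cmap.argmax(), cmap.shape)
```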

Yuwei Chen;Changhui Jiang;Juha Hyyppä;Shi Qiu;Zheng Wang;Mi Tian;Wei Li;Eetu Puttonen;Hui Zhou;Ziyi Feng;Yuming Bo;Zhijie Wen; "Feasibility Study of Ore Classification Using Active Hyperspectral LiDAR," vol.15(11), pp.1785-1789, Nov. 2018. Recently, a major effort has been made to develop methods or tools for rock characterization and mineral content mapping. Light detection and ranging (LiDAR) is an efficient active remote sensing technique for collecting geometry information about rock surfaces. However, traditional LiDAR sensors work with a single-wavelength laser source, and it is unfeasible to obtain spectral information using one LiDAR sensor. The combination of hyperspectral imaging and LiDAR techniques is an emerging method for acquiring spatial and spectral information simultaneously that allows remote mapping of high-resolution mineral content and distributions and identifies subtle chemical variations. Unfortunately, spatial and spectral data registration, which introduces additional complicated data processing, is an inevitable and essential issue for this method. In this letter, first, we investigate the feasibility of ore classification applications with hyperspectral LiDAR (HSL). HSL consists of 17 spectral channels covering the visible–shortwave infrared (SWIR) spectral range. Spatial and spectral information about seven different ore samples is obtained under a controlled laboratory environment using HSL. The standard deviation of the distance measurements is less than 1.1 cm for different spectral channels, and the classification accuracy can reach 100% if all 17 spectral measurements are used. To optimize the system design with lower cost and system complexity, a spectral band selection criterion is built based on the feature contribution degree (FCD), which is calculated using the normalized variance of the reflectance values for different ore samples at each wavelength. 
Two different strategies of FCD selection are tested to generate feature vectors: ascending sequences and descending sequences. Feature vectors with descending sequences give better classification accuracy. In addition, the results show that the classification accuracy can reach 100% with the feature vector of the seven largest FCD values, compared to 59.57% for the feature vector of the seven smallest FCD values. Moreover, we find that the channels with high FCD values are primarily centered in the SWIR bands. This result could serve as a reference for optimizing the hardware design of HSL for ore classification or mineral identification.
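The FCD-style ranking (score each channel by the normalized variance of reflectance across the ore samples and keep the top channels) can be sketched as follows; the exact score is an assumption based on the abstract's description:

```python
import numpy as np

def fcd_rank(reflectance):
    """Rank spectral channels by a feature-contribution-degree-style score.

    reflectance: (samples, channels) mean reflectance of each ore sample
    per channel. The score is the variance of a channel's reflectance
    across samples, normalized to [0, 1]; channels that separate the
    samples most strongly rank first.
    """
    var = reflectance.var(axis=0)
    fcd = var / (var.max() + 1e-12)
    return np.argsort(fcd)[::-1], fcd   # channels in descending FCD order

# Seven ore samples, five channels; channel 2 is highly discriminative.
rng = np.random.default_rng(6)
refl = 0.5 + 0.01 * rng.random((7, 5))   # nearly flat channels
refl[:, 2] = np.linspace(0.1, 0.9, 7)    # one channel spreads the samples
order, fcd = fcd_rank(refl)
```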

Chenglu Wen;Xiaotian Sun;Shiwei Hou;Jinbin Tan;Yudi Dai;Cheng Wang;Jonathan Li; "Line Structure-Based Indoor and Outdoor Integration Using Backpacked and TLS Point Cloud Data," vol.15(11), pp.1790-1794, Nov. 2018. This letter presents a line structure-based method for the integration of centimeter-level indoor backpacked scanning point clouds and millimeter-level outdoor terrestrial laser scanning point clouds. Using 3-D lines for registration, instead of matching points directly, improves the robustness of the method and adapts to multisource point cloud data of different qualities. Considering the limited overlap between indoor and outdoor scenes, line structures are extracted from overlapped wall areas that may be included in interior and exterior data. Here, a patch-based method labels the point cloud into wall, ceiling, and floor categories, and identifies the candidate overlapping walls. Then, line structures are extracted from the wall-plane point cloud. Potential door and window line structures are detected and refined for point cloud registration. Finally, an iterative closest point-based method is used to fine-tune the registration results. Our results show that the proposed method effectively produces an integrated map of indoor and outdoor scenes.

Maolin Chen;Jianping Pan;Jingzhong Xu; "Classification of Terrestrial Laser Scanning Data With Density-Adaptive Geometric Features," vol.15(11), pp.1795-1799, Nov. 2018. Point cloud classification is a crucial procedure in ground scene interpretation. In this letter, density-adaptive geometric features are proposed for the classification of terrestrial laser scanning data, with the problem caused by point density variation as one of the main concerns. For each point, the point spacing is estimated based on the distance to the scanner position and the angular resolution, and then used as the neighborhood scale basis to generate the search range of the optimal radius. In feature extraction, we modify some common geometric features to adapt to density variation; e.g., a polar projection grid is proposed to generate projection features instead of the commonly used rectangular grid. The polar grid ensures that a similar number of laser beams passes through each cell. An evaluation involving five classifiers is carried out in an outdoor scene captured by a Riegl VZ-400 scanner, and the results show that density-adaptive features deliver better and more stable performance than features that do not consider density variation, with a highest overall accuracy of 95.95%. Moreover, the proposed features perform well on the recognition of buildings at large distances (more than 300 m in this letter).

* "IEEE Geoscience and Remote Sensing Letters information for authors," vol.15(11), pp.C3-C3, Nov. 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Geoscience and Remote Sensing Letters Institutional Listings," vol.15(11), pp.C4-C4, Nov. 2018.* Presents the institutional listings associated with this issue of the publication.

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing - new TOC (2018 November 15) [Website]

P. Dubois-Fernandez;L. Fatoyinbo;I. Hajnsek;S. Saatchi;K. Scipal; "Foreword to the Special Issue on Forest Structure Estimation in Remote Sensing," vol.11(10), pp.3384-3385, Oct. 2018.

Matteo Pardini;Marivi Tello;Victor Cazcarra-Bes;Konstantinos P. Papathanassiou;Irena Hajnsek; "L- and P-Band 3-D SAR Reflectivity Profiles Versus Lidar Waveforms: The AfriSAR Case," vol.11(10), pp.3386-3401, Oct. 2018. The aim of this paper is to compare L- and P-band vertical backscattering profiles estimated by means of synthetic aperture radar (SAR) tomography with full light detection and ranging (lidar) waveforms in terms of their ability to distinguish different tropical forest structure types. The comparison relies on the unique DLR F-SAR and NASA Land, Vegetation and Ice Sensor (LVIS) lidar datasets acquired in 2016 as part of the AfriSAR campaign. In particular, F-SAR and LVIS data over three different test sites, complemented by plot field measurements, are used. First, the SAR and lidar three-dimensional (3-D) datasets are compared and discussed on a qualitative basis. The ability to penetrate into and through the canopy down to the ground is assessed at L- and P-band in terms of both the ground-to-volume power ratio and the ability to estimate the location of the underlying ground. The effect of polarimetry on the visibility of the ground is discussed as well. Finally, the 3-D measurements for each configuration are compared with respect to their ability to derive physical forest structure descriptors. For this, vertical structure indices derived from the volume-only 3-D radar reflectivity at L- and P-band and from the LVIS profiles are compared against each other as well as against plot-derived indices.

Marivi Tello;Victor Cazcarra-Bes;Matteo Pardini;Konstantinos Papathanassiou; "Forest Structure Characterization From SAR Tomography at L-Band," vol.11(10), pp.3402-3414, Oct. 2018. Synthetic aperture radar (SAR) remote sensing configurations are able to provide continuous measurements on global scales, sensitive to the vertical structure of forests, with high spatial and temporal resolution. Furthermore, the development of tomographic SAR techniques allows the reconstruction of the three-dimensional (3-D) radar reflectivity, opening the door for 3-D forest monitoring. However, the link between 3-D radar reflectivity and 3-D forest structure is not yet established. To this end, this paper introduces a framework that allows a qualitative and quantitative interpretation of physical forest structure from tomographic SAR data at L-band. For this, forest structure is parameterized by a horizontal and a vertical structure index. From inventory data, both indices can be derived from the spatial distribution and the dimensions of the trees. Similarly, two structure indices are derived from the 3-D spatial distribution of the local maxima of the reconstructed 3-D radar reflectivity profiles at L-band. The proposed methodology is tested by means of experimental tomographic L-band data acquired over the temperate forest site of Traunstein in Germany. The obtained horizontal and vertical structure indices are validated against the corresponding estimates obtained from inventory measurements and against the same indices derived from the vertical profiles of airborne Lidar data. The high correlation between the forest structure indices obtained from these three different data sources (expressed by correlation coefficients between 0.75 and 0.87) indicates the potential of the proposed framework.

Michael Denbina;Marc Simard;Brian Hawkins; "Forest Height Estimation Using Multibaseline PolInSAR and Sparse Lidar Data Fusion," vol.11(10), pp.3415-3433, Oct. 2018. We demonstrate a method using lidar data fusion to improve the forest height estimation accuracy of multibaseline polarimetric synthetic aperture radar interferometry (PolInSAR). Compared to single-baseline PolInSAR, multibaseline PolInSAR allows forest canopy height to be estimated more accurately across a wider range of height values. However, to arrive at a single forest height estimate, the estimates from the multiple baselines must be selected or weighted. A number of approaches to selecting between baselines have been proposed in the literature, but they are generally based on simple metrics of the PolInSAR data and do not necessarily capture the full range of characteristics that make one baseline produce more accurate forest height estimates than another. We solve this problem by treating baseline selection as a supervised classification problem that can be trained using a small amount of sparse lidar data located within the PolInSAR coverage area. We train a support vector machine classifier using a variety of coarse lidar sample spacings of 250 m and greater, to demonstrate that data from future spaceborne lidar missions will be sufficient for this purpose. We demonstrate results for multiple study areas in the country of Gabon using data collected by NASA's uninhabited aerial vehicle synthetic aperture radar and land, vegetation, and ice sensor lidar. The use of lidar fusion for PolInSAR baseline selection yields improved results compared to standard baseline selection methods, and further demonstrates the strong potential of PolInSAR and lidar fusion for remote sensing of forests.
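Treating baseline selection as supervised classification, as described above, can be illustrated with a minimal stand-in: a nearest-centroid rule over PolInSAR-derived features, trained on pixels where sparse lidar identified which baseline gave the best height. The paper uses a support vector machine (something like `sklearn.svm.SVC` would be a drop-in upgrade); the function names and the choice of features here are illustrative assumptions:

```python
import numpy as np

def train_baseline_selector(features, best_baseline):
    """Fit per-class feature centroids from sparse lidar-labelled samples.

    features: (n_samples, n_features) PolInSAR metrics (e.g., coherence
    magnitude, height of ambiguity) at pixels with lidar reference heights.
    best_baseline: index of the baseline whose height estimate best matched
    the lidar height at each sample.
    """
    classes = np.unique(best_baseline)
    centroids = np.stack([features[best_baseline == c].mean(axis=0)
                          for c in classes])
    return classes, centroids

def select_baseline(features, classes, centroids):
    """Pick, per pixel, the baseline class with the nearest centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

Once trained on the sparse lidar samples, the selector is applied to every PolInSAR pixel, so full lidar coverage is not required.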

Seung-Kuk Lee;Temilola E. Fatoyinbo;David Lagomasino;Emanuelle Feliciano;Carl Trettin; "Multibaseline TanDEM-X Mangrove Height Estimation: The Selection of the Vertical Wavenumber," vol.11(10), pp.3434-3442, Oct. 2018. We generated a large-scale mangrove forest height map using multiple TanDEM-X (TDX) interferometric synthetic aperture radar (InSAR) acquisitions with various spatial baselines in order to improve the height estimation accuracy across a wide range of forest heights. The forest height inversion using InSAR data is strongly dependent upon the vertical wavenumber (i.e., perpendicular baseline). First, we investigated the role of the vertical wavenumber in forest height inversion from InSAR data using the sensitivity of the interferometric (volume) coherence to forest height. We used corrected but lower resolution and accuracy Shuttle Radar Topography Mission (SRTM) mangrove height maps as a priori information over Akanda and Pongara National Parks in Gabon to estimate lower and upper boundaries of the vertical wavenumber over test sites from the measured coherence-to-height sensitivity. Only TDX acquisitions within the boundaries of the vertical wavenumber were selected and combined for multibaseline mangrove height inversion. Mangrove forest height was obtained with multibaseline TDX acquisitions and was validated against the reference height derived from field measurement data providing improvements in multibaseline inversion over existing height estimates (i.e., SRTM height) and single-baseline inversions (multibaseline inversion: r2 = 0.98, root mean square error (RMSE) of 2.73 m; SRTM height: r2 = 0.86, RMSE = 7.21 m; single-baseline inversions: r2 = 0.08-0.97, RMSE = 3.86-11.10 m). As a result, to accurately estimate forest heights over a wide range (3-60 m), multibaseline InSAR acquisitions (at least three different baselines) are needed to exclude biases associated with the vertical wavenumber in forest height inversion.
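The vertical wavenumber that drives the coherence-to-height sensitivity discussed above follows from standard InSAR geometry. A small helper (parameter names are assumptions) computes kz, from which the height of ambiguity is 2*pi/kz:

```python
import math

def vertical_wavenumber(b_perp, wavelength, slant_range, incidence_deg,
                        bistatic=True):
    """Interferometric vertical wavenumber kz in rad/m.

    kz = m * 2*pi * B_perp / (lambda * R * sin(theta)), with m = 1 for a
    single-pass bistatic pair (e.g., TanDEM-X) and m = 2 for repeat-pass.
    """
    m = 1.0 if bistatic else 2.0
    return (m * 2.0 * math.pi * b_perp
            / (wavelength * slant_range * math.sin(math.radians(incidence_deg))))
```

For example, an X-band pair (wavelength ~0.031 m) at 600 km slant range, 40 degrees incidence, and a 150 m perpendicular baseline gives kz of roughly 0.08 rad/m, i.e., a height of ambiguity near 80 m; baseline selection then amounts to keeping acquisitions whose kz falls inside the bounds derived from the a priori height map.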

Hao Chen;Shane R. Cloude;David G. Goodenough;David A. Hill;Andrea Nesdoly; "Radar Forest Height Estimation in Mountainous Terrain Using Tandem-X Coherence Data," vol.11(10), pp.3443-3452, Oct. 2018. In this paper, we consider the problem of radar estimation of forest canopy height in regions with dense forests and severe topography. We combine a reference digital elevation model with multiple satellite baselines from ascending and descending orbits to develop a merging algorithm relating single pass interferometric coherence to forest canopy height. We first describe the algorithm and processing steps used for height estimation and then apply the technique to a mountainous study site in British Columbia, Canada, using data from the Tandem-X satellite pair. We devise a new masking scheme to isolate potential problem areas in sloped terrain and apply the new merging algorithm by using multiple Tandem-X tracks to overcome the gaps left due to the masking procedure. The radar height products are validated by using a network of ground forest measurement sites and supporting lidar. The regression statistics show an r2 of 0.70 and RMSE of 4.1 m between the radar and the field measured heights. By examining height errors, we implement a new test for the presence of canopy extinction, or subcanopy surface scattering, and demonstrate that in the dense and mountainous forests of British Columbia, there are significant canopy extinction effects in X-band imagery.

Maryam Pourshamsi;Mariano Garcia;Marco Lavalle;Heiko Balzter; "A Machine-Learning Approach to PolInSAR and LiDAR Data Fusion for Improved Tropical Forest Canopy Height Estimation Using NASA AfriSAR Campaign Data," vol.11(10), pp.3453-3463, Oct. 2018. This paper investigates the benefits of integrating multibaseline polarimetric interferometric SAR (PolInSAR) data with LiDAR measurements using a machine-learning approach in order to obtain improved forest canopy height estimates. Multiple interferometric baselines are required to ensure consistent height retrieval performance across a broad range of tree heights. Previous studies have proposed multibaseline merging strategies using metrics extracted from PolInSAR measurements. Here, we introduce multibaseline merging using a support vector machine trained by sparse LiDAR samples. The novelty of this method lies in the new way of combining the two datasets. Its advantage is that it does not require complete LiDAR coverage, but only sparse LiDAR samples distributed over the PolInSAR image. LiDAR samples are not used to obtain the best height among a set of height stacks, but rather to train the retrieval algorithm to select the best height using the variables derived through PolInSAR processing. This enables more accurate height estimation for a wider scene covered by the SAR with only partial LiDAR coverage. We test our approach on NASA AfriSAR data acquired over tropical forests by the L-band UAVSAR and the LVIS LiDAR instruments. The estimated height from this approach has a higher accuracy (r2 = 0.81, RMSE = 7.1 m) than the previously introduced multibaseline merging approach (r2 = 0.67, RMSE = 9.2 m). This method is beneficial to future spaceborne missions, such as GEDI and BIOMASS, which will provide a wealth of near-contemporaneous LiDAR samples and PolInSAR measurements for mapping forest structure at a global scale.

Nafiseh Ghasemi;Valentyn A. Tolpekin;Alfred Stein; "Estimating Tree Heights Using Multibaseline PolInSAR Data With Compensation for Temporal Decorrelation, Case Study: AfriSAR Campaign Data," vol.11(10), pp.3464-3477, Oct. 2018. This paper presents a multibaseline method to increase the accuracy of height estimation when using SAR tomographic data. It is based upon mitigating the temporal decorrelation induced by wind. The Fourier-Legendre function of different orders was fitted to each pixel as the structure function in the polarization coherence tomography (PCT) model. It was combined with the motion standard deviation function from the random-motion-over-ground (RMoG) model. L-band multibaseline data are used that were acquired during the AfriSAR campaign over La Lope National Park in Gabon, with a height range between 0 and 60 m, an average of 30 m, and a standard deviation of 15 m. The results were compared with those from the regular PCT model using the root mean square error (RMSE). Height histograms were also compared with the one obtained from the Lidar height map. The average RMSE was equal to 7.5 m for the regular PCT model and to 5.6 m for the modified PCT model. We concluded that the accuracy of tree height estimation increased after modeling of temporal decorrelation. This is of value for future satellite missions that would collect tomographic data over forest areas.

Bryan Riel;Michael Denbina;Marco Lavalle; "Uncertainties in Forest Canopy Height Estimation From Polarimetric Interferometric SAR Data," vol.11(10), pp.3478-3491, Oct. 2018. The random volume over ground (RVoG) model has been widely applied to estimate forest tree height from polarimetric synthetic aperture radar (SAR) interferometry (PolInSAR) data for the past two decades. Successful application of the RVoG model requires certain assumptions to be valid for the imaged forest and the acquisition scenarios in order to avoid large errors in height estimates. Quantification of errors and uncertainties of RVoG-estimated heights have typically been limited to comparison against external validation data, such as lidar or field measurements. In this paper, we present a straightforward approach to simultaneously estimate height and height uncertainty from PolInSAR data using a Bayesian framework that accounts for errors in the data, as well as errors due to incorrect RVoG modeling assumptions, such as those caused by temporal decorrelation effects and errors in ground phase estimation. We apply our method to synthetic data to study how forest height uncertainty depends on modeling assumptions and PolInSAR acquisition parameters. We also compare our estimated Bayesian uncertainties to PolInSAR-derived and lidar-derived height RMS deviations observed over Gabonese tropical forests during the joint NASA-ESA 2016 AfriSAR campaign. Our results show good correspondence between uncertainties and deviations, as well as a strong correlation between uncertainty and estimated tree height. Furthermore, we demonstrate that we can associate specific areas of high uncertainty to confounding effects, such as temporal decorrelation and noncanopy related scattering.
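The RVoG model at the heart of these height retrievals has a closed-form volume-only coherence for the usual exponential attenuation profile. The sketch below evaluates it under stated assumptions (ground phase already removed, strictly positive extinction); it is a textbook form of the model, not the paper's estimator:

```python
import cmath
import math

def rvog_volume_coherence(height, extinction, kz, incidence_deg):
    """Volume-only RVoG coherence for an exponential attenuation profile.

    gamma_v = p1 * (exp(p2*h) - 1) / (p2 * (exp(p1*h) - 1)),
    with p1 = 2*sigma/cos(theta) and p2 = p1 + i*kz.
    Requires extinction > 0; height in m, kz in rad/m.
    """
    if height == 0.0:
        return complex(1.0, 0.0)                    # no volume: full coherence
    p1 = 2.0 * extinction / math.cos(math.radians(incidence_deg))
    p2 = complex(p1, kz)
    return p1 * (cmath.exp(p2 * height) - 1) / (p2 * (cmath.exp(p1 * height) - 1))
```

The coherence magnitude drops and its phase rises as the canopy grows, which is exactly the sensitivity a Bayesian inversion can exploit to propagate data and model errors into height uncertainty.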

Valentine Wasik;Pascale C. Dubois-Fernandez;Cédric Taillandier;Sassan S. Saatchi; "The AfriSAR Campaign: Tomographic Analysis With Phase-Screen Correction for P-Band Acquisitions," vol.11(10), pp.3492-3504, Oct. 2018. ESA's Earth Explorer BIOMASS mission is a P-band (432-438 MHz) synthetic aperture radar (SAR) using a combination of polarimetry and interferometric observations to quantify the vertical structure and biomass of global forests, with the primary focus on tropical forests. The methodology to map the vertical structure of the forest is based on multibaseline tomographic measurements from space. In this paper, we use data acquired by airborne sensors during the AfriSAR campaign in humid tropical forests of Africa to examine the potential of P-band tomographic SAR measurements in estimating forest parameters. We use data acquired by ONERA's P-band SAR system over the Lopé National Park in central Gabon during July 2015 to estimate vertical profiles. In processing the multibaseline data, we develop and implement a phase-screen correction methodology based on recent works by Tebaldini et al. to improve the quality of measurements by removing phase perturbations associated with platform motions and uncertainties in flight trajectories. The vertical structure estimated from the corrected tomographic measurements is then compared with small and large footprint light detection and ranging (Lidar) observations collected as part of the AfriSAR campaign. The results suggest that phase-screen correction can significantly improve the vertical profile of radar backscattered power to match the Lidar observations in detecting ground, vertical vegetation density, and total height of the forests across a variety of forest types and terrain complexity.

Yu Bai;Stefano Tebaldini;Dinh Ho Tong Minh;Wen Yang; "An Empirical Study on the Impact of Changing Weather Conditions on Repeat-Pass SAR Tomography," vol.11(10), pp.3505-3511, Oct. 2018. Research carried out in recent years has shown that the use of P-band SAR tomography (TomoSAR) largely improves the retrieval of above-ground biomass (AGB) in tropical forests, a most encouraging element toward the systematic employment of TomoSAR techniques in the upcoming spaceborne mission BIOMASS. All of these studies were carried out using campaign data acquired in a single day, under stable (mostly sunny) weather conditions. The impact of temporal decorrelation was considered in the literature by analyzing ground-based radar data from the TropiSCAT campaign (Paracou, French Guiana), and found not to be a show-stopper for BIOMASS tomography. Yet, the validity of this analysis was limited to sunny days only. Accordingly, a precise assessment of the impact of changing weather conditions on TomoSAR is currently missing. The aim of this paper is to provide a first experimental element to fill this gap. To do this, data from the TropiSCAT archive are reprocessed to mimic BIOMASS repeat-pass tomography. Since BIOMASS tomography will be implemented by taking seven acquisitions with a revisit time of three days, we form tomograms by taking two TropiSCAT antennas every three days (and three antennas on the last day), which means that any single tomogram is actually obtained by mixing seven different days with different weather conditions. The quality of tomographic imaging is then assessed by evaluating the observed backscattered power fluctuations in the tomogram time series. While imaging quality is observed to degrade when mixing different days, the resulting temporal variations of the backscattered power in the canopy layer are within 1.5-dB rms in cross polarization. For this forest site, this error translates into an AGB error of about 50-100 t/ha, which is 20% or less of the forest AGB.

Carlos Alberto Silva;Sassan Saatchi;Mariano Garcia;Nicolas Labrière;Carine Klauberg;António Ferraz;Victoria Meyer;Kathryn J. Jeffery;Katharine Abernethy;Lee White;Kaiguang Zhao;Simon L. Lewis;Andrew T. Hudak; "Comparison of Small- and Large-Footprint Lidar Characterization of Tropical Forest Aboveground Structure and Biomass: A Case Study From Central Gabon," vol.11(10), pp.3512-3526, Oct. 2018. NASA's Global Ecosystem Dynamics Investigation (GEDI) mission has been designed to measure forest structure using lidar waveforms to sample the earth's vegetation while in orbit aboard the International Space Station. In this paper, we used airborne large-footprint (LF) lidar measurements to simulate GEDI observations from which we retrieved ground elevation, vegetation height, and aboveground biomass (AGB). GEDI-like product accuracy was then assessed by comparing them to similar products derived from airborne small-footprint (SF) lidar measurements. The study focused on tropical forests and used data collected during the NASA and European Space Agency (ESA) AfriSAR ground and airborne campaigns in the Lope National Park in Central Gabon. The measurements covered a gradient of successional stages of forest development with different height, canopy density, and topography. The comparison of the two sensors shows that LF lidar waveforms and simulated waveforms from SF lidar are equivalent in their ability to estimate ground elevation (RMSE = 0.5 m, bias = 0.29 m) and maximum forest height (RMSE = 2.99 m, bias = 0.24 m) over the study area. The difference in the AGB estimated from both lidar instruments at the 1-ha spatial scale is small over the entire study area (RMSE = 6.34 Mg·ha-1, bias = 11.27 Mg·ha-1), and the bias is attributed to the impact of ground slopes greater than 10-20° on the LF lidar measurements of forest height. Our results support the ability of GEDI LF lidar to measure the complex structure of humid tropical forests and provide AGB estimates comparable to SF-derived ones.

Atticus E. L. Stovall;Herman H. Shugart; "Improved Biomass Calibration and Validation With Terrestrial LiDAR: Implications for Future LiDAR and SAR Missions," vol.11(10), pp.3527-3537, Oct. 2018. Future NASA and ESA satellite missions plan to better quantify global carbon stocks through detailed observations of forest structure, but ultimately rely on uncertain ground measurement approaches for calibration and validation. A substantial amount of uncertainty in estimating plot-level biomass can be attributed to inadequate and unrepresentative allometric relationships used to convert plot-level tree measurements to estimates of aboveground biomass. These allometric equations are known to have high errors and biases, particularly in carbon-rich forests, because they were calibrated with small and often biased samples of destructively harvested trees. To overcome this issue, we present and test a framework for nondestructively estimating tree and plot-level biomass with terrestrial laser scanning (TLS). We modeled 243 trees from 12 species with TLS and created ten low-RMSE allometric equations. The full 3-D reconstructions, TLS allometry, and Jenkins et al. (2003) allometry were used to calibrate SAR- and LiDAR-based empirical biomass models to investigate the potential for improved accuracy and reduced uncertainty. TLS reduced plot-level RMSE from 18.5% to 9.8% and revealed a systematic negative bias in the national equations. At the calibration stage, allometric uncertainty accounted for 2.8-28.4% of the total RMSE, increasing in relative contribution as calibration improved with sensor fusion. Our findings suggest that TLS plot acquisitions and nondestructive allometry can play a vital role for reducing uncertainty in calibration and validation data for biomass mapping in the upcoming NASA and ESA missions.
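Allometric equations of the kind discussed above are conventionally power laws, W = a * D**b, fitted by linear regression in log-log space. The sketch below shows that conventional fit (illustrative names, no bias correction or uncertainty handling, and not the authors' TLS-based procedure):

```python
import numpy as np

def fit_allometry(dbh_cm, biomass_kg):
    """Fit the power-law allometry W = a * D**b by least squares in log-log
    space: ln W = ln a + b * ln D. Returns (a, b)."""
    b, log_a = np.polyfit(np.log(dbh_cm), np.log(biomass_kg), 1)
    return np.exp(log_a), b
```

The TLS approach in the paper replaces the destructively harvested calibration sample with nondestructive 3-D tree reconstructions, but the fitted functional form stays the same.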

Michael Schlund;Klaus Scipal;Shaun Quegan; "Assessment of a Power Law Relationship Between P-Band SAR Backscatter and Aboveground Biomass and Its Implications for BIOMASS Mission Performance," vol.11(10), pp.3538-3547, Oct. 2018. This paper presents an analysis of a logarithmic relationship between P-band cross-polarized backscatter from synthetic aperture radar (SAR) and aboveground biomass (AGB) across different forest types based on multiple airborne datasets. It is found that the logarithmic function provides a statistically significant fit to the observed relationship between HV backscatter and AGB. While the coefficient of determination varies between datasets, the slopes and intercepts of many of the models are not significantly different, especially when similar AGB ranges are assessed. Pooled boreal and pooled tropical data have slopes that are not significantly different, but they have different intercepts. Using the power law formulation of the logarithmic relation allows estimation of both the equivalent number of looks (ENL) needed to retrieve AGB with a given uncertainty and the sensitivity of the AGB inversion. The campaign data indicate that boreal forests require a larger ENL than tropical forests to achieve a specified relative accuracy. The ENL can be increased by multichannel filtering, but ascending and descending images will need to be combined to meet the performance requirements of the BIOMASS mission. The analysis also indicates that the relative change in AGB associated with a given backscatter change depends only on the magnitude of the change and the exponent of the power law, and further implies that to achieve a relative AGB accuracy of 20% or better, residual errors from radiometric distortions produced by the system and environmental effects must not exceed 0.43 dB in tropical and 0.39 dB in boreal forests.
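The power-law formulation makes the sensitivity statement above easy to reproduce: with sigma0 = C * W**q (equivalently, sigma_dB = a + 10*q*log10 W), a backscatter change in dB maps directly to a relative AGB change that depends only on the change and the exponent q. A minimal sketch; the exponent used in the usage note is illustrative, not a value taken from the paper:

```python
def agb_relative_change(delta_sigma_db, q):
    """Relative AGB change implied by a backscatter change of delta_sigma_db.

    With sigma0 = C * W**q, i.e., sigma_dB = a + 10*q*log10(W):
        W_new / W_old = 10 ** (delta_sigma_db / (10 * q)),
    so the relative change is that ratio minus 1. Depends only on the
    backscatter change and the power-law exponent q, as the paper notes.
    """
    return 10.0 ** (delta_sigma_db / (10.0 * q)) - 1.0
```

For example, with an illustrative exponent q of about 0.54, a 0.43 dB radiometric error maps to roughly a 20% relative AGB change, the same order as the requirement quoted in the abstract.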

Maciej J. Soja;Henrik J. Persson;Lars M. H. Ulander; "Modeling and Detection of Deforestation and Forest Growth in Multitemporal TanDEM-X Data," vol.11(10), pp.3548-3563, Oct. 2018. This paper compares three approaches to forest change modeling in multitemporal (MT) InSAR data acquired with the X-band system TanDEM-X over a forest with known topography. Volume decorrelation is modeled with the two-level model (TLM), which describes forest scattering using two parameters: forest height h and vegetation scattering fraction ζ, accounting for both canopy cover and electromagnetic scattering properties. The single-temporal (ST) approach allows both h and ζ to change between acquisitions. The MT approach keeps h constant and models all change by varying ζ. The MT growth (MTG) approach is based on MT, but it accounts for height growth by letting h have a constant annual increase. Monte Carlo simulations show that MT is more robust than ST with respect to coherence and phase calibration errors and height estimation ambiguities. All three inversion approaches are also applied to 12 VV-polarized TanDEM-X acquisitions made during the summers of 2011-2014 over Remningstorp, a hemiboreal forest in southern Sweden. MT and MTG show better height estimation performance than ST, and MTG provides more consistent canopy cover estimates than MT. For MTG, the root-mean-square difference is 1.1 m (6.6%; r = 0.92) for forest height and 0.16 (22%; r = 0.48) for canopy cover, compared with similar metrics from airborne lidar scanning (ALS). The annual height increase estimated with MTG is found to be correlated with a related ALS metric, although a bias is observed. A deforestation detection method is proposed, correctly detecting 15 out of 19 areas with canopy cover loss above 50%.

Albert R. Monteith;Lars M. H. Ulander; "Temporal Survey of P- and L-Band Polarimetric Backscatter in Boreal Forests," vol.11(10), pp.3564-3577, Oct. 2018. Environmental conditions and seasonal variations affect the backscattered radar signal from a forest. This potentially causes errors in a biomass retrieval scheme using synthetic aperture radar (SAR) data. A better understanding of these effects and the electromagnetic scattering mechanisms in forests is required to improve biomass estimation algorithms for current and upcoming P- and L-band SAR missions. In this paper, temporal changes in HH-, VV-, and HV-polarized P- and L-band radar backscatter and temporal coherence from a boreal forest site are analyzed in relation to environmental parameters. The radar data were collected from a stand of mature Norway spruce (Picea abies (L.) Karst.) with an above-ground biomass of approximately 250 tons/ha at intervals of 5 min from January to August 2017 using the BorealScat tower-based scatterometer. It was observed that subzero temperatures during winter cause large variations (4 to 10 dB) in P- and L-band backscatter, for which the HH/VV backscatter ratio offered some mitigation. High wind speeds were also seen to cause deviations in the average backscatter at P-band due to decreased double-bounce scattering. Severe temporal decorrelation was observed at L-band over timescales of days or more, whereas the P-band temporal coherence remained high (>0.9) for at least a month, excluding windy periods. Temporal coherence at P-band was highest at night, when wind speeds are low.

Wenjian Ni;Guoqing Sun;Yong Pang;Zhiyu Zhang;Jianli Liu;Aqiang Yang;Yao Wang;Dafeng Zhang; "Mapping Three-Dimensional Structures of Forest Canopy Using UAV Stereo Imagery: Evaluating Impacts of Forward Overlaps and Image Resolutions With LiDAR Data as Reference," vol.11(10), pp.3578-3589, Oct. 2018. The application of aerial stereo imagery to the measurement of forest three-dimensional structures has grown in recent years due to the rapid development of unmanned aerial vehicle platforms and automatic processing algorithms for stereo images. Yet it remains unclear how the description of forest three-dimensional structures is affected by the settings of critical acquisition parameters of stereo images. This study systematically addressed the impacts of image resolution and forward overlap over a broad range, using LiDAR data as reference. The different combinations of image resolutions and forward overlaps used in this study were produced by image average downsampling and subsetting. Their performances were evaluated from four aspects: computation load, point density, estimation of canopy height indices at the forest stand level, and the vertical distribution of point clouds over forest stands and along forest transects with different levels of canopy closure. Results showed that the coupling between image resolution and forward overlap in data processing should be given full consideration. Generally, finer image resolutions require higher forward overlaps; otherwise, much of the area could not be detected and sparse trees were easily missed. It was a better choice to degrade image resolution while keeping forward overlap in data processing if many blank areas appeared or sparse trees could not be detected. A good match between image resolution and forward overlap could dramatically reduce the computation load while preserving estimation accuracy.

Hui Zhou;Yuwei Chen;Juha Hyyppä;Ziyi Feng;Fashuai Li;Teemu Hakala;Xinmin Xu;Xiaolei Zhu; "Estimation of Canopy Height Using an Airborne Ku-Band Frequency-Modulated Continuous Waveform Profiling Radar," vol.11(10), pp.3590-3597, Oct. 2018. An airborne Ku-band frequency-modulated continuous waveform (FMCW) profiling radar, termed Tomoradar, provides a distance-resolved measure of microwave radiation backscattered from the canopy surface and the underlying ground. The Tomoradar waveform data were acquired in the southern Boreal Forest Zone in Finland, with Scots pine, Norway spruce, and birch as the major species. A weighted filtering algorithm based on statistical properties of the noise is designed to process the original waveform. In addition, an algorithm for estimating canopy height from the processed waveform is developed by extracting the canopy top and ground positions. Higher-precision reference data from a Velodyne VLP-16 laser scanner and a digital terrain model are introduced to validate the accuracy of the extracted canopy height. According to the processed results from 127 765 copolarization measurements in 32 stripes of the Tomoradar field test, the mean error of canopy height varies from -0.04 to 1.53 m, and the root-mean-square error is approximately 1 m. Moreover, the estimated canopy heights correlate highly with the reference data, as the correlation coefficients range from 0.86 to 0.99 with an average value of 0.96. These results demonstrate that Tomoradar offers an important approach to estimating canopy height with a footprint of several meters and is feasible as a validation instrument for large-footprint satellite LiDAR in forest inventory.
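Extracting canopy top and ground positions from a processed waveform, as described above, reduces to thresholding against the noise floor and differencing the first and last threshold crossings. The routine below is a simplified illustration, not Tomoradar's actual algorithm; the assumption that the leading bins are noise-only and the threshold factor are both invented for the sketch:

```python
import numpy as np

def canopy_height_from_waveform(power, range_bins, noise_sigma_mult=4.0):
    """Estimate canopy height as the range difference between the first
    (canopy top) and last (ground) waveform samples above a noise threshold.

    power: waveform amplitude per bin; range_bins: range (m) per bin,
    increasing away from the sensor. Assumes the first 20 bins are noise-only.
    """
    noise = power[:20]
    thresh = noise.mean() + noise_sigma_mult * noise.std()
    above = np.nonzero(power > thresh)[0]
    if above.size == 0:
        return None                              # no signal detected
    top, ground = range_bins[above[0]], range_bins[above[-1]]
    return ground - top
```

A real implementation would first apply the paper's weighted noise filtering and would need to handle sloped terrain and multiple ground candidates.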

Jiri Pyörälä;Xinlian Liang;Mikko Vastaranta;Ninni Saarinen;Ville Kankare;Yunsheng Wang;Markus Holopainen;Juha Hyyppä; "Quantitative Assessment of Scots Pine (Pinus Sylvestris L.) Whorl Structure in a Forest Environment Using Terrestrial Laser Scanning," vol.11(10), pp.3598-3607, Oct. 2018. State-of-the-art technology available at sawmills enables measurements of whorl numbers and the maximum branch diameter for individual logs, but such information is currently unavailable at the wood procurement planning phase. The first step toward more detailed evaluation of standing timber is to introduce a method that produces similar wood quality indicators in standing forests to those currently used in sawmills. Our aim was to develop a quantitative method to detect and model branches from terrestrial laser scanning (TLS) point cloud data of trees in a forest environment. The test data were obtained from 158 Scots pines (Pinus sylvestris L.) in six mature forest stands. The method was evaluated for the accuracy of the following branch parameters: the number of whorls per tree and, for every whorl, the maximum branch diameter and the associated branch insertion angle. The analysis concentrated on log sections (stem diameter >15 cm), where branches most affect the added value of wood. The quantitative whorl detection method had an accuracy of 69.9% and a 1.9% false positive rate. The estimates of the maximum branch diameters and the corresponding insertion angles for each whorl were underestimated by 0.34 cm (11.1%) and 0.67° (1.0%), with a root-mean-squared error of 1.42 cm (46.0%) and 17.2° (26.3%), respectively. Distance from the scanner, occlusion, and wind were the main external factors that affected the method's functionality. Thus, the completeness and point density of the data should be addressed when applying TLS point cloud based tree models to assess branch parameters.

Tobias Bollian;Batuhan Osmanoglu;Rafael F. Rincon;Seung-Kuk Lee;Temilola E. Fatoyinbo; "Detection and Geolocation of P-Band Radio Frequency Interference Using EcoSAR," vol.11(10), pp.3608-3616, Oct. 2018. The high penetration of P-band signals (300-600 MHz) into dense vegetation and their higher temporal stability at low frequencies are key advantages for the estimation of forest properties using synthetic aperture radar (SAR). However, existing services at those frequencies make P-band SAR imagers more vulnerable to radio frequency interference (RFI). In this paper, a method to detect and geolocate RFI using digital beamforming (DBF) is presented. The method is implemented using NASA's EcoSAR measurements. This P-band multichannel radar uses a sniffing pulse interleaved during DBF SAR operation to sense RFI. RFI detection is implemented with time-bandpass filters, while DBF is used to estimate the angle of arrival and geolocate the interference. The method is demonstrated for an interferer, showing how EcoSAR could be used to assess RFI threats to spaceborne missions.

Nicolas Labrière;Shengli Tao;Jérôme Chave;Klaus Scipal;Thuy Le Toan;Katharine Abernethy;Alfonso Alonso;Nicolas Barbier;Pulchérie Bissiengou;Tânia Casal;Stuart J. Davies;Antonio Ferraz;Bruno Hérault;Gaëlle Jaouen;Kathryn J. Jeffery;David Kenfack;Lisa Korte;Simon L. Lewis;Yadvinder Malhi;Hervé R. Memiaghe;John R. Poulsen;Maxime Réjou-Méchain;Ludovic Villard;Grégoire Vincent;Lee J. T. White;Sassan Saatchi; "In Situ Reference Datasets From the TropiSAR and AfriSAR Campaigns in Support of Upcoming Spaceborne Biomass Missions," vol.11(10), pp.3617-3627, Oct. 2018. Tropical forests are a key component of the global carbon cycle. Yet, there are still high uncertainties in forest carbon stock and flux estimates, notably because of their spatial and temporal variability across the tropics. Several upcoming spaceborne missions have been designed to address this gap. High-quality ground data are essential for accurate calibration/validation so that spaceborne biomass missions can reach their full potential in reducing uncertainties regarding forest carbon stocks and fluxes. The BIOMASS mission, a P-band SAR satellite from the European Space Agency (ESA), aims at improving carbon stock mapping and reducing uncertainty in the carbon fluxes from deforestation, forest degradation, and regrowth. In situ activities in support of the BIOMASS mission were carried out in French Guiana and Gabon during the TropiSAR and AfriSAR campaigns. During these campaigns, airborne P-band SAR, forest inventory, and lidar data were collected over six study sites. This paper describes the methods used for forest inventory and lidar data collection and analysis, and presents resulting plot estimates and aboveground biomass maps. 
These reference datasets, along with intermediate products (e.g., canopy height models), can be accessed through ESA's Forest Observation System and the Dryad data repository, and will be useful for calibration/validation not only of BIOMASS but also of other spaceborne biomass missions such as GEDI, NISAR, and Tandem-L. During data quality control and analysis, prospects for reducing uncertainties were identified, and this paper concludes with a series of recommendations for future tropical forest field campaigns to better serve the remote sensing community.

Lin Sun;Xueying Zhou;Jing Wei;Quan Wang;Xinyan Liu;Meiyan Shu;Tingting Chen;Yulei Chi;Wenhua Zhang; "A New Cloud Detection Method Supported by GlobeLand30 Data Set," vol.11(10), pp.3628-3645, Oct. 2018. In traditional threshold methods, uniform thresholds are used for cloud detection in remote sensing images; however, due to complex surface structures and cloud conditions, such an approach is typically difficult to implement effectively for high-precision cloud detection. To solve this problem, a new cloud detection algorithm based on global land cover data is proposed. Specifically, the 30-m-resolution GlobeLand30 global land cover data set was employed as background data to support cloud detection in remote sensing images, so that threshold settings can be varied for different land cover types. Such an algorithm can effectively improve the accuracy of cloud pixel identification for thin and broken clouds, even over bright areas. Landsat 5 data are used to perform cloud detection experiments based on this algorithm. The thresholds account for land cover variations and vary spatiotemporally; for example, the thresholds for vegetation differ by latitude and over time. Six common land cover types are selected for the cloud detection experiments. Validation analyses conducted through visual interpretation indicate that the algorithm is capable of achieving high cloud detection accuracy. Specifically, the overall RMSE of cloud cover is 4.44%, and the accuracies of cloud and clear-sky pixel identification are 86.5% and 98.7%, respectively.
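The core idea of varying the threshold with the background class can be sketched as follows (the class names and threshold values here are hypothetical illustrations, not values from the paper):

```python
def classify_cloud(reflectance, land_cover, thresholds):
    """Flag a pixel as cloud if its reflectance exceeds the threshold
    assigned to its background land-cover class, instead of applying
    one uniform global cutoff (values below are illustrative)."""
    return reflectance > thresholds[land_cover]

# Hypothetical per-class thresholds: bright backgrounds need a higher cutoff.
THRESHOLDS = {"water": 0.20, "vegetation": 0.30, "bare_soil": 0.45}
```

With a uniform threshold, a pixel of reflectance 0.35 would be classified the same everywhere; with per-class thresholds it is cloud over water but clear over bright bare soil.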

Robert F. Hardy;Chuanmin Hu;Blair Witherington;Brian Lapointe;Anne Meylan;Ernst Peebles;Leo Meirose;Shigetomo Hirama; "Characterizing a Sea Turtle Developmental Habitat Using Landsat Observations of Surface-Pelagic Drift Communities in the Eastern Gulf of Mexico," vol.11(10), pp.3646-3659, Oct. 2018. Compared with our understanding of most aspects of sea turtle biology, knowledge of the surface-pelagic juvenile life stages remains limited. Young North Atlantic cheloniids (hard-shelled sea turtles) are closely associated with surface-pelagic drift communities (SPDCs), which are dominated by macroalgae of the genus Sargassum. We quantified SPDCs in the eastern Gulf of Mexico, a region that hosts four species of cheloniids during their surface-pelagic juvenile stage. Landsat satellite imagery was used to identify and measure the areal coverage of SPDCs in the eastern Gulf during 2003-2011 (1323 images). Although the SPDC coverage varied annually, seasonally, and spatially, SPDCs were present year-round, with an estimated mean area of SPDC in each Landsat image of 4.9 km2 (SD = 10.1). The area of SPDCs observed was inversely proportional to sea-surface wind velocity (Spearman's r = -0.33, p < 0.001). The SPDC coverage was greatest during 2005, 2009, and 2011 and least during 2004 and 2010, but the 2010 analysis was affected by the Deepwater Horizon oil spill, which occurred within the study region. In the eastern Gulf, the area of SPDC peaked during June-August of each year. Although the SPDC coverage appeared lower in the eastern Gulf than in other regions of the Gulf and the North Atlantic, surface-pelagic juvenile green, hawksbill, Kemp's ridley, and loggerhead turtles were found to be using this habitat, suggesting that eastern Gulf SPDCs provide developmental habitats that are critical to the recovery of four sea turtle species.
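The Spearman rank correlation reported above is computed from rank differences rather than raw values; a minimal sketch (no tie handling, for illustration only):

```python
def spearman_r(x, y):
    """Spearman rank correlation via the rank-difference formula
    r = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)); assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A perfectly monotone increasing relationship gives r = 1, a perfectly decreasing one gives r = -1, regardless of the raw magnitudes.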

Geraldine Rangmoen Rimven;Kevin S. Paulson;Timothy Bellerby; "Estimating One-Minute Rain Rate Distributions in the Tropics From TRMM Satellite Data (October 2017)," vol.11(10), pp.3660-3667, Oct. 2018. Internationally recognized prognostic models of rain fade on terrestrial and Earth-space extremely high frequency (EHF) links rely fundamentally on distributions of 1-min rain rates. In Rec. ITU-R P.837-6, these distributions are estimated from data provided by Numerical Weather Products (NWPs). NWPs yield rain accumulations over regions typically larger than 100 km across and over intervals of 6 h. Over the tropics, the Tropical Rainfall Measuring Mission (TRMM) satellite data yield instantaneous rain rates over regions 5 km across. This paper uses TRMM data to estimate rain rate distributions for telecommunications regulation over the tropics. Rain rate distributions are calculated for each 1° square between 35° south and 35° north. These distributions of instantaneous rain rates over 5 km squares are transformed to distributions over 1 km squares using a correction calculated from U.K. Nimrod radar data. Results are compared to rain distributions in DBSG3, the database of ITU-R Study Group 3. A comparison with the new Rec. ITU-R P.837-7 is also presented. A table of 0.01% exceeded rain rates over the tropics is provided as associated data.
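For orientation, a "0.01% exceeded rain rate" is simply a high percentile of the full rain-rate time series (including dry intervals); a minimal sketch using NumPy, with the function name and interpolation behavior as illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def exceeded_rain_rate(rates_mm_h, exceedance_pct=0.01):
    """Rain rate (mm/h) exceeded for `exceedance_pct` percent of the time.

    `rates_mm_h` should include zero-rain intervals so the percentile
    is taken over the whole time series, as in link-budget statistics.
    """
    rates = np.asarray(rates_mm_h, dtype=float)
    # The rate exceeded p% of the time is the (100 - p)th percentile.
    return float(np.percentile(rates, 100.0 - exceedance_pct))
```

Because most 1-min intervals are dry, the 0.01% exceeded rate is driven entirely by the extreme upper tail of the distribution.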

Felix Greifeneder;Claudia Notarnicola;Sebastian Hahn;Mariette Vreugdenhil;Christoph Reimer;Emanuele Santi;Simonetta Paloscia;Wolfgang Wagner; "The Added Value of the VH/VV Polarization-Ratio for Global Soil Moisture Estimations From Scatterometer Data," vol.11(10), pp.3668-3679, Oct. 2018. The successor to the current series of Metop advanced scatterometers (ASCATs), the Metop-SG SCA, will be able to record data in dual polarization at C-band. The aim of this study is to investigate whether the information contained in the added cross-polarization measurements can improve the vegetation parameterization for the estimation of soil moisture content (SMC). In the case of the operational Hydrology Satellite Application Facility Metop ASCAT soil moisture product, vegetation dynamics are characterized by the relationship between radar backscattering intensity and the incidence angle, the so-called SLOPE parameter. Building on findings from previous studies, the assumption is that the polarization ratio (PR), i.e., VH/VV, could improve this characterization. To verify this assumption, flexible approaches able to integrate a combination of ASCAT VV data and AQUARIUS (NASA) VH/VV data were required. Two machine learning methods, support vector regression and artificial neural networks, and one statistical approach, Bayesian regression, were chosen. Each of these methods was used to derive models with different input configurations, with and without characterization of vegetation. The results show that the information contained in the SLOPE parameter and in the PR is similar: based on a global average, almost identical SMC retrieval accuracies were achieved. Despite that, analysis of the temporal dynamics of SLOPE and PR revealed certain location-specific differences, which affect the spatial distribution of SMC retrieval accuracies. As a result, improvements based on the combination of the two parameters are minor overall, but they can be significant locally.

Xiang Li;Xiaojing Yao;Yi Fang; "Building-A-Nets: Robust Building Extraction From High-Resolution Remote Sensing Images With Adversarial Networks," vol.11(10), pp.3680-3687, Oct. 2018. With the proliferation of high-resolution remote sensing sensors and platforms, vast amounts of aerial image data are becoming easily accessible. High-resolution aerial images provide sufficient structural and texture information for image recognition while also raising new challenges for existing segmentation methods. In recent years, deep neural networks have gained much attention in the remote sensing field and have achieved remarkable performance in high-resolution remote sensing image segmentation. However, spatial inconsistency problems still exist, caused by independent pixelwise classification that ignores high-order regularities. In this paper, we develop a novel deep adversarial network, named Building-A-Nets, that jointly trains a deep convolutional neural network (generator) and an adversarial discriminator network for the robust segmentation of building rooftops in remote sensing images. More specifically, the generator produces a pixelwise image classification map using a fully convolutional DenseNet model, whereas the discriminator enforces forms of high-order structural features learned from the ground-truth label map. The generator and discriminator compete with each other in an adversarial learning process until the equilibrium point is reached, producing the optimal segmentation map of building objects. Meanwhile, a soft weight coefficient is adopted to balance the pixelwise classification and high-order structural feature learning. Experimental results show that our Building-A-Nets can successfully detect and rectify spatial inconsistency on aerial images while achieving superior performance compared to other state-of-the-art building extraction methods. Code is available at

Yibo Liu;Zhenxin Zhang;Ruofei Zhong;Dong Chen;Yinghai Ke;Jiju Peethambaran;Chuqun Chen;Lan Sun; "Multilevel Building Detection Framework in Remote Sensing Images Based on Convolutional Neural Networks," vol.11(10), pp.3688-3700, Oct. 2018. In this paper, we propose a hierarchical building detection framework based on a deep learning model, which focuses on accurately detecting buildings in remote sensing images. To this end, we first construct the generation model of multilevel training samples using the Gaussian pyramid technique to learn the features of building objects at different scales and spatial resolutions. Then, building region proposal networks are put forward to quickly extract candidate building regions, thereby increasing the efficiency of building object detection. Based on the candidate building regions, we establish the multilevel building detection model using convolutional neural networks (CNNs), from which the generic image features of each building region proposal are calculated. Finally, the obtained features are provided as inputs for training the CNN model, and the learned model is further applied to test images for the detection of unknown buildings. Various experiments using Datasets I and II (in Section V-A) show that the proposed framework increases the mean average precision of building detection by 3.63%, 3.85%, and 3.77% compared with the state-of-the-art methods, i.e., Method IV. Besides, the proposed method is robust to buildings with different spatial textures and types.

Yifan Pan;Xianfeng Zhang;Guido Cervone;Liping Yang; "Detection of Asphalt Pavement Potholes and Cracks Based on the Unmanned Aerial Vehicle Multispectral Imagery," vol.11(10), pp.3701-3712, Oct. 2018. Asphalt roads are a basic component of a land transportation system, and the quality of asphalt roads decreases during the use stage because of the aging and deterioration of the road surface. Eventually, pavement distresses may appear on the road surface, the most common being potholes and cracks. To improve the efficiency of pavement inspection, new forms of remote sensing data that have no destructive effect on the pavement are now widely used to detect pavement distresses, such as digital images, light detection and ranging, and radar. Multispectral imagery, presenting spatial and spectral features of objects, has been widely used in remote sensing applications. In our study, multispectral pavement images acquired by an unmanned aerial vehicle (UAV) were used to distinguish between normal pavement and pavement damage (e.g., cracks and potholes) using machine learning algorithms such as support vector machines, artificial neural networks, and random forests. A comparison of the performance of different data types and models was conducted and is discussed in this study; it indicates that a UAV remote sensing system offers a new tool for monitoring asphalt road pavement condition and can serve as decision support for road maintenance practice.

Zhiguo Meng;Qingshuai Wang;Huihui Wang;Tianxing Wang;Zhanchuan Cai; "Potential Geologic Significances of Hertzsprung Basin Revealed by CE-2 CELMS Data," vol.11(10), pp.3713-3720, Oct. 2018. The Hertzsprung basin (2 °N, 128 °W) is a relatively well-preserved Nectarian-age basin on the Moon's far side with a diameter of 570 km. Low FeO+TiO2 abundance (FTA) and substantial topographic elevation change make the Hertzsprung basin particularly relevant for evaluating the microwave thermal emission (MTE) mechanism over the lunar surface. In this study, microwave sounder (CELMS) data from the Chang'E-2 satellite are employed to investigate the MTE features of the Hertzsprung basin, combined with FTA, surface slope, and rock abundance data. The results are as follows. First, the influence of topography on the CELMS data is essentially negligible, at least in low-latitude regions. Second, the brightness temperature (TB) behavior in the basin floor is similar to that in the nearby highlands, indicating homogeneity of the regolith thermophysical parameters in the highland areas. Third, abnormally low TB is interpreted as indicating the existence of large boulders, which exert two distinct and influential mechanisms on the observed TB. Fourth, a warm interior of the shallow lunar crust is inferred from the high TB anomaly in the basin floor, Michelson crater, and the southeastern region. Finally, the Orientale impact event exerted limited influence on the thermophysical structures of the lunar regolith in the study area.

Shisheng Guo;Guolong Cui;Lingjiang Kong;Yilin Song;Xiaobo Yang; "Multipath Analysis and Exploitation for MIMO Through-the-Wall Imaging Radar," vol.11(10), pp.3721-3731, Oct. 2018. This paper considers multipath modeling and exploitation problems for multiple-input multiple-output through-the-wall imaging radar. A multipath propagation model involving two parallel walls is established, and the position distribution characteristics of the multipath ghosts in the radar image are analyzed by means of geometric approximation. We find that the locations of the walls can be obtained from the geometric relationship between the true target and its multipath ghosts. After applying a constant false alarm rate technique and a clustering approach, the coordinates of the multipath ghosts and the concealed target are extracted, and the positions of the two walls are obtained. Finally, simulation and experimental results validate the correctness of the multipath model and the validity of the multipath-exploitation-based wall localization method.
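Under a simplified mirror-image idealization (a sketch of the kind of geometric relationship described, not the paper's exact formulation), a first-order multipath ghost appears at the target's reflection across a wall, so the wall offset falls at the midpoint between target and ghost:

```python
def wall_position_from_ghost(target_y, ghost_y):
    """Simplified 1-D mirror model: if a first-order multipath ghost is
    the target's reflection across a flat wall, the wall's cross-range
    offset is the midpoint of the target and ghost positions."""
    return 0.5 * (target_y + ghost_y)
```

For example, a target at 2 m and its ghost at 8 m would place the reflecting wall at 5 m under this idealization.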

Minh-Tan Pham; "Fusion of Polarimetric Features and Structural Gradient Tensors for VHR PolSAR Image Classification," vol.11(10), pp.3732-3742, Oct. 2018. This paper proposes a fast texture-based supervised classification framework for fully polarimetric synthetic aperture radar (PolSAR) images with very high spatial resolution (VHR). With the development of recent polarimetric radar remote sensing technologies, the acquired images contain not only rich polarimetric characteristics but also high spatial content. Thus, the notion of geometrical structures and heterogeneous textures within VHR PolSAR data becomes more and more significant. Moreover, as the spatial resolution increases, we need to deal with large image data. In this paper, our motivation is to characterize textures by incorporating (fusing) both polarimetric and structural features, and then to use them for classification. First, polarimetric features from the weighted coherency matrix and local geometric information based on the Di Zenzo structural tensors are extracted and fused using the covariance approach. Then, the supervised classification task is performed using a Riemannian distance measure relevant for covariance-based descriptors. To accelerate the computation, we propose to perform texture description and classification only on characteristic points rather than all pixels of the image. Experiments conducted on the VHR F-SAR data as well as the AIRSAR Flevoland image using the proposed framework provide very promising and competitive results in terms of terrain classification and discrimination.

Nan Ge;Fernando Rodriguez Gonzalez;Yuanyuan Wang;Yilei Shi;Xiao Xiang Zhu; "Spaceborne Staring Spotlight SAR Tomography—A First Demonstration With TerraSAR-X," vol.11(10), pp.3743-3756, Oct. 2018. With the objective of exploiting hardware capabilities and preparing the ground for the next-generation X-band synthetic aperture radar (SAR) missions, TerraSAR-X and TanDEM-X are now able to operate in staring spotlight mode, which is characterized by an increased azimuth resolution of approximately 0.24 m compared with 1.1 m of the conventional sliding spotlight mode. In this paper, we demonstrate for the first time its potential for SAR tomography (TomoSAR). To this end, we tailored our interferometric and tomographic processors for the distinctive features of the staring spotlight mode, which will be analyzed accordingly. By means of its higher spatial resolution, the staring spotlight mode will not only lead to a denser point cloud but also to more accurate height estimates due to the higher signal-to-clutter ratio. As a result of a first comparison between sliding and staring spotlight TomoSAR, first, the density of the staring spotlight point cloud is approximately 5.1-5.5 times as high; and, second, the relative height accuracy of the staring spotlight point cloud is approximately 1.7 times as high.

Marc Lort;Albert Aguasca;Carlos López-Martínez;Xavier Fabregas; "Impact of Wind-Induced Scatterers Motion on GB-SAR Imaging," vol.11(10), pp.3757-3768, Oct. 2018. Ground-based synthetic aperture radar (GB-SAR) sensors represent a cost-effective solution for change detection and ground displacement assessment of small-scale areas in real-time early warning applications. GB-SAR systems based on stepped linear frequency-modulated continuous wave signals have led to several improvements, such as a significant reduction of the acquisition time. Nevertheless, the acquisition time is still long enough to degrade the quality of the reconstructed images because of possible short-term variable reflectivity of the scenario, which may in turn degrade the differential interferometric detection process. In scenarios where targets of interest are surrounded by vegetation, this variability is normally related to atmospheric conditions, in particular the wind. The present paper characterizes the effect of short-term variable reflectivity on GB-SAR image reconstruction and evaluates its equivalent blurring effect, the decorrelation introduced in the SAR images, and the degradation of the extracted parameters. To validate the results, the study assesses different GB-SAR images obtained with the RISKSAR-X sensor, which was developed by the Universitat Politècnica de Catalunya.

Zheng Wang;Zhenhong Li;Jon P. Mills; "A New Nonlocal Method for Ground-Based Synthetic Aperture Radar Deformation Monitoring," vol.11(10), pp.3769-3781, Oct. 2018. Ground-based synthetic aperture radar (GBSAR) interferometry offers an effective solution for the monitoring of surface displacements with high precision. However, coherence estimation and phase filtering in GBSAR interferometry is often based on a rectangular window, resulting in estimation biases and resolution loss. To address these issues, conventional nonlocal methods developed for spaceborne synthetic aperture radar are investigated with GBSAR data for the first time. Based on investigation and analysis, an efficient similarity measure is proposed to identify pixels with similar amplitude behaviors and a comprehensive nonlocal method is presented on the basis of this concept with the aim of overcoming current limitations. Pixels with high similarity are identified from a large search window for each point based on a stack of GBSAR single look complex images. Coherence is calculated based on the selected sibling pixels and then enhanced by the second kind statistic estimator. Nonlocal means filtering is also performed based on the sibling pixels to reduce interferometric phase noise. Experiments were conducted using short- and long-term GBSAR interferograms. Qualitative and quantitative analyses of the proposed nonlocal method and other existing techniques demonstrate that the new approach has advantages in terms of coherence estimation and phase filtering capability. The proposed method was integrated into a complete GBSAR small baseline subset algorithm and a time series analysis was achieved for two stacks of data sets. Considered alongside experimental results, this successful application demonstrates the feasibility of the proposed nonlocal method to facilitate the adoption of GBSAR for deformation monitoring applications.

Zefa Yang;Zhiwei Li;Jianjun Zhu;Axel Preusse;Jun Hu;Guangcai Feng;Markus Papst; "High-Resolution Three-Dimensional Displacement Retrieval of Mining Areas From a Single SAR Amplitude Pair Using the SPIKE Algorithm," vol.11(10), pp.3782-3793, Oct. 2018. High-resolution three-dimensional (3-D) displacements of mining areas are crucial to assess mining-related geohazards and understand the mining deformation mechanism. In 2018, we proposed a cost-effective and robust method for retrieving mining-induced 3-D displacements from a single SAR amplitude pair (SAP) using offset tracking (OT) procedures. Hereafter, we refer to this method as the "alternative OT-SAP" (AOT-SAP) method. A key step in the AOT-SAP method is solving for the 3-D surface displacements from the AOT-SAP-constructed linear system using routine lower-upper (LU) factorization. However, if the AOT-SAP method is used to retrieve high-resolution 3-D displacements, the dimension of the linear system becomes very large (on the order of millions), and a high-end supercomputer is often needed to perform the LU-based solving procedure. This significantly narrows the practical application of the AOT-SAP method, considering the limited availability of supercomputers. In this paper, owing to the banded nature of the AOT-SAP-constructed linear system, we introduce the SPIKE algorithm as an alternative to LU factorization to solve for high-resolution mining-induced 3-D displacements. The SPIKE algorithm is a divide-and-conquer direct solver for large banded linear systems that can run in parallel or sequentially, with a much smaller memory requirement and a shorter time cost than LU factorization. This allows us to retrieve the high-resolution 3-D mining-induced displacements with the AOT-SAP method on either a supercomputer or a standard personal computer.
Finally, the accuracy of the retrieved 3-D displacements and the efficiency improvement of the SPIKE algorithm were tested using both simulation analysis and a real dataset.
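The SPIKE algorithm itself is involved, but the payoff of exploiting bandedness can be illustrated with the simplest banded case: a tridiagonal system solved by the Thomas algorithm in O(n) time and memory, versus O(n^3) for dense LU factorization (a generic sketch, not the paper's solver):

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    Forward elimination followed by back substitution, touching each
    row once, which is why banded solvers scale to huge systems.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]          # eliminate sub-diagonal
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

SPIKE generalizes this idea to wider bands by partitioning the band into blocks that can be factorized independently and coupled through a much smaller reduced system.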

Moussa Amrani;Feng Jiang;Yunzhong Xu;Shaohui Liu;Shengping Zhang; "SAR-Oriented Visual Saliency Model and Directed Acyclic Graph Support Vector Metric Based Target Classification," vol.11(10), pp.3794-3810, Oct. 2018. The performance of a synthetic aperture radar (SAR) automatic target recognition system mainly depends on feature extraction and classification. It is crucial to select discriminative features to train a classifier to achieve the desired performance. In this paper, we propose an efficient feature extraction and classification algorithm based on a visual saliency model. First, an SAR-oriented graph-based visual saliency model is introduced. Second, relying on the ability of our saliency model to highlight the most significant regions, Gabor and histogram of oriented gradients features are extracted from the processed SAR images. Third, to obtain more discriminative features, the discrimination correlation analysis algorithm is used for feature fusion and combination. Finally, a two-level directed acyclic graph (DAG) support vector metric learning method is developed that takes advantage of a two-level DAG, which eliminates weak classifiers, and of a Mahalanobis distance-based radial basis function kernel, which emphasizes relevant features and reduces the influence of noninformative features. Experiments on real SAR data from the MSTAR database are conducted, and the experimental results demonstrate that the proposed method outperforms the state-of-the-art methods.

Guoqiang Shi;Hui Lin;Peifeng Ma; "A Hybrid Method for Stability Monitoring in Low-Coherence Urban Regions Using Persistent and Distributed Scatterers," vol.11(10), pp.3811-3821, Oct. 2018. To better monitor infrastructure stability in urban and suburban regions, we propose an improved method based on the combined analysis of persistent scatterers (PSs) and distributed scatterers (DSs). Previous work [13] is extended to exploit DS measurements using an adaptive homogeneous filter and a Capon-based estimator. In this paper, PSs are detected as reference points in the first-tier network. Parameters along network arcs are estimated through the integrated use of beamforming and an M-estimator. In the second-tier network, we design an adaptive homogeneous filter to cluster statistically homogeneous pixels. DS candidates are then connected to their nearest PS references, establishing the DS network. We estimate DS parameters using Capon beamforming. As the proposed method can provide more complete deformation details of low-coherence targets, it is more effective for stability monitoring. Results from 28 C-band ENVISAT-ASAR scenes of the Hong Kong International Airport are presented.

Yongguang Zhao;Lingling Ma;Chuanrong Li;Caixia Gao;Ning Wang;Lingli Tang; "Radiometric Cross-Calibration of Landsat-8/OLI and GF-1/PMS Sensors Using an Instrumented Sand Site," vol.11(10), pp.3822-3829, Oct. 2018. The panchromatic and multispectral (PMS1/PMS2) sensors, each designed with one panchromatic band and four multispectral bands, are two optical cameras on board the Gao Fen 1 (GF-1) satellite launched on April 26, 2013. The spatial resolutions of the panchromatic and multispectral bands are 2 and 8 m, respectively. The vicarious calibration of GF-1/PMS was performed by the China Centre for Resources Satellite Data and Application, and the calibration coefficients were released once a year. Due to the lack of an on-board calibrator, evaluating the on-orbit radiometric performance of the PMS sensors is very important for further quantitative application of the data. To evaluate the radiometric performance of the PMS1 sensor on board the GF-1 satellite, this paper calibrates the PMS1 sensor against the well-calibrated OLI sensor on board Landsat-8. An instrumented sand site was used, located in the national high-resolution remote sensing comprehensive calibration site in Baotou, China. The cross-calibration coefficients of PMS1 on October 5, 2015 and August 20, 2016 are calculated and compared with the official calibration coefficients published in 2015 and 2016. The results show that cross-calibration of the PMS sensor against other sensors is necessary, due to the low update frequency of the calibration coefficients for the GF-1/PMS sensor. The uncertainty estimation results show that the total uncertainties associated with the cross-calibration of the Landsat-8/OLI and GF-1/PMS sensors over a desert site are within 5.5%.
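At its core, a radiometric cross-calibration of this kind reduces to fitting a gain relating the target sensor's digital numbers to the reference sensor's at-sensor radiances over the common site; a minimal least-squares-through-the-origin sketch (function and variable names are illustrative, not the paper's formulation, which also handles spectral band adjustment and geometry):

```python
def cross_calibration_gain(dn_target, radiance_reference):
    """Fit gain g minimizing sum((g * DN - L_ref)^2), i.e., a linear
    radiometric calibration through the origin: L ≈ g * DN."""
    num = sum(d * r for d, r in zip(dn_target, radiance_reference))
    den = sum(d * d for d in dn_target)
    return num / den
```

In practice, an offset term and per-band spectral band adjustment factors would be included before comparing the two sensors.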

Zhuocheng Jiang;W. David Pan;Hongda Shen; "Universal Golomb–Rice Coding Parameter Estimation Using Deep Belief Networks for Hyperspectral Image Compression," vol.11(10), pp.3830-3840, Oct. 2018. For efficient compression of hyperspectral images, we propose a universal Golomb-Rice coding parameter estimation method using a deep belief network, which does not rely on any assumption about the distribution of the input data. We formulate the problem of selecting the best coding parameter for a given input sequence as a supervised pattern classification problem. Simulations on synthesized data and five hyperspectral image datasets show that we can achieve significantly more accurate estimation of the coding parameters, which translates to slightly higher compression than three state-of-the-art methods. More extensive simulations on additional images from the 2006 AVIRIS datasets show that the proposed method achieves overall compression bitrates comparable with other estimation methods, as well as with the sample-adaptive entropy coder employed by the Consultative Committee for Space Data Systems standard for multispectral and hyperspectral data compression. Regarding computational feasibility, we show how to use transferable deep belief networks to speed up training by about five times. We also show that inferring the best coding parameters using a trained deep belief network offers computational advantages over the brute-force search method. As an extension, we propose a novel side-information-free codec, where intersequence correlations can be learned by a differently trained network based on the current sequence to predict reasonably good parameters for coding the next sequence. As another extension, we introduce a variable feature combination architecture, where problem-specific heuristics such as the sample means can be incorporated to further improve the estimation accuracy.
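For context, the brute-force baseline that the trained network replaces can be sketched as: compute the total Golomb-Rice code length of the sequence for each candidate parameter k and keep the minimizer (a generic sketch of Rice coding cost, not the paper's implementation):

```python
def rice_code_length(x, k):
    """Bits to Rice-code nonnegative integer x with parameter k:
    unary-coded quotient (x >> k) plus a terminating bit, then k
    binary remainder bits."""
    return (x >> k) + 1 + k

def best_rice_parameter(samples, k_max=15):
    """Brute-force search: total code length for each k, keep the best."""
    costs = [(sum(rice_code_length(x, k) for x in samples), k)
             for k in range(k_max + 1)]
    return min(costs)[1]
```

The cost of this exhaustive scan over every sequence is exactly what motivates learning to predict k directly from sequence features.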

Johan Fjeldtvedt;Milica Orlandić;Tor Arne Johansen; "An Efficient Real-Time FPGA Implementation of the CCSDS-123 Compression Standard for Hyperspectral Images," vol.11(10), pp.3841-3852, Oct. 2018. Hyperspectral imaging (HSI) can extract information from scenes on the Earth's surface acquired by airborne or spaceborne sensors. On-board processing of HSI is characterized by large datasets on one side and limited processing time and communication links on the other. The CCSDS-123 algorithm is a compression standard developed for space-related applications that enables compact data transmission over a limited link. In this paper, a low-complexity, high-throughput field-programmable gate array (FPGA) implementation of the CCSDS-123 compression algorithm with band-interleaved-by-pixel ordering is presented. Hardware accelerators implemented in FPGAs are increasingly used for custom tasks due to their efficiency, low power, and reconfigurability. The proposed implementation of the CCSDS-123 compression standard has been tested on a ZedBoard development board containing a Zynq-7020 FPGA, and the results are verified against an existing software implementation. The synthesized design can perform on-the-fly processing of hyperspectral images at a maximum operating frequency of 147 MHz. The achieved throughput of 147 Msamples/s (2.35 Gb/s) is higher than the throughputs reported in recent state-of-the-art FPGA implementations.
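
The quoted throughput figure can be sanity-checked in a couple of lines: at one sample accepted per clock cycle, 147 MHz corresponds to 2.35 Gb/s if samples are 16 bits wide. The 16-bit sample width is our assumption (the abstract does not state it), but it is the value consistent with the reported numbers.

```python
clock_hz = 147e6          # reported maximum operating frequency
bits_per_sample = 16      # assumed sample width (typical for hyperspectral data)
samples_per_s = clock_hz  # one sample per clock cycle
gbps = samples_per_s * bits_per_sample / 1e9
print(f"{gbps:.2f} Gb/s")  # prints "2.35 Gb/s"
```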

Asad Mahmood;Amandine Robin;Michael Sears; "Estimation of the Noise Spectral Covariance Matrix in Hyperspectral Images," vol.11(10), pp.3853-3862, Oct. 2018. Accurate estimation of the underlying noise is vital in the processing of hyperspectral images (HSIs). Previous studies have shown that many HSI processing algorithms perform poorly if the noise is not correctly estimated. The classic residual (CR) method is commonly employed for estimating the variance of the noise in different spectral bands. However, noise estimates produced by the CR method ignore the presence of spectral or spatial correlation amongst noise samples. Some studies have investigated the spectral and spatial correlation present in the noise, but there are very few methods available to estimate these correlations. In this paper, we present a reliable method for the estimation of the spectral correlation in the noise. Recently, it was shown that the CR method performs poorly when estimating the spectral correlation in the noise. Using both artificial and real datasets, we show that the proposed method estimates the noise spectral correlation significantly more accurately than the CR method.
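
A simplified, residual-style sketch (our own construction, not the paper's new method) makes the object being estimated concrete: regress each band on its neighboring band, treat the regression residuals as noise proxies, and form their covariance matrix across bands. The off-diagonal entries of this matrix are exactly the spectral noise correlations whose estimation accuracy the paper improves.

```python
import random

def _linreg_residuals(y, x):
    """Residuals of a simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def residual_noise_covariance(bands):
    """Covariance matrix of per-band residuals (each band regressed on the
    previous band). bands: list of equal-length pixel lists, one per band."""
    residuals = [_linreg_residuals(bands[i], bands[i - 1])
                 for i in range(1, len(bands))]
    n = len(residuals[0])
    return [[sum(ri[k] * rj[k] for k in range(n)) / n for rj in residuals]
            for ri in residuals]

random.seed(0)
base = [random.gauss(50, 10) for _ in range(500)]           # shared scene signal
bands = [[2 * v + random.gauss(0, 1) for v in base] for _ in range(4)]
cov = residual_noise_covariance(bands)
```

Note that residuals from neighboring-band regression mix noise from both bands, which is one reason such classic estimates mischaracterize spectral correlation.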

Wenfei Cao;Kaidong Wang;Guodong Han;Jing Yao;Andrzej Cichocki; "A Robust PCA Approach With Noise Structure Learning and Spatial–Spectral Low-Rank Modeling for Hyperspectral Image Restoration," vol.11(10), pp.3863-3879, Oct. 2018. Hyperspectral images (HSIs), during the acquisition process, are often corrupted by a mixture of several types of noise, including Gaussian noise, impulsive noise, dead lines, stripes, and many others. These mixed noises not only severely degrade the visual quality of HSIs, but also limit the related subsequent applications. In this paper, we propose a novel robust principal component analysis approach for mixed noise removal by fully identifying the intrinsic structures of the mixed noise and the clean HSI. Specifically, for the noise modeling, considering that the mixed noise consists of dense Gaussian noise and sparse noise, and that the noise densities in different bands are disparate, we introduce a series of Gaussian-Laplace mixture distributions with band-adaptive scale parameters to estimate the mixed noise. For the image modeling, since there exist rich correlations among the spectral bands and many self-similarities over the image blocks, we employ a spatial-spectral low-rank characterization of the image. Furthermore, we impose anisotropic spatial-spectral total variation regularization on the image to enhance the robustness of our approach. Then, by combining the expectation-maximization algorithm and the alternating direction method of multipliers, we develop an efficient algorithm for the resulting optimization problem. Extensive experimental results on simulated and real datasets demonstrate that the proposed method is superior to existing state-of-the-art ones.

Xianghai Cao;Renjie Li;Li Wen;Jie Feng;Licheng Jiao; "Deep Multiple Feature Fusion for Hyperspectral Image Classification," vol.11(10), pp.3880-3891, Oct. 2018. Hyperspectral images usually have high-dimensional and abundant spectral information for land-cover classification. Research shows that multiple kinds of features can be helpful to the classification task. In this paper, a new feature fusion framework, deep multiple feature fusion (DMFF), is proposed for the classification of hyperspectral images. First, several different features are extracted for each pixel. Then, these features are fed to a deep random forest classifier. With a multiple-layer structure, the outputs of preceding layers are used as the inputs of subsequent layers, and the classification probability is computed after the final layer. By introducing the information of neighboring pixels, spectral-spatial information is combined effectively. Besides, the structure of the DMFF is easy to extend. Experimental results on two widely used hyperspectral datasets (the Indian Pines image and the Pavia University image) demonstrate that the proposed method achieves satisfactory classification performance compared with other multiple feature fusion methods.

Juntao Yang;Zhizhong Kang; "Voxel-Based Extraction of Transmission Lines From Airborne LiDAR Point Cloud Data," vol.11(10), pp.3892-3904, Oct. 2018. The safety of the electricity infrastructure significantly affects both our daily life and industrial activities. Timely and accurate monitoring of the safety of the electricity network can prevent dangerous situations effectively. Thus, in this paper we develop a voxel-based method for automatically extracting transmission lines from airborne LiDAR point cloud data. The proposed method uses three-dimensional (3-D) voxels as primitives and consists of the following steps: first, skeleton structure extraction using Laplacian smoothing; second, feature construction for each 3-D voxel using a latent Dirichlet allocation topic model; and third, Markov random field model-based extraction for generating locally continuous and globally optimal results. To evaluate the effectiveness and robustness of the proposed method, experiments were conducted on four different types of power line scenes with flat and complex terrains from helicopter-borne LiDAR point cloud data. Experimental results demonstrate that our proposed method is efficient and robust for automatically detecting both single conductors and bundled conductors, with precision, recall, and quality of over 96.78%, 98.67%, and 96.66%, respectively. Moreover, compared with other existing methods, our proposed method provides a higher detection correctness rate.

Farah Jahan;Jun Zhou;Mohammad Awrangjeb;Yongsheng Gao; "Fusion of Hyperspectral and LiDAR Data Using Discriminant Correlation Analysis for Land Cover Classification," vol.11(10), pp.3905-3917, Oct. 2018. It is evident that using complementary features from different sensors is effective for land cover classification. Therefore, combining complementary information from hyperspectral (HS) and light detection and ranging (LiDAR) data can greatly assist in such applications. In this paper, we propose a model for land cover classification, which extracts effective features representing different characteristics (e.g., spectral, geometrical/structural) of objects of interest from these two complementary data sources (HS and LiDAR) and fuses them effectively by incorporating a dimensionality reduction technique. The HS bands are first grouped based on their joint entropy and structural similarity for group-wise spatial feature extraction. The spectral and spatial features from HS are then fused in parallel via the discriminant correlation analysis (DCA) method for each band group. This is followed by a multisource fusion step between the spatial features extracted from HS and LiDAR data using DCA. The resultant features from both the band-group fusion and multisource fusion steps are concatenated with several other features extracted from HS and LiDAR data. In the proposed model, DCA fusion produces discriminative features by eliminating between-class correlations and confining within-class correlations. We compare the performance of our feature extraction and fusion scheme using random forest and support vector machine classifiers. We also compare our approach with several state-of-the-art approaches on two benchmark land cover datasets and show that our approach outperforms the alternatives by a large margin.

Mohammad Ali Khajouei;Alireza Goudarzi; "UDWT Domain: A Verified Replacement for Time Domain Implementation of the Robust P Phase Picker Algorithm," vol.11(10), pp.3918-3924, Oct. 2018. Picking the seismic arrival time is an important task in refraction studies. However, random noise, mainly generated by unknown sources, reduces data quality and leads to incorrect arrival definitions. Moreover, picking accurate first-arrival times requires expert interpreters. The poor quality of seismic refraction data, caused by high noise levels (from human activity and low-energy sources), reduces the accuracy of automatic picking methods. In this study, we enhance the accuracy of first-break picking by the P phase picker method through a proper selection of the wavelet type in the undecimated discrete wavelet transform (UDWT) domain. The UDWT provides an improved time-frequency decomposition, which can then be followed by thresholding to produce a denoised signal. To demonstrate the improvement to the P phase picker algorithm, synthetic and real data examples are studied in both the time and UDWT domains. Our results demonstrate that the UDWT domain is a reliable replacement for the time-domain implementation of the P phase picker algorithm.
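
A minimal sketch of the UDWT-then-threshold idea, using an à trous Haar transform (the paper tunes the wavelet type; Haar here is just the simplest choice, and all function names are ours): decompose without downsampling, soft-threshold the detail coefficients, and sum the pieces back.

```python
import math

def atrous_haar(signal, levels=3):
    """Undecimated (a trous) Haar decomposition: per-level detail arrays
    plus the final approximation. Uses circular boundary handling."""
    approx = list(signal)
    details = []
    for lev in range(levels):
        step = 1 << lev
        n = len(approx)
        smooth = [(approx[i] + approx[(i + step) % n]) / 2 for i in range(n)]
        details.append([a - s for a, s in zip(approx, smooth)])
        approx = smooth
    return details, approx

def soft_threshold(coeffs, t):
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, levels=3, t=0.5):
    """Threshold the detail coefficients, then reconstruct by summation
    (exact inverse of the a trous decomposition when t = 0)."""
    details, approx = atrous_haar(signal, levels)
    rec = list(approx)
    for d in details:
        rec = [r + x for r, x in zip(rec, soft_threshold(d, t))]
    return rec
```

With t = 0 the reconstruction is exact, which is a convenient correctness check; a positive threshold suppresses small, noise-like detail coefficients while keeping sharp onsets (such as a P-phase first break) largely intact.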

* "IEEE Geoscience and Remote Sensing Society," vol.11(10), pp.C3-C3, Oct. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Institutional Listings," vol.11(10), pp.C4-C4, Oct. 2018.* Presents a listing of institutions relevant for this issue of the publication.

IEEE Geoscience and Remote Sensing Magazine - new TOC (2018 November 15) [Website]

* "Front Cover," vol.6(3), pp.C1-C1, Sept. 2018.* Presents the front cover for this issue of the publication.

* "GRSM Call for Papers," vol.6(3), pp.C2-C2, Sept. 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Table of Contents," vol.6(3), pp.1-2, Sept. 2018.* Presents the table of contents for this issue of the publication.

* "Staff Listing," vol.6(3), pp.2-2, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

James L. Garrison; "A Busy Week at IGARSS and a Full Issue of Features [From the Editor]," vol.6(3), pp.3-4, Sept. 2018. Presents the introductory editorial for this issue of the publication.

* "CISS 2018," vol.6(3), pp.4-4, Sept. 2018.* Presents information on the CISS 2018 conference.

* "Call for Nominations: 2019 GRSS Major Awards," vol.6(3), pp.4-4, Sept. 2018.* Reports on GRSS major awards for 2019.

Adriano Camps; "Greetings from Valencia! [President's Message]," vol.6(3), pp.5-8, Sept. 2018. Presents the President's message for this issue of the publication.

Pedram Ghamisi;Emmanuel Maggiori;Shutao Li;Roberto Souza;Yuliya Tarabalka;Gabriele Moser;Andrea De Giorgi;Leyuan Fang;Yushi Chen;Mingmin Chi;Sebastiano B. Serpico;Jon Atli Benediktsson; "New Frontiers in Spectral-Spatial Hyperspectral Image Classification: The Latest Advances Based on Mathematical Morphology, Markov Random Fields, Segmentation, Sparse Representation, and Deep Learning," vol.6(3), pp.10-43, Sept. 2018. In recent years, airborne and spaceborne hyperspectral imaging systems have advanced in terms of spectral and spatial resolution, which makes the data sets they produce a valuable source for land cover classification. The availability of hyperspectral data with fine spatial resolution has revolutionized hyperspectral image (HSI) classification techniques by taking advantage of both spectral and spatial information in a single classification framework.

Ganggang Dong;Guisheng Liao;Hongwei Liu;Gangyao Kuang; "A Review of the Autoencoder and Its Variants: A Comparative Perspective from Target Recognition in Synthetic-Aperture Radar Images," vol.6(3), pp.44-68, Sept. 2018. In recent years, unsupervised feature learning based on a neural network architecture has become a hot new topic for research [1]-[4]. The revival of interest in such deep networks can be attributed to the development of efficient optimization skills, by which the model parameters can be optimally estimated [5]. The milestone work done by Hinton and Salakhutdinov [6] proposes to initialize the weights that allow deep autoencoder networks to learn low-dimensional codes. The encoding trick introduced works much better than principal component analysis (PCA) in terms of dimension reduction.
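
As a toy companion to the PCA comparison mentioned in this abstract: a linear autoencoder with a one-dimensional code, trained by plain gradient descent, learns to reconstruct data lying on a line, the setting in which the optimal linear code spans the same subspace PCA would find. This is a pure-Python illustration of the autoencoder principle only; the SAR target-recognition networks the review surveys are deep and nonlinear.

```python
import random

def train_linear_autoencoder(data, code_dim=1, lr=0.02, epochs=500):
    """Fit x_hat = W2 (W1 x) by stochastic gradient descent on squared error."""
    d = len(data[0])
    random.seed(1)
    w1 = [[0.1 * random.random() for _ in range(d)] for _ in range(code_dim)]
    w2 = [[0.1 * random.random() for _ in range(code_dim)] for _ in range(d)]
    for _ in range(epochs):
        for x in data:
            code = [sum(w1[j][i] * x[i] for i in range(d)) for j in range(code_dim)]
            xh = [sum(w2[i][j] * code[j] for j in range(code_dim)) for i in range(d)]
            err = [xh[i] - x[i] for i in range(d)]
            for i in range(d):                      # gradient wrt decoder W2
                for j in range(code_dim):
                    w2[i][j] -= lr * err[i] * code[j]
            for j in range(code_dim):               # gradient wrt encoder W1
                g = sum(err[i] * w2[i][j] for i in range(d))
                for i in range(d):
                    w1[j][i] -= lr * g * x[i]
    return w1, w2

def reconstruction_error(data, w1, w2):
    d, k = len(data[0]), len(w1)
    total = 0.0
    for x in data:
        code = [sum(w1[j][i] * x[i] for i in range(d)) for j in range(k)]
        xh = [sum(w2[i][j] * code[j] for j in range(k)) for i in range(d)]
        total += sum((xh[i] - x[i]) ** 2 for i in range(d))
    return total / len(data)

# 2-D points on the line x2 = 2*x1: perfectly compressible to a 1-D code
data = [[t / 5.0, 2 * t / 5.0] for t in range(-5, 6)]
w1, w2 = train_linear_autoencoder(data)
```

After training, the composed map W2 W1 approximates the projection onto the data's principal direction, which is the sense in which linear autoencoders and PCA agree; the review's point is that deep, nonlinear encoders go beyond this.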

Chao Chen;Jiaoqi Fu;Yingying Gai;Jun Li;Li Chen;Venkata Subrahmanyam Mantravadi;Anhui Tan; "Damaged Bridges Over Water: Using High-Spatial-Resolution Remote-Sensing Images for Recognition, Detection, and Assessment," vol.6(3), pp.69-85, Sept. 2018. A bridge over water is typically an artificial object that can be damaged by changes in landform, geology, hydrology, and other surrounding conditions. Natural disaster scenarios must be studied using high-spatial-resolution remote-sensing images to detect and assess damage to bridges over water. Such studies are important for emergency rescue operations, disaster relief, and rapid disaster assessment. This article describes a method of damage detection and assessment for bridges over water using high-spatial-resolution remote-sensing images based on research regarding the selection and expression of characteristic knowledge.

Ji Zhu;Jiancheng Shi; "An Algorithm for Subpixel Snow Mapping: Extraction of a Fractional Snow-Covered Area Based on Ten-Day Composited AVHRR/2 Data of the Qinghai-Tibet Plateau," vol.6(3), pp.86-98, Sept. 2018. Advanced very-high-resolution radiometer (AVHRR)/2 sensors lack the 1.6-µm channel used by the Moderate Resolution Imaging Spectroradiometer (MODIS) and AVHRR/3; consequently, the subpixel snow-mapping algorithms for MODIS and AVHRR/3 data cannot be applied to subpixel snow mapping with AVHRR/2 data. To enable subpixel snow mapping from AVHRR/2 data, in this research we developed an algorithm for subpixel snow mapping using ten-day composited AVHRR/2 data from the Qinghai-Tibet Plateau, extending moderate-resolution fractional snow-covered-area data back more than 30 years.

Alberto Moreira; "Prof. Wolfgang-Martin Boerner (1937-2018) [In Memoriam]," vol.6(3), pp.99-100, Sept. 2018. Dear Colleagues of the International Remote Sensing Community: On 25 May 2018, Prof. Wolfgang-Martin Boerner, a Distinguished Senior Professor Emeritus of the University of Illinois at Chicago, an IEEE Life Fellow, and a most esteemed member of the IEEE Geoscience and Remote Sensing Society (GRSS), passed away. On behalf of the GRSS, I would like to express our deepest sympathy to his family and, at the same time, our sincere gratitude and appreciation for Prof. Boerner's outstanding achievements and extraordinary contributions to radar remote sensing.

Shane Cloude; "Science Is International and Best Performed Without Borders [In Memoriam]," vol.6(3), pp.100-101, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Irena Hajnsek;Kostas Papathanassiou; "Wolfgang-Martin Boerner-The Mentor [In Memoriam]," vol.6(3), pp.101-101, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Jong-Sen Lee; "Contributions to the Establishment of the Polarimetry Community [In Memoriam]," vol.6(3), pp.102-102, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Shiv Mohan; "A Vision for Enhancing the GRSS's Footprint in India [In Memoriam]," vol.6(3), pp.102-103, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Eric Pottier; "Contributions to Education [In Memoriam]," vol.6(3), pp.103-104, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Paul Rosen; "Tribute to Wolfgang Boerner [In Memoriam]," vol.6(3), pp.104-105, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Christiane Schmullius; "Free Radar: Traveling with Wolfgang Boerner in Siberia [In Memoriam]," vol.6(3), pp.105-105, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Masanobu Shimada; "Contributions to the CEOS SAR Group [In Memoriam]," vol.6(3), pp.106-106, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Yoshio Yamaguchi; "Thank You, Prof. Boerner! [In Memoriam]," vol.6(3), pp.106-107, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

Jakob van Zyl; "A Great Mentor for All of Us [In Memoriam]," vol.6(3), pp.107-107, Sept. 2018. Recounts the career and contributions of Wolfgang-Martin Boerner.

John P. Kerekes;John R. Roadcap; "2018 Atmospheric Transmission Models-Modeling in Remote Sensing Meeting [Conference Reports]," vol.6(3), pp.108-110, Sept. 2018. Presents information on the 2018 Atmospheric Transmission Models-Modeling in Remote Sensing Meeting.

* "Call for Nominations: 2019 GRSS-Special Awards," vol.6(3), pp.110-110, Sept. 2018.* Presents call for nominations for 2019 GRSS awards.

Cindy Ong;Kurtis Thome;Uta Heiden;Jeff Czapla-Myers;Andreas Mueller; "Reflectance-Based Imaging Spectrometer Error Budget Field Practicum at the Railroad Valley Test Site, Nevada [Technical Committees]," vol.6(3), pp.111-115, Sept. 2018. Calibration and validation determine the quality and integrity of the data provided by sensors and have enormous downstream impacts on the accuracy and reliability of the products generated by these sensors. With the imminent launch of the next generation of spaceborne imaging spectroscopy sensors, the IEEE Geoscience and Remote Sensing Society's (GRSS's) Geoscience Spaceborne Imaging Spectroscopy Technical Committee (GSIS TC) initiated a calibration and validation initiative.

* "Calendar," vol.6(3), pp.116-116, Sept. 2018.* Presents the GRSS society calendar of upcoming events and meetings.

* "RSCL," vol.6(3), pp.C3-C3, Sept. 2018.* Presents information on the GRSS Remote Sensing Code Library.
