Relevant TOCs

IEEE Transactions on Image Processing - new TOC (2018 September 20)

Guang-Hai Liu;Jing-Yu Yang; "Exploiting Color Volume and Color Difference for Salient Region Detection," vol.28(1), pp.6-16, Jan. 2019. Foreground and background cues can assist humans in quickly understanding visual scenes. In computer vision, however, it is difficult to detect salient objects when they touch the image boundary. Hence, detecting salient objects robustly under such circumstances without sacrificing precision and recall can be challenging. In this paper, we propose a novel model for salient region detection, namely, the foreground-center-background (FCB) saliency model. Its main highlights are as follows. First, we use regional color volume as the foreground cue, together with perceptually uniform color differences within regions, to detect salient regions. This can highlight salient objects robustly, even when they touch the image boundary, without greatly sacrificing precision and recall. Second, we employ center saliency together with the foreground and background cues, which improves saliency detection performance. Finally, we propose a novel, simple, yet efficient method that combines foreground, center, and background saliency. Experimental validation with three well-known benchmark data sets indicates that the FCB model outperforms several state-of-the-art methods in terms of precision, recall, F-measure, and particularly, the mean absolute error. The detected salient regions are also brighter than those produced by some existing state-of-the-art methods.

Pedro Asad;Ricardo Marroquim;Andréa L. e L. Souza; "On GPU Connected Components and Properties: A Systematic Evaluation of Connected Component Labeling Algorithms and Their Extension for Property Extraction," vol.28(1), pp.17-31, Jan. 2019. Connected component labeling (CCL) is a fundamental image processing problem that has been studied on many platforms, including GPUs. A common approach to CCL performance analysis is studying the total processing time as a function of abstract image features, like the number of connected components or the fraction of foreground pixels, and input data usually include synthetic images and segmented video datasets. In this paper, we build on these ideas and propose an evaluation methodology for GPU CCL algorithms based on synthetic image patterns, addressing the nonexistence of a standard and reliable benchmark in the literature. Our methodology, applied to two important algorithms from the existing literature, uncovers their data dependency in great detail, and allows us to model their processing time on three real-world video data sets as functions of abstract, high-level image concepts. We also apply our methodology to study the memory and performance requirements of two strategies for computing connected component properties: an existing memory-hungry approach, and a new memory-preserving strategy.
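As a rough, CPU-bound illustration of this methodology, the sketch below times a labeling pass plus a per-component property-extraction pass on synthetic binary patterns with controlled foreground fractions; scipy's labeler stands in for the GPU algorithms studied in the paper, and the pattern generator, sizes, and fractions are assumptions.

```python
import time
import numpy as np
from scipy import ndimage

def synthetic_pattern(size, fg_fraction, rng):
    """Random binary image with an expected fraction of foreground pixels."""
    return (rng.random((size, size)) < fg_fraction).astype(np.uint8)

rng = np.random.default_rng(0)
for fg in (0.1, 0.3, 0.5, 0.7, 0.9):
    img = synthetic_pattern(1024, fg, rng)
    t0 = time.perf_counter()
    labels, n = ndimage.label(img)          # 4-connected CCL pass
    dt = time.perf_counter() - t0
    # Property-extraction pass: per-component pixel counts (areas)
    areas = ndimage.sum(np.ones_like(img), labels, index=range(1, n + 1))
    print(f"fg={fg:.1f}  components={n:6d}  mean area={areas.mean():7.1f}  "
          f"label time={dt * 1e3:6.1f} ms")
```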

Mingxing Zhang;Yang Yang;Hanwang Zhang;Yanli Ji;Heng Tao Shen;Tat-Seng Chua; "More is Better: Precise and Detailed Image Captioning Using Online Positive Recall and Missing Concepts Mining," vol.28(1), pp.32-44, Jan. 2019. Recently, great progress in automatic image captioning has been achieved by using semantic concepts detected from the image. However, we argue that the existing concepts-to-caption framework, in which the concept detector is trained on image-caption pairs to minimize the vocabulary discrepancy, suffers from an insufficient set of detected concepts. The reasons are two-fold: 1) the extreme imbalance between the numbers of positive and negative occurrences of each concept and 2) the incomplete labeling in training captions caused by biased annotation and the usage of synonyms. In this paper, we propose a method, termed online positive recall and missing concepts mining, to overcome these problems. Our method adaptively re-weights the loss of different samples according to their predictions for online positive recall and uses a two-stage optimization strategy for missing concepts mining. In this way, more semantic concepts can be detected and higher accuracy can be expected. At the caption generation stage, we explore an element-wise selection process to automatically choose the most suitable concepts at each time step. Thus, our method can generate more precise and detailed captions to describe the image. We conduct extensive experiments on the MSCOCO image captioning data set and the MSCOCO online test server, which show that our method achieves superior image captioning performance compared with other competitive methods.
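The loss re-weighting step admits a compact sketch. The focal-style weighting rule below is an assumption chosen for illustration, not the paper's exact rule, and reweighted_bce and gamma are hypothetical names/parameters.

```python
import numpy as np

def reweighted_bce(p, y, gamma=2.0):
    """Binary cross-entropy over concept predictions p in (0,1) and labels y,
    with prediction-dependent weights: poorly scored positives are up-weighted
    so hard positive concepts are 'recalled' during training."""
    w = np.where(y == 1, (1 - p) ** gamma, p ** gamma)
    bce = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return (w * bce).mean()
```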

Yang You;Cewu Lu;Weiming Wang;Chi-Keung Tang; "Relative CNN-RNN: Learning Relative Atmospheric Visibility From Images," vol.28(1), pp.45-55, Jan. 2019. We propose a deep learning approach for directly estimating relative atmospheric visibility from outdoor photos without relying on weather images or data that require expensive sensing or custom capture. Our data-driven approach capitalizes on a large collection of Internet images to learn rich scene and visibility varieties. The relative CNN–RNN coarse-to-fine model, where CNN stands for convolutional neural network and RNN stands for recurrent neural network, exploits the joint power of the relative support vector machine, which provides a good ranking representation, and the data-driven deep learning features derived from our novel CNN–RNN model. The CNN–RNN model makes use of shortcut connections to bridge a CNN module and an RNN coarse-to-fine module. The CNN captures the global view while the RNN simulates the human attention shift, namely, from the whole image (global) to the farthest discerned region (local). The learned relative model can be adapted to predict absolute visibility in limited scenarios. Extensive experiments and comparisons are performed to verify our method. We have built an annotated dataset consisting of about 40,000 images with 0.2 million human annotations. The large-scale, annotated visibility data set will be made available to accompany this paper.

Chung-Chi Tsai;Weizhi Li;Kuang-Jui Hsu;Xiaoning Qian;Yen-Yu Lin; "Image Co-Saliency Detection and Co-Segmentation via Progressive Joint Optimization," vol.28(1), pp.56-71, Jan. 2019. We present a novel computational model for simultaneous image co-saliency detection and co-segmentation that concurrently explores the concepts of saliency and objectness in multiple images. It has been shown that co-saliency detection via aggregating multiple saliency proposals by diverse visual cues can better highlight the salient objects; however, the optimal proposals are typically region-dependent and the fusion process often leads to blurred results. Co-segmentation can help preserve object boundaries, but it may suffer in complex scenes. To address these issues, we develop a unified method that addresses co-saliency detection and co-segmentation jointly via solving an energy minimization problem over a graph. Our method iteratively carries out the region-wise adaptive saliency map fusion and object segmentation to transfer useful information between the two complementary tasks. Through the optimization iterations, sharp saliency maps are gradually obtained to recover entire salient objects by referring to object segmentation, while these segmentations are progressively improved owing to the better saliency prior. We evaluate our method on four public benchmark data sets while comparing it to the state-of-the-art methods. Extensive experiments demonstrate that our method can provide consistently higher-quality results on both co-saliency detection and co-segmentation.

Haosen Liu;Shan Tan; "Image Regularizations Based on the Sparsity of Corner Points," vol.28(1), pp.72-87, Jan. 2019. Many analysis-based regularizations proposed so far employ a common prior, i.e., that edges in an image are sparse. However, in local edge regions and texture regions, this prior may not hold. As a result, the performance of regularizations based on edge sparsity may be unsatisfactory in such regions for image-related inverse problems: they tend to smooth out the edges while eliminating the noise. In other words, their ability to preserve edges is limited. In this paper, we propose a new prior, namely that the corner points in a natural image are sparse, to construct regularizations. Intuitively, even in local edge regions and texture regions, the sparsity of corner points may still hold, and hence regularizations based on it can achieve better performance than those based on edge sparsity. As an example, utilizing the sparsity of corner points, we propose a new regularization based on Noble's corner measure function. Our experiments demonstrate the excellent performance of the proposed regularization for both image denoising and deblurring, especially in local edge regions and texture regions.
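A minimal sketch of Noble's corner measure itself, computed from the structure tensor as det(M)/(trace(M)+eps); the smoothing sigma and the closing penalty comment are illustrative assumptions, not the paper's exact regularization.

```python
import numpy as np
from scipy import ndimage

def noble_corner_measure(img, sigma=1.5, eps=1e-6):
    """Noble's corner measure det(M)/(trace(M)+eps) from the structure tensor."""
    img = img.astype(np.float64)
    Ix = ndimage.sobel(img, axis=1)
    Iy = ndimage.sobel(img, axis=0)
    # Gaussian-weighted local sums of gradient products (structure tensor entries)
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det / (trace + eps)

# A corner-sparsity regularizer could then penalize, e.g., the l1 norm of
# this map: R(u) = np.abs(noble_corner_measure(u)).sum()
```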

Song Bai;Zhichao Zhou;Jingdong Wang;Xiang Bai;Longin Jan Latecki;Qi Tian; "Automatic Ensemble Diffusion for 3D Shape and Image Retrieval," vol.28(1), pp.88-101, Jan. 2019. As a post-processing procedure, the diffusion process has demonstrated its ability to substantially improve the performance of various visual retrieval systems. Meanwhile, great efforts have also been devoted to similarity (or metric) fusion, since a single type of similarity cannot fully reveal the intrinsic relationship between objects. This has stimulated great research interest in considering similarity fusion in the framework of the diffusion process (i.e., fusion with diffusion) for robust retrieval. In this paper, we first revisit representative methods for fusion with diffusion and provide new insights overlooked by previous researchers. Then, observing that existing algorithms are susceptible to noisy similarities, the proposed regularized ensemble diffusion (RED) is bundled with an automatic weight learning paradigm, so that the negative impacts of noisy similarities are suppressed. Though formulated as a convex optimization problem, RED can be solved by an iteration-based solver with the same computational complexity as the conventional diffusion process. Finally, we integrate several recently proposed similarities with the proposed framework. The experimental results suggest that we achieve new state-of-the-art performance on various retrieval tasks, including 3D shape retrieval on the ModelNet data set, and image retrieval on the Holidays and Ukbench data sets.
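For context, a minimal sketch of one common variant of the conventional diffusion process (the baseline that RED regularizes) is given below; the update rule, normalization, and alpha are standard choices from the fusion-with-diffusion literature, not the paper's exact formulation.

```python
import numpy as np

def diffusion(W, alpha=0.9, n_iter=50):
    """Diffuse a pairwise similarity matrix W: spread similarity along the
    graph while anchoring each item to itself via the identity term."""
    S = W / W.sum(axis=1, keepdims=True)        # row-normalized transitions
    F = np.eye(len(W))
    for _ in range(n_iter):
        F = alpha * S @ F @ S.T + (1 - alpha) * np.eye(len(W))
    return F

# Unweighted ensemble variant: average the transition matrices of several
# similarity measures before diffusing; RED instead learns the weights.
```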

Devraj Mandal;Kunal N. Chaudhury;Soma Biswas; "Generalized Semantic Preserving Hashing for Cross-Modal Retrieval," vol.28(1), pp.102-112, Jan. 2019. Cross-modal retrieval is gaining importance due to the availability of large amounts of multimedia data. Hashing-based techniques provide an attractive solution to this problem when the data size is large. For cross-modal retrieval, data from the two modalities may be associated with a single label or multiple labels, and in addition, may or may not have a one-to-one correspondence. This work proposes a simple hashing framework which has the capability to work with different scenarios while effectively capturing the semantic relationship between the data items. The work proceeds in two stages: the first stage learns the optimum hash codes by factorizing an affinity matrix, constructed using the label information. In the second stage, ridge regression and kernel logistic regression are used to learn the hash functions for mapping the input data to the bit domain. We also propose a novel iterative solution for cases where the training data are very large, or when the whole training data are not available at once. Extensive experiments on a single-label data set (Wiki) and multi-label data sets (MirFlickr, NUS-WIDE, Pascal, and LabelMe), together with comparisons with the state-of-the-art, show the usefulness of the proposed approach.
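A minimal sketch of the two-stage pipeline, under simplifying assumptions: a binary label-based affinity, an eigendecomposition as the factorization, and only the ridge-regression branch for the hash functions (the kernel logistic regression branch is omitted). All names and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def learn_hash(X, Y, n_bits=32, reg=1.0):
    """Stage 1: factorize a label affinity matrix into target codes.
    Stage 2: ridge-regress hash functions from features X to the codes.
    X: (n, d) features; Y: (n, c) multi-hot label matrix."""
    A = (Y @ Y.T > 0).astype(np.float64)        # semantic affinity from labels
    vals, vecs = np.linalg.eigh(A)              # ascending eigenvalues
    B = np.sign(vecs[:, -n_bits:])              # binarized top eigenvectors
    model = Ridge(alpha=reg).fit(X, B)          # linear hash functions
    return B, model

# Query-time encoding: codes = np.sign(model.predict(X_query))
```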

Yousong Zhu;Chaoyang Zhao;Haiyun Guo;Jinqiao Wang;Xu Zhao;Hanqing Lu; "Attention CoupleNet: Fully Convolutional Attention Coupling Network for Object Detection," vol.28(1), pp.113-126, Jan. 2019. The field of object detection has made great progress in recent years. Most of these improvements are derived from using a more sophisticated convolutional neural network. However, in the case of humans, the attention mechanism, global structure information, and local details of objects all play an important role for detecting an object. In this paper, we propose a novel fully convolutional network, named Attention CoupleNet, to incorporate the attention-related information and the global and local information of objects to improve the detection performance. Specifically, we first design a cascade attention structure to perceive the global scene of the image and generate class-agnostic attention maps. Then the attention maps are encoded into the network to acquire object-aware features. Next, we propose a unique fully convolutional coupling structure to couple the global structure and local parts of the object to further formulate a discriminative feature representation. To fully explore the global and local properties, we also design different coupling strategies and normalization schemes to make full use of the complementary advantages between the global and local information. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on three challenging data sets, i.e., a mAP of 85.7% on VOC07, 84.3% on VOC12, and 35.4% on COCO. Codes are publicly available at https://github.com/tshizys/CoupleNet.

Ching-Chun Huang;Manh-Hung Nguyen; "X-Ray Enhancement Based on Component Attenuation, Contrast Adjustment, and Image Fusion," vol.28(1), pp.127-141, Jan. 2019. Inspecting X-ray images is an essential aspect of medical diagnosis. However, due to an X-ray's low contrast and low dynamic range, important aspects such as organs, bones, and nodules are difficult to identify. Hence, contrast adjustment is critical, especially because of its ability to enhance the details in both bright and dark regions. For X-ray image enhancement, we therefore propose a new concept based on component attenuation. Notably, we assume an X-ray image can be decomposed into tissue components and important details. Since tissues may not be the primary focus of an X-ray, we propose enhancing the visual contrast by adaptive tissue attenuation and dynamic range stretching. Via component decomposition and tissue attenuation, a parametric adjustment model is deduced to generate many enhanced images at once. Finally, an ensemble framework is proposed for fusing these enhanced images and producing a high-contrast output in both bright and dark regions. We used measurement metrics to evaluate our system and achieved promising scores on each. An online testing system was also built for subjective evaluation. Moreover, we applied our system to an X-ray data set provided by the Japanese Society of Radiological Technology to help with nodule detection; the experimental results demonstrated the effectiveness of our method.

Yanmin Luo;Zhitong Xu;Peizhong Liu;Yongzhao Du;Jing-Ming Guo; "Multi-Person Pose Estimation via Multi-Layer Fractal Network and Joints Kinship Pattern," vol.28(1), pp.142-155, Jan. 2019. We propose an effective method to boost the accuracy of multi-person pose estimation in images. Initially, a three-layer fractal network is constructed to regress the multi-person joint-location heatmaps; it enlarges the receptive field over image regions and captures more local-contextual joint features, producing intermediate keypoint-heatmap predictions that optimize the joint regression results. Subsequently, a hierarchical bi-directional inference algorithm is proposed to calculate the degree of relatedness (termed Kinship) for adjacent joints; it combines the Kinship between adjacent joints with spatial constraints, which we refer to as the joints kinship pattern matching mechanism, to determine the best-matched joint pairs. We iterate this joint-matching process layer by layer until all joints are assigned to a corresponding individual. Comprehensive experiments demonstrate that the proposed approach outperforms state-of-the-art schemes, achieving about 1% and 0.6% increases in mAP on the MPII multi-person subset and the MSCOCO 2016 keypoints challenge, respectively.

Changsheng Chen;Wenjian Huang;Lin Zhang;Wai Ho Mow; "Robust and Unobtrusive Display-to-Camera Communications via Blue Channel Embedding," vol.28(1), pp.156-169, Jan. 2019. Due to the rapid advancement in processing power and camera quality of mobile devices, research on the display-to-camera (D2C) communication channel has recently received increasing attention. Unlike the traditional QR Code, the unobtrusive D2C communication scheme normally serves both the human eyes and the mobile camera in commercial advertisements. Thus, attention should be paid to both unobtrusiveness and reliability in the design of a D2C communication scheme. In this paper, 2D barcodes with unobtrusive embedding in the blue channel are proposed in image and video formats to yield the robust and unobtrusive (RU) code and video-based RU (vRU) code, respectively. The proposed RU code features a modulation scheme in the blue channel that leverages several important properties of the human visual system, such as insensitivity toward the yellow-blue chrominance component, the proximity principle, and the oblique effect. Both RU code and vRU code employ a low-density-parity-check code for intra-frame channel coding. In addition, vRU code adopts an erasure-correcting Reed–Solomon code for inter-frame channel coding. Under a high perceptual quality constraint (multiscale structural similarity (MS-SSIM) ≈ 0.95), RU code achieves a demodulation bit error probability of 3.84%, which is an order of magnitude smaller than that of the existing picture-embedding 2D barcodes. Meanwhile, under a similar perceptual quality requirement (MS-SSIM ≈ 0.95 for each video frame), the goodput of vRU code is reported to be as high as 34.33 kbps under practical settings, e.g., a display frame rate of 30 fps and a capture frame rate of 42 fps.
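To make the blue-channel idea concrete, here is a toy sketch that shifts the blue component of image blocks to carry one bit per block; the block size, step delta, and function name are assumptions, and the real RU code additionally applies HVS-guided spatial modulation plus the LDPC/Reed-Solomon channel coding described above.

```python
import numpy as np

def embed_bits(rgb, bits, delta=4, block=8):
    """Toy blue-channel embedding: shift the blue values of each block
    up (bit 1) or down (bit 0) by a small step 'delta'."""
    out = rgb.astype(np.int16).copy()
    h, w, _ = rgb.shape
    k = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if k >= len(bits):
                return np.clip(out, 0, 255).astype(np.uint8)
            step = delta if bits[k] else -delta
            out[y:y+block, x:x+block, 2] += step   # channel 2 = blue (RGB order)
            k += 1
    return np.clip(out, 0, 255).astype(np.uint8)
```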

Baohua Li;Xiaoning Zhang;Xiaoli Li;Huchuan Lu; "Tensor Completion From One-Bit Observations," vol.28(1), pp.170-180, Jan. 2019. The tensor completion problem has received a great deal of attention in the past few years. However, most existing models minimize a squared loss in the data-fidelity term, which may be inappropriate for the case of noisy one-bit observations. In this paper, we alleviate this difficulty by drawing on experience from the matrix scenario. Based on the convex relaxation of the tensor multi-rank to the ℓ1 norm, we propose a novel optimization model to recover the underlying tensor from one-bit observations. The feasibility of this model is proved by theoretical derivations. Furthermore, an algorithm based on the alternating direction method of multipliers is designed to find the solution. The numerical experiments demonstrate the effectiveness of our method.

Yan Zheng;Zhouchen Lin; "The Augmented Homogeneous Coordinates Matrix-Based Projective Mismatch Removal for Partial-Duplicate Image Search," vol.28(1), pp.181-193, Jan. 2019. Mismatch removal is a key step in many computer vision problems that involve point matching. The existing methods for checking geometric consistency mainly focus on similarity or affine transformations. In this paper, we propose a novel mismatch removal method that can cope with the projective transformation between two corresponding point sets. Our approach is based on the augmented homogeneous coordinates matrix constructed from the coordinates of anchor matches, whose degeneracy can indicate the correctness of anchor matches. The set of anchor matches initially contains all the matches and is iteratively updated by calculating the difference between the estimated matched points, which can be computed in a closed form, and the actually matched points, and removing those with large differences. Experimental results on synthetic 2D point matching data sets and real image matching data sets verify that our method achieves the highest F-score among all the methods under similarity, affine, and projective transformations with noise and outliers. Our method is also faster than all other iterative methods. The non-iterative methods with a slight speed advantage are not competitive with ours in accuracy. We also show that the set of anchor matches is stable through the iterations and that the computation time grows very slowly with the number of matched points. When applied to mismatch removal in partial-duplicate image search, our method achieves the best retrieval precision, and its computing time is also highly competitive.
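The iterative anchor-update loop can be illustrated with a plain homography refit (the paper's augmented homogeneous coordinates matrix produces its closed-form estimate differently; the DLT fit, iteration count, and keep ratio below are stand-in assumptions):

```python
import numpy as np

def fit_homography(p, q):
    """Direct linear transform: homography H with q ~ H p (homogeneous)."""
    rows = []
    for (x, y), (u, v) in zip(p, q):
        rows.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        rows.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 3)          # null-space vector as 3x3 matrix

def remove_mismatches(p, q, n_iter=10, keep=0.8):
    """Iteratively refit on anchor matches and drop the largest residuals."""
    idx = np.arange(len(p))
    for _ in range(n_iter):
        H = fit_homography(p[idx], q[idx])
        ph = np.c_[p[idx], np.ones(len(idx))] @ H.T
        pred = ph[:, :2] / ph[:, 2:3]              # estimated matched points
        r = np.linalg.norm(pred - q[idx], axis=1)  # per-match differences
        idx = idx[np.argsort(r)[:max(4, int(keep * len(idx)))]]
    return idx   # indices of matches deemed correct
```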

Zhengming Ding;Yun Fu; "Dual Low-Rank Decompositions for Robust Cross-View Learning," vol.28(1), pp.194-204, Jan. 2019. Cross-view data are now very common, as different viewpoints or sensors attempt to richly represent data in various views. However, cross-view data from different views present a significant divergence, that is, cross-view data from the same category have a lower similarity than those in different categories but within the same view. Considering that each cross-view sample is drawn from two intertwined manifold structures, i.e., a class manifold and a view manifold, in this paper, we propose a robust cross-view learning framework to seek a robust view-invariant low-dimensional space. Specifically, we develop a dual low-rank decomposition technique to unweave those intertwined manifold structures from one another in the learned space. Moreover, we design two discriminative graphs to constrain the dual low-rank decompositions by fully exploring the prior knowledge. Thus, our proposed algorithm is able to capture more within-class knowledge and mitigate the view divergence to obtain a more effective view-invariant feature extractor. Furthermore, our proposed method is very flexible in addressing the challenging cross-view learning scenario in which we only have the view information of the training data, while the view information of the evaluation data is unknown. Experiments on face and object benchmarks demonstrate the effective performance of our designed model over the state-of-the-art algorithms.

Lei Zhang;Xiantong Zhen;Ling Shao;Jingkuan Song; "Learning Match Kernels on Grassmann Manifolds for Action Recognition," vol.28(1), pp.205-215, Jan. 2019. Action recognition has been extensively researched in computer vision due to its potential applications in a broad range of areas. The key to action recognition lies in modeling actions and measuring their similarity, which however poses great challenges. In this paper, we propose learning match kernels between actions on the Grassmann manifold for action recognition. Specifically, we propose modeling actions as linear subspaces on the Grassmann manifold; each subspace is spanned by convolutional neural network (CNN) feature vectors pooled temporally over frames in semantic video clips, which simultaneously captures local discriminant patterns and the temporal dynamics of motion. To measure the similarity between actions, we propose Grassmann match kernels (GMK) based on canonical correlations of linear subspaces to directly match videos for action recognition; GMK is learned in a supervised way via kernel target alignment, which endows it with a great discriminative ability to distinguish actions from different classes. The proposed approach leverages the strengths of CNNs for feature extraction and of kernels for measuring similarity, which yields a general learning framework of match kernels for action recognition. We have conducted extensive experiments on five challenging realistic data sets including YouTube, UCF50, UCF101, Penn action, and HMDB51. The proposed approach achieves high performance and substantially surpasses state-of-the-art algorithms by large margins, demonstrating its great effectiveness for action recognition.
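The canonical-correlation machinery behind GMK can be sketched with the standard, unweighted projection kernel on the Grassmann manifold; the subspace dimension k and function names are assumptions, and GMK's supervised kernel-target-alignment weighting is omitted.

```python
import numpy as np

def subspace(F, k=10):
    """Orthonormal basis of the top-k left singular subspace of a feature
    matrix F (d x n: CNN features pooled over a video clip)."""
    U, _, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :k]

def grassmann_kernel(Fa, Fb, k=10):
    """Projection kernel on the Grassmann manifold: the sum of squared
    canonical correlations (cosines of principal angles) between subspaces."""
    Ua, Ub = subspace(Fa, k), subspace(Fb, k)
    s = np.linalg.svd(Ua.T @ Ub, compute_uv=False)   # canonical correlations
    return np.sum(s ** 2)
```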

Cid A. N. Santos;Nelson D. A. Mascarenhas; "Geodesic Distances in Probabilistic Spaces for Patch-Based Ultrasound Image Processing," vol.28(1), pp.216-226, Jan. 2019. Many recent ultrasound image processing methods are based on patch comparison, such as filtering and segmentation. Identifying similar patches in noise-corrupted images is a key factor for the performance of these methods. While the Euclidean distance is ideal for patch comparison under additive Gaussian noise, finding good measures to compare patches corrupted by multiplicative noise is still an open research problem. In this paper, we deduce several new geodesic distances, arising from parametric probabilistic spaces, and suggest them as similarity measures to process RF and log-compressed ultrasound images in patch-based methods. We provide practical examples using these measures in the fields of ultrasound image filtering and segmentation, with results that confirm the potential of the technique.

Tingting Wang;Faming Fang;Fang Li;Guixu Zhang; "High-Quality Bayesian Pansharpening," vol.28(1), pp.227-239, Jan. 2019. Pansharpening is a process of acquiring a multi-spectral image with high spatial resolution by fusing a low resolution multi-spectral image with a corresponding high resolution panchromatic image. In this paper, a new pansharpening method based on Bayesian theory is proposed. The algorithm rests on three assumptions: 1) the geometric information contained in the pan-sharpened image is coincident with that contained in the panchromatic image; 2) the pan-sharpened image and the original multi-spectral image should share the same spectral information; and 3) in each pan-sharpened image channel, the neighboring pixels not around the edges are similar. We build our posterior probability model according to the above-mentioned assumptions and solve it by the alternating direction method of multipliers. The experiments at reduced and full resolution show that the proposed method outperforms other state-of-the-art pansharpening methods. Besides, we verify that the new algorithm is effective in preserving spectral and spatial information with high reliability. Further experiments also show that the proposed method can be successfully extended to hyper-spectral image fusion.
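A gradient-descent caricature of the first two assumptions is sketched below; the paper solves the full three-term posterior by ADMM, so the step size, weight lam, and the Laplacian form of the geometry gradient here are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def pansharpen_step(x, pan, ms_up, lam=1.0, lr=0.1):
    """One gradient step on a simplified two-term objective per channel c:
      geometry  ||grad(x_c) - grad(pan)||^2    (assumption 1)
      spectral  lam * ||x_c - ms_up_c||^2      (assumption 2; the low-res MS
                image is pre-upsampled to pan resolution).
    The geometry-term gradient is -(laplace(x_c) - laplace(pan))."""
    for c in range(x.shape[2]):
        geo = -(ndimage.laplace(x[..., c]) - ndimage.laplace(pan))
        x[..., c] -= lr * (geo + lam * (x[..., c] - ms_up[..., c]))
    return x
```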

Wenhan Luo;Björn Stenger;Xiaowei Zhao;Tae-Kyun Kim; "Trajectories as Topics: Multi-Object Tracking by Topic Discovery," vol.28(1), pp.240-252, Jan. 2019. This paper proposes a new approach to multi-object tracking by semantic topic discovery. We dynamically cluster frame-by-frame detections and treat objects as topics, allowing the application of the Dirichlet process mixture model. The tracking problem is cast as a topic-discovery task, where the video sequence is treated analogously to a document. It addresses tracking issues such as object exclusivity constraints as well as tracking management without the need for heuristic thresholds. Variation of object appearance is modeled as the dynamics of word co-occurrence and handled by updating the cluster parameters across the sequence in the dynamical clustering procedure. We develop two kinds of visual representation based on super-pixels and the deformable part model, and integrate them into the model of automatic topic discovery for tracking rigid and non-rigid objects, respectively. In experiments on public data sets, we demonstrate the effectiveness of the proposed algorithm.

Kareth M. León-López;Laura V. Galvis Carreño;Henry Arguello Fuentes; "Temporal Colored Coded Aperture Design in Compressive Spectral Video Sensing," vol.28(1), pp.253-264, Jan. 2019. Compressive spectral video sensing (CSVS) systems obtain spatial, spectral, and temporal information of a dynamic scene through the encoding of the incoming light rays by using a temporal-static coded aperture (CA). CSVS systems use CAs with binary entries spatially distributed at random. The random spatial encoding of the binary CAs entails a poor quality in the reconstructed images even though the CSVS sensing matrix is incoherent with the sparse representation basis. In addition, since some pixels are totally blocked, information such as object motion is missed over time. This paper substitutes the temporal-static binary coded apertures by a richer spatio-spectro-temporal encoding based on selectable color filters, named temporal colored coded apertures (T-CCA). The spatial, spectral, and time distributions of the T-CCAs are optimized by better satisfying the restricted isometry property (RIP) of the CSVS system. The RIP-optimized T-CCAs lead to spatio-spectral-time structures that tend to sense more uniformly the spatial, spectral, and temporal dimensions. An algorithm for optimally designing the T-CCAs is developed. In addition, a regularization term based on the scene motion is included in the inverse problem leading to a better quality of the reconstructed images. Computational experiments using four different spectral videos show an improvement of up to 6 dB in terms of peak signal-to-noise ratio of the reconstructed images by using the proposed inverse problem and the T-CCA patterns compared with the binary CAs and random and image-optimized CCA patterns.

Gong Cheng;Junwei Han;Peicheng Zhou;Dong Xu; "Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection," vol.28(1), pp.265-278, Jan. 2019. The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build upon existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R-CNN, Faster R-CNN, and R-FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve state-of-the-art results.
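The two regularizers can be written down compactly; the sketch below evaluates them on feature arrays (feat from the original samples, feat_rot from their rotated copies). The function names and the plain difference sw - sb used to combine the Fisher terms are assumptions, not the paper's exact weighting.

```python
import numpy as np

def rotation_invariant_loss(feat, feat_rot):
    """Regularizer 1: features of a sample and its rotated copy mapped close."""
    return ((feat - feat_rot) ** 2).sum(axis=1).mean()

def fisher_loss(feat, labels):
    """Regularizer 2: small within-class scatter, large between-class
    separation, measured against class means and the global mean."""
    mu = feat.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in np.unique(labels):
        fc = feat[labels == c]
        mc = fc.mean(axis=0)
        sw += ((fc - mc) ** 2).sum()              # within-class scatter
        sb += len(fc) * ((mc - mu) ** 2).sum()    # between-class separation
    return sw - sb   # minimized jointly with the detection loss
```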

Nenad Markuš;Igor Pandžić;Jörgen Ahlberg; "Learning Local Descriptors by Optimizing the Keypoint-Correspondence Criterion: Applications to Face Matching, Learning From Unlabeled Videos and 3D-Shape Retrieval," vol.28(1), pp.279-290, Jan. 2019. The current best local descriptors are learned on a large data set of matching and non-matching keypoint pairs. However, data of this kind are not always available, since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly labeled data. In addition, we discuss how to improve the method by incorporating the procedure of mining hard negatives. We also show how our approach can be used to learn convolutional features from unlabeled video signals and 3D models.

Kaihao Zhang;Wenhan Luo;Yiran Zhong;Lin Ma;Wei Liu;Hongdong Li; "Adversarial Spatio-Temporal Learning for Video Deblurring," vol.28(1), pp.291-301, Jan. 2019. Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite the significant efforts devoted to video-deblurring research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. Our DBLRNet is able to jointly capture spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblurring performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in a generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The resulting network, which we name deblurring GAN, is tested on two standard benchmarks and achieves state-of-the-art performance.

You Yang;Qiong Liu;Xin He;Zhen Liu; "Cross-View Multi-Lateral Filter for Compressed Multi-View Depth Video," vol.28(1), pp.302-315, Jan. 2019. Multi-view depth is crucial for describing positioning information in 3D space for virtual reality, free viewpoint video, and other interaction- and remote-oriented applications. However, in cases of lossy compression for bandwidth-limited remote applications, the quality of multi-view depth video suffers from quantization errors, leading to the generation of obvious artifacts in consequent virtual view rendering during interactions. Considerable efforts must be made to properly address these artifacts. In this paper, we propose a cross-view multi-lateral filtering scheme to improve the quality of compressed depth maps/videos within the framework of asymmetric multi-view video with depth compression. Through this scheme, a distorted depth map is enhanced via non-local candidates selected from current and neighboring viewpoints of different time-slots. Specifically, these candidates are clustered into a macro super pixel denoting the physical and semantic cross-relationships of the cross-view, spatial and temporal priors. The experimental results show gains on static depth maps and dynamic depth videos in terms of the PSNR and SSIM metrics, respectively. In subjective evaluations, even object contours are recovered from a compressed depth video. We also verify our method via several practical applications. In these verifications, artifacts on object contours are properly managed for the development of interactive video, and discontinuous object surfaces are restored for 3D modeling. Our results suggest that the proposed filter outperforms state-of-the-art filters and is suitable for use in multi-view color plus depth-based interaction- and remote-oriented applications.

Gaoang Wang;Jenq-Neng Hwang;Craig Rose;Farron Wallace; "Uncertainty-Based Active Learning via Sparse Modeling for Image Classification," vol.28(1), pp.316-329, Jan. 2019. Uncertainty sampling-based active learning has been well studied for selecting informative samples to improve the performance of a classifier. In batch-mode active learning, a batch of samples is selected for a query at the same time, and the samples with the highest uncertainty are favored. However, this selection strategy ignores the relations among the samples, as the selected samples may carry much redundant information. This paper addresses this problem by proposing a novel method that combines uncertainty, diversity, and density via sparse modeling in the sample selection. We use a sparse linear combination to represent the uncertainty of the unlabeled pool data with Gaussian kernels, in which diversity and density are well incorporated. A selective sampling method is applied before optimization to reduce the representation error. To deal with the ℓ0 norm constraint in the sparse problem, two approximate approaches are adopted for efficient optimization. Four image classification data sets are used for evaluation. Extensive experiments related to batch size, feature space, seed size, significance analysis, data transforms, and time efficiency demonstrate the advantages of the proposed method.
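A greedy caricature of the uncertainty-plus-diversity idea (the paper instead solves an ℓ0-constrained sparse reconstruction; the entropy score, the max-similarity redundancy penalty, and all parameter names below are assumptions):

```python
import numpy as np

def select_batch(probs, X, batch_size, gamma=1.0, beta=0.5):
    """Greedy batch selection trading off uncertainty against redundancy.
    probs: (n, n_classes) classifier outputs on the unlabeled pool.
    X:     (n, d) pool features used for the Gaussian kernel."""
    uncertainty = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # entropy
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                       # Gaussian kernel similarity
    chosen = []
    for _ in range(batch_size):
        # penalize similarity to anything already in the batch
        redundancy = K[:, chosen].max(axis=1) if chosen else np.zeros(len(X))
        score = uncertainty - beta * redundancy
        score[chosen] = -np.inf
        chosen.append(int(np.argmax(score)))
    return chosen
```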

Tao Wang;Jian Yang;Zexuan Ji;Quansen Sun; "Probabilistic Diffusion for Interactive Image Segmentation," vol.28(1), pp.330-342, Jan. 2019. This paper presents an interactive image segmentation approach in which we formulate segmentation as a probabilistic estimation problem based on the prior user intention. Instead of directly measuring the relationship between pixels and labels, we first estimate the distances between pixel pairs and label pairs using a probabilistic framework. Then, binary probabilities with label pairs are naturally converted to unary probabilities with labels. The higher order relationship helps improve the robustness to user inputs. To improve segmentation accuracy, a likelihood learning framework is proposed to fuse the region and the boundary information of the image by imposing a smoothing constraint on the unary potentials. Furthermore, we establish an equivalence relationship between likelihood learning and likelihood diffusion and propose an iterative diffusion-based optimization strategy to maintain computational efficiency. Experiments on the Berkeley segmentation data set and Microsoft GrabCut database demonstrate that the proposed method can obtain better performance than the state-of-the-art methods.

Sean I. Young;Aous T. Naman;David Taubman; "COGL: Coefficient Graph Laplacians for Optimized JPEG Image Decoding," vol.28(1), pp.343-355, Jan. 2019. We address the problem of decoding Joint Photographic Experts Group (JPEG)-encoded images with fewer visual artifacts. We view the decoding task as an ill-posed inverse problem and find a regularized solution using a convex, graph Laplacian-regularized model. Since the resulting problem is non-smooth and entails non-local regularization, we use fast high-dimensional Gaussian filtering techniques with the proximal gradient descent method to solve our convex problem efficiently. Our patch-based "coefficient graph" is better suited than the traditional pixel-based ones for regularizing smooth non-stationary signals such as natural images and relates directly to classic non-local means denoising of images. We also extend our graph along the temporal dimension to handle the decoding of M-JPEG-encoded video. Despite the minimalistic nature of our convex problem, it produces decoded images of similar quality to other, more complex, state-of-the-art methods while being up to five times faster. We also expound on the relationship between our method and the classic ANCE method, reinterpreting ANCE from a graph-based regularization perspective.
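The core convex model is small enough to sketch directly; here a conjugate-gradient solve on a precomputed sparse patch-similarity graph W stands in for the paper's proximal-gradient scheme with fast Gaussian filtering (the solver choice, lam, and function name are assumptions):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def laplacian_decode(y, W, lam=0.1):
    """Solve min_x ||x - y||^2 + lam * x^T L x for a decoded image x
    (flattened), where L = D - W is the Laplacian of a patch-similarity
    graph W (scipy sparse matrix) and y is the baseline JPEG decode."""
    n = len(y)
    D = sparse.diags(np.asarray(W.sum(axis=1)).ravel())
    L = D - W
    A = sparse.eye(n) + lam * L        # normal equations of the model
    x, _ = cg(A, y, atol=1e-8)
    return x
```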

Shan Li;Weihong Deng; "Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition," vol.28(1), pp.356-370, Jan. 2019. Facial expression is central to human experience, but most previous databases and studies are limited to posed facial behavior under controlled conditions. In this paper, we present a novel facial expression database, Real-world Affective Face Database (RAF-DB), which contains approximately 30 000 facial images with uncontrolled poses and illumination from thousands of individuals of diverse ages and races. During the crowdsourcing annotation, each image is independently labeled by approximately 40 annotators. An expectation–maximization algorithm is developed to reliably estimate the emotion labels, which reveals that real-world faces often express compound or even mixture emotions. A cross-database study between RAF-DB and CK+ database further indicates that the action units of real-world emotions are much more diverse than, or even deviate from, those of laboratory-controlled emotions. To address the recognition of multi-modal expressions in the wild, we propose a new deep locality-preserving convolutional neural network (DLP-CNN) method that aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatter. Benchmark experiments on 7-class basic expressions and 11-class compound expressions, as well as additional experiments on CK+, MMI, and SFEW 2.0 databases, show that the proposed DLP-CNN outperforms the state-of-the-art handcrafted features and deep learning-based methods for expression recognition in the wild. To promote further study, we have made the RAF database, benchmarks, and descriptor encodings publicly available to the research community.

Mariusz Dzwonkowski;Roman Rykaczewski; "Secure Quaternion Feistel Cipher for DICOM Images," vol.28(1), pp.371-380, Jan. 2019. An improved and extended version of a quaternion-based lossless encryption technique for Digital Imaging and Communications in Medicine (DICOM) images is proposed. We highlight and address several security flaws present in the previous version of the algorithm originally proposed by Dzwonkowski et al. (2015). The newly proposed secure quaternion Feistel cipher (S-QFC) algorithm retains the concept of a modified Feistel network with modular arithmetic and the use of special properties of quaternions to perform rotations of data sequences in 3D space for each of the cipher rounds. A new and more secure key generation scheme based on quaternion Julia sets is utilized. We also introduce both-sided modular matrix multiplications for the encryption and decryption processes. The proposed S-QFC algorithm eliminates the major security flaws of its predecessor while maintaining high computation speed in comparison to other algorithms originally embedded in DICOM (i.e., advanced encryption standard and triple data encryption standard). A computer-based analysis has been carried out. Simulation results and cryptanalysis are shown at the end of this paper.
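The geometric primitive the cipher relies on, rotating data sequences in 3D space via quaternion conjugation, can be sketched in a few lines; the key schedule, Feistel structure, and modular matrix multiplications are omitted, and the function names are illustrative.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a, b given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(v, q):
    """Rotate a 3D data point v by unit quaternion q: v' = q v q*."""
    q = q / np.linalg.norm(q)
    qc = q * np.array([1, -1, -1, -1])        # conjugate
    return qmul(qmul(q, np.r_[0.0, v]), qc)[1:]
```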

IEEE Transactions on Medical Imaging - new TOC (2018 September 20)

* "Table of contents," vol.37(9), pp.C1-C4, Sept. 2018.* Presents the table of contents for this issue of the publication.

* "IEEE Transactions on Medical Imaging publication information," vol.37(9), pp.C2-C2, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Lipeng Ning;Yogesh Rathi; "A Dynamic Regression Approach for Frequency-Domain Partial Coherence and Causality Analysis of Functional Brain Networks," vol.37(9), pp.1957-1969, Sept. 2018. Coherence and causality measures are often used to analyze the influence of one region on another during analysis of functional brain networks. The analysis methods usually involve a regression problem, where the signal of interest is decomposed into a mixture of a regressor and a residual signal. In this paper, we revisit this basic problem and present solutions that provide the minimal-entropy residuals for different types of regression filters, such as causal, instantaneously causal, and noncausal filters. Using optimal prediction theory, we derive several novel frequency-domain expressions for partial coherence, causality, and conditional causality analysis. In particular, our solution provides a more accurate estimation of the frequency-domain causality compared with the classical Geweke causality measure. Using synthetic examples and in vivo resting-state functional magnetic resonance imaging data from the human connectome project, we show that the proposed solution is more accurate at revealing frequency-domain linear dependence among high-dimensional signals.

Kyounghun Lee;Eung Je Woo;Jin Keun Seo; "A Fidelity-Embedded Regularization Method for Robust Electrical Impedance Tomography," vol.37(9), pp.1970-1977, Sept. 2018. Electrical impedance tomography (EIT) provides functional images of an electrical conductivity distribution inside the human body. Since the 1980s, many potential clinical applications have arisen using inexpensive portable EIT devices. EIT acquires multiple trans-impedance measurements across the body from an array of surface electrodes around a chosen imaging slice. The conductivity image reconstruction from the measured data is a fundamentally ill-posed inverse problem notoriously vulnerable to measurement noise and artifacts. Most available methods invert the ill-conditioned sensitivity or the Jacobian matrix using a regularized least-squares data-fitting technique. Their performances rely on the regularization parameter, which controls the trade-off between fidelity and robustness. For clinical applications of EIT, it would be desirable to develop a method achieving consistent performance over various uncertain data, regardless of the choice of the regularization parameter. Based on the analysis of the structure of the Jacobian matrix, we propose a fidelity-embedded regularization (FER) method and a motion artifact reduction filter. Incorporating the Jacobian matrix in the regularization process, the new FER method with the motion artifact reduction filter offers stable reconstructions of high-fidelity images from noisy data by taking a very large regularization parameter value. The proposed method showed practical merits in experimental studies of chest EIT imaging.
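For orientation, the conventional regularized least-squares step the paper improves on looks as follows (a plain Tikhonov identity regularizer is shown; FER replaces it with a fidelity-embedded operator derived from the Jacobian's structure, and lam plus the function name are assumptions):

```python
import numpy as np

def reconstruct(J, b, lam):
    """Conventional one-step EIT reconstruction on the linearized problem:
    dsigma = argmin ||J x - b||^2 + lam ||x||^2, where J is the sensitivity
    (Jacobian) matrix and b the measured boundary-voltage change. The result
    is highly sensitive to the choice of lam, which is what FER avoids."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ b)
```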

Theodore B. Dubose;David Cunefare;Elijah Cole;Peyman Milanfar;Joseph A. Izatt;Sina Farsiu; "Statistical Models of Signal and Noise and Fundamental Limits of Segmentation Accuracy in Retinal Optical Coherence Tomography," vol.37(9), pp.1978-1988, Sept. 2018. Optical coherence tomography (OCT) has revolutionized diagnosis and prognosis of ophthalmic diseases by visualization and measurement of retinal layers. To speed up the quantitative analysis of disease biomarkers, an increasing number of automatic segmentation algorithms have been proposed to estimate the boundary locations of retinal layers. While the performance of these algorithms has significantly improved in recent years, a critical question to ask is how far we are from a theoretical limit to OCT segmentation performance. In this paper, we present the Cramér–Rao lower bounds (CRLBs) for the problem of OCT layer segmentation. In deriving the CRLBs, we address the important problem of defining statistical models that best represent the intensity distribution in each layer of the retina. Additionally, we calculate the bounds under an optimal affine bias, reflecting the use of prior knowledge in many segmentation algorithms. Experiments using in vivo images of human retina from a commercial spectral domain OCT system are presented, showing potential for improvement of automated segmentation accuracy. Our general mathematical model can be easily adapted for virtually any OCT system. Furthermore, the statistical models of signal and noise developed in this paper can be utilized for the future improvements of OCT image denoising, reconstruction, and many other applications.
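As a reference point, the bound being specialized here is the standard Cramér–Rao inequality; a minimal statement, with the biased-estimator form that covers the paper's optimal affine bias shown second:

```latex
% Unbiased-case CRLB for the boundary-location parameters \theta:
\operatorname{Cov}(\hat{\theta}) \succeq I(\theta)^{-1},
\qquad
[I(\theta)]_{ij}
  = -\mathbb{E}\left[\frac{\partial^{2}\log p(x;\theta)}
                          {\partial\theta_{i}\,\partial\theta_{j}}\right].

% With estimator bias b(\theta) (e.g., an optimal affine bias), the bound
% generalizes to
\operatorname{Cov}(\hat{\theta}) \succeq
\Bigl(I + \frac{\partial b}{\partial \theta}\Bigr)\, I(\theta)^{-1}
\Bigl(I + \frac{\partial b}{\partial \theta}\Bigr)^{\top}.
```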

Kuan Lu;Patrick Goodwill;Bo Zheng;Steven Conolly; "Multi-Channel Acquisition for Isotropic Resolution in Magnetic Particle Imaging," vol.37(9), pp.1989-1998, Sept. 2018. Magnetic Particle Imaging (MPI), a molecular imaging modality that images biocompatible superparamagnetic iron oxide tracers, is well-suited for clinical angiography, in vivo cell tracking, cancer detection, and inflammation imaging. MPI is sensitive and quantitative with respect to tracer concentration, with a positive contrast that is not attenuated or corrupted by tissue background. Like other clinical imaging techniques, such as computed tomography, magnetic resonance imaging, and nuclear medicine, MPI can be modeled as a linear and shift-invariant system with a well-defined point spread function (PSF) capturing the system blur. The key difference, as we show here, is that the MPI PSF is highly dependent on scanning parameters and is anisotropic using only a single-imaging trajectory. This anisotropic resolution poses a major challenge for clear and accurate clinical diagnosis. In this paper, we generalize a tensor imaging theory for multidimensional x-space MPI to explore the physical source of this anisotropy, present a multi-channel scanning algorithm to enable isotropic resolution, and experimentally demonstrate isotropic MPI resolution through the construction and the use of two orthogonal excitation and detector coil pairs.

Mathias Unberath;Oliver Taubmann;André Aichert;Stephan Achenbach;Andreas Maier; "Prior-Free Respiratory Motion Estimation in Rotational Angiography," vol.37(9), pp.1999-2009, Sept. 2018. Rotational coronary angiography using C-arm angiography systems enables intra-procedural 3-D imaging that is considered beneficial for diagnostic assessment and interventional guidance. Despite previous efforts, rotational angiography was not yet successfully established in clinical practice for coronary artery procedures due to challenges associated with substantial intra-scan respiratory and cardiac motion. While gating handles cardiac motion during reconstruction, respiratory motion requires compensation. State-of-the-art algorithms rely on 3-D / 2-D registration that requires an uncompensated reconstruction of sufficient quality. To overcome this limitation, we investigate two prior-free respiratory motion estimation methods based on the optimization of: 1) epipolar consistency conditions (ECCs) and 2) a task-based auto-focus measure (AFM). The methods assess redundancies in projection images or impose favorable properties of 3-D space, respectively, and are used to estimate the respiratory motion of the coronary arteries within rotational angiograms. We evaluate our algorithms on the publicly available CAVAREV benchmark and on clinical data. We quantify reductions in error due to respiratory motion compensation using a dedicated reconstruction domain metric. Moreover, we study the improvements in image quality when using an analytic and a novel temporal total variation regularized algebraic reconstruction algorithm. We observed substantial improvement in all figures of merit compared with the uncompensated case. Improvements in image quality presented as a reduction of double edges, blurring, and noise. Benefits of the proposed corrections were notable even in cases suffering little corruption from respiratory motion, translating to an improvement in the vessel sharpness of (6.08 ± 4.46)% and (14.7 ± 8.80)% when the ECC-based and the AFM-based compensation were applied. On the CAVAREV data, our motion compensation approach exhibits an improvement of (27.6 ± 7.5)% and (97.0 ± 17.7)% when the ECC and AFM were used, respectively. At the time of writing, our method based on AFM is leading the CAVAREV scoreboard. Both motion estimation strategies are purely image-based and accurately estimate the displacements of the coronary arteries due to respiration. While current evidence suggests the superior performance of AFM, future work will further investigate the use of ECC in the context of angiography as they solely rely on geometric calibration and projection-domain images.

Adam C. Luchies;Brett C. Byram; "Deep Neural Networks for Ultrasound Beamforming," vol.37(9), pp.2010-2021, Sept. 2018. We investigate the use of deep neural networks (DNNs) for suppressing off-axis scattering in ultrasound channel data. Our implementation operates in the frequency domain via the short-time Fourier transform. The inputs to the DNN consisted of the separated real and imaginary components (i.e., in-phase and quadrature components) observed across the aperture of the array, at a single frequency and for a single depth. Different networks were trained for different frequencies. The output had the same structure as the input, and the real and imaginary components were combined as complex data before an inverse short-time Fourier transform was used to reconstruct channel data. Using simulations, a physical phantom experiment, and in vivo scans from a human liver, we compared this DNN approach to standard delay-and-sum (DAS) beamforming and an adaptive imaging technique that uses the coherence factor. For a simulated point target, the side lobes when using the DNN approach were about 60 dB below those of standard DAS. For a simulated anechoic cyst, the DNN approach improved the contrast ratio (CR) and contrast-to-noise ratio (CNR) by 8.8 dB and 0.3 dB, respectively, compared with DAS. For an anechoic cyst in a physical phantom, the DNN approach improved CR and CNR by 17.1 dB and 0.7 dB, respectively. For two in vivo scans, the DNN approach improved CR and CNR by 13.8 dB and 9.7 dB, respectively. We also explored methods for examining how the networks in this paper function.
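The processing pipeline described here can be sketched end to end; the pass-through lambda stands in for a trained per-frequency network, and the dictionary-of-networks interface, nperseg, and function name are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def dnn_beamform(channel_data, nets, fs, nperseg=16):
    """STFT each channel signal, apply a per-frequency network to the
    aperture-domain real/imag (I/Q) vector, then inverse-STFT back.
    channel_data: (n_channels, n_samples); nets: {freq_bin: callable}."""
    f, t, Z = stft(channel_data, fs=fs, nperseg=nperseg, axis=-1)
    # Z has shape (n_channels, n_freqs, n_frames)
    for k in range(Z.shape[1]):
        ap = Z[:, k, :]                                  # aperture x frames
        x = np.concatenate([ap.real, ap.imag], axis=0)   # separated I/Q input
        y = nets.get(k, lambda v: v)(x)                  # per-frequency DNN
        n = ap.shape[0]
        Z[:, k, :] = y[:n] + 1j * y[n:]                  # recombine as complex
    _, rec = istft(Z, fs=fs, nperseg=nperseg)
    return rec
```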

Jonathan Porée;Mathilde Baudet;François Tournoux;Guy Cloutier;Damien Garcia; "A Dual Tissue-Doppler Optical-Flow Method for Speckle Tracking Echocardiography at High Frame Rate," vol.37(9), pp.2022-2032, Sept. 2018. A coupled computational method for recovering tissue velocity vector fields from high-frame-rate echocardiography is described. Conventional transthoracic echocardiography provides limited temporal resolution, which may prevent accurate estimation of the 2-D myocardial velocity field dynamics. High-frame-rate compound echocardiography using diverging waves with integrated motion compensation has been shown to provide concurrent high-resolution B-mode and tissue Doppler imaging (TDI). In this paper, we propose a regularized least-squares method to provide accurate myocardial velocities at high frame rates. The velocity vector field was formulated as the minimizer of a cost function that is a weighted sum of: 1) the ℓ2-norm of the material derivative of the B-mode images (optical flow); 2) the ℓ2-norm of the tissue-Doppler residuals; and 3) a quadratic regularizer that imposes spatial smoothness and well-posedness. A finite difference discretization of the continuous problem was adopted, leading to a sparse linear system. The proposed framework was validated in vitro on a rotating disk with speeds up to 20 cm/s, and compared with speckle tracking echocardiography (STE) by block matching. It was also validated in vivo against TDI and STE in a cross-validation strategy involving parasternal long axis and apical three-chamber views. The proposed method based on the combination of optical flow and tissue Doppler led to more accurate time-resolved velocity vector fields.
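The three-term cost function can be written out explicitly; the sketch below is consistent with the terms listed above, but the symbols (B for the B-mode sequence, v for the velocity field, e_D for the Doppler beam direction, v_D for the measured Doppler velocity) and the weights lambda_D, lambda_R are notational assumptions.

```latex
J(v) =
  \underbrace{\Bigl\|\tfrac{\partial B}{\partial t}
      + v \cdot \nabla B\Bigr\|_{2}^{2}}_{\text{optical flow}}
+ \lambda_{D}\,
  \underbrace{\bigl\| v \cdot e_{D} - v_{D} \bigr\|_{2}^{2}}_{\text{Doppler residuals}}
+ \lambda_{R}\,
  \underbrace{\|\nabla v\|_{2}^{2}}_{\text{smoothness}} .
```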

Pietro Gori;Olivier Colliot;Linda Marrakchi Kacem;Yulia Worbe;Alexandre Routier;Cyril Poupon;Andreas Hartmann;Nicholas Ayache;Stanley Durrleman; "Double Diffeomorphism: Combining Morphometry and Structural Connectivity Analysis," vol.37(9), pp.2033-2043, Sept. 2018. The brain is composed of several neural circuits which may be seen as anatomical complexes composed of grey matter structures interconnected by white matter tracts. Grey and white matter components may be modeled as 3-D surfaces and curves, respectively. Neurodevelopmental disorders involve morphological and organizational alterations which cannot be jointly captured by usual shape analysis techniques based on single diffeomorphisms. We propose a new deformation scheme, called double diffeomorphism, which is a combination of two diffeomorphisms. The first one captures changes in structural connectivity, whereas the second one recovers the global morphological variations of both grey and white matter structures. This deformation model is integrated into a Bayesian framework for atlas construction. We evaluate it on a data-set of 3-D structures representing the neural circuits of patients with Gilles de la Tourette syndrome (GTS). We show that this approach makes it possible to localise, quantify, and easily visualise the pathological anomalies altering the morphology and organization of the neural circuits. Furthermore, results also indicate that the proposed deformation model better discriminates between controls and GTS patients than a single diffeomorphism.

Afaf Tareef;Yang Song;Heng Huang;Dagan Feng;Mei Chen;Yue Wang;Weidong Cai; "Multi-Pass Fast Watershed for Accurate Segmentation of Overlapping Cervical Cells," vol.37(9), pp.2044-2059, Sept. 2018. The task of segmenting cell nuclei and cytoplasm in Pap smear images is one of the most challenging tasks in automated cervix cytological analysis, due specifically to the presence of overlapping cells. This paper introduces a multi-pass fast watershed-based method (MPFW) to segment both nucleus and cytoplasm from large masses of overlapping cervical cells in three watershed passes. The first pass locates the nuclei with a barrier-based watershed on the gradient-based edge map of a pre-processed image. The next pass segments the isolated, touching, and partially overlapping cells with a watershed transform adapted to the cell shape and location. The final pass introduces mutual iterative watersheds, applied separately to each nucleus in the largely overlapping clusters, to estimate the cell shape. In MPFW, the line-shaped contours of the watershed cells are deformed with ellipse fitting and contour adjustment to give a better representation of cell shapes. The performance of the proposed method has been evaluated using synthetic, real extended depth-of-field, and multi-layer cervical cytology images provided by the first and second overlapping cervical cytology image segmentation challenges at ISBI 2014 and ISBI 2015. The experimental results demonstrate superior performance of the proposed MPFW in terms of segmentation accuracy, detection rate, and time complexity, compared with recent peer methods.

Brian J. Lee;Alexander M. Grant;Chen-Ming Chang;Ronald D. Watkins;Gary H. Glover;Craig S. Levin; "MR Performance in the Presence of a Radio Frequency-Penetrable Positron Emission Tomography (PET) Insert for Simultaneous PET/MRI," vol.37(9), pp.2060-2069, Sept. 2018. Despite the great promise of integrated positron emission tomography (PET)/magnetic resonance (MR) imaging to add molecular information to anatomical and functional MR, its potential impact in medicine is diminished by a very high cost, limiting its dissemination. An RF-penetrable PET ring that can be inserted into any existing MR system has been developed to address this issue. Employing optical signal transmission along with battery power enables the PET ring insert to electrically float with respect to the MR system. Inter-modular gaps of the PET ring then allow the RF transmit field from the standard built-in body coil to penetrate into the PET field-of-view (FOV) with some attenuation that can be compensated for. MR performance, including RF noise, magnetic susceptibility, RF penetrability through and $B_1$ uniformity within the PET insert, and MR image quality, was analyzed with and without the PET ring present. The simulated and experimentally measured RF field attenuation factors with the PET ring present were −2.7 and −3.2 dB, respectively. The magnetic susceptibility effect (0.063 ppm) and the noise emitted from the PET ring in the MR receive channel were insignificant. $B_1$ homogeneity of a spherical agar phantom within the PET ring FOV dropped by 8.4%, and MR image SNR was reduced by 3.5 and 4.3 dB with the PET present for gradient-recalled echo and fast-spin echo, respectively. This paper demonstrates, for the first time, an RF-penetrable PET insert comprising a full ring of operating detectors that achieves simultaneous PET/MR using the standard built-in body coil as the RF transmitter.

Ivan Olefir;Stratis Tzoumas;Hong Yang;Vasilis Ntziachristos; "A Bayesian Approach to Eigenspectra Optoacoustic Tomography," vol.37(9), pp.2070-2079, Sept. 2018. The quantification of hemoglobin oxygen saturation (sO2) with multispectral optoacoustic (OA) (photoacoustic) tomography (MSOT) is a complex spectral unmixing problem, since the OA spectra of hemoglobin are modified with tissue depth due to the depth (location) and wavelength dependencies of optical fluence in tissue. In a recent work, a method termed eigenspectra MSOT (eMSOT) was proposed for addressing the dependence of spectra on fluence and quantifying blood sO2 in deep tissue. While eMSOT offers enhanced sO2 quantification accuracy over conventional unmixing methods, its performance may be compromised by noise and image reconstruction artifacts. In this paper, we propose a novel Bayesian method to improve eMSOT performance in noisy environments. We introduce a spectral reliability map, i.e., a method that can estimate the level of noise superimposed onto the recorded OA spectra. Using this noise estimate, we formulate eMSOT as a Bayesian inverse problem where the inversion constraints are based on probabilistic graphical models. Results based on numerical simulations indicate that the proposed method offers improved accuracy and robustness under high noise levels due to the adaptive nature of the Bayesian method.

Reijer L. Leijsen;Wyger M. Brink;Cornelis A. T. van den Berg;Andrew G. Webb;Rob F. Remis; "3-D Contrast Source Inversion-Electrical Properties Tomography," vol.37(9), pp.2080-2089, Sept. 2018. Contrast source inversion-electrical properties tomography (CSI-EPT) is an iterative reconstruction method to retrieve the electrical properties (EPs) of tissues from magnetic resonance data. The method is based on integral representations of the electromagnetic field and has been shown to allow EP reconstructions of small structures as well as tissue boundaries with compelling accuracy. However, to date, CSI-EPT has been implemented for 2-D configurations only, which limits its applicability. In this paper, a full 3-D extension of the CSI-EPT method is presented, enabling CSI-EPT to be applied to realistic 3-D scenarios. Here, we demonstrate a proof of principle of 3-D CSI-EPT and present reconstructions of a 3-D abdominal body section and a 3-D head model using different settings of the transmit coil. Numerical results show that the full 3-D approach yields accurate reconstructions of the EPs, even at tissue boundaries, and is most accurate in regions where the absolute value of the electric field is highest.

Shengheng Liu;Jiabin Jia;Yimin D. Zhang;Yunjie Yang; "Image Reconstruction in Electrical Impedance Tomography Based on Structure-Aware Sparse Bayesian Learning," vol.37(9), pp.2090-2102, Sept. 2018. Electrical impedance tomography (EIT) investigates the internal conductivity changes of an object through a series of boundary electrodes and has become increasingly attractive in a broad spectrum of applications. However, the design of optimal tomographic image reconstruction algorithms has not yet reached an adequate level of maturity. In this paper, we propose an efficient and high-resolution EIT image reconstruction method in the framework of sparse Bayesian learning. Significant performance improvement is achieved by imposing structure-aware priors on the learning process to incorporate the prior knowledge that practical conductivity distribution maps exhibit clustered sparsity and intra-cluster continuity. The proposed method not only achieves high-resolution estimation and preserves shape information even in low signal-to-noise ratio scenarios, but also avoids the time-consuming parameter tuning process. The effectiveness of the proposed algorithm is validated through comparisons with state-of-the-art techniques using extensive numerical simulation and phantom experiment results.

Gopal Nataraj;Jon-Fredrik Nielsen;Clayton Scott;Jeffrey A. Fessler; "Dictionary-Free MRI PERK: Parameter Estimation via Regression with Kernels," vol.37(9), pp.2103-2114, Sept. 2018. This paper introduces a fast, general method for dictionary-free parameter estimation in quantitative magnetic resonance imaging (QMRI): parameter estimation via regression with kernels (PERK). PERK first uses prior distributions and the nonlinear MR signal model to simulate many parameter-measurement pairs. Inspired by machine learning, PERK then takes these parameter-measurement pairs as labeled training points and learns from them a nonlinear regression function using kernel functions and convex optimization. PERK admits a simple implementation as per-voxel nonlinear lifting of MRI measurements followed by linear minimum mean-squared error regression. We demonstrate PERK for $T_1$, $T_2$ estimation, a well-studied application where it is simple to compare PERK estimates against dictionary-based grid search estimates and iterative optimization estimates. Numerical simulations as well as single-slice phantom and in vivo experiments demonstrate that PERK and other tested methods produce comparable $T_1$, $T_2$ estimates in white and gray matter, but PERK is consistently at least $140\times$ faster. This acceleration factor may increase by several orders of magnitude for full-volume QMRI estimation problems involving more latent parameters per voxel.
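A toy illustration of the PERK idea under assumptions of ours, not the paper's: a scalar latent parameter, a made-up saturation-recovery signal model, and scikit-learn's KernelRidge standing in for the kernel regression:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)

    # 1) Simulate labeled pairs from a prior and a nonlinear signal model.
    t1 = rng.uniform(300.0, 2000.0, size=2000)          # latent parameter (ms)
    tr = np.array([50.0, 400.0, 1200.0])                # acquisition settings (ms)
    y = 1.0 - np.exp(-tr[None, :] / t1[:, None])        # toy signal model
    y += 0.01 * rng.standard_normal(y.shape)            # measurement noise

    # 2) Learn the nonlinear regression from measurements to parameters.
    perk = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-3).fit(y, t1)

    # 3) Estimate parameters voxel-by-voxel, with no dictionary search.
    t1_true = np.array([500.0, 1000.0, 1500.0])
    y_test = 1.0 - np.exp(-tr[None, :] / t1_true[:, None])
    print(perk.predict(y_test))                         # roughly [500, 1000, 1500]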

Dmytro Shulga;Oleksii Morozov;Patrick Hunziker; "Solving 3-D PDEs by Tensor B-Spline Methodology: A High Performance Approach Applied to Optical Diffusion Tomography," vol.37(9), pp.2115-2125, Sept. 2018. Solutions of 3-D elliptic PDEs form the basis of many mathematical models in medicine and engineering. Solving elliptic PDEs numerically in 3-D with fine discretization and high precision is challenging for several reasons, including the cost of 3-D meshing, the massive increase in operation count and memory consumption when a high-order basis is used, and the need to overcome the “curse of dimensionality.” This paper describes how these challenges can be either overcome or relaxed by a Tensor B-spline methodology with the following key properties: 1) the tensor structure of the variational formulation leads to regularity, separability, and sparsity; 2) a method for integration over the complex domain boundaries eliminates meshing; and 3) the formulation induces high-performance and memory-efficient computational algorithms. The methodology was evaluated by application to the forward problem of Optical Diffusion Tomography (ODT), comparing it with the solver from a state-of-the-art Finite-Element Method (FEM)-based ODT reconstruction framework. We found that the Tensor B-spline methodology allows one to solve the 3-D elliptic PDEs accurately and efficiently. It does not require 3-D meshing, even on complex and non-convex boundary geometries. The Tensor B-spline approach outperforms and is more accurate than the FEM when the order of the basis function is > 1, requiring fewer operations and lower memory consumption. Thus, the Tensor B-spline methodology is feasible and attractive for solving large elliptic 3-D PDEs encountered in real-world problems.

David Tellez;Maschenka Balkenhol;Irene Otte-Höller;Rob van de Loo;Rob Vogels;Peter Bult;Carla Wauters;Willem Vreuls;Suzanne Mol;Nico Karssemeijer;Geert Litjens;Jeroen van der Laak;Francesco Ciompi; "Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks," vol.37(9), pp.2126-2136, Sept. 2018. Manual counting of mitotic tumor cells in tissue sections constitutes one of the strongest prognostic markers for breast cancer. This procedure, however, is time-consuming and error-prone. We developed a method to automatically detect mitotic figures in breast cancer tissue sections based on convolutional neural networks (CNNs). Application of CNNs to hematoxylin and eosin (H&E) stained histological tissue sections is hampered by noisy and expensive reference standards established by pathologists, lack of generalization due to staining variation across laboratories, and the high computational requirements needed to process gigapixel whole-slide images (WSIs). In this paper, we present a method to train and evaluate CNNs to specifically address these issues in the context of mitosis detection in breast cancer WSIs. First, by combining image analysis of mitotic activity in phosphohistone-H3 restained slides and registration, we built a reference standard for mitosis detection in entire H&E WSIs requiring minimal manual annotation effort. Second, we designed a data augmentation strategy that creates diverse and realistic H&E stain variations by modifying the H&E color channels directly. Using it during training, combined with network ensembling, resulted in a stain-invariant mitosis detector. Third, we applied knowledge distillation to reduce the computational requirements of the mitosis detection ensemble with a negligible loss of performance. The system was trained on a single-center cohort and evaluated on an independent multicenter cohort from The Cancer Genome Atlas on the three tasks of the Tumor Proliferation Assessment Challenge. We obtained a performance within the top three best methods for most of the tasks of the challenge.
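The direct perturbation of H&E color channels can be illustrated with scikit-image's color deconvolution routines; the perturbation ranges below are our own guesses for illustration, not the values used in the paper:

    import numpy as np
    from skimage.color import rgb2hed, hed2rgb

    def augment_he(rgb, rng, sigma=0.05, bias=0.03):
        # Unmix RGB into Haematoxylin, Eosin, and DAB channels, perturb each
        # channel with a random scale and offset, then mix back to RGB.
        hed = rgb2hed(rgb)
        for c in range(3):
            hed[..., c] = hed[..., c] * (1.0 + rng.uniform(-sigma, sigma)) \
                          + rng.uniform(-bias, bias)
        return np.clip(hed2rgb(hed), 0.0, 1.0)

    rng = np.random.default_rng(0)
    patch = rng.random((128, 128, 3))        # stand-in for an H&E patch in [0, 1]
    augmented = augment_he(patch, rng)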

Qiao Zheng;Hervé Delingette;Nicolas Duchateau;Nicholas Ayache; "3-D Consistent and Robust Segmentation of Cardiac Images by Deep Learning With Spatial Propagation," vol.37(9), pp.2137-2148, Sept. 2018. We propose a method based on deep learning to perform cardiac segmentation on short-axis magnetic resonance imaging stacks iteratively from the top slice (around the base) to the bottom slice (around the apex). At each iteration, a novel variant of the U-net is applied to propagate the segmentation of a slice to the adjacent slice below it. In other words, the prediction of the segmentation of a slice depends on the already existing segmentation of an adjacent slice. The 3-D consistency is hence explicitly enforced. The method is trained on a large database of 3078 cases from the U.K. Biobank. It is then tested on 756 different cases from the U.K. Biobank and three other state-of-the-art cohorts (ACDC with 100 cases, Sunnybrook with 30 cases, and RVSC with 16 cases). Results comparable to or even better than the state of the art in terms of distance measures are achieved. They also emphasize the assets of our method, namely enhanced spatial consistency (currently neither considered nor achieved by the state of the art) and the ability to generalize to unseen cases, even from other databases.

Rongzhao Zhang;Lei Zhao;Wutao Lou;Jill M. Abrigo;Vincent C. T. Mok;Winnie C. W. Chu;Defeng Wang;Lin Shi; "Automatic Segmentation of Acute Ischemic Stroke From DWI Using 3-D Fully Convolutional DenseNets," vol.37(9), pp.2149-2160, Sept. 2018. Acute ischemic stroke is recognized as a common cerebrovascular disease in the aging population. Accurate diagnosis and timely treatment can effectively improve the blood supply of the ischemic area and reduce the risk of disability or even death. Understanding the location and size of infarcts plays a critical role in diagnostic decisions. However, manual localization and quantification of stroke lesions are laborious and time-consuming. In this paper, we propose a novel automatic method to segment acute ischemic stroke from diffusion weighted images (DWIs) using deep 3-D convolutional neural networks (CNNs). Our method can efficiently utilize 3-D contextual information and automatically learn very discriminative features in an end-to-end and data-driven way. To relieve the difficulty of training very deep 3-D CNNs, we equip our network with dense connectivity to enable the unimpeded propagation of information and gradients throughout the network. We train our model with the Dice objective function to combat the severe class imbalance in the data. A DWI data set containing 242 subjects (90 for training, 62 for validation, and 90 for testing) with various types of acute ischemic stroke was constructed to evaluate our method. Our model achieved high performance on various metrics (Dice similarity coefficient: 79.13%, lesionwise precision: 92.67%, and lesionwise F1 score: 89.25%), outperforming the other state-of-the-art CNN methods by a large margin. We also evaluated the model on the ISLES2015-SSIS data set and achieved very competitive performance, which further demonstrates its generalization capacity. The proposed method is fast and accurate, demonstrating good potential for clinical routines.
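The Dice objective mentioned above is, in its soft form, one minus the Dice similarity coefficient between the predicted probability map and the binary ground truth; a minimal NumPy version (our own, for illustration):

    import numpy as np

    def soft_dice_loss(pred, target, eps=1e-6):
        # pred: predicted foreground probabilities in [0, 1]
        # target: binary ground-truth mask, same shape
        intersection = np.sum(pred * target)
        return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

    pred = np.array([[0.9, 0.1], [0.8, 0.2]])
    mask = np.array([[1.0, 0.0], [1.0, 0.0]])
    print(soft_dice_loss(pred, mask))   # small value: prediction matches the mask well

Because the loss is normalized by the total foreground mass rather than the image size, small lesions contribute as strongly as large ones, which is why it combats class imbalance.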

Yuan Gao;Kun Wang;Shixin Jiang;Yuhao Liu;Ting Ai;Jie Tian; "Corrections for “Bioluminescence Tomography Based on Gaussian Weighted Laplace Prior Regularization for Morphological Imaging of Glioma” [Nov 17, pp. 2343-2354]," vol.37(9), pp.2161-2161, Sept. 2018. In [1], the affiliation for Y. Gao, K. Wang, and J. Tian should have appeared as follows:

* "IEEE Life Sciences Conference," vol.37(9), pp.2162-2162, Sept. 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "EMBS Micro and Nanotechnology in Medicine Conference," vol.37(9), pp.2163-2163, Sept. 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "EMBS Micro and Nanotechnology in Medicine Conference," vol.37(9), pp.2164-2164, Sept. 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "IEEE Transactions on Medical Imaging information for authors," vol.37(9), pp.C3-C3, Sept. 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

IET Image Processing - new TOC (2018 September 20) [Website]

Praveen Kumar Reddy Yelampalli;Jagadish Nayak;Vilas H. Gaidhane; "Daubechies wavelet-based local feature descriptor for multimodal medical image registration," vol.12(10), pp.1692-1702, 10 2018. A new local feature descriptor, the recursive Daubechies pattern (RDbW), is developed by defining and encoding the Daubechies wavelet decomposed center–neighbour pixel relationship in the local texture. RDbW features are applied to the spatial alignment (registration) of multimodal medical images using a Procrustes analysis (PA)-based affine transformation function, and the registered images are further fused by employing a wavelet-based fusion method. A significant number of experiments was conducted, and the registration and fusion accuracy of the proposed feature descriptor is compared with prominent existing methods such as local binary patterns (LBP), local tetra pattern (LTrP), local diagonal extrema pattern (LDEP), and local diagonal Laplacian pattern (LDLP). Experimental results show that the proposed registration method improves the average registration accuracy by 38, 47, 71, and 76% compared with LDLP, LDEP, LTrP, and LBP, respectively. Further, the fusion results of the current approach exhibit an average improvement in entropy of 11%, standard deviation of 6%, edge strength of 12%, sharpness of 23%, and average gradient of 16% when compared with all other feature descriptors used for registering the images. The concepts presented here can be widely used in analysing the combined information present in multimodal medical images.

Yu Pan;Rongke Liu;Qiuchen Du; "Depth extraction method with subpixel matching for light-coding-based depth camera," vol.12(10), pp.1703-1712, 10 2018. Depth images extracted by light-coding-based depth cameras have been widely used to reconstruct three-dimensional scenes in recent years. However, the retrieved depth accuracy greatly influences the reconstruction quality. Here, the authors present a depth extraction method based on subpixel matching to improve the depth accuracy. The proposed method applies nearest-neighbour interpolation on the projector's image plane to obtain depth values with subpixel accuracy, thereby better preserving the important depth information without changing any inner structure of the depth camera. Experimental results show that the proposed method improves the depth images in both image quality and depth accuracy.

Huajun Song;Peihua Qiu; "Three-dimensional image registration using distributed parallel computing," vol.12(10), pp.1713-1720, 10 2018. Three-dimensional (3D) images have become increasingly popular in practice. They are commonly used in medical imaging applications, where it is often critical to compare two 3D images or monitor a sequence of 3D images. To make such image comparison or monitoring valid, the related 3D images should first be geometrically aligned, a process called image registration (IR). However, IR for 3D images can take substantial computing time, especially when a flexible method is considered that does not impose any parametric form on the underlying geometric transformation. Here, the authors explore a fast-computing environment for 3D IR based on distributed parallel computing. The selected 3D IR method is based on Taylor expansion and 3D local kernel smoothing. It is flexible, but computationally demanding. The authors demonstrate that this fast-computing environment can effectively handle the computing problem while keeping the good properties of the 3D IR method. The method discussed here is therefore useful for applications involving big data.

Satish Rapaka;Pullakura Rajesh Kumar; "Efficient approach for non-ideal iris segmentation using improved particle swarm optimisation-based multilevel thresholding and geodesic active contours," vol.12(10), pp.1721-1729, 10 2018. Segmentation is an important step in the iris recognition framework because the accuracy of the iris recognition system is affected by the segmentation of the iris. Image acquisition introduces noise artefacts such as specular reflections, eyelid/eyelash occlusions, and overlapping intensities, which make the segmentation process difficult. An efficient method is proposed for the segmentation of iris images that deals with non-circular iris boundaries and the other noise artefacts mentioned above. The proposed method uses Otsu multilevel thresholding based on an improved particle swarm optimisation technique as a pre-segmentation step, which delimits the iris region from the other parts of the eye image. Geodesic active contours incorporating a novel stopping function are then used to segment non-circular iris boundaries. The recognition accuracy of the proposed method is verified using the standard databases CASIA v3 Interval and UBIRISv1. The obtained results have been compared with existing methods and show encouraging performance.

Li Yongxue;Zhao Min;Sun Dihua; "Fast enhancement algorithm of highway tunnel image based on constraint of imaging model," vol.12(10), pp.1730-1735, 10 2018. Due to uneven illumination and the dim environment in the tunnel, monitored images are blurred, which makes it difficult to recognise the traffic status; it is therefore necessary to enhance the tunnel image in advance. In this study, a fast image enhancement algorithm based on an imaging model constraint is proposed. First, the method uses the combination of global atmospheric light and partitioned atmospheric light to estimate the local atmospheric light. Second, the transmission is estimated based on the formula derived from the imaging model constraints. Third, the method uses a constant instead of illumination to balance the tunnel image illumination. Last, the tunnel image is enhanced according to the imaging model. Experimental and comparative analysis results show that the proposed method can rapidly and effectively enhance the tunnel image.

Urvashi Prakash Shukla;Satyasai Jagannath Nanda; "Denoising hyperspectral images using Hilbert vibration decomposition with cluster validation," vol.12(10), pp.1736-1745, 10 2018. Denoising of hyperspectral images is an essential step to remove visual artifacts and improve image quality. There are various sources of noise, such as dark current, thermal and read noise produced by the detectors, and the stochastic error of photon counting, which lead to noise variability in both the spatial and spectral domains. In this study, the authors propose a novel denoising method based on the concept of Hilbert vibration decomposition (HVD). Being iterative in nature, HVD segregates the initial amplitude composition into components of slowly varying wavelength. Any hyperspectral image is captured by the sensor over contiguous wavelengths, so the variation in intensities over the spectral dimension is small. HVD separates pixels in decreasing order of their intensity, resulting in denoising of the image. To evaluate the method, various noise conditions have been tested on three real datasets: Washington DC Mall, Urban, and Pavia University. The validation is done both visually and quantitatively. The denoising, with an almost 100% mean structural similarity index, confirms the superiority of the designed method. Clustering and spectral analysis of various denoised images are also reported; a clustering accuracy of 65% is achieved by HVD, compared favourably with other methods.

Huimin Huang;Zuofeng Zhou;Jianzhong Cao; "Non-rigid point set registration by high-dimensional representation," vol.12(10), pp.1746-1752, 10 2018. Non-rigid point set registration is a key component in many computer vision and pattern recognition tasks. In this study, the authors propose a robust non-rigid point set registration method based on high-dimensional representation. Their central idea is to map the point sets into a high-dimensional space by integrating relative structure information into the coordinates of points. On the one hand, the point set registration is formulated as the estimation of a mixture of densities in the high-dimensional space. On the other hand, the relative distances are used to compute the local features, which assign the membership probabilities of the mixture model. The proposed model captures discriminative relative information and makes it possible to preserve both global and local structures of the point set during matching. Extensive experiments on both synthesised and real data demonstrate that the proposed method outperforms the state-of-the-art methods under various types of distortions, especially for deformation and rotation degradations.

Vineet Kumar;Abhijit Asati;Anu Gupta; "Memory-efficient architecture of circle Hough transform and its FPGA implementation for iris localisation," vol.12(10), pp.1753-1761, 10 2018. This study presents a circle Hough transform (CHT) architecture that provides a memory reduction of between 74 and 93%, with no and with little degradation in accuracy, respectively. For an image of P × Q pixels, the standard (direct) CHT requires a two-dimensional (2D) accumulator array of P × Q cells, but the proposed CHT uses a 2D accumulator array of (P/m) × (Q/n) cells for coarse circle detection and two 1D accumulator arrays of P × 1 and Q × 1 cells for fine detection, thereby reducing the memory by a factor of approximately m × n. The proposed CHT architecture was applied to iris localisation and evaluated comprehensively. The average accuracy of the proposed CHT for iris localisation (inner plus outer iris-circle detection) is 98%, with a memory reduction of 87% compared with the direct CHT. The proposed CHT architecture was implemented on a field-programmable gate array (FPGA), targeting a Xilinx Zynq device. The proposed CHT hardware takes an average processing time of 6.25 ms for iris localisation in a 320 × 240 pixel image. Comparison with previous work shows improved results. Finally, the effect of additive Gaussian noise on the CHT performance is investigated.
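To see where the memory saving comes from, take the 320 × 240 image quoted above and an illustrative m = n = 4 (our choice of factors, not necessarily the paper's):

    # Accumulator cells for a P x Q image with downsampling factors m, n
    P, Q, m, n = 320, 240, 4, 4
    direct   = P * Q                          # standard 2D CHT: 76800 cells
    proposed = (P // m) * (Q // n) + P + Q    # coarse 2D + two fine 1D: 5360 cells
    print(1 - proposed / direct)              # ~0.93, i.e. ~93% memory reduction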

Wentao Fan;Nizar Bouguila;Sami Bourouis;Yacine Laalaoui; "Entropy-based variational Bayes learning framework for data clustering," vol.12(10), pp.1762-1772, 10 2018. A novel framework is developed for the modelling and clustering of proportional data (i.e. normalised histograms) based on the Beta-Liouville mixture model. The framework is based on incremental model selection, testing whether a given component is truly Beta-Liouville distributed. Specifically, the authors compare the theoretical maximum entropy of the given component with the entropy estimated by the MeanNN estimator. If a significant difference is found, the component is considered not well fitted and is split into two new components with proper initialisation. The approach is tested on synthetic data sets and real-world applications involving human gesture recognition and vehicle tracking for traffic monitoring, demonstrating that it is superior to comparable techniques.

Jinfeng Pan;Jin Shen;Mingliang Gao;Liju Yin;Faying Liu;Guofeng Zou; "Orthogonal gradient measurement matrix optimisation method," vol.12(10), pp.1773-1779, 10 2018. The optimisation of the measurement matrix within the compressive sensing framework is considered in this study. Based on the fact that an information operator with smaller mutual coherence performs better, the gradient measurement matrix optimisation method is improved by an orthogonal search-direction revision factor. The algorithm alternately updates the approximation of the ideal Gram matrix of the information operator and the measurement matrix. Using the measurement matrix and the sparse basis to represent the Gram matrix, the measurement matrix is optimised by the gradient algorithm, in which an orthogonal gradient search-direction revision factor is proposed and utilised to further improve the performance of the measurement matrix. This orthogonal factor is computed by the Cayley transform of a real skew-symmetric matrix that is related to the gradient and the measurement matrix. Results of several experiments show that, compared with the initial random matrix, the optimised measurement matrix can lead to better signal reconstruction quality.
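The orthogonality of the revision factor rests on a standard property of the Cayley transform; writing it out in our notation:

$$ Q = (I - K)(I + K)^{-1}, \qquad K^{\mathsf T} = -K. $$

For a real skew-symmetric $K$, $I + K$ is always invertible (the eigenvalues of $K$ are purely imaginary) and $Q^{\mathsf T} Q = I$, so $Q$ is orthogonal; in the algorithm above, $K$ is built from the gradient and the current measurement matrix.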

Jaya Prakash Sahoo;Samit Ari;Dipak Kumar Ghosh; "Hand gesture recognition using DWT and F-ratio based feature descriptor," vol.12(10), pp.1780-1787, 10 2018. This study demonstrates the development of a vision-based static hand gesture recognition system using a web camera for real-time applications. The system is developed using the following steps: preprocessing, feature extraction, and classification. The preprocessing stage consists of illumination compensation, segmentation, filtering, hand region detection, and image resizing. The study proposes a discrete wavelet transform (DWT) and Fisher ratio (F-ratio) based feature extraction technique to classify hand gestures in an uncontrolled environment. The method is not only robust to distortion and gesture vocabulary, but also invariant to translation and rotation of hand gestures. A linear support vector machine is used as a classifier to recognise the hand gestures. The performance of the proposed method is evaluated on two standard public datasets and one indigenously developed complex-background dataset. All three datasets are based on American Sign Language (ASL) hand alphabets. The experimental results are evaluated in terms of mean accuracy. Two possible real-time applications are demonstrated: one interprets ASL alphabets and the other performs image browsing.

Arash Ashtari Nakhaei;Mohammad Sadegh Helfroush;Habibollah Danyali;Mohammed Ghanbari; "Subjectively correlated estimation of noise due to blurriness distortion based on auto-regressive model using the Yule–Walker equations," vol.12(10), pp.1788-1796, 10 2018. In this study, a block-based estimation of the noise due to blurriness distortion is proposed based on auto-regressive (AR) modelling. In the proposed method, a de-correlated, low-energy version of the blurred image is auto-regressively modelled. To this end, the AR parameters are estimated using the Yule–Walker equations. As these equations include auto-correlation function (ACF) coefficients, ACF estimation is also required. The Yule–Walker equations are solved using the Durbin–Levinson algorithm. Finally, the noise energy is mathematically defined and computed for each block. Since blurriness is a signal-dependent image distortion, estimating and describing its characteristics via a noise term such as the AR model input is significant; in fact, extracting features of such 'noise' can lead to the design and development of new image quality metrics. Inspired by the concept of 'stem cells' in medical science, which can convert into other cell types, the AR model input is called 'stem noise'. To visualise the contribution of the stem noise to the reconstruction of blurriness distortion, a map called the stem noise energy map is created. It is shown that the characteristics of the estimated noise energy are well correlated with human subjective scores.

Yuling Wang;Ming Li;Guoyun Zhong;Junhua Li;Yuming Lu; "Circular trace transform and its PCA-based fusion features for image representation," vol.12(10), pp.1797-1806, 10 2018. To improve the efficiency with which trace transform (TT) features represent images with circular and arc-shaped textures, the authors propose the circular TT (CTT) to extract features. CTT consists of tracing an image with circles around which certain functionals of the image function are calculated. Quadruple CTT features can be generated through three successive functionals in the results of CTT, while different quadruple features can be obtained by choosing different combinations of successive functionals. These quadruple features can represent different texture properties and deeper intrinsic information of an image. By fusing CTT features and TT features based on PCA (FFCT_PCA), the authors construct a new complementary descriptor with far lower dimensionality, further improving the representation performance for mixed-texture images. Experimental results demonstrate that CTT performs better than TT in recognising images with circular and arc-shaped textures, and that FFCT_PCA has the potential to outperform state-of-the-art feature extraction methods.

Michael Melek;Ahmed Khattab;Mohamed F. Abu-Elyazeed; "Fast matching pursuit for sparse representation-based face recognition," vol.12(10), pp.1807-1814, 10 2018. Even though face recognition is one of the most studied pattern recognition problems, most existing solutions still lack efficiency and high speed. Here, the authors present a new framework for face recognition which is efficient, fast, and robust against variations of illumination, expression, and pose. For feature extraction, the authors propose extracting Gabor features in order to be resilient to variations in illumination, facial expressions, and pose. In contrast to the related literature, the authors then apply supervised locality-preserving projections (SLPP) with heat kernel weights. The authors' feature extraction approach achieves a higher recognition rate compared to both traditional unsupervised LPP and SLPP with constant weights. For classification, the authors use the recently proposed sparse representation-based classification (SRC). However, instead of performing SRC using the computationally expensive $\ell_1$ minimisation, the authors propose performing SRC using fast matching pursuit, which considerably improves the classification performance. The authors' proposed framework achieves ∼99% recognition rate using four benchmark face databases, significantly faster than related frameworks.
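As an illustration of matching-pursuit-based sparse classification, using scikit-learn's orthogonal matching pursuit as a stand-in for the paper's fast matching pursuit and random features in place of Gabor/SLPP ones:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n_classes, per_class, dim = 5, 10, 40
    # Dictionary: training faces stacked column-wise, grouped by class.
    D = rng.standard_normal((dim, n_classes * per_class))
    D /= np.linalg.norm(D, axis=0)

    y = D[:, 12] + 0.05 * rng.standard_normal(dim)   # probe from class 1 (cols 10-19)

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, y)
    x = omp.coef_

    # SRC rule: assign the class whose atoms best reconstruct the probe.
    residuals = [np.linalg.norm(y - D[:, c*per_class:(c+1)*per_class]
                                @ x[c*per_class:(c+1)*per_class])
                 for c in range(n_classes)]
    print(int(np.argmin(residuals)))                 # expected: 1

The speed advantage over $\ell_1$ minimisation comes from the greedy selection: each iteration only picks the atom most correlated with the residual instead of solving a full convex program.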

Hongda Sheng;Xuanjing Shen;Yingda Lyu;Zenan Shi;Shuyang Ma; "Image splicing detection based on Markov features in discrete octonion cosine transform domain," vol.12(10), pp.1815-1823, 10 2018. To improve the poor robustness and low accuracy of existing image splicing detection algorithms, a novel passive image forgery detection method is proposed in this study, based on the discrete octonion cosine transform (DOCT) and Markov features. By introducing the octonion and the DOCT, the colour information of six image channels (the RGB model and the HSI model) can be exhaustively extracted, which enhances the robustness of the algorithm. To improve detection accuracy, the standard deviation is used to characterise the relationship of the colour information between the parts of the DOCT coefficient matrix, and K-fold cross-validation is introduced to improve the identification performance of the classifier. The steps of the algorithm are as follows. First, the 8 × 8 block DOCT is applied to the original image to obtain the block DOCT coefficients. Second, the standard deviation is used to process the corresponding parts of all blocks of the image. Finally, the Markov feature vector of the DOCT coefficients is extracted and fed to LIBSVM (a library for support vector machines). When using LIBSVM for classification, K-fold cross-validation is executed to select the best parameter pairs. The experimental results demonstrate that the algorithm is superior to other state-of-the-art splicing detection methods.

Yonggang Lin;Yongrong Zheng;Ying Fu;Hua Huang; "Hyperspectral image super-resolution under misaligned hybrid camera system," vol.12(10), pp.1824-1831, 10 2018. Hyperspectral imaging has been widely used for agriculture, astronomy, surveillance, and so on. However, it usually suffers from low spatial resolution due to the limited photons in individual bands. Recently, many hyperspectral image super-resolution methods have been developed that fuse a low-resolution hyperspectral image with a high-resolution RGB image, but most do not consider the misalignment between the two input images. In this study, the authors present an effective method to restore a high-resolution hyperspectral image from a misaligned low-resolution hyperspectral image and a high-resolution RGB image, exploiting spectral and spatial correlation in the hyperspectral and RGB images. Specifically, they employ spectral sparsity to restore the high-resolution hyperspectral image on the misaligned part, and then simultaneously employ spectral and spatial structure correlation to restore it on the aligned area; the two are fused to obtain a high-quality hyperspectral restoration under a misaligned hybrid camera system. Experimental results show that the proposed method outperforms the state-of-the-art hyperspectral image super-resolution methods under a misaligned hybrid camera system in terms of both objective metrics and subjective visual quality.

Deepak Kumar Panda;Sukadev Meher; "Adaptive spatio-temporal background subtraction using improved Wronskian change detection scheme in Gaussian mixture model framework," vol.12(10), pp.1832-1843, 10 2018. Background subtraction (BS) is a fundamental step for moving object detection in various video surveillance applications. The Gaussian mixture model (GMM) is a widely used BS technique which provides a good compromise between robustness to background variations and real-time constraints. However, GMM does not support the spatial relationship among neighbouring pixels, and it uses a fixed learning rate for every pixel during the parameter update. On the other hand, the Wronskian change detection model (WM) is a spatial-domain BS technique which solves misclassification of pixels but fails in the presence of dynamic background. In this study, a novel spatio-temporal BS technique is proposed that exploits the spatial relation of the Wronskian function and employs it with a new fuzzy adaptive learning rate in a GMM framework. Instead of using WM directly, an improved WM is proposed by adaptively finding the ratio of the current pixel to the background pixel or its reciprocal, and a weighted Wronskian is developed to mitigate the effect of dynamic background pixels. Additionally, a new fuzzy adaptive learning rate is employed in the GMM framework. Experimental results show that the proposed framework yields better silhouettes of moving objects compared with state-of-the-art techniques.
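For reference, the classical GMM weight update that the fuzzy adaptive learning rate generalises can be written as (standard Stauffer-Grimson form, our summary rather than the authors' exact variant):

$$ w_{k,t} = (1 - \alpha)\, w_{k,t-1} + \alpha\, M_{k,t}, $$

where $w_{k,t}$ is the weight of Gaussian component $k$ at time $t$, $M_{k,t} = 1$ for the matched component and $0$ otherwise, and $\alpha$ is the learning rate; the proposed method makes $\alpha$ pixel-wise and fuzzy-adaptive instead of fixed.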

Adnan A.Y. Mustafa; "Probabilistic binary similarity distance for quick binary image matching," vol.12(10), pp.1844-1856, 10 2018. Here, the author presents the gamma binary distance, an effective distance for measuring similarity between binary images. The gamma distance is a probabilistic pixel-mapping measure that is a modification of the Hamming distance. Employing a probabilistic approach to image matching enables gamma to measure similarity more accurately than traditional binary distances. The author shows the advantage of employing the gamma distance for similarity measurement by comparing it to three of the most popular similarity distances used for binary image matching: correlation, the sum of absolute differences, and mutual information. Results of extensive testing conducted on a large database are presented, showing the superiority of the gamma distance over the other similarity distances.

Munish Kumar;Priyanka Singh; "FPR using machine learning with multi-feature method," vol.12(10), pp.1857-1865, 10 2018. Biometric authentication is considered the most secure and reliable method of recognising and identifying a person's identity. Researchers have put effort into finding efficient and secure solutions to biometric problems. In this category, fingerprint recognition (FPR) is the most widely used biometric trait for person identification/verification. The present work presents an FPR technique that uses the grey-level difference method, discrete wavelet transforms, and the edge histogram descriptor for fingerprint representation and matching. Wavelet shrinkage is used for noise removal in the image. Ridge flow estimation is calculated using the gradient approach. SVM and Hamming-distance similarity measures are used for recognition. The method has been tested on the standard 2000-2004 fingerprint verification competition datasets, and the accuracy of the proposed algorithm was reported to be well above 98%.

Zhenghao Shi;Yaowei Li;Minghua Zhao;Yaning Feng;Lifeng He; "Multi-stage filtering for single rainy image enhancement," vol.12(10), pp.1866-1872, 10 2018. Rain image enhancement is important for outdoor computer vision applications. In this study, the authors propose a multi-stage filtering method for single rainy image enhancement. It is based on their new rainy image model and consists of two main operations: rain streak removal and rain fog removal. For rain streak removal, based on the key observation that the low-pass version of a rainy image and that of a non-rainy image of the same scene are almost the same after appropriate low-pass filtering, they remove rain streaks by decomposing the input rainy image (or a rainy component image) into a low-frequency (LF) part and a high-frequency (HF) part via an LF smoothing filter, i.e. a traditional Gaussian filter with a simple subtraction operation, repeated over multiple stages. After rain streak removal, a dark-channel-prior-based method is employed for rain fog removal. Experimental results show that the proposed algorithm generates outputs comparable to most state-of-the-art algorithms at low computational cost.
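One stage of the low/high-frequency split described above can be sketched with SciPy; the filter width and the simple HF attenuation are our own assumptions, standing in for the paper's per-stage streak removal:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def one_stage(image, sigma=3.0, shrink=0.5):
        # Split into low-frequency and high-frequency parts by Gaussian
        # smoothing and subtraction.
        lf = gaussian_filter(image, sigma=sigma)
        hf = image - lf
        # Rain streaks live mostly in the HF part; attenuate it
        # (simplistic stand-in for the paper's streak removal).
        return lf + shrink * hf

    rainy = np.random.default_rng(0).random((64, 64))   # stand-in rainy image
    out = one_stage(one_stage(rainy))                   # two filtering stages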

Hajer Ouerghi;Olfa Mourali;Ezzeddine Zagrouba; "Non-subsampled shearlet transform based MRI and PET brain image fusion using simplified pulse coupled neural network and weight local features in YIQ colour space," vol.12(10), pp.1873-1880, 10 2018. Magnetic resonance imaging (MRI) and positron emission tomography (PET) image fusion is a recent hybrid modality used in several oncology applications. The MRI image shows the brain tissue anatomy but does not contain any functional information, while the PET image indicates brain function but has a low spatial resolution. A perfect MRI–PET fusion method preserves the functional information of the PET image and adds the spatial characteristics of the MRI image with the least possible spatial distortion. In this context, the authors propose an efficient MRI–PET image fusion approach based on the non-subsampled shearlet transform (NSST) and a simplified pulse-coupled neural network model (S-PCNN). First, the PET image is transformed into its YIQ independent components. Then, the source registered MRI image and the Y-component of the PET image are decomposed into low-frequency (LF) and high-frequency (HF) subbands using NSST. LF coefficients are fused using weighted regional standard deviation (SD) and local energy, while HF coefficients are combined based on the S-PCNN, which is driven by an adaptive linking-strength coefficient. Finally, the inverse NSST and inverse YIQ are applied to obtain the fused image. Experimental results demonstrate that the proposed method performs better than other current approaches in terms of fusion mutual information, entropy, SD, fusion quality, and spatial frequency.

Ju Hwan Lee;Yoo Na Hwang;Ga Young Kim;Kim Sung Min; "Segmentation of the lumen and media-adventitial borders in intravascular ultrasound images using a geometric deformable model," vol.12(10), pp.1881-1891, 10 2018. This study presents a geometric deformable model-based approach for segmenting the intima and media-adventitial (MA) borders in sequential intravascular ultrasound (IVUS) images. The initial estimation of the vessel borders was done manually only for the first frame of each sequence. After border initialisation, pre-processing including edge preservation, noise reduction, and dead-zone preservation was successively performed on each IVUS frame. To improve segmentation performance, image masks were determined preliminarily by local binary pattern-based mask initialisation. Then, the inner and outer borders were approximated using a modified distance-regularised level set evolution model. The results showed superior performance of the suggested approach for estimating intima and MA layers from the IVUS images. The corresponding correlation coefficients of area, vessel perimeter, maximum vessel diameter, and maximum lumen diameter were r = 0.782, r = 0.716, r = 0.956, and r = 0.874 for the 20 MHz images, respectively, and r = 0.990, r = 0.995, r = 0.989, and r = 0.996 for the 45 MHz images, respectively. In addition, linear regression analysis indicated significantly high similarity to manual segmentation, at r > 0.967 and r > 0.993 for the 20 and 45 MHz images, respectively.

Kang Ni;Yiquan Wu; "Adaptive patched L0 gradient minimisation model applied on image smoothing," vol.12(10), pp.1892-1902, 10 2018. The L0 gradient minimisation model, an edge-aware image smoothing method, suffers from the stair-casing effect: images with strong textures cannot be smoothed effectively, while weak edges or structures are smoothed excessively. The authors propose a method to overcome these drawbacks. First, the image is subjected to the non-subsampled shearlet transform to obtain high-frequency components, which are combined by a maximum-local-energy rule into a high-frequency decomposition image; a data term associated with this image then keeps edges and structures similar between the input and the smoothed image. Second, a patched L0 gradient minimisation model is presented to better describe local information; since patches of different sizes carry different textures, the coefficient of variation is exploited to define the patch size. Finally, an adaptive smoothing coefficient based on the gradient ensures that the smoothing effect of each patch is optimal. The proposed model is applied to image smoothing with desirable results, and comparisons with other state-of-the-art edge-preserving smoothing algorithms demonstrate strong performance in edge preservation and texture smoothing.

Tejaswini Kar;Priyadarshi Kanungo; "Motion and illumination defiant cut detection based on Weber features," vol.12(10), pp.1903-1912, 10 2018. The spontaneous proliferation of video data necessitates hierarchical structures for various content management applications, and temporal video segmentation is the key to such management. To address the problem of temporal segmentation, this work exploits the psychological behaviour of the human visual system. Towards this goal, an abrupt cut detection scheme is proposed based on Weber's law, which provides a strong spatial correlation among neighbouring pixels. The authors thus provide a robust solution for abrupt shot boundary detection when frames are affected partially or fully by flashlight, fire and flicker, or high motion associated with an object or camera. Further, they devise a model for generating an automatic threshold that takes into account the statistics of the feature vector and adapts to variations in video content. The effectiveness of the proposed framework is validated by exhaustive comparison with several contemporary and recent approaches using the benchmark datasets TRECVID 2001, TRECVID 2002, and TRECVID 2007, and some publicly available videos. The results show a remarkable improvement in performance while preserving a good trade-off between missed hits and false hits compared with state-of-the-art methods.

Juan J. Montesinos-García;Rafael Martinez-Guerra; "Colour image encryption via fractional chaotic state estimation," vol.12(10), pp.1913-1920, 10 2018. This study introduces an encryption algorithm for colour RGB images and text. The encryption is based on the synchronisation of fractional chaotic systems in a master–slave topology, where the transmitter is the master system and the receiver is the slave system; the latter is designed as a new smoothed sliding-mode state observer for fractional chaotic systems. The encryption algorithm provides security against common cryptographic techniques, including known- and chosen-plaintext attacks.

IEEE Transactions on Signal Processing - new TOC (2018 September 20) [Website]

* "Table of Contents," vol.66(20), pp.5208-5209, Oct.15, 2018.*

* "Table of Contents [EDICS[Name:_blank]]," vol.66(20), pp.5210-5211, Oct.15, 2018.*

Vikram Krishnamurthy;Sijia Gao; "Syntactic Enhancement to VSIMM for Roadmap Based Anomalous Trajectory Detection: A Natural Language Processing Approach," vol.66(20), pp.5212-5227, Oct.15, 2018. Syntactic tracking aims to classify a target's spatio-temporal trajectory by using natural language processing models. This paper proposes constrained stochastic context-free grammar (CSCFG) models for target trajectories confined to a roadmap. We present a particle filtering algorithm that exploits the CSCFG model structure to estimate the target's trajectory. This metalevel algorithm operates in conjunction with a base-level target tracking algorithm. Extensive numerical results using simulated ground moving target indicator radar measurements show useful improvement in both trajectory classification and target state (both coordinates and velocity) estimation.

Eweda Eweda; "Tracking Analysis of High Order Stochastic Gradient Adaptive Filtering Algorithms," vol.66(20), pp.5228-5239, Oct.15, 2018. This paper analyzes the tracking performance of the family of adaptive filtering algorithms based on minimizing the $2L$-th moment of the estimation error, with $L$ being an integer greater than 1. The analysis is done for a Markov plant without the usual assumption of a small degree of non-stationarity (DNS). General $L$ and DNS are considered in the analysis. The dependence of the minimum steady-state mean-square weight deviation on the value of $L$, the DNS, and the type of noise distribution is studied. A measure for evaluating the steady-state stability of the algorithm is introduced, and the dependence of the steady-state stability on the value of $L$, the DNS, and the type of noise distribution is likewise studied. Some results are surprising. Theoretical results are supported by simulations.
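For orientation, the algorithm family in question minimizes $J(\mathbf{w}) = \mathrm{E}\,[e^{2L}(n)]$, whose stochastic gradient step is (in our notation; $L = 1$ recovers LMS and $L = 2$ the least-mean-fourth algorithm)

$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e^{2L-1}(n)\, \mathbf{x}(n), \qquad e(n) = d(n) - \mathbf{w}^{\mathsf T}(n)\, \mathbf{x}(n), $$

where $\mathbf{x}(n)$ is the input vector, $d(n)$ the desired response, and $\mu$ the step size (absorbing the constant factor $2L$).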

Navid Tadayon;Sonia Aïssa; "Radio Resource Allocation and Pricing: Auction-Based Design and Applications," vol.66(20), pp.5240-5254, Oct.15, 2018. The “unlimited” performance and machine-centric architecture visions for future wireless networks transform the fundamental task of allocating radio resources into a complex optimization problem that is not quickly solvable. Inspired by the increasing intelligence of connected machines, and the prosperity of auctions as efficient allocation mechanisms in the economic sector, this paper provides an alternative perspective on the problem of optimal spectrum assignment for the fifth generation (5G) of wireless networks. In a systematic approach to this problem, an efficacious allocation mechanism is characterized by six axioms: incentive compatibility, individual rationality, fairness, efficiency, revenue maximization, and computational manageability. The first three are incorporated into the allocation mechanism through nonlinear spectrum pricing. By inducing incentive compatibility through these prices, revelation of the true valuations becomes the Nash equilibrium and puts the mechanism in the class of revelation mechanisms. The latter fact triggers the realization of the last three axioms, whereby an optimization problem is formed to find the optimal mechanism in the class of revelation mechanisms, which, by virtue of the revelation principle, is the optimal mechanism among all auction classes. Further, it is shown that the proposed mechanism is highly scalable, as the solution to the optimization problem is obtained by root-finding operations and solving an almost linear system of equations. These properties make the proposed resource allocation mechanism an ideal candidate for deployment in 5G networks.

Bo Wang;Yanping Zhao;Fengye Hu;Ying-Chang Liang; "Anomaly Detection With Subgraph Search and Vertex Classification Preprocessing in Chung-Lu Random Networks," vol.66(20), pp.5255-5268, Oct.15, 2018. We study the network anomaly detection problem with subgraph search and vertex classification techniques in Chung-Lu networks, where small anomalous networks are embedded in background networks. General detectors traditionally utilize the spectral characteristics of whole networks to decide whether anomalous networks exist. Moreover, many algorithms model the background networks with special random graphs, such as Erdős–Rényi random graphs. However, these methods may not achieve good detection performance, because the spectral information of anomalous networks extracted from the relational data of the whole networks is subtle; furthermore, the model assumptions limit the applicability of the methods. In order to improve the detection capability and relax the restrictions on the background network models, we develop two preprocessing approaches, referred to as subgraph search and vertex classification, and propose two detection algorithms, which are applicable to a generalized network model (the Chung-Lu random graph). By leveraging statistical features of prior data, the subgraph search method can obtain local network nodes that are more anomalous than the remainder. The vertex classification is then used to further distinguish the anomalous nodes from the local ones. Using the relational data corresponding to the local nodes, detection statistics are constructed to detect the anomalous network. Additionally, based on concentration-of-measure theory, probability bounds for the performance analysis of the presented algorithms are derived. Simulation examples illustrate that the proposed detectors can achieve better detection performance than existing algorithms.

Jingwei Zhang;Yong Zeng;Rui Zhang; "UAV-Enabled Radio Access Network: Multi-Mode Communication and Trajectory Design," vol.66(20), pp.5269-5284, Oct.15, 2018. In this paper, we consider an unmanned aerial vehicle (UAV)-enabled radio access network (RAN) with the UAV acting as an aerial platform to communicate with a set of ground users (GUs) in a variety of modes of practical interest, including data collection in the uplink, data transmission in the downlink, and data relaying between GUs involving both the uplink and downlink. Under this general framework, two UAV operation scenarios are considered: periodic operation, where the UAV serves the GUs in a periodic manner by following a certain trajectory repeatedly, and one-time operation, where the UAV serves the GUs with a single flight and then leaves for another mission. In each scenario, we aim to minimize the UAV periodic flight duration or mission completion time, while satisfying the target data requirement of each GU, via a joint UAV trajectory and communication resource allocation design. Iterative algorithms are proposed to find efficient locally optimal solutions by utilizing successive convex optimization and block coordinate descent techniques. Moreover, as the quality of the solutions obtained by the proposed algorithms critically depends on the initial UAV trajectory adopted, we propose new methods to design the initial trajectories for both operation scenarios by leveraging existing results for the classic Traveling Salesman Problem and Pickup-and-Delivery Problem. Numerical results show that the proposed trajectory initialization designs lead to significant performance gains compared to a benchmark initialization based on a circular trajectory.

Filip Elvander;Andreas Jakobsson;Johan Karlsson; "Interpolation and Extrapolation of Toeplitz Matrices via Optimal Mass Transport," vol.66(20), pp.5285-5298, Oct. 15, 2018. In this work, we propose a novel method for quantifying distances between Toeplitz structured covariance matrices. By exploiting the spectral representation of Toeplitz matrices, the proposed distance measure is defined based on an optimal mass transport problem in the spectral domain. This may then be interpreted in the covariance domain, suggesting a natural way of interpolating and extrapolating Toeplitz matrices, such that the positive semidefiniteness and the Toeplitz structure of these matrices are preserved. The proposed distance measure is also shown to be contractive with respect to both additive and multiplicative noise and thereby allows for a quantification of the decreased distance between signals when these are corrupted by noise. Finally, we illustrate how this approach can be used for several applications in signal processing. In particular, we consider interpolation and extrapolation of Toeplitz matrices, as well as clustering problems and tracking of slowly varying stochastic processes.
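
For intuition, here is a minimal numpy sketch (ours, with illustrative spectra) of the spectral representation the method exploits: covariances generated from a nonnegative spectrum always assemble into a Hermitian, positive semidefinite Toeplitz matrix, so any interpolation carried out on spectra (below, plain linear mixing rather than the paper's optimal-mass-transport geodesic) maps back to valid Toeplitz covariances.

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_from_spectrum(phi, n):
        """Build an n x n Toeplitz covariance from a spectrum sampled on a
        uniform grid over [0, 2*pi): r_k = mean(phi * exp(-1j*k*theta))."""
        N = len(phi)
        theta = 2 * np.pi * np.arange(N) / N
        r = np.array([np.mean(phi * np.exp(-1j * k * theta)) for k in range(n)])
        return toeplitz(r)  # Hermitian Toeplitz; PSD whenever phi >= 0

    theta = 2 * np.pi * np.arange(256) / 256
    phi0 = 1.0 / np.abs(1 - 0.8 * np.exp(1j * theta)) ** 2  # AR(1) spectrum
    phi1 = 1.0 / np.abs(1 + 0.5 * np.exp(1j * theta)) ** 2
    for tau in (0.0, 0.5, 1.0):
        R = toeplitz_from_spectrum((1 - tau) * phi0 + tau * phi1, n=8)
        assert np.min(np.linalg.eigvalsh(R)) > -1e-9  # structure preserved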

Yu Xia;Song Li; "Identifiability of Multichannel Blind Deconvolution and Nonconvex Regularization Algorithm," vol.66(20), pp.5299-5312, Oct. 15, 2018. In this paper, we consider the multichannel blind deconvolution problem, where we observe the outputs of channels $\mathbf{h}_i \in \mathbb{R}^n$ ($i=1,\ldots,N$) that all convolve with the same unknown input signal $\mathbf{x} \in \mathbb{R}^n$. We wish to estimate the input signal and blur kernels simultaneously. Existing theoretical results showed that the original inputs are identifiable under subspace assumptions. However, the subspaces discussed before were randomly or generically chosen. Here, we propose a deterministic subspace assumption, which is widely used in practice, and give some theoretical results. First, we derive a tight sufficient condition for identifiability of the signal and convolution kernels, which is violated only on a set of Lebesgue measure zero. Then, we present a nonconvex regularization algorithm by a lifting method and approximate the rank-one constraint via the difference of the nuclear norm and the Frobenius norm. The global minimizer of the proposed nonconvex algorithm is a rank-one matrix under mild conditions on the parameters and noise level. A stability result is also shown under the assumption that the inputs lie in a compact set. Moreover, the computation of our regularization model is analyzed, and any limit point of the iterations converges to a stationary point of our model. Finally, we provide numerical experiments showing that our nonconvex regularization model outperforms convex relaxation models, such as nuclear norm minimization, and some nonconvex methods, such as the alternating minimization method and the spectral method.
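
A short numpy illustration (not the paper's algorithm) of the surrogate used for the lifted rank-one constraint: since the nuclear norm dominates the Frobenius norm, their difference vanishes exactly when at most one singular value is nonzero.

    import numpy as np

    def rank_one_surrogate(X):
        """||X||_* - ||X||_F: nonnegative, and zero iff rank(X) <= 1,
        because sum(s) >= sqrt(sum(s**2)) with equality only when at
        most one singular value s_i is nonzero."""
        s = np.linalg.svd(X, compute_uv=False)
        return s.sum() - np.sqrt((s ** 2).sum())

    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(20), rng.standard_normal(30)
    print(rank_one_surrogate(np.outer(u, v)))                 # ~0 (rank one)
    print(rank_one_surrogate(rng.standard_normal((20, 30))))  # clearly > 0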

Wei Chen; "Simultaneously Sparse and Low-Rank Matrix Reconstruction via Nonconvex and Nonseparable Regularization," vol.66(20), pp.5313-5323, Oct.15, 15 2018. Many real-world problems involve the recovery of a matrix from linear measurements, where the matrix lies close to some low-dimensional structure. This paper considers the problem of reconstructing a matrix with a simultaneously sparse and low-rank model. As surrogate functions of the sparsity and the matrix rank that are non-convex and discontinuous, the <inline-formula><tex-math notation="LaTeX">$ell _1$</tex-math></inline-formula> norm and the nuclear norm are often used instead to derive efficient algorithms to promote sparse and low-rank characteristics, respectively. However, the <inline-formula><tex-math notation="LaTeX">$ell _1$</tex-math></inline-formula> norm and the nuclear norm are loose approximations, and furthermore, recent study reveals using convex regularizations for joint structures cannot do better, orderwise, than exploiting only one of the structures. Motivated by the construction of non-convex and nonseparable regularization in sparse Bayesian learning, a new optimization problem is formulated in the latent space for recovering a simultaneously sparse and low-rank matrix. The newly proposed non-convex cost function is proved to have the ability to recover a simultaneously sparse and low-rank matrix with a sufficient number of noiseless linear measurements. In addition, an algorithm is derived to solve the resulting non-convex optimization problem, and convergence analysis of the proposed algorithm is provided in this paper. The performance of the proposed approach is demonstrated by experiments using both synthetic data and real hyperspectral images for compressive sensing applications.

David W. Lin; "An Analysis of the Performance of ML Blind OFDM Symbol Timing Estimation," vol.66(20), pp.5324-5337, Oct. 15, 2018. Symbol timing offset (STO) estimation constitutes an important part of orthogonal frequency-division multiplexing (OFDM) signal synchronization, and many blind methods based on the maximum likelihood (ML) principle have been proposed. A theoretical analysis of the estimation performance, however, is very difficult, due in part to the discrete nature of the STO. As a result, most studies on OFDM STO estimation have relied only on computer simulation for performance evaluation. This paper presents a theoretical performance analysis for ML-based blind OFDM STO estimation. We start with a review of the formulation of the ML blind STO estimation problem and its solution. We then conduct this performance analysis, which turns out to involve a study of the eigenvalues of some matrix functions of the autocorrelation of the received signal. As the general case leads to highly complicated mathematics, we treat the case of full-band OFDM in additive white Gaussian noise in more detail. In this case, we are able to obtain mathematically tractable bounds and approximations to the estimation error probability and the corresponding mean-square error. At high signal-to-noise ratio (SNR), these bounds and approximations show inverse power-law dependence on the SNR. We also compare the bounds with simulation results and with an existing approximation in the literature.
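
For readers who want the estimator itself, a hedged numpy sketch of the cyclic-prefix correlation metric on which ML blind STO estimation in AWGN is built (the paper analyzes estimators of this type; notation and normalization here are ours):

    import numpy as np

    def ml_sto_metric(r, N, Ncp, snr):
        """Log-likelihood-style timing metric for one OFDM symbol: correlate
        the cyclic prefix with its copy N samples later, penalized by an
        energy term weighted with rho = snr / (snr + 1)."""
        rho = snr / (snr + 1.0)
        L = len(r) - N - Ncp + 1
        metric = np.empty(L)
        for theta in range(L):
            seg1 = r[theta:theta + Ncp]
            seg2 = r[theta + N:theta + N + Ncp]
            gamma = np.vdot(seg2, seg1)  # CP correlation
            phi = 0.5 * (np.vdot(seg1, seg1) + np.vdot(seg2, seg2)).real
            metric[theta] = np.abs(gamma) - rho * phi
        return metric

    # theta_hat = np.argmax(ml_sto_metric(r, N=64, Ncp=16, snr=10.0))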

Junan Zhu;Dror Baron; "Performance Limits With Additive Error Metrics in Noisy Multimeasurement Vector Problems," vol.66(20), pp.5338-5348, Oct. 15, 2018. Real-world applications such as magnetic resonance imaging with multiple coils, multiuser communication, and diffuse optical tomography often assume a linear model, where several sparse signals sharing common sparse supports are acquired by several measurement matrices and then contaminated by noise. Multimeasurement vector (MMV) problems consider the estimation or reconstruction of such signals. In different applications, the estimation error that we want to minimize could be the mean squared error or other metrics, such as the mean absolute error and the support set error. Seeing that minimizing different error metrics is useful in MMV problems, we study information-theoretic performance limits for MMV signal estimation with arbitrary additive error metrics. We also propose a message passing algorithmic framework that achieves the optimal performance, and rigorously prove the optimality of our algorithm for a special case. We further conjecture the optimality of our algorithm for some general cases and back it up through numerical examples. As an application of our MMV algorithm, we propose a novel setup for active user detection in multiuser communication and demonstrate the promise of our proposed setup.

Nhan Thanh Nguyen;Kyungchun Lee; "Coverage and Cell-Edge Sum-Rate Analysis of mmWave Massive MIMO Systems With ORP Schemes and MMSE Receivers," vol.66(20), pp.5349-5363, Oct. 15, 2018. In this study, we consider the downlink of millimeter-wave massive multiple-input multiple-output systems that employ orthogonal random precoding (ORP) and minimum-mean-square-error (MMSE) receivers. In the ORP scheme, a precoding matrix consisting of orthogonal vectors is utilized at the transmitter, which provides beamforming gains to enhance the maximum signal-to-interference-plus-noise ratio (SINR) at the receivers. In this study, the ability of the ORP scheme with MMSE receivers to extend the cell coverage and its sum-rate performance for cell-edge users are investigated. In particular, we first derive the approximate distribution of the maximum SINR at the output of the MMSE receiver. Then, to analyze and optimize the ORP scheme with MMSE receivers in terms of coverage and sum-rate performance, the analytical expressions for the downlink coverage probability and sum rate are derived. It is shown that there is a tradeoff between coverage and sum-rate performance in the ORP scheme. In particular, a smaller number of precoding vectors provides larger coverage; however, it leads to a lower sum rate. To achieve optimal tradeoff between coverage and sum rate for the ORP scheme, we propose using multiple random precoder groups over multiple time slots. It is shown that the proposed scheme is capable of significantly enhancing coverage performance while preserving an acceptable sum rate for cell-edge users. The simulation results validate the analytical findings.

Shiqi Gong;Shuai Wang;Sheng Chen;Chengwen Xing;Xing Wei; "Time-Invariant Joint Transmit and Receive Beampattern Optimization for Polarization-Subarray Based Frequency Diverse Array Radar," vol.66(20), pp.5364-5379, Oct. 15, 2018. We propose a polarization-subarray based frequency diverse array (FDA) radar with the subarray-based FDA as the transmit (Tx) array and the polarization-sensitive subarray-based FDA (PSFDA) as the receive (Rx) array. The subarray-based FDA has the capability to achieve a single maximum beampattern at the target location, while the PSFDA can provide an extra degree of freedom to further suppress the interference and, thus, to improve the radar's signal-to-interference-plus-noise ratio (SINR). The time-dependent frequency offsets are designed for the proposed radar to realize the time-invariant beampattern at the desired target location over the whole pulse duration. To further improve the target detection performance, the time-invariant joint Tx–Rx beampattern design is considered based on the output SINR maximization. To effectively solve the nonconvex output SINR maximization problem, a suboptimal alternating optimization algorithm is proposed to iteratively optimize the FDA Tx beamforming, the PSFDA spatial pointings, and the PSFDA Rx beamforming. Numerical experiments illustrate that the time-invariant and single-maximum joint Tx–Rx beampattern at the target location is achieved. Moreover, compared to the basic FDA and logarithmic frequency offset FDA as well as conventional phased array radars, the proposed polarization-subarray based FDA radar achieves a significant SINR improvement, particularly when the desired target and the interferences are spatially indistinguishable.

Tao Sun;Hao Jiang;Lizhi Cheng;Wei Zhu; "Iteratively Linearized Reweighted Alternating Direction Method of Multipliers for a Class of Nonconvex Problems," vol.66(20), pp.5380-5391, Oct. 15, 2018. In this paper, we consider solving a class of nonconvex and nonsmooth problems frequently appearing in signal processing and machine learning research. The traditional alternating direction method of multipliers encounters troubles in both mathematics and computations in solving the nonconvex and nonsmooth subproblem. In view of this, we propose a reweighted alternating direction method of multipliers. In this algorithm, all subproblems are convex and easy to solve. We also provide several guarantees for the convergence and prove that the algorithm globally converges to a critical point of an auxiliary function with the help of the Kurdyka–Łojasiewicz property. Several numerical results are presented to demonstrate the efficiency of the proposed algorithm.
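
A compact numpy sketch of the general idea (our toy instance, not the paper's exact scheme): a nonconvex log-sum sparsity penalty is majorized at every pass by a weighted l1 norm, so each ADMM subproblem below is convex with a closed-form update.

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reweighted_admm(A, b, lam=0.1, eps=1e-2, rho=1.0, iters=200):
        """min_x 0.5*||Ax - b||^2 + lam * sum(log(1 + |x_i| / eps)),
        handled by reweighting: w_i = 1 / (|z_i| + eps) turns the
        nonconvex penalty into a convex weighted-l1 term each pass."""
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached x-update solve
        Atb = A.T @ b
        for _ in range(iters):
            w = 1.0 / (np.abs(z) + eps)       # reweighting step
            x = M @ (Atb + rho * (z - u))     # convex quadratic subproblem
            z = soft(x + u, lam * w / rho)    # convex weighted-l1 prox
            u = u + x - z
        return z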

Freweyni K. Teklehaymanot;Michael Muma;Abdelhak M. Zoubir; "Bayesian Cluster Enumeration Criterion for Unsupervised Learning," vol.66(20), pp.5392-5406, Oct. 15, 2018. We derive a new Bayesian Information Criterion (BIC) by formulating the problem of estimating the number of clusters in an observed dataset as maximization of the posterior probability of the candidate models. Given that some mild assumptions are satisfied, we provide a general BIC expression for a broad class of data distributions. This serves as a starting point when deriving the BIC for specific distributions. Along this line, we provide a closed-form BIC expression for multivariate Gaussian distributed variables. We show that incorporating the data structure of the clustering problem into the derivation of the BIC results in an expression whose penalty term is different from that of the original BIC. We propose a two-step cluster enumeration algorithm. First, a model-based unsupervised learning algorithm partitions the data according to a given set of candidate models. Subsequently, the number of clusters is determined as the one associated with the model for which the proposed BIC is maximal. The performance of the proposed two-step algorithm is tested using synthetic and real datasets.
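
The two-step scheme is easy to prototype; a hedged sklearn sketch follows (note that sklearn's .bic() implements the classical penalty and returns a cost to be minimized, whereas the paper derives a different, cluster-structure-aware penalty term):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def enumerate_clusters(X, k_max=10, seed=0):
        """Step 1: fit a model-based clustering for each candidate number
        of clusters k. Step 2: return the k with the best criterion."""
        scores = [GaussianMixture(n_components=k, random_state=seed)
                  .fit(X).bic(X) for k in range(1, k_max + 1)]
        return int(np.argmin(scores)) + 1

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in (0.0, 3.0, 6.0)])
    print(enumerate_clusters(X))  # expected: 3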

Pol del Aguila Pla;Joakim Jaldén; "Cell Detection by Functional Inverse Diffusion and Nonnegative Group Sparsity—Part I: Modeling and Inverse Problems," vol.66(20), pp.5407-5421, Oct. 15, 2018. In this two-part paper, we present a novel framework and methodology to analyze data from certain image-based biochemical assays, e.g., ELISPOT and Fluorospot assays. In this first part, we start by presenting a physical partial differential equations (PDE) model up to image acquisition for these biochemical assays. Then, we use the PDEs’ Green function to derive a novel parameterization of the acquired images. This parameterization allows us to propose a functional optimization problem to address inverse diffusion. In particular, we propose a nonnegative group-sparsity regularized optimization problem with the goal of localizing and characterizing the biological cells involved in the said assays. We continue by proposing a suitable discretization scheme that enables both the generation of synthetic data and implementable algorithms to address inverse diffusion. We end Part I by providing a preliminary comparison between the results of our methodology and an expert human labeler on real data. Part II is devoted to providing an accelerated proximal gradient algorithm to solve the proposed problem and to the empirical validation of our methodology.

Pol del Aguila Pla;Joakim Jaldén; "Cell Detection by Functional Inverse Diffusion and Non-Negative Group Sparsity—Part II: Proximal Optimization and Performance Evaluation," vol.66(20), pp.5422-5437, Oct. 15, 2018. In this two-part paper, we present a novel framework and methodology to analyze data from certain image-based biochemical assays, e.g., ELISPOT and Fluorospot assays. In this second part, we focus on our algorithmic contributions. We provide an algorithm for functional inverse diffusion that solves the variational problem we posed in Part I. As part of the derivation of this algorithm, we present the proximal operator for the non-negative group-sparsity regularizer, which is a novel result that is of interest in itself, also in comparison to previous results on the proximal operator of a sum of functions. We then present a discretized approximated implementation of our algorithm and evaluate it both in terms of operational cell-detection metrics and in terms of distributional optimal-transport metrics.

Haoran Sun;Xiangyi Chen;Qingjiang Shi;Mingyi Hong;Xiao Fu;Nicholas D. Sidiropoulos; "Learning to Optimize: Training Deep Neural Networks for Interference Management," vol.66(20), pp.5438-5453, Oct. 15, 2018. Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim at providing a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively—since passing the input through a DNN only requires a small number of simple operations. In our paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.
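
The recipe is simple to sketch in PyTorch (sizes, hyperparameters, and the toy stand-in for WMMSE below are illustrative assumptions, not the paper's setup): run the expensive algorithm offline to label data, then fit a small fully connected network to the input-output map.

    import torch
    import torch.nn as nn

    K = 10                                    # e.g., number of channel gains
    net = nn.Sequential(
        nn.Linear(K, 200), nn.ReLU(),
        nn.Linear(200, 200), nn.ReLU(),
        nn.Linear(200, K), nn.Sigmoid(),      # outputs in [0, 1], like powers
    )

    def expensive_algorithm(h):               # placeholder for WMMSE
        return torch.softmax(h, dim=-1)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(2000):
        h = torch.rand(256, K)                # synthetic channel realizations
        with torch.no_grad():
            target = expensive_algorithm(h)   # labels from the slow algorithm
        loss = loss_fn(net(h), target)
        opt.zero_grad(); loss.backward(); opt.step()
    # At deployment, net(h) replaces the iterative algorithm; one forward
    # pass costs only a handful of matrix multiplies.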

Thomas Lundgaard Hansen;Peter Bjørn Jørgensen;Mihai-Alin Badiu;Bernard Henri Fleury; "An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation," vol.66(20), pp.5454-5469, Oct. 15, 2018. In this paper, we design a receiver that iteratively passes soft information between the channel estimation and data decoding stages. The receiver incorporates sparsity-based parametric channel estimation. State-of-the-art sparsity-based iterative receivers simplify the channel estimation problem by restricting the multipath delays to a grid. Our receiver does not impose such a restriction. As a result, it does not suffer from the leakage effect, which destroys sparsity. Communication at near capacity rates in high SNR requires a large modulation order. Due to the close proximity of modulation symbols in such systems, the grid-based approximation is of insufficient accuracy. We show numerically that a state-of-the-art iterative receiver with grid-based sparse channel estimation exhibits a bit-error-rate floor in the high SNR regime. On the contrary, our receiver performs very close to the perfect channel state information bound for all SNR values. We also demonstrate both theoretically and numerically that parametric channel estimation works well in dense channels, i.e., when the number of multipath components is large and each individual component cannot be resolved.

Binnan Zhuang;Dongning Guo;Ermin Wei;Michael L. Honig; "Large-Scale Spectrum Allocation for Cellular Networks via Sparse Optimization," vol.66(20), pp.5470-5483, Oct. 15, 2018. This paper studies joint spectrum allocation and user association in large heterogeneous cellular networks. The objective is to maximize some network utility function based on given traffic statistics collected over a slow timescale, conceived to be seconds to minutes. A key challenge is scalability: interference across cells creates dependencies across the entire network, making the optimization problem computationally challenging as the size of the network becomes large. A suboptimal solution is presented, which performs well in networks consisting of 100 access points (APs) serving several hundred user devices. This is achieved by optimizing over local overlapping neighborhoods, defined by interference conditions, and by exploiting the sparsity of a globally optimal solution. Specifically, with a total of $k$ user devices in the entire network, it suffices to divide the spectrum into $k$ segments, where each segment is mapped to a particular set, or pattern, of active APs within each local neighborhood. The problem is then to find a mapping of segments to patterns, and to optimize the widths of the segments. A convex relaxation is proposed for this, which relies on a reweighted $\ell_1$ approximation of an $\ell_0$ constraint, and is used to enforce the mapping of a unique pattern to each spectrum segment. A distributed implementation based on the alternating direction method of multipliers is also proposed. Numerical comparisons with benchmark schemes show that the proposed method achieves a substantial increase in achievable throughput and/or reduction in the average packet delay.
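
The reweighting device at the heart of this relaxation is standard and worth seeing in isolation; a hedged numpy sketch (ours, solving a generic sparse least-squares toy by ISTA, not the paper's allocation problem):

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reweighted_l1(A, b, lam=0.05, eps=1e-3, outer=5, inner=200):
        """Surrogate for an l0 constraint: solve a sequence of weighted-l1
        problems, updating w_i = 1/(|x_i| + eps) so that small entries get
        large weights and are driven to exactly zero."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(outer):
            w = 1.0 / (np.abs(x) + eps)              # reweighting
            for _ in range(inner):                   # weighted-l1 via ISTA
                x = soft(x - step * A.T @ (A @ x - b), step * lam * w)
        return x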

Shahar Bar;Joseph Tabrikian; "A Sequential Framework for Composite Hypothesis Testing," vol.66(20), pp.5484-5499, Oct. 15, 2018. This paper addresses the problem of classification via composite sequential hypothesis testing. We focus on two possible schemes for the hypotheses: non-nested and nested. For the first case, we present the generalized sequential probability ratio test (GSPRT) and provide an analysis of its asymptotic optimality. Yet, for the nested case, this algorithm is shown to be inconsistent. Consequently, an alternative solution is derived based on Bayesian considerations, similar to the ones used for the Bayesian information criterion and asymptotic maximum a posteriori probability criterion for model order selection. The proposed test, named penalized GSPRT (PGSPRT), is based on restraining the exponential growth of the GSPRT with respect to the sequential probability ratio test. Furthermore, the commonly used performance measure for sequential tests, known as the average sample number, is evaluated for the PGSPRT under each of the hypotheses. Simulations are carried out to compare the performance measures of the proposed algorithms for two nested model order selection problems.
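
Underlying all of these tests is Wald's sequential probability ratio test; a minimal skeleton (ours) makes the structure that the GSPRT and PGSPRT build on explicit:

    from math import log

    def sprt(samples, logpdf0, logpdf1, alpha=0.01, beta=0.01):
        """Wald's SPRT: accumulate the log-likelihood ratio and stop at the
        first threshold crossing (thresholds from the classic Wald bounds).
        The GSPRT replaces the known-parameter likelihoods with maximized,
        generalized ones; the PGSPRT further penalizes the statistic's
        growth to handle nested hypotheses."""
        a, b = log(beta / (1 - alpha)), log((1 - beta) / alpha)
        llr = 0.0
        for n, x in enumerate(samples, 1):
            llr += logpdf1(x) - logpdf0(x)
            if llr >= b:
                return 1, n        # decide H1 after n samples
            if llr <= a:
                return 0, n        # decide H0 after n samples
        return None, len(samples)  # sample budget exhausted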

IEEE Signal Processing Letters - new TOC (2018 September 20) [Website]

Giacomo Capizzi;Salvatore Coco;Grazia Lo Sciuto;Christian Napoli; "A New Iterative FIR Filter Design Approach Using a Gaussian Approximation," vol.25(11), pp.1615-1619, Nov. 2018. The letter presents a novel iterative methodology for the design of FIR filters based on an approximation of the desired filter frequency response using a Gabor system generated by the Gaussian function. The proposed method exhibits simplicity of implementation comparable to that of window-based design methods and ensures accuracy in the fulfillment of design requirements comparable to that achieved by the Parks–McClellan method. Two further advantages of this method are a closed-form formula for the filter tap coefficients and the smooth, monotonically decreasing behavior of the frequency response from dc to infinite frequency.
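
For comparison, the plain window method with a Gaussian window fits in a few lines of numpy; this hedged sketch only conveys the closed-form, smooth-response flavor of the design and is not the letter's iterative Gabor procedure.

    import numpy as np

    def gaussian_fir_lowpass(num_taps, cutoff, sigma):
        """Lowpass FIR taps: ideal sinc response times a Gaussian window.
        cutoff is the normalized cutoff in cycles/sample (0 < cutoff < 0.5)."""
        n = np.arange(num_taps) - (num_taps - 1) / 2.0
        ideal = 2 * cutoff * np.sinc(2 * cutoff * n)  # ideal lowpass taps
        h = ideal * np.exp(-0.5 * (n / sigma) ** 2)   # Gaussian window
        return h / h.sum()                            # unity gain at dc

    h = gaussian_fir_lowpass(num_taps=51, cutoff=0.2, sigma=8.0)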

Jeonghun Park;Robert W. Heath; "Analysis of Blockage Sensing by Radars in Random Cellular Networks," vol.25(11), pp.1620-1624, Nov. 2018. We characterize the detection probability of blockage sensing by radars deployed on towers in cellular networks. If the signal-to-interference ratio of the reflected pilot signal is larger than the predefined threshold and there is no other blockage between the radar and the corresponding blockage, the radar successfully detects the blockage. Modeling radar and blockage locations using stochastic geometry, we derive the detection probability as a function of the system parameters, chiefly the radar and blockage densities. Leveraging the obtained expression, we provide some guidelines for efficiently deploying radars to enhance the detection probability.

IEEE Journal of Selected Topics in Signal Processing - new TOC (2018 September 20) [Website]

* "Table of Contents," vol.12(4), pp.573-574, Aug. 2018.* Presents the table of contents for this issue of the publication.

D. Kundur;J. Contreras;D. Srinivasan;N. Gatsis;S. Wang;S. Peeta; "Introduction to the Issue on Signal and Information Processing for Critical Infrastructures," vol.12(4), pp.575-577, Aug. 2018. The papers in this special section focus on the use of signal and information processing for critical infrastructures. Critical infrastructures such as the smart electric power grid, gas and water utility networks, transportation networks, and communication networks are crucially supporting quality of life and economic growth. Future critical infrastructures are envisioned to integrate sensory data acquisition, communication and computation technologies, and signal processing to offer improved services to their end-users. Such an integration promises to have profound effects in improving societal welfare by enabling more efficient, open, consumer-centric, environmentally-friendly and resilient modern critical infrastructures. Thus, the design mantra for the evolution of critical infrastructures can be described, in part, as knowledge is power. At the heart of many technological challenges underlying the vision of evolved critical infrastructures is the need for signal and information processing.

Junbo Zhao;Lamine Mili; "A Robust Generalized-Maximum Likelihood Unscented Kalman Filter for Power System Dynamic State Estimation," vol.12(4), pp.578-592, Aug. 2018. This paper develops a new robust generalized maximum-likelihood-type unscented Kalman filter (GM-UKF) that is able to suppress observation and innovation outliers while filtering out non-Gaussian process and measurement noise. Because the errors of the real and reactive power measurements calculated using phasor measurement units (PMUs) follow long-tailed probability distributions, the conventional UKF provides strongly biased state estimates since it relies on the weighted least squares estimator. By contrast, the state estimates and residuals of our GM-UKF are proved to be roughly Gaussian, allowing the sigma points to reliably approximate the mean and the covariance matrices of the predicted and corrected state vectors. To develop our GM-UKF, we first derive a batch-mode regression form by processing the predictions and observations simultaneously, where the statistical linearization approach is used. We show that the set of equations so derived are equivalent to those of the unscented transformation. Then, a robust GM-estimator that minimizes a convex Huber cost function while using weights calculated via projection statistics (PSs) is proposed. The PSs are applied to a two-dimensional matrix consisting of serially correlated predicted state and innovation vectors to detect observation and innovation outliers. These outliers are suppressed by the GM-estimator using the iteratively reweighted least squares algorithm. Finally, the asymptotic error covariance matrix of the GM-UKF state estimates is derived from the total influence function. Extensive simulation results carried out on the IEEE New England 39-bus, 10-machine test system verify the effectiveness and robustness of the proposed method.
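
The robust core of such GM-estimators, Huber-cost minimization by iteratively reweighted least squares, fits in a few lines; a hedged numpy sketch (the paper additionally computes weights via projection statistics and embeds this step inside the UKF):

    import numpy as np

    def huber_irls(H, z, c=1.345, iters=50):
        """Minimize sum_i rho((z - Hx)_i / s) for the Huber rho by IRLS:
        residuals beyond c robust-scale units are downweighted by c/|r|."""
        x = np.linalg.lstsq(H, z, rcond=None)[0]
        for _ in range(iters):
            r = z - H @ x
            s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale (MAD)
            a = np.abs(r / s)
            w = np.where(a <= c, 1.0, c / a)           # Huber weights
            x = np.linalg.solve(H.T @ (w[:, None] * H), H.T @ (w * z))
        return x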

Saurabh Sihag;Ali Tajer; "Power System State Estimation Under Model Uncertainty," vol.12(4), pp.593-606, Aug. 2018. This paper considers general state estimation in power systems when the system model is not fully known. Model uncertainty might be caused by a lack of full information about the network model, or by unpredicted disruptions or changes to the grid topology, model, or parameters. This paper focuses on a setting for state estimation in which, besides the nominal model, the system might follow a group of alternative models. Including alternative possibilities for the system model introduces a new dimension to state estimation. Specifically, the state estimator needs to detect whether the system model has deviated from its nominal model, and if it is deemed to have deviated, then also isolate the actual model. These estimation, detection, and isolation decisions are inherently coupled due to the fact that isolating the true model is never perfect (due to noisy measurements), the effect of which transcends the isolation process and affects the estimation routine as well. This paper establishes the fundamental interplay between the detection, isolation, and estimation routines, designs the optimal attendant rules, and provides an algorithm for implementing these rules in a unified framework. The optimal framework is applied to the IEEE 14-bus system model and IEEE 118-bus model, and the performance is compared against the existing relevant approaches.

Neelabh Kashyap;Stefan Werner;Yih-Fang Huang; "Decentralized PMU-Assisted Power System State Estimation With Reduced Interarea Communication," vol.12(4), pp.607-616, Aug. 2018. This paper presents a decentralized approach to multiarea power system state estimation using a combination of conventional measurement devices and newer phasor measurement units (PMU). We employ a reduced-order approach to state estimation, wherein the PMU observable and unobservable components of the state vector are estimated separately in each control area. The estimation problem is solved by exchanging information through an improved gossip-based protocol, which takes into account the weak measurement coupling between control areas. A detailed analysis of the proposed protocol reveals that the amount of information exchange is always less than in the naïve gossip-based scheme. We show through simulations on the IEEE 118-bus test system that, in practice, the gossip iterations converge in one iteration regardless of the number of control areas. Our simulations also show that the marginal information exchange is considerably lower when using the proposed method.

Shuai Zhang;Yingshuai Hao;Meng Wang;Joe H. Chow; "Multichannel Hankel Matrix Completion Through Nonconvex Optimization," vol.12(4), pp.617-632, Aug. 2018. This paper studies the multichannel missing data recovery problem when the measurements are generated by a dynamical system. A new model, termed multichannel low-rank Hankel matrices, is proposed to characterize the intrinsic low-dimensional structures in multichannel time series. The data recovery problem is formulated as a nonconvex optimization problem, and two fast algorithms (AM-FIHT and RAM-FIHT), both with linear convergence rates, are developed to recover the missing points with provable performance guarantees. The required number of observations is significantly reduced, compared with conventional low-rank completion methods. Our methods are verified through numerical experiments on synthetic data and recorded synchrophasor data in power systems.
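
The lift / hard-threshold / de-lift core of such methods is compact; a hedged single-channel numpy sketch (AM-FIHT and RAM-FIHT in the paper are multichannel and add acceleration and recovery guarantees):

    import numpy as np
    from scipy.linalg import hankel

    def hankel_iht_step(y_obs, mask, n1, r):
        """One iteration: lift the zero-filled series to a Hankel matrix,
        truncate to rank r, then read the series back off anti-diagonals."""
        n = len(y_obs)
        y = np.where(mask, y_obs, 0.0)
        Hm = hankel(y[:n1], y[n1 - 1:])           # n1 x (n - n1 + 1) lift
        U, s, Vt = np.linalg.svd(Hm, full_matrices=False)
        Hr = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r hard threshold
        out, cnt = np.zeros(n), np.zeros(n)
        for i in range(Hm.shape[0]):              # de-lift: average each
            for j in range(Hm.shape[1]):          # anti-diagonal
                out[i + j] += Hr[i, j]
                cnt[i + j] += 1
        return out / cnt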

Keisuke Maeda;Sho Takahashi;Takahiro Ogawa;Miki Haseyama; "Estimation of Deterioration Levels of Transmission Towers via Deep Learning Maximizing Canonical Correlation Between Heterogeneous Features," vol.12(4), pp.633-644, Aug. 2018. This paper presents estimation of deterioration levels of transmission towers via deep learning maximizing the canonical correlation between heterogeneous features. In the proposed method, we newly construct a correlation-maximizing deep extreme learning machine (CMDELM) based on a local receptive field (LRF). For accurate deterioration level estimation, it is necessary to obtain semantic information that effectively represents deterioration levels. However, since the amount of training data for transmission towers is small, it is difficult to perform feature transformation by using many hidden layers such as general deep learning methods. In CMDELM-LRF, one hidden layer, which maximizes the canonical correlation between visual features and text features obtained from inspection text data, is newly inserted. Specifically, by using projections obtained by maximizing the canonical correlation as weight parameters of the hidden layer, feature transformation for extracting semantic information is realized without designing many hidden layers. This is the main contribution of this paper. Consequently, CMDELM-LRF realizes accurate deterioration level estimation from a small amount of training data.

Roohallah Khatami;Majid Heidarifar;Masood Parvania;Pramod Khargonekar; "Scheduling and Pricing of Load Flexibility in Power Systems," vol.12(4), pp.645-656, Aug. 2018. This paper proposes a fundamental approach for scheduling and pricing of load flexibility in power systems operation. An optimal control model is proposed to co-optimize the continuous-time flexibility of loads with the operation of generating units to supply the flexibility requirements of the net-load, while satisfying delay-based and deadline-based service quality constraints of the flexible loads. A function space-based solution method is developed to solve the continuous-time problem, which is based on reducing the dimensionality of the continuous-time decision and parameter trajectories by modeling them in a finite-order function space formed by Bernstein polynomials. The proposed method converts the continuous-time problem into a mixed-integer linear programming problem with the Bernstein coordinates of the trajectories as the decision variables. The proposed method not only allows for full exploitation of the load flexibility through higher order solution spaces, but also includes the traditional discrete-time solution as a special case. This paper proves that the Lagrange multiplier associated with the continuous-time power balance constraint is the continuous-time marginal price of electricity in the presence of flexible loads. The marginal price is calculated in closed form, which demonstrates the dependence of the price on the incremental cost rates of generating units, on parameters of the flexible loads, as well as on ramping limitations of generating units and flexible loads. Numerical results, provided for the IEEE Reliability Test System, demonstrate the effectiveness of the proposed model in deploying load flexibility to reduce operation cost and ramping requirement of the system, as well as smoothing the load and marginal price trajectories of the system.

Xuanyu Cao;Junshan Zhang;H. Vincent Poor; "Joint Energy Procurement and Demand Response Towards Optimal Deployment of Renewables," vol.12(4), pp.657-672, Aug. 2018. In this paper, joint energy procurement and demand response is studied from the perspective of the operator of a power system. The operator procures energy from both renewable energy sources (RESs) and the spot market. We observe the fact that the RESs may incur considerable infrastructure cost. This cost is taken into account and the optimal planning of renewables is examined by controlling the investment in RES infrastructures. Due to the uncertainty of renewables, the operator can also purchase energy directly from the spot market to compensate for the possible deficit incurred by the realization of the random renewable energy. By setting appropriate prices, the operator sells the collected energy to heterogeneous end users with different demand response characteristics. We model the decision making process of the operator as a two-stage optimization problem. The optimal decisions on the renewable deployment, energy purchase from the spot market, and pricing schemes are derived. Several solution structures are observed and a computationally efficient algorithm, requiring only closed-form calculation and simple bisection search, is proposed to compute the optimal decisions. Finally, numerical experiments are conducted to verify the optimality of the proposed algorithm and the solution structures observed theoretically. In particular, the impact of renewable penetration and the importance of its optimal design are highlighted.

Kaiqing Zhang;Wei Shi;Hao Zhu;Emiliano Dall’Anese;Tamer Başar; "Dynamic Power Distribution System Management With a Locally Connected Communication Network," vol.12(4), pp.673-687, Aug. 2018. Coordinated optimization and control of distribution-level assets enables a reliable and optimal integration of massive amount of distributed energy resources (DERs) and facilitates distribution system management (DSM). Accordingly, the objective is to coordinate the power injection at the DERs to maintain certain quantities across the network, e.g., voltage magnitude, line flows, and line losses, to be close to a desired profile. By and large, the performance of the DSM algorithms has been challenged by two factors: 1) the possibly nonstrongly connected communication network over DERs that hinders the coordination; and 2) the dynamics of the real system caused by the DERs with heterogeneous capabilities, time-varying operating conditions, and real-time measurement mismatches. In this paper, we investigate the modeling and algorithm design and analysis with the consideration of these two factors. In particular, a game-theoretic characterization is first proposed to account for a locally connected communication network over DERs, along with the analysis of the existence and uniqueness of the Nash equilibrium therein. To achieve the equilibrium in a distributed fashion, a projected-gradient-based asynchronous DSM algorithm is then advocated. The algorithm performance, including the convergence speed and the tracking error, is analytically guaranteed under the dynamic setting. Extensive numerical tests on both synthetic and realistic cases corroborate the analytical results derived.

Hyun-Suk Lee;Cem Tekin;Mihaela van der Schaar;Jang-Won Lee; "Adaptive Contextual Learning for Unit Commitment in Microgrids With Renewable Energy Sources," vol.12(4), pp.688-702, Aug. 2018. In this paper, we study a unit commitment (UC) problem where the goal is to minimize the operating costs of a microgrid that involves renewable energy sources. Since traditional UC algorithms use a priori information about uncertainties such as the load demand and the renewable power outputs, their performances highly depend on the accuracy of the a priori information, especially in microgrids due to their limited scale and size. This makes the algorithms impractical in settings where the past data are not sufficient to construct an accurate prior of the uncertainties. To resolve this issue, we develop an adaptively partitioned contextual learning algorithm for UC (AP-CLUC) that learns the best UC schedule and minimizes the total cost over time in an online manner without requiring any a priori information. AP-CLUC effectively learns the effects of the uncertainties on the cost by adaptively considering context information strongly correlated with the uncertainties, such as the past load demand and weather conditions. For AP-CLUC, we first prove an analytical bound on the performance, which shows that its average total cost converges to that of the optimal policy with perfect a priori information. Then, we show via simulations that AP-CLUC achieves competitive performance with respect to the traditional UC algorithms with perfect a priori information, and it achieves better performance than them even with small errors on the information. These results demonstrate the effectiveness of utilizing the context information and the adaptive management of the past data for the UC problem.

Xuanyu Cao;Junshan Zhang;H. Vincent Poor; "A Virtual-Queue-Based Algorithm for Constrained Online Convex Optimization With Applications to Data Center Resource Allocation," vol.12(4), pp.703-716, Aug. 2018. In this paper, online convex optimization (OCO) problems with time-varying objective and constraint functions are studied from the perspective of an agent who takes actions in real time. Information about the current objective and constraint functions is revealed only after the corresponding action is already chosen. Inspired by a fast converging algorithm for time-invariant optimization in the very recent work [1], we develop a novel online algorithm based on virtual queues for constrained OCO. Optimal points of the dynamic optimization problems with full knowledge of the current objective and constraint functions are used as a dynamic benchmark sequence. Upper bounds on the regrets with respect to the dynamic benchmark and the constraint violations are derived for the presented algorithm in terms of the temporal variations of the underlying dynamic optimization problems. It is observed that the proposed algorithm possesses sublinear regret and sublinear constraint violations, as long as the temporal variations of the optimization problems are sublinear, i.e., the objective and constraint functions do not vary too drastically across time. The performance bounds of the proposed algorithm are superior to those of the state-of-the-art OCO method in most scenarios. Besides, different from the saddle point methods widely used in constrained OCO, the stepsize of the proposed algorithm does not rely on the total time horizon, which may be unknown in practice. Finally, the algorithm is applied to a dynamic resource allocation problem in data center networks. Numerical experiments are conducted to corroborate the merit of the developed algorithm and its advantage over the state-of-the-art.
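
The virtual-queue mechanism itself is a two-line update; this toy numpy loop (our illustration with a drifting quadratic objective and a static constraint, not the paper's algorithm or tuning) shows how the queue turns accumulated violation into growing pressure on the primal step:

    import numpy as np

    x, Q, alpha = 0.0, 0.0, 0.1
    for t in range(1000):
        a_t = 1.5 + 0.5 * np.sin(0.01 * t)     # drifting optimum, revealed
        grad_f = 2 * (x - a_t)                 # only after the action
        g_val, grad_g = x - 1.0, 1.0           # constraint g(x) = x - 1 <= 0
        Q = max(Q + g_val, 0.0)                # virtual queue: violation backlog
        x = x - alpha * (grad_f + Q * grad_g)  # queue-weighted primal step
    # x settles near the boundary x = 1: queue pressure offsets the pull
    # of the time-varying objective toward the infeasible region.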

Sindri Magnússon;Chinwendu Enyioha;Na Li;Carlo Fischione;Vahid Tarokh; "Communication Complexity of Dual Decomposition Methods for Distributed Resource Allocation Optimization," vol.12(4), pp.717-732, Aug. 2018. Dual decomposition methods are among the most prominent approaches for finding primal/dual saddle point solutions of resource allocation optimization problems. To deploy these methods in the emerging Internet of things networks, which will often have limited data rates, it is important to understand the communication overhead they require. Motivated by this, we introduce and explore two measures of communication complexity of dual decomposition methods to identify the most efficient communication among these algorithms. The first measure is ε-complexity, which quantifies the minimal number of bits needed to find an ε-accurate solution. The second measure is b-complexity, which quantifies the best possible solution accuracy that can be achieved from communicating b bits. We find the exact ε- and b-complexity of a class of resource allocation problems where a single supplier allocates resources to multiple users. For both the primal and dual problems, the ε-complexity grows proportionally to log2(1/ε) and the b-complexity proportionally to 1/2^b. We also introduce a variant of the ε- and b-complexity measures where only algorithms that ensure primal feasibility of the iterates are allowed. Such algorithms are often desirable because overuse of the resources can overload the respective systems, e.g., by causing blackouts in power systems. We provide upper and lower bounds on the convergence rate of these primal feasible complexity measures. In particular, we show that the b-complexity cannot converge at a faster rate than O(1/b). Therefore, the results demonstrate a tradeoff between fast convergence and primal feasibility. We illustrate the result by numerical studies.
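
The 1/2^b scaling has a simple interpretation: with one feedback bit per round, the supplier can bisect on the dual price, halving its uncertainty each time. A hedged sketch under our own toy assumptions (a known decreasing aggregate demand curve and a normalized price interval):

    def allocate_by_price_bisection(demand, capacity, bits):
        """Bisection on the dual price: each round broadcasts one bit
        (raise or lower), so b bits pin the price to within 2**-b."""
        lo, hi = 0.0, 1.0
        for _ in range(bits):
            mid = 0.5 * (lo + hi)
            if demand(mid) > capacity:   # overloaded -> raise the price
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Three users with linear demands d_i(p) = max(a_i - p, 0):
    demand = lambda p: sum(max(a - p, 0.0) for a in (0.9, 0.7, 0.5))
    p_star = allocate_by_price_bisection(demand, capacity=0.6, bits=16)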

Pei-Duo Yu;Chee Wei Tan;Hung-Lin Fu; "Averting Cascading Failures in Networked Infrastructures: Poset-Constrained Graph Algorithms," vol.12(4), pp.733-748, Aug. 2018. Cascading failures in critical networked infrastructures that result even from a single source of failure often lead to rapidly widespread outages, as witnessed in the 2003 Northeast blackout in North America. The ensuing problem of containing future cascading failures by placement of protection or monitoring nodes in the network is complicated by the uncertainty of the failure source and the missing observation of how the cascading might unravel, be it in past cascading failures or future ones. This paper examines the problem of minimizing the outage when a cascading failure from a single source occurs. A stochastic optimization problem is formulated where a limited number of protection nodes, when placed strategically in the network to mitigate systemic risk, can minimize the expected spread of cascading failure. We propose the vaccine centrality, which is a network centrality based on the partially ordered set (poset) characteristics of the stochastic program and distributed message-passing, to design efficient approximation algorithms with provable approximation ratio guarantees. In particular, we illustrate how the vaccine centrality and the poset-constrained graph algorithms can be designed to trade off between complexity and optimality, as illustrated through a series of numerical experiments. This paper points toward a general framework of network centrality as statistical inference to design rigorous graph analytics for statistical problems in networks.

Lakshay Narula;Todd E. Humphreys; "Requirements for Secure Clock Synchronization," vol.12(4), pp.749-762, Aug. 2018. This paper establishes a fundamental theory of secure clock synchronization. Accurate clock synchronization is the backbone of systems managing power distribution, financial transactions, telecommunication operations, database services, etc. Some clock synchronization (time transfer) systems, such as the global navigation satellite systems, are based on one-way communication from a master to a slave clock. Others, such as the network time protocol (NTP) and the IEEE 1588 precision time protocol (PTP), involve two-way communication between the master and slave. This paper shows that all one-way time transfer protocols are vulnerable to replay attacks that can potentially compromise timing information. A set of conditions for secure two-way clock synchronization is proposed and proved to be necessary and sufficient. It is shown that IEEE 1588 PTP, although a two-way synchronization protocol, is not compliant with these conditions, and is therefore insecure. Requirements for secure IEEE 1588 PTP are proposed, and a second example protocol is offered to illustrate the range of compliant systems.
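
The vulnerability is easiest to see against the standard two-way offset estimate (the formula shared by NTP and PTP); the sketch below is generic, not a protocol implementation:

    def two_way_offset(t1, t2, t3, t4):
        """t1: master send, t2: slave receive, t3: slave send, t4: master
        receive. Assuming symmetric path delays, the slave's clock offset is
        ((t2 - t1) - (t4 - t3)) / 2; an attacker who delays the two
        directions asymmetrically biases this estimate, which is what the
        paper's conditions for secure synchronization rule out."""
        return ((t2 - t1) - (t4 - t3)) / 2.0

    # True offset 2.0 s, symmetric one-way delay 0.005 s:
    print(two_way_offset(0.0, 2.005, 2.010, 0.015))  # -> 2.0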

Chensheng Liu;Jing Wu;Chengnian Long;Deepa Kundur; "Reactance Perturbation for Detecting and Identifying FDI Attacks in Power System State Estimation," vol.12(4), pp.763-776, Aug. 2018. False data injection (FDI) attacks have recently been introduced as an important class of cyberattacks in modern power systems. By coordinating the injection of false data into selected meter readings, an FDI attack can bypass bad data detection methods in power system state estimation. In this paper, we propose a strategy that leverages reactance perturbation to enhance detection and identification of FDI attacks. We begin by deriving conditions to mitigate attacks in noiseless systems that relate the likelihood of attack detection and identification to the rank of the composite matrix, limited by power system topology and the deployment of meters. Based on such conditions, we design a secure reactance perturbation algorithm that maximizes the likelihood of FDI attack detection and identification while minimizing the effect on the operational cost of power systems, e.g., power losses on transmission lines. Simulations on a 6-bus and the IEEE 57-bus system verify the performance of the secure reactance perturbation and the effect on power losses in both noiseless and noisy systems.

Arash Mohammadi;Konstantinos N. Plataniotis; "Noncircular Attacks on Phasor Measurement Units for State Estimation in Smart Grid," vol.12(4), pp.777-789, Aug. 2018. With the evolution of phasor measurement units (PMUs) and the proposition to incorporate a large number of PMUs in future smart grids, it is critical to identify and prevent potential (zero-day) cyber attacks on phasor signals. The PMUs are the forefront of sensor technologies used in the smart grid and produce phasor voltage and current readings, which are complex-valued in nature. In this regard, the paper investigates potential attacks on complex-valued PMU signals and proposes the new paradigm of data-injection attacks, referred to as noncircular attacks. Existing state estimation algorithms and attack monitoring solutions assume that the PMU observations have statistical characteristics similar to that of real-valued signals. This assumption makes PMUs extremely defenseless against the proposed noncircular attacks. In this paper, we introduce the noncircular attack model, evaluate (both analytically and via experiments) the potential destructive nature of such attacks, propose a Bhattacharyya distance detector for monitoring the system against cyber attacks by transforming the detection problem to an equivalent problem of comparing innovation sequences in distribution via statistical distance measures, and propose a circularization approach, which enables the conventional detection algorithms to identify noncircular attacks.
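
Second-order noncircularity is measurable from data, which is what such monitoring exploits; a hedged numpy sketch of the sample circularity quotient (illustration only; the paper's detector compares innovation sequences via the Bhattacharyya distance):

    import numpy as np

    def circularity_coefficient(z):
        """|E[z^2]| / E[|z|^2] for zero-mean complex data: approximately 0
        for second-order circular signals, near 1 for maximally noncircular."""
        z = z - z.mean()
        return np.abs(np.mean(z ** 2)) / np.mean(np.abs(z) ** 2)

    rng = np.random.default_rng(3)
    circ = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
    noncirc = rng.standard_normal(10_000) * (1 + 0.9j)  # correlated re/im
    print(circularity_coefficient(circ))     # ~0
    print(circularity_coefficient(noncirc))  # ~1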

Varun Badrinath Krishna;Carl A. Gunter;William H. Sanders; "Evaluating Detectors on Optimal Attack Vectors That Enable Electricity Theft and DER Fraud," vol.12(4), pp.790-805, Aug. 2018. Worldwide, utilities are losing billions of dollars annually because of electricity theft. The detection of electricity theft has been a topic of research for decades. In this paper, we extend our prior work in the context of advanced metering infrastructures, wherein smart meters are compromised and made to under-report consumption. To the best of our knowledge, this paper presents the first study of meter fraud in the context of distributed energy resources (DERs). With an increased penetration of DERs in modern power grids, and with the decline in electricity prices, we show that there is incentive for electricity generators to over-report generation. We quantify the economic impact of cyber-attacks (on meters) that are optimal in that they maximize fraud while circumventing detectors. In doing so, we use consumption data from Ireland, solar generation data from the U.S. and Australia, and wind generation data from France.

Mohasinina Binte Kamal;Gihan J. Mendis;Jin Wei; "Intelligent Soft Computing-Based Security Control for Energy Management Architecture of Hybrid Emergency Power System for More-Electric Aircrafts," vol.12(4), pp.806-816, Aug. 2018. This paper proposes an attack-resilient intelligent soft computing-based security control scheme for the energy management architecture of a hybrid emergency power system of more-electric aircrafts (MEAs). Our proposed architecture develops an adaptive neuro-fuzzy inference system (ANFIS)-based method in conjunction with a specific recurrent neural network architecture called long-short-term memory (LSTM) to evaluate the integrity of the power output of the critical components, which are highly vulnerable to potential cyber-attacks and critical for effective energy management and emergency control. We evaluate the integrity of these critical measurements by using LSTM techniques. After identifying the compromised measurements, we activate the closed-loop ANFIS mechanism, in which the structure of the ANFIS is controlled according to the accuracy of its current estimations by probing the physical couplings in the system. In our simulation, we evaluate the performance of our proposed LSTM-ANFIS-based method in supporting the integrity of the energy management strategies used in the hybrid emergency power system for MEAs, using a TensorFlow and MATLAB/Simulink co-simulation environment. Our simulation results illustrate the effectiveness of our proposed method in evaluating the integrity of critical data and achieving resilient control.

IEEE Signal Processing Magazine - new TOC (2018 September 20) [Website]

* "Front Cover," vol.35(5), pp.C1-C1, Sept. 2018.* Presents the front cover for this issue of the publication.

* "Table of Contents," vol.35(5), pp.1-2, Sept. 2018.* Presents the table of contents for this issue of the publication.

* "Staff Listing," vol.35(5), pp.2-2, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Robert W. Heath; "GlobalSIP and Beyond [From the Editor]," vol.35(5), pp.3-15, Sept. 2018. Presents the introductory editorial for this issue of the publication.

Ali H. Sayed; "Science Is Blind [President's Message]," vol.35(5), pp.4-6, Sept. 2018. Presents the President's message for this issue of the publication.

* "Election of Regional Directors-at-Large and Members-at-Large [Society News[Name:_blank]]," vol.35(5), pp.7-8, Sept. 2018.* Presents information on the SPS society election of regional officers.

* "New Society Officer Elected and Editors-in-Chief Named for 2019 [Society News[Name:_blank]]," vol.35(5), pp.8-8, Sept. 2018.* Presents the SPS society newly elected officers and Editors-in-Chief.

John Edwards; "Signal Processing Opens the Internet of Things to a New World of Possibilities: Research Leads to New Internet of Things Technologies and Applications [Special Reports]," vol.35(5), pp.9-12, Sept. 2018. The Internet of Things (IoT) refers to the wireless connection of ordinary objects, such as vehicles, cash machines, door locks, cameras, industrial controls, and municipal traffic systems, to the Internet. Research firm BI Intelligence predicts that 22.5 billion devices will be connected to the IoT in 2021, compared to 6.6 billion in 2016. Signal processing is playing a significant role in expanding the number of IoT technologies and applications. Realizing that the IoT has emerged as perhaps the most important new technology since the arrival of the Internet itself, researchers worldwide are now turning to signal processing to support and augment new IoT services and make existing applications less expensive and more practical.

Chenren Xu;Yan Lindsay Sun;Konstantinos N. Plataniotis;Nic Lane; "Signal Processing and the Internet of Things [From the Guest Editors]," vol.35(5), pp.13-15, Sept. 2018. The notion of the Internet of Things (IoT) has emerged as a last-mile solution for connecting various cyber technologies to our everyday life. It envisions a three-tier architecture where highly distributed and heterogeneous sensor data will be collected through a gateway and made available to the Internet to be readily accessible for a wide range of applications. Today, with ever-increasing types of IoT devices as well as the growing demand being placed on the end user, the sensing platform, and the computing and storage infrastructure, more is being asked of engineers, designers, and scientists. Discusses how we can leverage today’s pervasive cloud and network infrastructure to foster more intriguing applications with more demanding signal processing and machine learning techniques.

Chenren Xu;Lei Yang;Pengyu Zhang; "Practical Backscatter Communication Systems for Battery-Free Internet of Things: A Tutorial and Survey of Recent Research," vol.35(5), pp.16-27, Sept. 2018. Backscatter presents an emerging ultralow-power wireless communication paradigm. The ability to offer submilliwatt power consumption makes it a competitive core technology for Internet of Things (IoT) applications. In this article, we provide a tutorial of backscatter communication from the signal processing perspective as well as a survey of the recent research activities in this domain, primarily focusing on bistatic backscatter systems. We also discuss the unique real-world applications empowered by backscatter communication and identify open questions in this domain. We believe this article will shed light on the low-power wireless connectivity design toward building and deploying IoT services in the wild.

Aggelos Bletsas;Panos N. Alevizos;Georgios Vougioukas; "The Art of Signal Processing in Backscatter Radio for μW (or Less) Internet of Things: Intelligent Signal Processing and Backscatter Radio Enabling Batteryless Connectivity," vol.35(5), pp.28-40, Sept. 2018. Backscatter (or simply scatter) radio is based on reflection principles, where each tag modulates information on top of an illuminating signal, by simply connecting its antenna to different loads; modulation of information is based on the modifications of the tag antenna-load reflection coefficient, requiring in principle only a switch and omitting power-consuming signal conditioning units, such as mixers, amplifiers, oscillators, and filters. The ultralow-power nature of backscatter radio, in conjunction with the recent advances in multiple access and achieved communication ranges (on the order of hundreds of meters to kilometers) due to intelligent signal processing, elevates backscatter radio as the de facto communication principle for μW (or less)-level consumption, last-mile connectivity, and Internet of Things (IoT) networking. This article is an update on the state-of-the-art advances in the emerging backscatter radio domain, focusing on the signal processing engine, including ambient illumination from existing signals, as well as unconventional backscatter radio-based IoT technologies that could revolutionize environmental sensing and agriculture. Finally, the offered research methodology and techniques in short-packet, channel-encoded (or not), coherent (or not) sequence detection will assist researchers in radio-frequency identification (RFID)/backscatter radio as well as other domains of the telecommunications industry.
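
The reflection-coefficient mechanism is short enough to simulate end to end; a hedged numpy sketch with illustrative values (two load states, noncoherent envelope detection), not a model taken from the article:

    import numpy as np

    fs, fc = 1_000_000, 50_000             # sample rate and carrier (Hz)
    bits = np.random.default_rng(2).integers(0, 2, 64)
    sps = 2_000                            # samples per tag bit
    t = np.arange(len(bits) * sps) / fs
    carrier = np.cos(2 * np.pi * fc * t)   # illuminating signal
    gamma = np.where(np.repeat(bits, sps) == 1, 0.8, 0.1)  # two load states
    rx = gamma * carrier                   # load-modulated reflection
    env = np.abs(rx).reshape(len(bits), sps).mean(axis=1)
    decoded = (env > env.mean()).astype(int)  # noncoherent energy detection
    assert np.array_equal(decoded, bits)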

Liang Xiao;Xiaoyue Wan;Xiaozhen Lu;Yanyong Zhang;Di Wu; "IoT Security Techniques Based on Machine Learning: How Do IoT Devices Use AI to Enhance Security?," vol.35(5), pp.41-49, Sept. 2018. The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.

Jiangfan Zhang;Rick S. Blum;H. Vincent Poor; "Approaches to Secure Inference in the Internet of Things: Performance Bounds, Algorithms, and Effective Attacks on IoT Sensor Networks," vol.35(5), pp.50-63, Sept. 2018. The Internet of Things (IoT) improves pervasive sensing and control capabilities via the aid of modern digital communication, signal processing, and massive deployment of sensors but presents severe security challenges. Attackers can modify the data entering or communicated from the IoT sensors, which can have a serious impact on any algorithm using these data for inference. This article describes how to provide tight bounds (with sufficient data) on the performance of the best unbiased algorithms estimating a parameter from the attacked data and communications under any assumed statistical model describing how the sensor data depends on the parameter before attack. The results hold regardless of the unbiased estimation algorithm adopted, which could employ deep learning, machine learning, statistical signal processing, or any other approach. Example algorithms that achieve performance close to these bounds are illustrated. Attacks that make the attacked data useless for reducing these bounds are also described. These attacks provide a guaranteed attack performance in terms of the bounds regardless of the algorithms the unbiased estimation system employs. References are supplied that provide various extensions to all of the specific results presented in this article and a brief discussion of low-complexity encryption and physical layer security is provided.

Yuan Chen;Soummya Kar;Jose M.F. Moura; "The Internet of Things: Secure Distributed Inference," vol.35(5), pp.64-75, Sept. 2018. The growth in the number of devices connected to the Internet of Things (IoT) poses major challenges in security. The integrity and trustworthiness of data and data analytics are increasingly important concerns in IoT applications. These are compounded by the highly distributed nature of IoT devices, making it infeasible to prevent attacks and intrusions on all data sources. Adversaries may hijack devices and compromise their data. As a result, reactive countermeasures, such as intrusion detection and resilient analytics, become vital components of security. This article overviews algorithms for secure distributed inference in IoT.

Lu Zhou;Kuo-Hui Yeh;Gerhard Hancke;Zhe Liu;Chunhua Su; "Security and Privacy for the Industrial Internet of Things: An Overview of Approaches to Safeguarding Endpoints," vol.35(5), pp.76-87, Sept. 2018. Endpoint devices form a core part of the architecture of the Industrial Internet of Things (IIoT). Aspects of endpoint device security also extend to related technology paradigms, such as cyberphysical systems (CPSs), edge computing, and fog computing. In this sphere, there have been several initiatives to define and promote safer and more secure IIoT networks, with the Industrial Internet Consortium (IIC) and OpenFog Consortium having developed security framework specifications detailing the techniques and technologies to secure industrial endpoints.

Liang Liu;Erik G. Larsson;Wei Yu;Petar Popovski;Cedomir Stefanovic;Elisabeth de Carvalho; "Sparse Signal Processing for Grant-Free Massive Connectivity: A Future Paradigm for Random Access Protocols in the Internet of Things," vol.35(5), pp.88-99, Sept. 2018. The next wave of wireless technologies will proliferate in connecting sensors, machines, and robots for myriad new applications, thereby creating the fabric for the Internet of Things (IoT). A generic scenario for IoT connectivity involves a massive number of machine-type connections, but in a typical application, only a small (unknown) subset of devices are active at any given instant; therefore, one of the key challenges of providing massive IoT connectivity is to detect the active devices first and then decode their data with low latency. This article advocates the usage of grant-free, rather than grant-based, random access schemes to overcome the challenge of massive IoT access. Several key signal processing techniques that promote the performance of the grant-free strategies are outlined, with a primary focus on advanced compressed sensing techniques and their applications for the efficient detection of active devices. We argue that massive multiple-input, multiple-output (MIMO) is especially well suited for massive IoT connectivity because the device detection error can be driven to zero asymptotically in the limit as the number of antennas at the base station (BS) goes to infinity by using the multiple-measurement vector (MMV) compressed sensing techniques. This article also provides a perspective on several related important techniques for massive access, such as embedding short messages onto the device-activity detection process and the coded random access.
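As a rough illustration of the MMV idea, the sketch below detects active devices by correlating known signatures against multi-antenna observations and keeping the devices with the largest energy. This is a deliberately simplified one-shot detector, not the advanced compressed sensing algorithms the article surveys; all dimensions and the noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, M, K = 200, 60, 16, 5   # devices, signature length, BS antennas, active
A = rng.standard_normal((L, N)) / np.sqrt(L)    # known device signatures
active = rng.choice(N, K, replace=False)

# Row-sparse channel matrix: only active devices contribute.
X = np.zeros((N, M), dtype=complex)
X[active] = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
noise = 0.05 * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))
Y = A @ X + noise

# One-shot MMV detector: correlation energy summed across antennas.
# More antennas sharpen the separation, echoing the asymptotic result
# cited in the article.
scores = np.linalg.norm(A.conj().T @ Y, axis=1)
detected = np.argsort(scores)[-K:]
print(sorted(active), sorted(detected))   # the two sets should match
```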

Wayes Tushar;Nipun Wijerathne;Wen-Tai Li;Chau Yuen;H. Vincent Poor;Tapan Kumar Saha;Kristin L. Wood; "Internet of Things for Green Building Management: Disruptive Innovations Through Low-Cost Sensor Technology and Artificial Intelligence," vol.35(5), pp.100-110, Sept. 2018. Buildings consume 60% of global electricity. However, current building management systems (BMSs) are highly expensive and difficult to justify for small- to medium-sized buildings. The Internet of Things (IoT), which can collect and monitor a large amount of data on different aspects of a building and feed the data to the BMS's processor, provides a new opportunity to integrate intelligence into the BMS for monitoring and managing a building's energy consumption to reduce costs. Although an extensive literature is available on, separately, IoT-based BMSs and applications of signal processing techniques for some building energy-management tasks, a detailed study of their integration to address the overall BMS is limited. As such, this article will address the current gap by providing an overview of an IoT-based BMS that leverages signal processing and machine-learning techniques. We demonstrate how to extract high-level building occupancy information through simple, low-cost IoT sensors and study how human activities impact a building's energy use, information that can be exploited to design energy conservation measures that reduce the building's energy consumption.

Viswam Nathan;Sudip Paul;Temiloluwa Prioleau;Li Niu;Bobak J. Mortazavi;Stephen A. Cambone;Ashok Veeraraghavan;Ashutosh Sabharwal;Roozbeh Jafari; "A Survey on Smart Homes for Aging in Place: Toward Solutions to the Specific Needs of the Elderly," vol.35(5), pp.111-119, Sept. 2018. Advances in engineering and health science have brought a significant improvement in health care and increased life expectancy. As a result, there has been a substantial growth in the number of older adults around the globe, and that number is rising. According to a United Nations report, between 2015 and 2030, the number of adults over the age of 60 is projected to grow by 56%, with the total reaching nearly 2.1 billion by the year 2050 [1]. Because of this, the cost of traditional health care continues to grow proportionally. Additionally, a significant portion of the elderly have multiple, simultaneous chronic conditions and require specialized geriatric care. However, the actual number of practicing geriatricians is four times lower than the number required to provide essential care for the existing population, and the demand-supply gap continues to grow [2]. All of these factors have created new challenges in providing suitable and affordable care for the elderly to live independently, more commonly known as aging in place.

Yuan He;Junchen Guo;Xiaolong Zheng; "From Surveillance to Digital Twin: Challenges and Recent Advances of Signal Processing for Industrial Internet of Things," vol.35(5), pp.120-129, Sept. 2018. With the recent advances in the Internet of Things (IoT), the significance of information technologies to modern industry is upgraded from purely providing surveillance-centric functions to building a comprehensive information framework of the industrial processes. Innovative techniques and concepts emerge under such circumstances, e.g., the digital twin, which essentially involves data acquisition, human-machine-product interconnection, knowledge discovery and generation, and intelligent control. Signal processing techniques are crucial to the aforementioned procedures but face unprecedented challenges when they are applied in complex industrial environments. In this article, we survey the promising industrial applications of IoT technologies and discuss the challenges and recent advances in this area. We also share our early experience with Pavatar, a real-world industrial IoT system that enables comprehensive surveillance and remote diagnosis for an ultrahigh-voltage converter station (UHVCS). Potential research challenges in building such a system are also categorized and discussed to illuminate future directions.

Eva Arias-de-Reyna;Pau Closas;Davide Dardari;Petar M. Djuric; "Crowd-Based Learning of Spatial Fields for the Internet of Things: From Harvesting of Data to Inference," vol.35(5), pp.130-139, Sept. 2018. The knowledge of spatial distributions of physical quantities, such as radio-frequency (RF) interference, pollution, geomagnetic field magnitude, temperature, humidity, audio, and light intensity, will foster the development of new context-aware applications. For example, knowing the distribution of RF interference might significantly improve cognitive radio systems [1], [2]. Similarly, knowing the spatial variations of the geomagnetic field could support autonomous navigation of robots (including drones) in factories and/or hazardous scenarios [3]. Other examples are related to the estimation of temperature gradients, detection of sources of RF signals, or percentages of certain chemical components. As a result, people could get personalized health-related information based on their exposure to sources of risks (e.g., chemical or pollution). We refer to these spatial distributions of physical quantities as spatial fields. All of the aforementioned examples have in common that learning the spatial fields requires a large number of sensors (agents) surveying the area [4], [5].

Petros Spachos;Ioannis Papapanagiotou;Konstantinos N. Plataniotis; "Microlocation for Smart Buildings in the Era of the Internet of Things: A Survey of Technologies, Techniques, and Approaches," vol.35(5), pp.140-152, Sept. 2018. Microlocation plays a key role in the transformation of traditional buildings into smart infrastructure. Microlocation is the process of locating any entity with a very high accuracy, possibly in centimeters. Such technologies require high detection accuracy, energy efficiency, wide reception range, low cost, and availability. In this article, we provide insights into various microlocation-enabling technologies, techniques, and services and discuss how they can accelerate the incorporation of the Internet of Things (IoT) in smart buildings. We cover the challenges and examine some signal processing filtering techniques such that microlocation-enabling technologies and services can be thoroughly integrated with an IoT-equipped smart building. An experiment with Bluetooth Low-Energy (BLE) beacons used for microlocation is also presented.
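A common building block in BLE-beacon microlocation is inverting a log-distance path-loss model to turn received signal strength into a range estimate. The sketch below is a generic textbook version, not the experimental setup of the article; the 1 m reference power and path-loss exponent are illustrative defaults that would require per-deployment calibration.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model to estimate range in meters.

    tx_power_dbm is the RSSI measured at 1 m from the beacon and
    path_loss_exp characterizes the environment; both defaults are
    illustrative, not calibrated figures from the article.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Smoothing raw RSSI (here, a simple average over recent readings) before
# inversion is one elementary instance of the filtering the survey discusses.
readings = np.array([-63.0, -61.5, -64.2, -62.8])
print(rssi_to_distance(readings.mean()))   # estimated range, ~1.6 m
```

Ranges from several beacons would then feed a trilateration or filtering stage to produce the actual position fix.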

Moe Z. Win;Florian Meyer;Zhenyu Liu;Wenhan Dai;Stefania Bartoletti;Andrea Conti; "Efficient Multisensor Localization for the Internet of Things: Exploring a New Class of Scalable Localization Algorithms," vol.35(5), pp.153-167, Sept. 2018. In the era of the Internet of Things (IoT), efficient localization is essential for emerging mass-market services and applications. IoT devices are heterogeneous in signaling, sensing, and mobility, and their resources for computation and communication are typically limited. Therefore, to enable location awareness in large-scale IoT networks, there is a need for efficient, scalable, and distributed multisensor fusion algorithms. This article presents a framework for designing network localization and navigation (NLN) for the IoT. Multisensor localization and operation algorithms developed within NLN can exploit spatiotemporal cooperation, are suitable for arbitrary, large network sizes, and rely only on an information exchange among neighboring devices. The advantages of NLN are evaluated in a large-scale IoT network with 500 agents. In particular, because of multisensor fusion and cooperation, the presented network localization and operation algorithms can provide attractive localization performance and reduce communication overhead and energy consumption.

Matthew Stamm;Paolo Bestagini;Lucio Marcenaro;Patrizio Campisi; "Forensic Camera Model Identification: Highlights from the IEEE Signal Processing Cup 2018 Student Competition [SP Competitions]," vol.35(5), pp.168-174, Sept. 2018. Determining the make and model of the camera that captured an image has been an important research area in information forensics for more than a decade [1]-[3] (see Figure 1). Information about which type of camera captured an image can be used to help determine or verify the origin of an image and can form an important piece of evidence in some scenarios, such as analyzing images involved in child exploitation investigations. While metadata may contain information about an image's source camera, metadata is both easy to falsify and frequently missing from an image. As a result, signal processing researchers have developed information forensic algorithms that can exploit traces intrinsically present in the image itself.

Pragnan Chakravorty; "What Is a Signal? [Lecture Notes]," vol.35(5), pp.175-177, Sept. 2018. After decades of advances in signal processing, this article goes back to square one, to when the word signal was first defined, and investigates whether all is well with this stepping stone of the discipline.

Yuejie Chi; "Low-Rank Matrix Completion [Lecture Notes]," vol.35(5), pp.178-181, Sept. 2018. Imagine one observes a small subset of entries in a large matrix and aims to recover the entire matrix. Without a priori knowledge of the matrix, this problem is highly ill-posed. Fortunately, data matrices often exhibit low-dimensional structures that can be used effectively to regularize the solution space. The celebrated effectiveness of principal component analysis (PCA) in science and engineering suggests that most variability of real-world data can be accounted for by projecting the data onto a few directions known as the principal components. Correspondingly, the data matrix can be modeled as a low-rank matrix, at least approximately. Is it possible to complete a partially observed matrix if its rank, i.e., its maximum number of linearly independent row or column vectors, is small?
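The question invites a concrete experiment. The following sketch completes a synthetic low-rank matrix with a SoftImpute-style iteration, alternately filling missing entries with the current estimate and soft-thresholding singular values, which is one standard surrogate for nuclear-norm-regularized completion. The rank, sampling ratio, and threshold are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth rank-2 matrix; observe ~40% of its entries.
m, n, r = 50, 40, 2
M_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.4

def soft_impute(M_obs, mask, tau=1.0, iters=200):
    """Alternate imputing missing entries and shrinking singular values.

    Soft-thresholding the spectrum is the proximal step for the nuclear
    norm, the convex surrogate for rank used throughout this literature.
    """
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)          # keep observed entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink the spectrum
    return X

X = soft_impute(M_true * mask, mask)
err = np.linalg.norm(X - M_true) / np.linalg.norm(M_true)
print(f"relative recovery error: {err:.3f}")   # small when rank << min(m, n)
```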

* "[Dates Ahead[Name:_blank]]," vol.35(5), pp.182-182, Sept. 2018.* Presents the SPS society calendar of upcoming events

Martin Haardt;Christoph Mecklenbrauker;Peter Willett; "Highlights from the Sensor Array and Multichannel Technical Committee: Spotlight on the IEEE Signal Processing Society Technical Committees [In the Spotlight]," vol.35(5), pp.183-185, Sept. 2018. Presents the mission and work of the SPS Sensor Array and Multichannel Technical Committee.

Robert W. Heath;Nuria González-Prelcic; "Convoluted [Humor]," vol.35(5), pp.186-186, Sept. 2018. Various puzzles, quizzes, games, humorous definitions, or mathematical diversions that should engage the interest of readers.

* "[For Your Consideration[Name:_blank]]," vol.35(5), pp.186-186, Sept. 2018.* Presents corrections to the paper, “Deep convolutional neural models for picture-quality prediction,” (Kim, J., et al), IEEE Signal Process. Mag., vol. 34, no. 6, pp. 130–141, Nov. 2017.

Wei Yu;Joakim Jalden; "Perspectives in Signal Processing for Communications and Networking: Spotlight on the IEEE Signal Processing Society Technical Committees [In the Spotlight]," vol.35(5), pp.188-183, Sept. 2018. Presents the mission and work of the Signal Processing for Communications and Networking Technical Committee (SPCOM-TC).

IET Signal Processing - new TOC (2018 September 20) [Website]

Mengmeng Liao;Xiaodong Gu; "Hybrid classification approach using extreme learning machine and sparse representation classifier with adaptive threshold," vol.12(7), pp.811-818, 9 2018. Here, the authors propose a hybrid classification approach using an extreme learning machine (ELM) and a sparse representation classifier (SRC) with an adaptive threshold, which they call ATELMSRC. ATELMSRC can adaptively adjust the threshold so that more test images are correctly classified by the ELM than with ELMSRC, which not only greatly reduces the classification time but also improves the classification accuracy. In addition, the primal augmented Lagrangian method is used in ATELMSRC to speed up the solution of the $\ell_1$-norm problem, which further accelerates classification. Experimental results on the USPS handwritten digits data set and the UMIST face data set show that the total classification time of the authors' ATELMSRC is very short for large data sets: only 1/310 that of SRC, 1/805 that of extended SRC (ESRC), and 1/41 that of ELMSRC. Meanwhile, the classification accuracy of the authors' ATELMSRC is 97.80% on the USPS handwritten digits data set and 99.27% on the UMIST face data set, higher than those of ELM, SRC, ESRC, and ELMSRC.

Saeed Mohammadzadeh;Osman Kukrer; "Adaptive beamforming based on theoretical interference-plus-noise covariance and direction-of-arrival estimation," vol.12(7), pp.819-825, 9 2018. A simple and effective adaptive beamforming technique is proposed for uniform linear arrays, based on projection processing for covariance matrix construction and desired-signal steering vector estimation. The optimal minimum variance distortionless response beamformer is closely approached by approximating the interference-plus-noise covariance matrix through the eigenvalue decomposition of the received signal's covariance matrix. Moreover, the direction-of-arrival (DOA) of the desired signal is estimated by maximising the beamformer output power in a certain angular sector. In particular, the proposed beamformer utilises the aforementioned DOA to estimate the desired signal's steering vector under general steering vector mismatches. Simulation results indicate the better performance of the proposed method in the presence of different errors compared with some recently introduced techniques.
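For context, the textbook minimum variance distortionless response (Capon) solution the authors approximate is $w = R^{-1}a/(a^{H}R^{-1}a)$, with $R$ the interference-plus-noise covariance matrix and $a$ the desired-signal steering vector. The sketch below implements this closed form on a toy uniform linear array with one interferer; it is a generic illustration, not the proposed projection-based construction, and every scenario parameter is invented.

```python
import numpy as np

def steering_vector(theta_deg, n_elems, spacing=0.5):
    """ULA steering vector; element spacing is in wavelengths."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elems))

def mvdr_weights(R, a):
    """Textbook MVDR/Capon solution w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy scenario: desired signal from 0 deg, strong interferer at 40 deg.
n = 10
a_sig = steering_vector(0, n)
a_int = steering_vector(40, n)
rng = np.random.default_rng(2)
noise = 0.1 * (rng.standard_normal((n, 500)) + 1j * rng.standard_normal((n, 500)))
X = 10 * np.outer(a_int, rng.standard_normal(500)) + noise   # no desired signal
R = X @ X.conj().T / 500        # sample interference-plus-noise covariance

w = mvdr_weights(R, a_sig)
print(abs(w.conj() @ a_sig), abs(w.conj() @ a_int))  # ~1 toward signal, ~0 toward interferer
```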

Lei Gao;Zhongliang Jing;Minzhe Li;Han Pan; "Robust adaptive filtering for extended target tracking with heavy-tailed noise in clutter," vol.12(7), pp.826-835, 9 2018. A robust adaptive filter is proposed by using variational Bayesian (VB) inference for extended target tracking with heavy-tailed noise in clutter. An explicit distribution based on Student's t-distribution is used to describe the non-Gaussian heavy-tailed noise. The need for arbitrary decisions is thereby eliminated, and robust operation, less sensitive to extreme observations, is provided. Moreover, an approximate measurement update using the analytical techniques of VB methods is derived to approximate the posterior states at each time step. To obtain a more accurate result, clutter estimation is also integrated, considering the uncertainty of target tracking in a cluttered environment. The performance of the proposed algorithm is demonstrated with simulated data.

Junyi Zuo;Binhua Yan;Wei Lian; "Forward–backward particle smoother for non-linear systems with one-step random measurement delay," vol.12(7), pp.836-843, 9 2018. This study is concerned with the state smoothing problem for a class of non-linear discrete-time stochastic systems with one-step random measurement delay (ORMD). The main contribution is that the forward–backward particle smoothing scheme is successfully extended to the systems with ORMD. First, the particle filter, specially designed to deal with ORMD, is implemented to obtain the filtering distribution and the joint distribution of state history. Then, by marginalising the joint distribution, the one-step fixed-lag smoothing distribution can be obtained. Finally, based on the forward–backward smoothing scheme, the particle approximation of the fixed-interval smoothing distribution can be obtained by re-weighting the particles which have been used in the foregoing one-step fixed-lag smoothing distribution. Simulation results demonstrate the effectiveness of the proposed smoother.

Hongyi Li;Chaojie Wang;Di Zhao; "Filter bank properties of envelope modified EMD methods," vol.12(7), pp.844-851, 9 2018. Envelope-modified versions of the empirical mode decomposition (EMD) method, such as the B-spline interpolation-based EMD (B-EMD) method and the cardinal spline interpolation-based EMD (C-EMD) method, have been proposed recently for the purpose of improving its effectiveness. To shed further light on their performance, the behaviours of these EMD-type methods in the presence of white Gaussian noise are investigated in this study through extensive numerical experiments. It turns out that, like the EMD method, the envelope-modified EMD methods also act essentially as filter banks. However, the spectra of the first several intrinsic mode functions of the B-EMD method overlap less than those of the EMD and C-EMD methods, which indicates that the B-EMD method is better able to alleviate the mode-mixing problem for signals with higher frequencies. On the other hand, the C-EMD method is shown to perform better than the EMD and B-EMD methods at separating tones with lower frequencies.

Hongyi Li;Shengyu Chen;Shaofeng Xu;Ziming Song;Jiaxin Chen;Di Zhao; "EMI signal feature enhancement based on extreme energy difference and deep auto-encoder," vol.12(7), pp.852-856, 9 2018. To enhance features of different electromagnetic interference (EMI) signals, which are significant for further feature extraction and pattern recognition, the authors propose an EMI signal feature enhancement method based on extreme energy difference and a deep auto-encoder. Experimental results show that this method can effectively enhance features of EMI signals and improve recognition accuracy.

Paweł Poczekajło;Krzysztof Wawryn; "Algorithm for realisation, parameter analysis, and measurement of pipelined separable 3D finite impulse response filters composed of Givens rotation structures," vol.12(7), pp.857-867, 9 2018. A novel algorithm for the realisation of an orthogonal digital system performing three-dimensional filtering for a separable transfer function is presented in this study. The algorithm is based on a state-space approach and consists of synthesis and implementation algorithms. A structure composed of Givens rotators and delay elements is obtained. A coordinate rotation digital computer (CORDIC) algorithm has been used to implement the Givens rotators in a pipelined structure. The obtained structure has been realised on a field-programmable gate array (FPGA) chip and its performance has been evaluated. It achieved good finite-precision behaviour, low sensitivity of the filter amplitude response to the filter coefficients, less noise, a better impulse response, and lower FPGA chip occupation. To verify the obtained results, they have been compared to the results obtained using a direct-form structure consisting of adders, multipliers, and delay elements.
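A Givens rotator is the elementary 2x2 orthogonal block from which such structures are cascaded. The sketch below shows the standard rotation that zeroes one component of a 2-vector; it illustrates the building block only, not the authors' synthesis procedure or CORDIC pipeline.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that c*a + s*b = r and -s*a + c*b = 0,
    i.e. the rotation that concentrates the pair's energy in one entry."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def apply_givens(x):
    """Rotate a 2-vector so all its energy lands in the first component.
    Cascades of such rotators, interleaved with delay elements, form the
    orthogonal filter structures described in the article."""
    c, s = givens(x[0], x[1])
    G = np.array([[c, s],
                  [-s, c]])      # orthogonal: G.T @ G = I
    return G @ x

print(apply_givens(np.array([3.0, 4.0])))   # -> [5. 0.]
```

Because each stage is orthogonal, the cascade preserves signal energy, which is the source of the low coefficient sensitivity and noise figures reported above.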

Xingyu He;Ningning Tong;Xiaowei Hu;Weike Feng; "Radar pulse completion and high-resolution imaging with SAs based on reweighted ANM," vol.12(7), pp.868-872, 9 2018. In practice, missing array elements or transmission errors lead to incomplete data, which is called sparse aperture (SA) data. In inverse synthetic aperture radar (ISAR) imaging, this large-gapped data produces poor-quality ISAR images when the traditional range–Doppler algorithm is used. Recently, imaging algorithms based on compressed sensing (CS) theory have alleviated this problem effectively, because CS theory indicates that sparse signals can be reconstructed from incomplete measurements. However, the basis mismatch problem in CS-based algorithms may degrade the ISAR image. In this study, a reweighted atomic-norm minimisation (RAM)-based imaging method is proposed. RAM is a gridless sparse method that can enhance sparsity and resolution. RAM formulates an optimisation problem and iteratively carries out ANM with a sound reweighting strategy. By reformulating RAM as a semi-definite programme, the echoes with full aperture (FA) are reconstructed from SA data. After that, ISAR imaging with the reconstructed FA data is achieved via the conventional azimuth compression method. Simulated and real data results demonstrate the effectiveness and superiority of the proposed method.

Rania Chakroun;Mondher Frikha;Leila Beltaïfa zouari; "New approach for short utterance speaker identification," vol.12(7), pp.873-880, 9 2018. Recent advances in the speaker recognition (SR) field have produced remarkably accurate and outperforming algorithms. However, their performance drastically degrades when only a sparse amount of data is available. Recognising a speaker's identity when only a small amount of speech data is available for testing and training remains a key consideration, since many real-world applications often have access only to speech data of limited duration. In this study, the authors present a new improved approach, based on new information detected from the speech signal, to improve the task of automatic speaker identification. In doing so, they highlight how detection of the speaker's dialect can be exploited to address the research problem of short utterance SR. Results obtained with the new regional system are presented, providing a comparison between this system and state-of-the-art systems for the speaker identification task.

Natacha Ruchaud;Jean-Luc Dugelay; "JPEG-based scalable privacy protection and image data utility preservation," vol.12(7), pp.881-887, 9 2018. Here, the authors propose a scalable scrambling algorithm operating in the discrete cosine transform (DCT) domain within the JPEG codec. The goal is to ensure that people are no longer identifiable while keeping their actions still understandable, regardless of the image size. For each 8 × 8 block, the authors encrypt the DCT coefficients to protect the data and shift them towards the high frequencies to make the DC position available. Whereas the encrypted coefficients appear as noise in the protected image, the DC position is dedicated to restoring some of the original information (e.g. the average colour associated with one or a group of blocks). The proposed approach automatically sets the value of each DC according to the size of the region of interest in order to keep the level of privacy protection strong enough. Compared with existing methods, the proposed privacy protection framework provides flexibility concerning the appearance of the protected version, which makes it stronger at protecting privacy even under potential attacks. Moreover, the method does not cause excessive perturbation for the recognition of the actions and only slightly decreases the efficiency of the JPEG standard.

Mohammad Javad Rezaei;Mohammad Reza Mosavi; "Hybrid anti-jamming approach for kinematic global positioning system receivers," vol.12(7), pp.888-895, 9 2018. In this study, a new hybrid anti-jamming system is proposed for kinematic global positioning system receivers. The proposed system employs a short-time Fourier transform (STFT)-based pre-correlation block to guarantee that the receiver can acquire at least four satellites in jamming environments. It also employs a discrete wavelet transform-based denoising block in the navigation unit of the receiver to increase positioning accuracy, which was degraded due to the jamming and also due to the movement of the receiver. Simulation results demonstrate that the proposed system has a better anti-jamming performance compared with previous methods. They show that the average positioning accuracies of the proposed system are 47, 45, and 43% better than the standard STFT-based mitigation method, wavelet-packets transform (WPT)-assisted filter, and WPT-based hybrid system, respectively.
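One generic form of STFT-domain jamming mitigation is to blank time-frequency bins dominated by a strong narrowband jammer before further processing. The sketch below shows that idea on synthetic data; it is a simplified stand-in for the pre-correlation block described above, with an arbitrary threshold rule and invented signal parameters.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(3)
signal = 0.1 * rng.standard_normal(t.size)     # weak wideband signal of interest
jammer = np.sin(2 * np.pi * 150 * t)           # strong narrowband jamming tone
x = signal + jammer

# STFT-domain excision: blank bins whose magnitude greatly exceeds the
# median (the jammer concentrates in a few bins), then resynthesize.
f, tt, Z = stft(x, fs=fs, nperseg=128)
thresh = 5 * np.median(np.abs(Z))
Z[np.abs(Z) > thresh] = 0
_, x_clean = istft(Z, fs=fs, nperseg=128)

print(np.std(x), np.std(x_clean))   # jammer power largely removed
```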

Shoba Sivapatham;Rajavel Ramadoss; "Performance improvement of monaural speech separation system using image analysis techniques," vol.12(7), pp.896-906, 9 2018. This research work proposes an image analysis-based algorithm to enhance the time–frequency (TF) mask obtained in the initial segmentation of a CASA-based monaural speech separation system, improving speech quality and intelligibility. It consists of labelling the initial segmentation mask, boundary extraction, active pixel detection, and elimination of the non-active pixels related to noise. In labelling, the TF mask obtained is labelled as a periodicity pixel (P) matrix and a non-periodicity pixel (NP) matrix. Next, speech boundaries are created by connecting all the possible nearby P and NP matrix entries. Some speech boundaries may include noisy TF units as holes; these holes are treated using the proposed algorithm. The proposed algorithm is evaluated with quality and intelligibility measures such as the signal-to-noise ratio (SNR), perceptual evaluation of speech quality, $P_{\rm EL}$, $P_{\rm NR}$, the coherence speech intelligibility index (CSII), the normalised covariance metric (NCM), and short-time objective intelligibility (STOI). The experimental results show that the proposed algorithm improves speech quality by increasing the SNR by an average of 9.91 dB, reduces $P_{\rm NR}$ by an average of 25.6%, and also improves speech intelligibility in terms of CSII, NCM, and STOI when compared with the input noisy speech mixture.

Pushpraj Tanwar;Ajay Somkuwar; "Hard component detection of transient noise and its removal using empirical mode decomposition and wavelet-based predictive filter," vol.12(7), pp.907-916, 9 2018. In this study, the authors propose a novel method that reduces weapon sound (WS) interference in military instruction sound (IS). This method comprises the following steps. In the first step, the mixed signal is split into its basic constituents using principal component analysis. The second step provides the intrinsic mode functions using the empirical mode decomposition method. In the third step, the fundamental frequency component is extracted by cepstrum analysis. The localisation of the WS is done using the grid search window-dominant signal subspace-based method in the fourth step. Multiscale prediction and filtering using Daubechies wavelet-based prediction are applied in the final step. The proposed method provides better results than existing baseline methods for various signal-to-noise ratios. Simulations have also been performed on recorded WS-corrupted IS signals from three different distances for various input signal-to-weapon-noise ratios, to verify the accuracy of the proposed method.

Yunlong Wang;Ying Wu;Ding Wang;Yuan Shen; "TDOA and FDOA based source localisation via importance sampling," vol.12(7), pp.917-929, 9 2018. Source localisation with time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA) measurements is of great interest since it can provide location information with high accuracy. Although the maximum-likelihood (ML) estimator exhibits excellent asymptotic properties, the non-linearity and non-convexity of the ML estimator require much computational resource. In this study, source localisation with TDOA and FDOA measurements is developed via Monte Carlo importance sampling (IS). In particular, optimal performance can be guaranteed by constructing an optimal importance function whose covariance is equivalent to the inverse of the Fisher information matrix. The derived variance of the proposed estimator shows good consistency with the theoretical lower bound. The improved performance of the proposed method is due to the optimal selection of the importance function, and it can converge to the global optimum with a large number of samples. Although an initial estimate of the source location is required, the proposed method is robust to this a priori knowledge via IS. Moreover, the scenario considering sensor location uncertainties is analysed and the corresponding IS-based solution is derived. Simulation results show that the proposed methods can achieve the Cramér–Rao lower bound at moderate noise levels and are superior to several existing methods.
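To make the IS mechanics concrete, the sketch below runs self-normalized importance sampling on a toy 2-D range-based localization problem: samples are drawn from a Gaussian importance function centered on a coarse initial estimate and reweighted by the likelihood over the proposal density. It assumes a flat prior and plain range measurements rather than TDOA/FDOA, so it illustrates only the estimator's structure, not the paper's optimal importance function.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sensor positions and noisy range measurements from an unknown source.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])
sigma = 0.1
ranges = np.linalg.norm(sensors - source, axis=1) + sigma * rng.standard_normal(4)

def log_lik(p):
    pred = np.linalg.norm(sensors - p, axis=1)
    return -0.5 * np.sum((ranges - pred) ** 2) / sigma**2

# Gaussian importance function centered on a coarse initial estimate.
init, q_std = np.array([4.0, 6.0]), 1.5
samples = init + q_std * rng.standard_normal((20000, 2))

# Self-normalized weights: likelihood (flat prior) over proposal density.
log_q = -0.5 * np.sum((samples - init) ** 2, axis=1) / q_std**2
log_w = np.array([log_lik(p) for p in samples]) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()
print(w @ samples)   # posterior-mean estimate, close to the source at (3, 7)
```

The paper's contribution is, roughly, choosing the importance function so that its covariance matches the inverse Fisher information matrix, which keeps the weights well behaved even with sharp likelihoods.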

Xiaogang Huang;Jingling Zhang;Meilei Lv;Gang Shen;Jianhua Yang; "Realising the decomposition of a multi-frequency signal under the coloured noise background by the adaptive stochastic resonance in the non-linear system with periodic potential," vol.12(7), pp.930-936, 9 2018. The authors investigate a multi-frequency signal that the traditional empirical mode decomposition (EMD) method fails to decompose. Submerging the multi-frequency signal in coloured noise further increases the difficulty of decomposition; as a result, even the combination of adaptive stochastic resonance (SR) in the classic bistable system and EMD fails to decompose this noisy signal. A method combining adaptive SR in the periodic-potential system with EMD is then put forward to realise the decomposition. Meanwhile, the random particle swarm optimisation algorithm is applied to find the configuration at which the signal-to-noise ratio attains its maximum value. Different simulation results verify the effectiveness of the proposed method, which might be useful in dealing with a range of signal processing problems.

IEEE Transactions on Geoscience and Remote Sensing - new TOC (2018 September 20) [Website]

* "Front Cover," vol.56(9), pp.C1-C1, Sept. 2018.* Presents the front cover for this issue of the publication.

* "IEEE Transactions on Geoscience and Remote Sensing publication information," vol.56(9), pp.C2-C2, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of contents," vol.56(9), pp.4971-5588, Sept. 2018.* Presents the table of contents for this issue of the publication.

Xingrui Yu;He Zhang;Chunbo Luo;Hairong Qi;Peng Ren; "Oil Spill Segmentation via Adversarial $f$-Divergence Learning," vol.56(9), pp.4973-4988, Sept. 2018. We develop an automatic oil spill segmentation method in terms of $f$-divergence minimization. We exploit the $f$-divergence for measuring the disagreement between the distributions of ground-truth and generated oil spill segmentations. To render the optimization tractable, we minimize a tight lower bound of the $f$-divergence by adversarially training a regressor and a generator, which are structured as separate deep neural networks of different forms. The generator aims at producing accurate oil spill segmentations, while the regressor characterizes discriminative distributions with respect to true and generated oil spill segmentations. It is the interplay between the generator net and the regressor net that achieves the minimum of the maximum lower bound of the $f$-divergence. The adversarial strategy enhances the representational powers of both the generator and the regressor and avoids requesting large amounts of labeled data for training the deep network parameters. In addition, the trained generator net enables automatic oil spill detection that does not require manual initialization. Benefiting from the comprehensiveness of the $f$-divergence for characterizing diversified distributions, our framework can accurately segment variously shaped oil spills in noisy synthetic aperture radar images. Experimental results validate the effectiveness of the proposed oil spill segmentation framework.

Alejandro Monsivais-Huertero;Pang-Wei Liu;Jasmeet Judge; "Phenology-Based Backscattering Model for Corn at L-Band," vol.56(9), pp.4989-5005, Sept. 2018. In this paper, we developed and evaluated a phenology-based coherent scattering model to estimate terrain backscatter at the L-band for growing corn. The scattering model accounted for combined effects from periodicity in soil and vegetation, and changes in plant structure and phenology. The model estimates were compared with observations during the two growing seasons in North Central Florida. The unbiased average root-mean-square (rms) differences between the model and observations decreased from 5 to 1.31 dB when these combined effects were included. During the early stage, direct scattering from soil was the primary scattering mechanism, and as the vegetation increased, the interactions between stems and soil became the dominant scattering mechanism. The most sensitive soil parameters were moisture content and rms height, and vegetation parameters were the widths of stems, leaves, and ears, and the stem water content. This paper demonstrates that it is necessary to consider periodicity and plant structural effects in algorithms to retrieve realistic soil moisture in agricultural terrain.

Hui Bi;Guoan Bi;Bingchen Zhang;Wen Hong; "Complex-Image-Based Sparse SAR Imaging and its Equivalence," vol.56(9), pp.5006-5014, Sept. 2018. Using sparse signal processing to replace matched filtering (MF) in synthetic aperture radar (SAR) imaging has shown significant potential to improve image quality. Due to the huge computational cost needed, it is difficult to apply the conventional observation-matrix-based sparse SAR imaging method to large-scene reconstruction. The azimuth-range decouple method is able to minimize the computational complexity and achieve image performance similar to that obtained by the observation-matrix-based algorithm. However, there still exist two difficult problems in sparse SAR imaging, i.e., real-time processing and lack of raw data. To solve these problems, this paper presents a novel complex-image-based sparse SAR imaging method. It is found that if the input MF-recovered SAR complex image is obtained via fully sampled raw data, the proposed method can achieve a high-resolution image identical to that obtained by the azimuth-range decouple algorithm. The computational complexity is also decreased to the same order as that of MF, which makes real-time sparse SAR imaging possible. In addition, it should be noted that even without raw data, the proposed method can still obtain impressive sparse recovery performance using only the available complex image. Performance analysis and experimental results on real data validate the proposed method.

Shuang-Xi Zhang;Meng-Dao Xing;Ya-Li Zong; "A Novel Weighted Doppler Centroid Estimation Approach Based on Electromagnetic Scattering Model for Multichannel in Azimuth HRWS SAR System," vol.56(9), pp.5015-5034, Sept. 2018. Similar to the conventional squint mode synthetic aperture radar (SAR) imaging processing, the estimation of the Doppler centroid is one key problem for the low-squint-mode (within the range of [−5°, 5°]) multichannel in azimuth high-resolution and wide-swath (HRWS) SAR system. In this paper, based on the electromagnetic scattering model, a novel weighted Doppler centroid estimation approach is proposed for the multichannel in azimuth HRWS (MC-HRWS) SAR system. First, Maxwell's equations are employed to derive the echo model for the multichannel SAR system. Then, an improved backscattering model is presented based on the small perturbation method and the available echo model, which is adopted to produce the weights for Doppler centroid estimation. More importantly, in order to improve the precision of Doppler centroid estimation, a weighted ambiguity-free Doppler centroid estimation approach is proposed, where the range-variant characteristic of the Doppler centroid is employed to resolve the ambiguity number of the Doppler centroid, and the echoes from different range bins with different signal-to-noise ratios are considered. In addition, the Cramér–Rao lower bound of the estimated Doppler centroid is also discussed in this paper. The effectiveness of the proposed Doppler centroid estimation approach is verified via simulated and real measured low-squint-mode MC-HRWS SAR data.

Yi Zhong;Yang Yang;Xi Zhu;Yan Huang;Eryk Dutkiewicz;Zheng Zhou;Ting Jiang; "Impact of Seasonal Variations on Foliage Penetration Experiment: A WSN-Based Device-Free Sensing Approach," vol.56(9), pp.5035-5045, Sept. 2018. Foliage penetration (FOPEN) has been found to be a critical mission for a variety of applications, ranging from surveillance to military operations. Recently, an emerging technology, namely wireless sensor network (WSN)-based device-free sensing (DFS), has been introduced to the domain of FOPEN. This technology utilizes only radio-frequency signals for target detection and classification; thus, no additional hardware is required beyond a wireless transceiver. Although the feasibility of using this technology for human detection indoors has been explored to some extent, it is questionable whether the same technology transfers outdoors. As far as FOPEN is concerned, the impact of seasonal variations on detection accuracy can be severe. To address this concern, in this paper, an experiment is conducted across four seasons, and how to ensure reasonable detection accuracy under seasonal variations is intensively investigated. To fully evaluate the potential of using WSN-based DFS for FOPEN, an impulse-radio ultrawideband technology-based prototype is used to collect data samples in different seasons. Unlike the conventional approach based on a combination of statistical properties of received-signal strength and a support vector machine, this approach adopts two special measures for performance enhancement. One measure is to use a higher order cumulant (HOC) algorithm for feature extraction, so that the impact on detection accuracy due to unwanted clutter can be minimized. The other is to determine the optimal parameters of the classifier by means of a flower pollination algorithm. Consequently, the adverse effects on detection accuracy due to variations of weather conditions across the four seasons can be accommodated. According to the experimental results, the average classification accuracy of the presented approach can be improved by at least 20% under all seasons with an ensured robustness.

Lin Zhu;Yushi Chen;Pedram Ghamisi;Jón Atli Benediktsson; "Generative Adversarial Networks for Hyperspectral Image Classification," vol.56(9), pp.5046-5063, Sept. 2018. A generative adversarial network (GAN) usually contains a generative network and a discriminative network in competition with each other. The GAN has shown its capability in a variety of applications. In this paper, the usefulness and effectiveness of GAN for classification of hyperspectral images (HSIs) are explored for the first time. In the proposed GAN, a convolutional neural network (CNN) is designed to discriminate the inputs and another CNN is used to generate so-called fake inputs. The aforementioned CNNs are trained together: the generative CNN tries to generate fake inputs that are as real as possible, and the discriminative CNN tries to classify the real and fake inputs. This kind of adversarial training improves the generalization capability of the discriminative CNN, which is particularly important when the training samples are limited. Specifically, we propose two schemes: 1) a well-designed 1D-GAN as a spectral classifier and 2) a robust 3D-GAN as a spectral–spatial classifier. Furthermore, the generated adversarial samples are used with real training samples to fine-tune the discriminative CNN, which improves the final classification performance. The proposed classifiers are carried out on three widely used hyperspectral data sets: Salinas, Indian Pines, and Kennedy Space Center. The obtained results reveal that the proposed models provide competitive results compared to the state-of-the-art methods. In addition, the proposed GANs open new opportunities in the remote sensing community for the challenging task of HSI classification and also reveal the huge potential of GAN-based methods for the analysis of such complex and inherently nonlinear data.
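The adversarial training loop at the heart of a GAN is compact enough to sketch. The PyTorch code below alternates discriminator and generator updates on stand-in "spectra"; it uses small fully connected networks rather than the CNNs proposed in the paper, and every shape and hyperparameter is illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_bands = 64
real_data = torch.randn(512, n_bands) * 0.3 + 1.0   # stand-in for real spectra

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, n_bands))
D = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(500):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, 16))

    # Discriminator update: push real samples toward 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In the paper's setting, the discriminator additionally carries class outputs, so the same adversarial pressure that sharpens the real/fake boundary also regularizes the classifier when labeled samples are scarce.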

Malek G. M. Hussain; "Performance Analysis of Space-Time Array Processing Using Ultrawideband-Throb Signals for High-Resolution Imaging," vol.56(9), pp.5064-5082, Sept. 2018. In this paper, performance analysis is conducted to demonstrate that the combination of employing ultrawideband (UWB) throb signals and array beamforming can enhance the resolution performance as well as the robustness of imaging radar systems, i.e., airborne and spaceborne synthetic aperture radar. A mathematical model for the response of an array beamforming system, with systematic faults and channel instability, to received UWB-throb signals is derived. The effect of parametric random errors associated with this array beamforming system on the processing gain is analyzed for the UWB-throb signal and the linear FM chirp signal. Space–time resolution function, an essential tool for waveform design, is derived for the UWB-throb signal corrupted by parametric random errors. By using the Monte Carlo computer simulation technique, plots are generated for the space–time resolution function, temporal profile, array factor, and energy beampattern, for different values of the signals’ design parameters and the statistical parameters of the random errors. The plots reveal the degradations in the beamforming performance due to the presence of parametric random errors, i.e., drop in the central peak amplitude, rise of sidelobe level, generation of spurious sidelobes, and beam-shape loss affecting beamwidth and beam-pointing accuracy. In the presence of parametric random errors, the robustness of array beamforming using the UWB-throb signal is superior to that achievable by array beamforming using the linear-FM chirp signal. The resolution angle obtained from the energy pattern of the UWB-throb signal provides a tradeoff between signal design parameters and array size for improving the angular-resolution capability. Such a tradeoff is desirable in practice to achieve a balance between system complexity and cost effectiveness.

Francesc Aulí-Llinàs;Michael W. Marcellin;Victor Sanchez;Joan Bartrina-Rapesta;Miguel Hernández-Cabronero; "Dual Link Image Coding for Earth Observation Satellites," vol.56(9), pp.5083-5096, Sept. 2018. The conventional strategy to download images captured by satellites is to compress the data on board and then transmit them via the downlink. It often happens that the capacity of the downlink is too small to accommodate all the acquired data, so the images are trimmed and/or transmitted through lossy regimes. This paper introduces a coding system that increases the amount and quality of the downloaded imaging data. The main insight of this paper is to use both the uplink and the downlink to code the images. The uplink is employed to send reference information to the satellite so that the onboard coding system can achieve higher efficiency. This reference information is computed on the ground, possibly employing extensive data and computational resources. The proposed system is called dual link image coding. As it is devised in this paper, it is suitable for Earth observation satellites with polar orbits. Experimental results obtained for data sets acquired by the Landsat 8 satellite indicate significant coding gains with respect to conventional methods.

Yuehong Chen;Yong Ge;Yu Chen;Yan Jin;Ru An; "Subpixel Land Cover Mapping Using Multiscale Spatial Dependence," vol.56(9), pp.5097-5106, Sept. 2018. This paper proposes a new subpixel mapping (SPM) method based on multiscale spatial dependence (MSD). At the beginning, it adopts object-based and pixel-based soft classifications to generate the class proportions within each object and each pixel, respectively. Then, the object-scale spatial dependence of land cover classes is extracted from the class proportions of objects, and the combined spatial dependence at both pixel scale and subpixel scale is obtained from the class proportions of pixels. Furthermore, these spatial dependences are fused as the MSD for each subpixel. Last, a linear optimization model on each object is built to determine where the land cover classes spatially distribute within each mixed object at subpixel scales. Three experiments on two synthetic images and a real remote sensing image are carried out to evaluate the effectiveness of MSD. The experimental results show that MSD performed better than four existing SPM methods by generating less isolated classified pixels than those generated by three pixel-based SPM methods and more land cover local details than that generated by an object-based SPM method. Hence, MSD provides a valuable solution to producing land cover maps at subpixel scales.

Deepak Gopalakrishnan;Anantharaman Chandrasekar; "Improved 4-DVar Simulation of Indian Ocean Tropical Cyclones Using a Regional Model," vol.56(9), pp.5107-5114, Sept. 2018. The performance of the 4-D variational (4-DVar) data assimilation system over the 3-D variational assimilation system is investigated for the simulation of tropical cyclones (TCs), using the weather research and forecasting model. Two TCs (cyclone Thane and cyclone Hudhud) that formed over the North Indian Ocean have been considered in this paper. Surface and upper air observations from Global Telecommunications Systems in combination with ocean surface winds and satellite-derived atmospheric motion vectors are assimilated cyclically at an interval of 6 h. The analysis is then integrated for a period of 72 h in the free forecast mode for both the assimilation runs. The 4-DVar run results for cyclone Thane show significant impact on the rainfall forecast, with an average equitable threat score (ETS) twice that of the 3-DVar run, while the 4-DVar run for cyclone Hudhud shows a little improvement in ETS. The average initial error in mean sea level pressure for the analysis fields of the 4-DVar run is found to be about half of the 3-DVar counterpart for the case of cyclone Hudhud. Improvement in the 4-DVar simulation of maximum surface wind speed is found to be 7% and 14% for cyclones Thane and Hudhud, respectively. Also, the 4-DVar run for cyclone Thane resulted in an average reduction of 10% in track error simulation, while the same for cyclone Hudhud revealed a little improvement.

Pierre Kokou;Philip Willemsen;Mounir Lekouara;Madani Arioua;Andreu Mora;Pieter Van den Braembussche;Emanuele Neri;Donny M. A. Aminou; "Algorithmic Chain for Lightning Detection and False Event Filtering Based on the MTG Lightning Imager," vol.56(9), pp.5115-5124, Sept. 2018. Meteosat Third Generation (MTG) is the next generation of European meteorological geostationary satellites, set to be launched in 2021. Besides ensuring continuity with Meteosat Second Generation imagery mission, the new series will feature new instruments, such as the Lightning Imager (LI), a high-speed optical detector providing near real-time lightning detection capabilities over Europe and Africa. The instrument will register events on pixels, where a lightning pulse generates a transient in the acquired radiance. In parallel, signal variations due to a number of unwanted sources, e.g., acquisition noise or jitter movement, are expected to produce false events. The challenge for on-board and on-ground processing is, thus, to discard as many false events as possible while keeping a majority of the true lightning events. This paper discusses a chain of algorithms that can be used by the LI for the detection of lightning and for the filtering of false events. Some of these algorithms have been developed in the framework of internal research and simulations conducted by the MTG team at the European Space Agency on an in-house LI simulator and therefore will not necessarily reflect the ultimate operational processing chain. The application of the chain on a representative scenario shows that 99.5% of the false events can be eliminated while keeping 83.6% of the true events, before generating the LI higher level data products. Machine learning techniques have also been studied as an alternative for on-ground event processing, and preliminary results indicate promising potential.

Antonio Pauciullo;Diego Reale;Walter Franzè;Gianfranco Fornaro; "Multi-Look in GLRT-Based Detection of Single and Double Persistent Scatterers," vol.56(9), pp.5125-5137, Sept. 2018. Persistent scatterer (PS) interferometry and, more recently, synthetic aperture radar tomography have shown themselves to be powerful tools in urban scenarios for providing 3-D point clouds in the reconstruction of buildings as well as in the monitoring of their possible slow temporal deformations. The detection of PSs represents a fundamental aspect, which in the literature has been mainly addressed at full resolution (single-look detection), thus considering only the scatterer coherence properties along the different acquisitions. In this paper, we investigate the benefits offered by the usage of multiple observation looks. Multi-look generalized likelihood ratio test detection schemes are derived and analyzed in terms of detection performance. The analysis shows that even a modest amount of multi-looking can provide a dramatic improvement in detection capability on both simulated and real data, especially in areas characterized by a low signal-to-noise ratio and in the presence of a limited number of acquisitions.

Lei Shu;Kenneth McIsaac;Gordon R. Osinski; "Learning Spatial–Spectral Features for Hyperspectral Image Classification," vol.56(9), pp.5138-5147, Sept. 2018. Combining spatial information with spectral information for classifying hyperspectral images can dramatically improve the performance. This paper proposes a simple but innovative framework to automatically generate spatial–spectral features for hyperspectral image classification. Two unsupervised learning methods, $K$-means and principal component analysis, are utilized to learn the spatial feature bases in each decorrelated spectral band. The spatial feature representations are extracted with these spatial feature bases. Then, spatial–spectral features are generated by concatenating the spatial feature representations in all/principal spectral bands. The experimental results indicate that the proposed method is flexible enough to generate rich spatial–spectral features and can outperform the other state-of-the-art methods.
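The patch-based flavor of this pipeline can be sketched for a single band. Below, small patches are clustered with K-means and each patch is then described by its distances to the learned centroids; concatenating such descriptors across bands would yield spatial-spectral features. This is a loose illustration under invented sizes, not the authors' exact feature construction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(5)
band = rng.random((64, 64))   # stand-in for one decorrelated spectral band

# Sample small patches and normalize each one (a common preprocessing step).
patches = extract_patches_2d(band, (5, 5), max_patches=2000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)

# Learn spatial "feature bases" as K-means centroids of the patches.
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(patches)

# Represent each patch by its (negated) distance to every centroid;
# repeating this per band and concatenating gives spatial-spectral features.
features = -km.transform(patches)
print(features.shape)   # (2000, 16)
```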

Antonio Cuccaro;Raffaele Solimene; "Inverse Source Problem for a Host Medium Having Pointlike Inhomogeneities," vol.56(9), pp.5148-5159, Sept. 2018. The reconstruction of a source embedded within a multipath environment, created by inserting a grid of point scatterers in the scene, is addressed. In particular, the source Fourier spectrum is assumed known, so the focus here is on the reconstruction of the spatial support. As is well documented, multipath can allow for resolution improvement. However, it also gives rise to artifacts when a backpropagation-like imaging is adopted. In this paper, we study in detail how resolution improvement and artifacts depend on the grid layout by employing a weighted backpropagation algorithm. In more detail, stationary phase arguments are used to predict the leading-order reconstruction terms to which resolution improvement is linked. Moreover, it is shown that artifacts are mainly due to higher-order terms and depend on the point scatterers' arrangement. The nature of such artifacts is studied, and a simple way to mitigate their role (without resolution loss) is introduced; it consists of a suitable nonuniform grid arrangement with a "hole in the center." Backpropagation is then compared with an inverse filtering imaging method based on the truncated singular value decomposition (TSVD) of the radiation operator. It is shown that the TSVD is less prone to artifacts and can, in principle, allow for a higher resolution improvement. However, when model error (due to multiple scattering between the elements of the grid) and/or noise corrupts the data, backpropagation performs decidedly better. The theoretical findings are supported by an extensive numerical analysis. In particular, to keep the figures simple, we consider only 2-D cases.
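The TSVD inverse filtering referred to here can be stated in a few lines: invert only the k strongest singular components of the radiation operator, discarding the noise-dominated modes. The sketch below applies this to a toy ill-posed deconvolution; the operator, truncation level, and noise level are all illustrative, not taken from the paper.

```python
import numpy as np

def tsvd_solve(A, y, k):
    """Truncated-SVD regularized inverse: keep only the k strongest
    singular components of A when inverting, which suppresses
    noise-dominated modes at the cost of some resolution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].conj().T @ y) / s[:k])

# Toy ill-posed problem: smooth (blurring) kernel, two point sources, noise.
rng = np.random.default_rng(6)
n = 80
grid = np.linspace(0, 1, n)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.01)
x_true = np.zeros(n)
x_true[30], x_true[50] = 1.0, 0.7
y = A @ x_true + 1e-3 * rng.standard_normal(n)

x_hat = tsvd_solve(A, y, k=25)
print(np.argmax(x_hat))   # peaks near the true source locations
```

Raising k sharpens the reconstruction until the small singular values start amplifying noise and model error, which is precisely the trade-off the comparison with backpropagation turns on.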

Nina Hoareau;Marcos Portabella;Wenming Lin;Joaquim Ballabrera-Poy;Antonio Turiel; "Error Characterization of Sea Surface Salinity Products Using Triple Collocation Analysis," vol.56(9), pp.5160-5168, Sept. 2018. The triple collocation (TC) technique allows the simultaneous calibration of three independent, collocated data sources, while providing an estimate of their accuracy. In this paper, the TC is adapted to validate different salinity data products along the tropical band. The representativeness error (the true variance resolved by the relatively high-resolution systems but not by the relatively low-resolution system) is accounted for in the validation process. A method based on the intercalibration capabilities of TC is used to estimate the representativeness error for each triplet, which is found to affect the error estimation of the different products by between 15% and 50%. The method also sorts the different products in terms of their resolved spatiotemporal scales. The six salinity products used (sorted from smaller to larger scales) were: the in situ data from the Global Tropical Moored Buoy Array (TAO), the GLORYS2V3 ocean reanalysis output provided by Copernicus, the satellite-derived Aquarius Level 3 version 4 (AV4) and Soil Moisture and Ocean Salinity (SMOS) objectively analyzed (SOA) maps, and the climatology maps provided by the World Ocean Atlas (WOA). This calibration study is limited to the year 2013, a year when all the products were available. This validation approach aims to assess the quality of the different salinity products at the satellite-resolved spatiotemporal scales. The results show that, at the AV4 resolved scales, the Aquarius product has an error of 0.17 and outperforms TAO, GLORYS2V3, and the SOA maps. However, at the SOA resolved scales (which are coarser than those of the Aquarius product because of the large OA correlation radii used), the SMOS product has an error of 0.20, slightly lower than that of GLORYS2V3, Aquarius, and TAO. The WOA products show the highest errors. Higher order calibration may lead to a more accurate assessment of the quality of the climatological products.
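
For orientation, the following is a minimal sketch of the classical covariance-based triple collocation estimator on which such analyses build; it omits the paper's representativeness-error correction and intercalibration steps.

```python
# Classical triple collocation: given three collocated, independent
# estimates x, y, z of the same signal, the error variance of each system
# follows from pairwise sample covariances (textbook form, without the
# representativeness-error correction used in the paper).
import numpy as np

def triple_collocation(x, y, z):
    c = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt(np.maximum([var_x, var_y, var_z], 0.0))  # error std devs
```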

Anthony Campbell;Yeqiao Wang; "Examining the Influence of Tidal Stage on Salt Marsh Mapping Using High-Spatial-Resolution Satellite Remote Sensing and Topobathymetric LiDAR," vol.56(9), pp.5169-5176, Sept. 2018. Salt marsh vegetation extent and zonation are often controlled by bottom-up factors determined in part by the frequency and duration of tidal inundation. Tidal inundation during remote-sensing mapping of salt marsh resources can alter the resulting image classification. The degree of this impact on mapping with very high resolution (VHR) imagery has yet to be determined. This paper utilizes topobathymetric light detection and ranging (LiDAR) data and bathtub models of the tidal stage at 5 cm intervals from mean low water (MLW) to mean high water (MHW) to determine the impact of tidal variation on salt marsh mapping within Jamaica Bay, NY, USA. Tidal inundation models were compared with Worldview-2 and Quickbird-2 imagery acquired at a range of tidal stages. Modeled inundation of normalized difference vegetation index and smooth cordgrass (S. alterniflora) maps was compared from MLW to MHW. This paper finds that at 0.6 m above MLW, only 3.5% of S. alterniflora is inundated. This paper demonstrates a modeling approach integrating VHR satellite remote-sensing data and topobathymetric LiDAR data to address tidal variation in salt marsh mapping. Incremental modeling of the tidal stage is important for understanding the areas most at risk from sea level rise and informs the corresponding management decisions.
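
A bathtub model of this kind reduces to thresholding the LiDAR-derived elevation at each tidal stage; the sketch below is a hypothetical rendering with illustrative names.

```python
# Illustrative bathtub inundation model: a marsh pixel is flooded when its
# DEM elevation falls below the tidal stage, evaluated at 5 cm increments.
import numpy as np

def inundated_fraction(dem, marsh_mask, mlw, mhw, step=0.05):
    """Fraction of masked marsh pixels inundated at each tidal stage (m)."""
    stages = np.arange(mlw, mhw + step, step)
    marsh = dem[marsh_mask]
    return stages, np.array([(marsh < s).mean() for s in stages])
```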

Zhaoyun Zong;Yurong Wang;Kun Li;Xingyao Yin; "Broadband Seismic Inversion for Low-Frequency Component of the Model Parameter," vol.56(9), pp.5177-5184, Sept. 2018. Seismic inversion is an important approach to parameter estimation in the geosciences. The low-frequency component of the model parameter plays an important role in seismic inversion. The emergence of broadband seismic data acquisition and processing technologies has brought renewed attention to the low-frequency component. After reviewing previous approaches to estimating low-frequency models and the low-frequency information contained in the complex frequency domain, a novel broadband seismic Bayesian inversion approach in the complex frequency domain is proposed to estimate the low-frequency component of the model parameter. The proposed approach makes full use of the advantages of broadband seismic data and the low-frequency component of the damped wavefields in the complex frequency domain. The kernel function of the proposed inversion approach is built with Bayesian inference. Synthetic examples demonstrate the feasibility and robustness of the proposed approach in estimating the low-frequency component of the model parameter. A field data example verifies its feasibility and reasonableness in application. Finally, the estimated low-frequency component of the model parameter is utilized as the initial model for the suggested seismic Bayesian inversion method in the time domain. Model and field data examples further verify the effectiveness and superiority of the proposed method in the final estimation of the model parameter by comparison with the conventional inversion approach.

Clara Estela Jiménez Tejero;Valenti Sallares;César R. Ranero; "Appraisal of Instantaneous Phase-Based Functions in Adjoint Waveform Inversion," vol.56(9), pp.5185-5197, Sept. 2018. Complex signal analysis allows separation of the instantaneous envelope and phase of seismic waveforms. Seismic attributes have long been used routinely in geological interpretation and signal processing of seismic data as robust tools to highlight relevant characteristics of seismic waveforms. In the context of adjoint waveform inversion (AWI), it is crucial to choose an efficient parameter to describe the seismic data. The most straightforward option is to use whole waveforms, but the mixing of amplitude and phase parameters increases the nonlinearity inherent to the methodology. Several studies support the good performance of the instantaneous phase (IP), a more linear parameter for measuring the misfit between synthetic and recorded data. The IP is calculated using the inverse tangent function, whose principal value can be defined either wrapped between given limits or unwrapped. The wrapped phase presents phase jumps that appear as noise in the inversion results. Conditioning these discontinuities solves the problem only partially, and the continuous unwrapped IP is not a good descriptor of the waveform. For this reason, it is worth exploring beyond the traditional description of the IP parameter. Two alternative functions have been studied: 1) a revision of the triangular IP and 2) the first implementation of the normalized signal. The main objective of this paper is, therefore, to review the fundamentals of the IP attribute in order to design robust IP-based objective functions that allow mitigating the inherent nonlinearity of the AWI method.
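
The IP itself has a standard construction from the analytic signal; the sketch below shows it via the Hilbert transform, including the wrapped/unwrapped choice discussed above.

```python
# Instantaneous phase of a seismic trace from its analytic signal. The
# wrapped phase jumps at +/- pi (the noise source noted above); np.unwrap
# gives the continuous alternative.
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(trace, wrapped=True):
    analytic = hilbert(trace)      # trace + i * Hilbert(trace)
    phase = np.angle(analytic)     # principal value in (-pi, pi]
    return phase if wrapped else np.unwrap(phase)
```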

Chen Wang;Zehua Dong;Xiaojuan Zhang;Xiaojun Liu;Guangyou Fang; "Method for Anisotropic Crystal-Orientation Fabrics Detection Using Radio-Wave Depolarization in Radar Sounding of Mars Polar Layered Deposits," vol.56(9), pp.5198-5206, Sept. 2018. The polar layered deposits (PLDs) provide a wealth of information about the past climate evolution of Mars. Surface mass fluxes and ice flow mainly govern the topography and layering of the PLD. China’s Mars probe, including an orbiter and a landing rover, will be launched by 2020. A new type of satellite-borne Mars penetrating radar instrument has been selected to be part of the payloads on the orbiter. Its main scientific objectives are to map the distribution of water and water–ice and to detect soil characteristics at global scale on the Martian crust. Compared with Mars Advanced Radar for Subsurface and Ionospheric Sounding and Shallow Radar, the biggest difference is that the antenna system of this Mars penetrating radar consists of two mutually perpendicular dipole antennas. This special configuration enables the investigation of the ice flow of the PLD by detecting and analyzing the features of anisotropic crystal-orientation fabric (COF). Thus, relying on the fact that radio waves are depolarized while passing through an anisotropic COF layer, in this paper, a method for anisotropic COF detection based on this radar system is proposed. The radar echo formulation of anisotropic COF is derived, and the ratio of the signals measured by the two perpendicular antennas is used to analyze the anisotropy of the COF. We demonstrate that this ratio is an ideal criterion for the detection and analysis of COF, since it contains all parameters of the anisotropy feature of COF and is independent of the attenuation along the propagation path and the reflection coefficient. In order to verify the validity of the derived analytical expression of the ratio for the detection of COF, finite-difference time-domain simulations are carried out based on a simple model of the subsurface of the PLD that contains an anisotropic COF layer. The advantages of this method, the potential application scenarios, and the effects of the Martian environment are also discussed.

Jianjie Wang;Chao Liu;Min Min;Xiuqing Hu;Qifeng Lu;Letu Husi; "Effects and Applications of Satellite Radiometer 2.25-µm Channel on Cloud Property Retrievals," vol.56(9), pp.5207-5216, Sept. 2018. Near-infrared (NIR) channels, such as the 1.6- and 2.13-µm channels of the Moderate Resolution Imaging Spectroradiometer (MODIS), play an important role in inferring cloud properties because of their sensitivity to cloud amount and particle size. Instead of the 2.13-µm channel, which has shown great success on MODIS, the central wavelength of the Visible Infrared Imaging Radiometer Suite (VIIRS) is shifted to 2.25 µm. This paper investigates the influences of NIR channels (i.e., 2.13 and 2.25 µm) on cloud optical and microphysical property retrievals and reveals the potential applications of the 2.25-µm channel to cloud thermodynamic phase and multilayer cloud detection in combination with the 1.6-µm channel. Rigorous radiative transfer simulations are performed to provide theoretical reflectance at the channels of interest, and MODIS and VIIRS observations are used for case studies. Our results indicate a minor influence of the 2.25-µm channel on cloud optical depth and effective particle size retrievals. In combination with the 1.6-µm channel, the 2.25-µm channel provides additional information indicating cloud phases. However, the 1.6- and 2.13-µm channels do not show any sensitivity to cloud phase. Furthermore, by considering the infrared-based cloud phase results, the 1.6- and 2.25-µm channel combination makes it possible to infer multilayer clouds. Case studies based on simultaneous MODIS and VIIRS observations demonstrate the capability of the 1.6–2.25-µm channel combination for determining cloud phase and multilayer clouds. Collocated satellite-based active lidar observations further validate these advantages of the 2.25-µm channel over the original 2.13-µm channel.

Mario Nieto-Hidalgo;Antonio-Javier Gallego;Pablo Gil;Antonio Pertusa; "Two-Stage Convolutional Neural Network for Ship and Spill Detection Using SLAR Images," vol.56(9), pp.5217-5230, Sept. 2018. This paper presents a system for the detection of ships and oil spills using side-looking airborne radar (SLAR) images. The proposed method employs a two-stage architecture composed of three pairs of convolutional neural networks (CNNs). Each pair of networks is trained to recognize a single class (ship, oil spill, and coast) by following two steps: a first network performs a coarse detection, and then, a second specialized CNN obtains the precise localization of the pixels belonging to each class. After classification, a postprocessing stage is performed by applying a morphological opening filter in order to eliminate small look-alikes, and removing those oil spills and ships that are surrounded by a minimum amount of coast. Data augmentation is performed to increase the number of samples, owing to the difficulty involved in obtaining a sufficient number of correctly labeled SLAR images. The proposed method is evaluated and compared to a single multiclass CNN architecture and to previous state-of-the-art methods using accuracy, precision, recall, F-measure, and intersection over union. The results show that the proposed method is efficient and competitive, and outperforms the approaches previously used for this task.
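
The postprocessing step described above amounts to a morphological opening of each binary detection mask; a minimal sketch, with the structuring-element size as an assumed parameter:

```python
# Morphological opening to remove small look-alikes from a binary
# detection mask (structuring-element size is an illustrative choice).
import numpy as np
from scipy.ndimage import binary_opening

def clean_mask(mask, size=3):
    structure = np.ones((size, size), dtype=bool)
    return binary_opening(mask, structure=structure)
```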

Hanwen Yu;Hyongki Lee;Ting Yuan;Ning Cao; "A Novel Method for Deformation Estimation Based on Multibaseline InSAR Phase Unwrapping," vol.56(9), pp.5231-5243, Sept. 2018. Three-pass differential synthetic aperture radar interferometry (DInSAR) is one of the approaches in radar interferometry for measuring the deformation of the earth’s surface. The conventional three-pass DInSAR needs a successful single-baseline (SB) phase unwrapping (PU) procedure on each interferogram to ensure accurate deformation monitoring. Because of the limitation of the phase continuity assumption, SB PU becomes a challenging processing step when the study area has strong phase variation. In this paper, the multibaseline (MB) InSAR PU methodology, which can eliminate the phase continuity assumption by means of baseline diversity, is transplanted into the conventional three-pass DInSAR domain. Based on MB PU, a novel terrain deformation estimation approach is developed. Both theoretical analysis and experimental results demonstrate that the proposed method is an effective surface deformation estimation method.

Gang Zheng;Xiaofeng Li;Lizhang Zhou;Jingsong Yang;Lin Ren;Peng Chen;Huaguo Zhang;Xiulin Lou; "Development of a Gray-Level Co-Occurrence Matrix-Based Texture Orientation Estimation Method and Its Application in Sea Surface Wind Direction Retrieval From SAR Imagery," vol.56(9), pp.5244-5260, Sept. 2018. A gray-level co-occurrence matrix (GLCM)-based method was developed for better texture orientation estimation in remote sensing imagery. A GLCM is essentially the joint probability distribution of gray levels at the position pairs satisfying a specific relative position within an image. We first found that when the relative position is aligned with texture orientation, larger elements of the corresponding GLCM are concentrated diagonally. Then, we developed a new texture orientation estimation method. The method uses the GLCMs of relative positions equally spaced in orientation and distance, and three schemes of these GLCMs are calculated. A GLCM-derived parameter is then defined to quantitatively measure the degree of diagonal concentration of the GLCM elements, and its integral over the variable of relative distance is selected as an indicator to find the dominant texture orientation(s). For testing, we applied the method to 44 selected images containing one or multiple aligned textures. The results show that the method is in good agreement with visual inspections from 45 randomly selected people, and is insensitive to large typical noises and illumination change. In addition, using (any) one GLCM calculation scheme over the others does not significantly affect the results. Finally, the method was applied to sea surface wind direction (SSWD) retrieval from 89 synthetic aperture radar images. In the application test, the developed method achieves better SSWD retrieval accuracy than do the commonly used Fourier transform- and gradient-based methods by 8.13° and 16.09° against the European Centre for Medium-Range Weather Forecast ERA-Interim reanalysis data and 10.21° and 17.31° against the cross-calibrated multiplatform data.
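
The core observation can be prototyped in a few lines: scan GLCM offsets over orientation and distance, and pick the orientation whose matrices are most diagonally concentrated. In the sketch below, scikit-image's 'homogeneity' statistic (an inverse-difference measure) stands in for the paper's diagonal-concentration parameter; that substitution is an assumption for illustration.

```python
# Texture orientation from GLCM diagonal concentration. 'homogeneity'
# weights GLCM mass by closeness to the diagonal, so it peaks when the
# offset direction is aligned with the texture orientation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def dominant_texture_orientation(img, n_angles=18, distances=(1, 2, 4, 8)):
    """img: 2-D uint8 image; returns the angle (radians) whose GLCMs are
    most diagonally concentrated, integrated over relative distance."""
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    score = graycoprops(glcm, 'homogeneity').sum(axis=0)  # sum over distance
    return angles[np.argmax(score)]
```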

Radhika Ravi;Yun-Jou Lin;Magdy Elbahnasawy;Tamer Shamseldin;Ayman Habib; "Bias Impact Analysis and Calibration of Terrestrial Mobile LiDAR System With Several Spinning Multibeam Laser Scanners," vol.56(9), pp.5261-5275, Sept. 2018. This paper proposes a multiunit light detection and ranging (LiDAR) system calibration procedure to directly estimate the mounting parameters relating multiple spinning multibeam laser scanners to the global navigation satellite system/inertial navigation system (GNSS/INS) unit onboard a mobile terrestrial platform in order to derive point clouds with high-positional accuracy. This procedure is based on the use of conjugate planar/linear features in overlapping point clouds derived from different drive runs. In order to increase the efficiency of semiautomatic conjugate feature extraction from LiDAR data, specifically designed calibration boards covered by highly reflective surfaces that could be easily deployed and set up within outdoor environments are used in this paper. To ensure the accuracy of the estimated mounting parameters, an optimal configuration of target primitives and drive runs is determined by analyzing the potential impact of bias in mounting parameters of a LiDAR unit on the resultant point cloud for different orientations of target primitives and different drive run scenarios. This impact is also verified experimentally by simulating a bias in each mounting parameter separately. Finally, the optimal configuration is used within an experimental setup to evaluate the performance of the proposed calibration procedure through the a posteriori variance factor of least squares adjustment and the quality of fit of adjusted point cloud to linear/planar surfaces before and after the calibration process. The proposed calibration approach attained an accuracy of 1.42 cm, which is better than the accuracy expected based on the specifications of the involved hardware components, i.e., the LiDAR and GNSS/INS units.

Wei Li;Yunbin Yuan;Jikun Ou;Yujin He; "IGGtrop_SH and IGGtrop_rH: Two Improved Empirical Tropospheric Delay Models Based on Vertical Reduction Functions," vol.56(9), pp.5276-5288, Sept. 2018. IGGtrop_SH and IGGtrop_rH are two improved tropospheric delay models that are established based on empirical vertical reduction functions and capable of providing zenith tropospheric delay (ZTD) correction for radio space geodetic analysis without meteorological measurements. The vertical dependence of ZTD mean values is represented by an exponential function (four coefficients in the high latitudes and six coefficients in other regions) and the vertical dependence of ZTD seasonal variation amplitudes is represented by a fifth degree polynomial. IGGtrop_SH considers both annual and semiannual variations of ZTD, while IGGtrop_rH only considers the annual variation of ZTD, which is suitable for applications more concerned about time and data storage space. The new models are validated by international Global Navigation Satellite System service tropospheric products at 292 globally distributed tracking stations between January 2010 and December 2013. The average bias values over the four years are around −0.46 cm for both models; the globally averaged root-mean-square error of IGGtrop_SH is about 3.86 cm and the value of IGGtrop_rH is about 3.97 cm. Comparison between IGGtrop_SH and the previous IGGtrop model suggests that the inclusion of semiannual variation leads to an apparent improvement of ZTD correction performance in the Northern Hemisphere, especially for middle latitudes, while no obvious change is seen in the Southern Hemisphere.

S. Bonafoni;F. Alimenti;L. Roselli; "An Efficient Gain Estimation in the Calibration of Noise-Adding Total Power Radiometers for Radiometric Resolution Improvement," vol.56(9), pp.5289-5298, Sept. 2018. Calibration of microwave radiometers is a crucial task for reliable and accurate antenna temperature measurements in remote sensing applications. This paper describes a processing procedure for data calibration of noise-adding (NA) total power (TP) radiometers, without thermal stabilization, able to improve the radiometric resolution performance and keep good accuracy. This method, easily implementable, is based on the strong dependence of the radiometric gain on the internal physical temperature of the system. It provides a calibration of the output voltage measured in the TP mode exploiting the noise source power injection only every 30 min. The quality of the proposed procedure was assessed by means of three experiments carried out in different years and environmental conditions, using a low-cost NA TP radiometer operating at 12.65 GHz. The measurements show an uncertainty better than 0.7 K and, above all, a clear improvement in radiometric resolution (below 0.1 K). The radiometric resolution benefit is particularly appreciable in applications where the aim is the detection of small radiation power increments. Two experimental tests show how this method for data calibration effectively resolves the warm target counting inside the antenna footprint, while the same data measured with the standard NA procedure do not allow the same detection capability.

Jean-Christophe Poisson;Graham D. Quartly;Andrey A. Kurekin;Pierre Thibaut;Duc Hoang;Francesco Nencioli; "Development of an ENVISAT Altimetry Processor Providing Sea Level Continuity Between Open Ocean and Arctic Leads," vol.56(9), pp.5299-5319, Sept. 2018. Over the Arctic regions, current conventional altimetry products suffer from a lack of coverage or from degraded performance due to the inadequacy of the standard processing applied in the ground segments. This paper presents a set of dedicated algorithms able to process consistently returns from open ocean and from sea-ice leads in the Arctic Ocean (detection of water surfaces and derivation of water levels using returns from these surfaces). This processing extends the area over which a precise sea level can be computed. In the frame of the European Space Agency Sea Level Climate Change Initiative (http://cci.esa.int), we have first developed a new surface identification method combining two complementary solutions, one using a multiple-criteria approach (in particular the backscattering coefficient and the peakiness coefficient of the waveforms) and one based on a supervised neural network approach. Then, a new physical model has been developed (modified from the Brown model to include anisotropy in the scattering from calm protected water surfaces) and has been implemented in a maximum likelihood estimation retracker. This allows us to process both sea-ice lead waveforms (characterized by their peaky shapes) and ocean waveforms (more diffuse returns), guaranteeing, by construction, continuity between open ocean and ice-covered regions. This new processing has been used to produce maps of Arctic sea level anomaly from 18-Hz ENVIronment SATellite/RA-2 data.

Jie Dong;Mingsheng Liao;Lu Zhang;Jianya Gong; "A Unified Approach of Multitemporal SAR Data Filtering Through Adaptive Estimation of Complex Covariance Matrix," vol.56(9), pp.5320-5333, Sept. 2018. Speckle inherent in synthetic aperture radar (SAR) images usually complicates visual interpretation and brings difficulty to information extraction for applications. Current speckle filters are mainly developed for single SAR image or an image pair (InSAR or PolInSAR). Although some multichannel filters are proposed, they only exploit pixel intensity to identify statistically homogeneous pixels (SHPs). In this paper, we present a new unified approach to filter multitemporal SAR images by adaptively estimating complex covariance matrix-based multitemporal filtering, named CCM-MTF. The key idea is to employ generalized likelihood ratio (GLR) test on the Wishart distributed initial CCM to evaluate the similarity between two pixels. A special design is given to the initial CCM estimation, in which temporal samples are used instead of spatially neighboring samples. Then, a threshold determined by the asymptotic distribution of the logarithm of GLR test statistics at a fixed significance level is used to select spatial SHPs for the reference pixel. Subsequently, the filtering is implemented by estimation of the final CCM from original SAR scattering vector over SHP pixels, and all filtered target information channels including intensity, interferometric phase, and coherence can be explicitly derived from the final CCM. The effectiveness of the proposed CCM-MTF method is validated by experiments on both simulated and real multitemporal SAR images. Both qualitative and quantitative comparisons between CCM-MTF and four state-of-the-art SAR filters are carried out to demonstrate its advantages in terms of speckle suppression as well as detail preservation for all the three information channels.

Shan Liu;Lei Wang;Hongxing Liu;Haibin Su;Xiaolu Li;Wenfeng Zheng; "Deriving Bathymetry From Optical Images With a Localized Neural Network Algorithm," vol.56(9), pp.5334-5342, Sept. 2018. We present a localized neural network algorithm for water depth estimation from optical remote sensing images. Our new model is called a locally adaptive back-propagation neural network (LABPNN). In an LABPNN, the neural networks are trained at regularly distributed normative sites. For each LABPNN unit, training samples are identified by a specified search radius around its normative site. Water depth is estimated by an ensemble of LABPNNs, with weights assigned inversely to their distances from the point of estimation. The water depth prediction accuracy of the LABPNN models doubled compared to a regular back-propagation network that uses all of the samples without considering nonstationarity. We also compared the LABPNN model with a regression-based inversion model with the same localization feature. The prediction error of LABPNN is about 5% lower in the first case study and 7% lower in the second, because neural networks outperform regression models when the sample data are relatively sparse. The experiments suggest that the LABPNN model is a viable solution for water depth retrieval from optical images.
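
The ensemble logic admits a compact sketch: one small network per normative site, trained on samples within the search radius, with predictions blended by inverse-distance weights. Network sizes and names below are illustrative assumptions.

```python
# Hypothetical locally adaptive NN ensemble: per-site MLPs trained on
# nearby samples, combined by inverse-distance weighting at prediction time.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_local_nets(X, y, coords, sites, radius):
    nets = []
    for s in sites:
        near = np.linalg.norm(coords - s, axis=1) <= radius
        nets.append(MLPRegressor(hidden_layer_sizes=(16,),
                                 max_iter=2000).fit(X[near], y[near]))
    return nets

def predict_depth(x, p, nets, sites, eps=1e-6):
    d = np.linalg.norm(np.asarray(sites) - p, axis=1)
    w = 1.0 / (d + eps)                        # inverse-distance weights
    preds = np.array([n.predict(x[None, :])[0] for n in nets])
    return np.sum(w * preds) / np.sum(w)
```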

Le Gan;Junshi Xia;Peijun Du;Jocelyn Chanussot; "Multiple Feature Kernel Sparse Representation Classifier for Hyperspectral Imagery," vol.56(9), pp.5343-5356, Sept. 2018. Multiple types of features, e.g., spectral, filtering, texture, and shape features, are helpful for hyperspectral image (HSI) classification tasks. Combining multiple features can describe the characteristics of pixels from different perspectives, and always results in better classification performance. Recently, multifeature combination learning has been widely employed to the multitask-learning-based representation-based model to obtain a multifeature representation vector. However, the linear sparse representation-based classifier (SRC) cannot handle the HSI with highly nonlinear distribution, and kernel sparse representation-based classifier (KSRC) can remedy the drawback of linear SRC. By adopting nonlinear mapping, the samples in kernel space are often of high or even infinite dimensionality. In this paper, we integrate kernel principal component analysis into multifeature-based KSRC and propose a novel multiple feature kernel sparse representation-based classifier (namely, MFKSRC) for hyperspectral imagery. More specifically, spatial features, Gabor textures, local binary patterns, and difference morphological profiles are adopted and then each kind of feature is transformed nonlinearly into a new low-dimensional kernel space. The proposed framework can handle data with nonlinear distribution and add a dimensionality reduction stage in kernel space before optimizing the corresponding cost function. Experimental results on different HSIs demonstrate that the proposed MFKSRC algorithm outperforms the state-of-the-art classifiers.

Zhuangsheng Zhu;Chi Li;Xiangyang Zhou; "An Accurate High-Order Errors Suppression and Cancellation Method for High-Precision Airborne POS," vol.56(9), pp.5357-5367, Sept. 2018. Synthetic aperture radar imaging requires precise position information provided by a position and orientation system (POS) and is especially sensitive to high-order errors in the position information, whereas high-order errors of the global positioning system (GPS) are normally ignored in the POS solution procedure. For a high-precision POS, high-order errors become a significant error source with decisive effects on its accuracy. In this paper, we propose a GPS high-order position error suppression and cancellation method in which the traditional GPS/inertial navigation system (INS) integration is used as the high-frequency filter and a low-frequency filter is built on a least squares support vector machine. The high- and low-frequency dual filters are constructed to suppress and eliminate high-order errors. To verify our dual-rate hybrid filtering method, POS flight experiments were conducted between October and December 2015, with results showing that the high-order position accuracy of a high-precision POS (0.01°/h gyro drift) is better than 0.006 m and that the proposed method performs better than other methods.

Jianwei Fan;Yan Wu;Ming Li;Wenkai Liang;Yice Cao; "SAR and Optical Image Registration Using Nonlinear Diffusion and Phase Congruency Structural Descriptor," vol.56(9), pp.5368-5379, Sept. 2018. The registration of synthetic aperture radar (SAR) and optical images is a challenging task due to the potential nonlinear intensity differences between the two images. In this paper, a novel image registration method, which combines nonlinear diffusion and phase congruency structural descriptor (PCSD), is proposed for the registration of SAR and optical images. First, to reduce the influence of speckle noise on feature extraction, a uniform nonlinear diffusion-based Harris (UND-Harris) feature extraction method is designed. The UND-Harris detector is developed based on nonlinear diffusion, feature proportion, and block strategy, and explores many more well-distributed feature points with potential of being correctly matched. Then, according to the property that structural features are less sensitive to modality variation, a novel structural descriptor, namely, the PCSD, is constructed to robustly describe the attributes of the extracted points. The proposed PCSD is built on a PC structural image in a grouping manner, which effectively increases the discriminability and robustness of the final structural descriptor. Experimental results conducted on SAR and optical image pairs demonstrate that the proposed method is more robust against speckle noise and nonlinear intensity differences and improves the registration accuracy effectively.

Gui Gao;Sheng Gao;Juan He;Gaosheng Li; "Ship Detection Using Compact Polarimetric SAR Based on the Notch Filter," vol.56(9), pp.5380-5393, Sept. 2018. Compact polarimetric data exploitation, especially in the hybrid-polarimetric (HP) mode, is currently attracting increasing interest due to the new generation of synthetic aperture radar (SAR) systems. Recently, it has been demonstrated that the notch filter is useful for ship detection in either full- or dual-polarization (DP)-mode SAR images. In this paper, the notch filter investigation is further extended to the HP SAR architecture for ship detection on the ocean surface. First, a version of the notch filter that is suitable for HP SAR is proposed based on the definition of the corresponding feature partial scattering vector from the covariance matrix of the HP SAR. Subsequently, a novel model characterizing the statistics of the notch distance of sea clutter in the HP mode is developed. Based on the statistical model, the threshold for constant false-alarm rate (CFAR) ship detection is derived theoretically and analytically, which allows automatic and adaptive implementation of ship detection in varying sea backgrounds in practical applications. Experiments on HP SAR data emulated from full-polarization Japan Aerospace Exploration Agency Advanced Land Observing Satellite Phased-Array type L-band SAR and C-band RADARSAT-2 SAR measurements validate not only the soundness of the proposed CFAR detection but also the high accuracy of the presented model in fitting HP SAR data. Furthermore, the notch filter and its CFAR realization provide the same benchmark for comparing the detectability of HP and conventional linear DP SAR data. Preliminary findings suggest that the detection performance of HP SAR is superior to that of DP SAR in ship observation. Therefore, the proposed CFAR method based on the notch filter provides a promising technique for the detection of ships using HP SAR data.

Gui Gao;Sheng Gao;Juan He;Gaosheng Li; "Adaptive Ship Detection in Hybrid-Polarimetric SAR Images Based on the Power–Entropy Decomposition," vol.56(9), pp.5394-5407, Sept. 2018. Based on its advantages, compact polarimetric (CP) synthetic aperture radar (SAR) is considered to be a good option for earth observations. This paper proposes an adaptive method of ship detection in CP SAR images operating in the hybrid-polarimetric (HP) mode. First, according to the analysis of scattering between ships and sea background, a novel decomposition approach, named power–entropy (PE) decomposition, is developed. Based on this approach, two components of the scattering power, the high-entropy scattering amplitude (HESA) and low-entropy scattering amplitude, are separated. We demonstrate that the HESA component is an effective physical quantity indicating the difference between the ship and its background and hence can potentially be used for ship detection using HP SAR data. The generalized Gamma distribution (GΓD) is found suitable for the characterization of HESA statistics of sea clutter with a wide range of homogeneity. As a result, the adaptive constant false-alarm rate (CFAR) technique based on the HESA detector is proposed. Experiments performed using HP measurements emulated from L-band ALOS-PALSAR and C-band RADARSAT-2 full polarimetric data validate the soundness and superiority of the proposed CFAR method based on the HESA detector. Both the theoretical proof and experimental results show that the HESA improves the ship–sea contrast (or the signal–clutter ratio) more than popular detectors, such as the entropy and SPAN. Moreover, the GΓD is a versatile model for the description of the statistical behavior of both the HESA and comparable detectors.

Xiaofei Yang;Yunming Ye;Xutao Li;Raymond Y. K. Lau;Xiaofeng Zhang;Xiaohui Huang; "Hyperspectral Image Classification With Deep Learning Models," vol.56(9), pp.5408-5423, Sept. 2018. Deep learning has achieved great successes in conventional computer vision tasks. In this paper, we exploit deep learning techniques to address the hyperspectral image classification problem. In contrast to conventional computer vision tasks that only examine the spatial context, our proposed method can exploit both spatial context and spectral correlation to enhance hyperspectral image classification. In particular, we advocate four new deep learning models, namely, 2-D convolutional neural network (2-D-CNN), 3-D-CNN, recurrent 2-D CNN (R-2-D-CNN), and recurrent 3-D-CNN (R-3-D-CNN) for hyperspectral image classification. We conducted rigorous experiments based on six publicly available data sets. Through a comparative evaluation with other state-of-the-art methods, our experimental results confirm the superiority of the proposed deep learning models, especially the R-3-D-CNN and the R-2-D-CNN deep learning models.
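
For concreteness, a generic 3-D CNN over spatial-spectral patches might look like the sketch below; it is a stand-in for the family of models evaluated, not the paper's exact R-3-D-CNN architecture.

```python
# Minimal 3-D CNN for hyperspectral patch classification: 3-D convolutions
# mix spatial context and spectral correlation in a single operation.
import torch
import torch.nn as nn

class HSI3DCNN(nn.Module):
    """Input: (N, 1, bands, patch, patch) spatial-spectral patches."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),  # pool the spectral axis
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```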

Daniel S. Plotnick;Timothy M. Marston; "Utilization of Aspect Angle Information in Synthetic Aperture Images," vol.56(9), pp.5424-5432, Sept. 2018. Synthetic aperture sonar and synthetic aperture radar involve the creation of high-resolution images of a scene via scattered signals recorded at different locations. Each pixel of the reconstructed image includes information obtained from multiple aspects due to the changing position of the sources/receivers. In this paper, the aspect-dependent scattering at each pixel is exploited to provide additional information about the scene; this paper presents a framework for converting and utilizing multiaspect data, as well as several examples. For sonar data, as is presented here, the aspect dependence may be leveraged to separate objects of interest from the background, to understand the local bathymetry, or for visualizing acoustic shadowing in full circular synthetic aperture sonar images. Several examples of images of the seafloor containing objects of interest are presented for both circular and linear apertures. In addition, the aspect dependence of low-frequency elastic scattering from objects may be used to understand the underlying scattering physics, which is of potential use in fields such as target recognition and nondestructive testing; a laboratory example is presented.

Yawei Wang;Jian Peng;Xiaoning Song;Pei Leng;Ralf Ludwig;Alexander Loew; "Surface Soil Moisture Retrieval Using Optical/Thermal Infrared Remote Sensing Data," vol.56(9), pp.5433-5442, Sept. 2018. Surface soil moisture (SSM) plays significant roles in various scientific fields, including agriculture, hydrology, meteorology, and ecology. However, the spatial resolutions of microwave SSM products are too coarse for regional applications. Most current optical/thermal infrared SSM retrieval models cannot directly estimate the quantitative volumetric soil water content without establishing empirical relationships between ground-based SSM measurements and satellite-derived proxies of SSM. Therefore, in this paper, SSM is estimated directly from 5-km-resolution Chinese Geostationary Meteorological Satellite FY-2E data based on an elliptical-new SSM retrieval model developed from the synergistic use of diurnal cycles of land surface temperature (LST) and net surface shortwave radiation (NSSR). The elliptical-original model was constructed for bare soil and did not consider the impacts of different fractional vegetation cover (FVC) conditions. To optimize the elliptical-original model for regional-scale SSM estimates, it is improved in this paper by considering the influence of FVC, which is based on a dimidiate pixel model and a Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index product. A preliminary validation of the model is conducted based on ground measurements from the counties of Maqu, Luqu, and Ruoergai in the source area of the Yellow River. A correlation coefficient (R) of 0.620, a root-mean-square error (RMSE) of 0.146 m3/m3, and a bias of 0.038 m3/m3 were obtained when comparing the in situ measurements with the FY-2E-derived SSM using the elliptical-original model. In contrast, the FY-2E-derived SSM using the elliptical-new model exhibited greater consistency with the ground measurements, as evidenced by an R of 0.845, an RMSE of 0.064 m3/m3, and a bias of 0.017 m3/m3. To provide accurate SSM estimates, high-accuracy FVC, LST, and NSSR data are required. To complement the point-scale validation conducted here, cross-comparisons with other existing SSM products will be conducted in future studies.

Giuseppe Scarpa;Sergio Vitale;Davide Cozzolino; "Target-Adaptive CNN-Based Pansharpening," vol.56(9), pp.5443-5457, Sept. 2018. We recently proposed a convolutional neural network (CNN) for remote sensing image pansharpening obtaining a significant performance gain over the state of the art. In this paper, we explore a number of architectural and training variations to this baseline, achieving further performance gains with a lightweight network that trains very fast. Leveraging on this latter property, we propose a target-adaptive usage modality that ensures a very good performance also in the presence of a mismatch with respect to the training set and even across different sensors. The proposed method, published online as an off-the-shelf software tool, allows users to perform fast and high-quality CNN-based pansharpening of their own target images on general-purpose hardware.
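
The target-adaptive modality amounts to briefly fine-tuning the pretrained network on the target image itself before producing the output. The sketch below assumes a hypothetical model(pan, ms) signature and a reduced-resolution self-supervision target in the spirit of Wald's protocol; both are assumptions, not the published tool's interface.

```python
# Hypothetical target-adaptive fine-tuning loop: a few gradient steps on
# the (reduced-resolution) target scene before full-resolution inference.
import torch

def target_adapt(model, pan_lr, ms_lr, ms_ref, steps=50, lr=1e-4):
    """Fine-tune on the degraded target; ms_ref acts as the self-label."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.l1_loss(model(pan_lr, ms_lr), ms_ref)
        loss.backward()
        opt.step()
    return model
```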

Xinhua Mao;Xueli He;Danqi Li; "Knowledge-Aided 2-D Autofocus for Spotlight SAR Range Migration Algorithm Imagery," vol.56(9), pp.5458-5470, Sept. 2018. With the continuous improvement of synthetic aperture radar (SAR) resolution, the 2-D defocus effect in SAR images resulting from uncompensated motion errors has made autofocus a new and challenging problem. Conventional 2-D autofocus approaches assume that the 2-D phase error is entirely unknown and estimate it directly. Due to the high dimensionality of the unknown parameters, these approaches often suffer from a high computational burden and low estimation accuracy. In this paper, we analyze the effect of range migration algorithm (RMA) processing on the 2-D echo phase and reveal the analytical structure of the residual 2-D phase error in RMA imagery. Then, by exploiting the derived prior knowledge of the phase error structure, an accurate and efficient 2-D autofocus approach is proposed. In the new method, only a 1-D error, e.g., the azimuth phase error or residual range cell migration, needs to be estimated directly, while the 2-D phase error is computed from the estimated 1-D error by exploiting the analytical structure of the 2-D phase error. The experimental results clearly demonstrate the effectiveness and robustness of the proposed method.

Guo-Ping Hu;Kwing Lam Chan;Yong-Chun Zheng;Ao-Ao Xu; "A Rock Model for the Cold and Hot Spots in the Chang’E Microwave Brightness Temperature Map," vol.56(9), pp.5471-5480, Sept. 2018. Thermal anomaly spots (both hot and cold) have been found in the global 37-GHz brightness temperature (TB) map of the moon based on the Chang’E (CE) microwave radiometer measurements. To explain their origin, a rock model is proposed to simulate the TB variation against latitude along the profile of a fresh crater in a single-track way, which is selected to highlight the topographic effect and avoid any modification to the data. A mixed upper layer made up of rock and soil (regolith and dust) was incorporated into our previous multilayer model. The thermal properties (thermal conductivity and heat capacity) of the mixed layer are presumed to be linear in the fraction of rocks. Given that high-frequency (37 GHz) measurements are chosen, only meter-size and larger rocks in the upper mixed layer are considered, to avoid scattering effects. Several fresh craters poor/rich in ilmenite are selected as testing sites. Despite uncertainties in parameters such as rock abundance (RA) and iron and titanium abundances, three conclusions can be reached from these cases: 1) RA has a significant effect on both the TB value and the TB variation trend against latitude; its contribution over some craters may be as high as 15 K; 2) the simulations based on our rock model fit the CE observations better than those without rocks; and 3) the rock and ilmenite contributions could be the main cause of the cold and hot spots found in the CE microwave map.

Bin-Lin Hu;Jing Zhang;Kai-Qin Cao;Shi-Jing Hao;De-Xin Sun;Yin-Nian Liu; "Research on the Etalon Effect in Dispersive Hyperspectral VNIR Imagers Using Back-Illuminated CCDs," vol.56(9), pp.5481-5494, Sept. 2018. Dispersive hyperspectral visible and near-infrared (VNIR) imagers using back-illuminated (BI) charge-coupled devices (CCDs) suffer from interference fringes in the near-infrared band, known as the etalon effect. This effect causes a signal modulation which can become increasingly serious (±25% or more) when the spectral resolution gets higher than 5 nm, bringing huge difficulties for subsequent processing and quantitative applications of the hyperspectral data. A mathematical model to describe the fringe pattern is established by taking both the system parameters and the interference principle of multilayer thin films into account. Then, the model is used to simulate the distribution of interference fringes as a function of wavelength, and the simulated results are verified with the measured data from a VNIR grating-based hyperspectral imager. It turns out that the model is able to describe the interference fringes accurately. In addition, quantitative analysis of the influence of related factors on interference fringes is carried out, and a single-layer model is introduced for comparison, providing the theoretical foundation for the subsequent correction of interference fringes. This paper provides an important reference for the design and application of dispersive hyperspectral VNIR imagers using BI-CCDs.

Zhenwei Shi;Tianyang Shi;Min Zhou;Xia Xu; "Collaborative Sparse Hyperspectral Unmixing Using l0 Norm," vol.56(9), pp.5495-5508, Sept. 2018. Sparse unmixing has been applied to hyperspectral imagery widely in recent years. It assumes that every observed signature is a linear combination of just a few spectra (end-members) from a known spectral library. However, solving the sparse unmixing problem directly (using the l0 norm to keep the sparsity of the solution at a low level) is NP-hard. Most related works focus on convex relaxation methods, but the sparsity and accuracy of their results cannot be well guaranteed. Under these circumstances, this paper proposes a novel algorithm termed collaborative sparse hyperspectral unmixing using l0 norm (CSUnL0), which aims at solving the l0 problem directly. First, it introduces a row-hard-threshold function. The row-hard-threshold function makes it possible to combine the l0 norm, instead of its approximate norms, with the alternating direction method of multipliers. Compared with convex relaxation methods, the l0 norm constraint guarantees sparser and more accurate results. Moreover, the noise resistance of CSUnL0 is also improved. Second, CSUnL0 uses the l2 norm of each end-member’s abundance across the whole map as a collaborative constraint, which can take advantage of the hyperspectral data’s subspace property. The experimental results indicate that the l0 norm contributes to acquiring a sparser solution and helps CSUnL0 enhance calculation accuracy.
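
One plausible reading of the row-hard-threshold operator, consistent with the collaborative l0/l2 constraint described above, keeps the K abundance rows with the largest l2 norm across all pixels; the exact operator inside CSUnL0 may differ.

```python
# Sketch of a row-hard-threshold operator: retain the k end-member rows
# with the largest collaborative (l2) energy and zero out the rest, which
# enforces an l0 constraint on the number of active end-members.
import numpy as np

def row_hard_threshold(A, k):
    """A: (n_endmembers, n_pixels) abundance matrix; keep k active rows."""
    row_energy = np.linalg.norm(A, axis=1)   # l2 norm of each row
    keep = np.argsort(row_energy)[-k:]
    out = np.zeros_like(A)
    out[keep] = A[keep]
    return out
```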

Zhiqiang Xiao;Shunlin Liang;Rui Sun; "Evaluation of Three Long Time Series for Global Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) Products," vol.56(9), pp.5509-5524, Sept. 2018. The fraction of absorbed photosynthetically active radiation (FAPAR) is a critical input parameter in many climate and ecological models. Long time series of global FAPAR products are required for many applications, such as vegetation productivity, carbon budget calculations, and global change studies. Three long time series of global FAPAR products have existed since the 1980s: Global LAnd Surface Satellite (GLASS) Advanced Very High Resolution Radiometer (AVHRR), National Centers for Environmental Information (NCEI) AVHRR, and third-generation Global Inventory Monitoring and Modeling System (GIMMS3g). Currently, no intercomparison studies exist that have evaluated these FAPAR products to understand their differences for effective applications. In this paper, these three long time series of global FAPAR products are first intercompared to evaluate their spatial and temporal consistencies, and then compared with FAPAR values derived from high-resolution reference maps of VAlidation of Land European Remote sensing Instruments sites. Our results demonstrate that the GLASS AVHRR FAPAR product is spatially complete, whereas the NCEI AVHRR and GIMMS3g FAPAR products contain many missing pixels, especially in rainforest regions and in middle- and high-latitude zones of the Northern Hemisphere. The GLASS AVHRR, NCEI AVHRR, and GIMMS3g FAPAR products are generally consistent in their spatial patterns. However, a relatively large discrepancy among these FAPAR products is observed in tropical forest regions and around 55°N–65°N. In latitudes between 15°N and 25°N, the mean GIMMS3g FAPAR values are clearly larger than the mean GLASS AVHRR and NCEI AVHRR FAPAR values during July–October each year. The GLASS AVHRR FAPAR product provides smooth FAPAR temporal profiles, whereas the NCEI AVHRR and GIMMS3g FAPAR products showed fluctuating trajectories, especially during the growing seasons. All three FAPAR products show high agreement coefficients (ACs) in vegetation regions with obvious seasonal variations and low ACs in tropical forest regions and sparsely vegetated areas. A comparison of these FAPAR products with the FAPAR values derived from high-resolution reference maps demonstrates that the GLASS AVHRR FAPAR product has the best performance [root mean square deviation (RMSD) = 0.0819 and bias = 0.0043], followed by the NCEI AVHRR FAPAR product (RMSD = 0.1061 and bias = 0.0371), and then finally, the GIMMS3g FAPAR product (RMSD = 0.1152 and bias = 0.0248).

Nina Hoareau;Antonio Turiel;Marcos Portabella;Joaquim Ballabrera-Poy;Jur Vogelzang; "Singularity Power Spectra: A Method to Assess Geophysical Consistency of Gridded Products—Application to Sea-Surface Salinity Remote Sensing Maps," vol.56(9), pp.5525-5536, Sept. 2018. The Soil Moisture and Ocean Salinity (SMOS) and Aquarius satellite missions have produced the first sea-surface salinity (SSS) maps from space. The quality of the retrieved SSS must be assessed, in terms of its validation against sparse ground truth, but also in terms of its ability to detect and characterize geophysical processes, such as mesoscale features. Such characterization is sometimes elusive due to the presence of noise and processing artifacts that continue to affect state-of-the-art remote sensing SSS maps. A new method, based on singularity analysis, is proposed to contribute to the assessment of the geophysical characteristics of such maps. Singularity analysis can be used to directly assess the spatial consistency of the SSS fields and to improve the estimation of the wavenumber spectra slope through a new method, the singularity power spectra (SPS). To demonstrate the SPS performance and utility, we applied SPS to different gridded SSS maps, such as SMOS and Aquarius high-level products, the output of a numerical simulation, in situ reanalysis, and climatology, as well as to other sea-surface temperature products for reference. The singularity analysis and SPS methods reveal that both the SMOS level 4 and the Aquarius combined active passive products are able to describe the geometry of the existing geophysical structures and provide consistent spectral slopes. This paper demonstrates that beyond the remaining sources of uncertainty in remote sensing SSS products, valuable dynamical information on the ocean state can be extracted from these SSS products.

Sid-Ahmed Boukabara;Kevin Garrett; "Tropospheric Moisture Sounding Using Microwave Imaging Channels: Application to GCOM-W1/AMSR2," vol.56(9), pp.5537-5549, Sept. 2018. Traditionally, space-based microwave imagers have been used exclusively for providing total atmospheric water vapor along with hydrometeor and surface parameters, using a combination of window channels. These channels operate away from the strong water vapor absorption lines used for atmospheric moisture sounding. Nevertheless, they are sensitive to the vertical distribution of moisture in the atmosphere. This is due to the fact that the water vapor absorption continuum varies monotonically, and the wing effect of the strong absorption lines has a nonconstant effect, across the microwave spectrum. Therefore, the transmittances of window channels are different, allowing a varying level of penetration into the atmospheric column. To demonstrate the capability of imagers to operate as sounders, we introduce penetration depth, which defines the vertical sensitivity of window channels to moisture. In addition, we evaluate the degree to which data from spaceborne microwave imagers, specifically the Global Change Observation Mission-Water 1 (GCOM-W1) Advanced Microwave Scanning Radiometer-2 (AMSR2) sensor, can be used to perform sounding of tropospheric moisture. The capability is assessed using numerical weather prediction (NWP) analysis as the reference. In simulations using a regression approach, the performance of AMSR2 moisture sounding matches that of Suomi-NPP ATMS up to 700 hPa, with standard deviations of 10%–20%. AMSR2 outperforms other pure moisture sounders (Microwave Humidity Sounder and Sondeur Atmosphérique du Profil d’Humidité Intertropicale par Radiométrie) up to 500 hPa, before degrading rapidly. The performances using real observations and a physical algorithm are consistent with those found in simulations. These results highlight new applications for microwave imagers like GCOM-W1 AMSR2, including in nowcasting and NWP, which are heavily reliant on lower tropospheric moisture information.

Masato Ohki;Masanobu Shimada; "Large-Area Land Use and Land Cover Classification With Quad, Compact, and Dual Polarization SAR Data by PALSAR-2," vol.56(9), pp.5550-5557, Sept. 2018. In this paper, we demonstrate the possibility of performing land use and land cover (LULC) classification over a wide area with an L-band polarimetric synthetic aperture radar (SAR). In previous studies, there has been scant LULC classification using polarimetric SAR data over wide areas. We used satellite-based SAR data covering an area of ca. 320 000 km2 obtained by the Phased Array type L-band SAR (PALSAR)-2. We performed LULC classification using full polarimetry (FP), compact polarimetry (CP), and dual polarimetry (DP) data from PALSAR-2 and compared their classification accuracy. Our results show FP to be the most accurate. The CP and DP modes have the advantages of large-scale coverage and compact data volume but are slightly less accurate than FP. To further improve the accuracy of the classification process, texture analysis, observation date information, and feature elimination are effective. We determined the classification accuracy for seven classes to be 73.4% (kappa coefficient of 0.668). We found the rice paddy, forest, grass, and urban classes to be sufficiently accurate (84.5%) for practical application. We compared the obtained classification map with an existing LULC map to grasp the LULC changes induced by a recent disaster and successfully detected the damaged areas. These results indicate the possibility of large-scale LULC monitoring by an L-band polarimetric SAR, which can acquire images rapidly without being affected by weather.

Yongqiang Tang;Xuebing Yang;Wensheng Zhang;Guoping Zhang; "Radar and Rain Gauge Merging-Based Precipitation Estimation via Geographical–Temporal Attention Continuous Conditional Random Field," vol.56(9), pp.5558-5571, Sept. 2018. An accurate, high-resolution precipitation estimation based on rain gauge and radar observations is essential in various meteorological applications. Although numerous studies have demonstrated the effectiveness of merging two information sources rather than using separate sources, approaches that simultaneously consider the local radar reflectivity, the neighborhood rain gauge observations, and the temporal information are much less common. In this paper, we present a new framework for real-time quantitative precipitation estimation (QPE). By formulating the QPE as a continuous conditional random field (CCRF) learning problem, the spatiotemporal correlations of precipitation can be explored more thoroughly. Based on the CCRF, we further improve the accuracy of the precipitation estimation by introducing geographical and temporal attention. Specifically, we first present a data-driven weighting scheme to merge the first law of geography into the proposed framework, and hence, the neighborhood sample closer to the estimated grid can receive more attention. Second, the temporal attention penalizes the similarity between two adjacent timestamps via the discrepancy of two-view estimates, which can model the local temporal consistency and tolerate some drastic changes. An extensive evaluation is conducted on 11 rainfall processes that occurred in 2015, and the results confirm the advantage of our proposal for real-time precipitation estimation.
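
The geographical-attention idea can be illustrated with a simple distance kernel: gauges closer to the estimated grid cell receive larger weights. The paper learns this weighting in a data-driven way; the Gaussian kernel and length scale below are assumptions.

```python
# Illustrative geographical attention in the spirit of the first law of
# geography: weights decay with distance from the estimated grid cell.
import numpy as np

def gauge_attention(grid_xy, gauge_xy, length_scale=10.0):
    d = np.linalg.norm(gauge_xy - grid_xy, axis=1)
    w = np.exp(-(d / length_scale) ** 2)
    return w / w.sum()

def merged_estimate(radar_mm, gauge_mm, w, alpha=0.5):
    """Blend the local radar rate with the attention-weighted gauges."""
    return alpha * radar_mm + (1 - alpha) * np.dot(w, gauge_mm)
```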

Laurent Picard;Alexandre Baussard;Isabelle Quidu;Gilles Le Chenadec; "Seafloor Description in Sonar Images Using the Monogenic Signal and the Intrinsic Dimensionality," vol.56(9), pp.5572-5587, Sept. 2018. In a mine warfare context, the performance of automatic target recognition (ATR) algorithms depends on the environment. Globally, minelike textures and regions with high clutter density increase the false alarm rate. Thus, the environment must be considered as information that can be used to define robust ATR or at least to give a level of confidence in the results according to the seafloor environment. Previous works dealing with this objective have led to the description of the seafloor in terms of anisotropy and complexity. Following these approaches, in this paper, we propose a new definition of these features for describing the seafloor in sonar images. It is based on the monogenic representation of the images and the continuous intrinsic dimensionality (ciD). The monogenic signal, which is the multidimensional extension of the analytic signal, provides an orthogonal separation between energetic, geometric, and structural information of the image in a multiscale framework. The ciD provides information on 2-D geometric structures in the image. The resulting method leads to continuous values giving a confidence degree in relation to three features: the homogeneity, the anisotropy, and the complexity. Several data sets from different sonar systems are used to show the potential of our approach. They also show the ability of the method to deal with different image sources without changing or recalibrating the number of parameters.
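
As a hedged illustration of the representation used above, here is a minimal single-scale numpy sketch of the monogenic signal built from the Riesz transform (local amplitude, phase, and orientation); the paper's multiscale band-pass framework and the ciD computation are not reproduced, and the frequency-domain transfer functions below are the standard ones.

```python
import numpy as np

def monogenic_signal(img):
    """Single-scale monogenic signal of a 2-D float image via the
    Riesz transform, computed in the frequency domain."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]      # vertical frequency grid
    v = np.fft.fftfreq(cols)[None, :]      # horizontal frequency grid
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                     # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))   # first Riesz component
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))   # second Riesz component
    amplitude = np.sqrt(img**2 + r1**2 + r2**2)          # energetic information
    phase = np.arctan2(np.hypot(r1, r2), img)            # structural information
    orientation = np.arctan2(r2, r1)                     # geometric information
    return amplitude, phase, orientation
```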

* "IEEE Transactions on Geoscience and Remote Sensing information for authors," vol.56(9), pp.C3-C3, Sept. 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Transactions on Geoscience and Remote Sensing institutional listings," vol.56(9), pp.C4-C4, Sept. 2018.* Presents a listing of institutional institutions relevant for this issue of the publication.

IEEE Geoscience and Remote Sensing Letters - new TOC (2018 September 20) [Website]

* "Front Cover," vol.15(9), pp.C1-C1, Sept. 2018.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Letters publication information," vol.15(9), pp.C2-C2, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of contents," vol.15(9), pp.1305-1306, Sept. 2018.* Presents the table of contents for this issue of the publication.

Xiaoyu Zhu;Jincai Li;Min Zhu;Zhuhui Jiang;Yinglun Li; "An Evaporation Duct Height Prediction Method Based on Deep Learning," vol.15(9), pp.1307-1311, Sept. 2018. An evaporation duct is a particular atmospheric duct that is crucial for marine vessel communication, and one of the most significant indexes to assess it is the evaporation duct height (EDH). In this letter, we propose a new method based on a multilayer perceptron (MLP), a classic network in deep learning, to predict the EDH. After extensive experiments, the MLP structure for EDH estimation is chosen as five hidden layers with rectified linear unit (ReLU) activation functions. Since the task is essentially regression, the mean-squared error is chosen as the loss function. To accelerate the training convergence, we choose a self-adaptive optimization scheme called adaptive moment estimation. The MLP is trained on field observation data sets collected from several sea areas in the Northern Hemisphere to predict the EDH. For single-regional prediction, we use the same input data sets as the Paulus–Jeske (P-J) model, one of the ideal operational models, to predict the EDH. In comparison with the P-J model, the prediction accuracy of our method is significantly improved in all experimental sea areas, which reveals its effectiveness. By cross prediction in distinct sea areas, the consistency between the method and the theory is verified, and this letter thus yields a new approach to evaporation duct prediction.
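
A minimal sketch of the training recipe described above (five ReLU hidden layers, mean-squared-error loss, Adam) using scikit-learn's MLPRegressor; the layer widths and the synthetic placeholder data are assumptions, not values from the letter.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # placeholder meteorological inputs
y = rng.normal(size=500)        # placeholder observed EDH values (m)

# Five ReLU hidden layers, squared-error loss (MLPRegressor's default),
# and the Adam optimizer, per the abstract; widths of 64 are assumed.
model = MLPRegressor(hidden_layer_sizes=(64,) * 5, activation="relu",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X, y)
edh_pred = model.predict(X[:10])   # predicted EDH for new inputs
```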

Xinwei Chen;Weimin Huang;Guowei Yao; "Wind Speed Estimation From X-Band Marine Radar Images Using Support Vector Regression Method," vol.15(9), pp.1312-1316, Sept. 2018. A support vector regression (SVR)-based method for estimating wind speed from X-band marine radar images is proposed. The dependence of histogram pattern of radar images on wind speed and rain condition is first observed. Then, the feature vectors based on bin values of histograms are extracted and trained using an SVR algorithm. Radar images and anemometer data collected from several periods in a sea trial of the east coast of Canada are used for model training and testing. Experimental results show that compared with the ensemble empirical mode decomposition-based methods, the accuracy of wind speed estimation is improved with a reduction of about 0.14 m/s for rain-free images and 0.11 m/s for rain-contaminated images in root mean square error. Moreover, the proposed method also shows high efficiency by greatly reducing the computational time.
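
A hedged sketch of the pipeline sketched above: normalized intensity histograms as feature vectors feeding an SVR; the bin count, kernel, and hyperparameters are assumptions, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVR

def histogram_features(radar_image, n_bins=32):
    """Normalized intensity histogram used as the feature vector."""
    hist, _ = np.histogram(radar_image, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

rng = np.random.default_rng(1)
images = rng.random(size=(200, 128, 128))    # placeholder radar images
wind = rng.uniform(2.0, 20.0, size=200)      # placeholder anemometer speeds (m/s)
X = np.array([histogram_features(im) for im in images])

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # hyperparameters assumed
model.fit(X, wind)
wind_pred = model.predict(X[:5])                 # estimated wind speeds
```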

Younglo Lee;Sangwook Park;Chul-Jin Cho;Bonhwa Ku;SangHo Lee;Hanseok Ko; "Man-Made Radio Frequency Interference Suppression for Compact HF Surface Wave Radar," vol.15(9), pp.1317-1321, Sept. 2018. High-frequency surface wave radar (HFSWR) suffers from man-made interference whose amplitude is high enough to mask the Bragg scattering signal. Although several methods have been proposed for resolving this problem, they are inapplicable to compact HFSWR due to their antenna structures. This letter proposes an effective method of suppressing man-made radio frequency interference for compact HFSWR. The proposed method is composed of man-made interference detection and suppression using regression based on a probabilistic signal model. The proposed method is compared with conventional methods in terms of root-mean-square error in experiments using synthetic and real data. The results show that the proposed method outperforms the other methods in both simulated and practical situations.

Zezong Chen;Fei Xie;Chen Zhao;Chao He; "Radio Frequency Interference Cancelation in High-Frequency Surface Wave Radar Using Orthogonal Projection Filtering," vol.15(9), pp.1322-1326, Sept. 2018. The characteristics of radio frequency interference (RFI) are analyzed for high-frequency surface wave radar (HFSWR), which adopts a linear frequency-modulated interrupted continuous wave. If RFI enters the receiver, it contaminates the positive and negative frequency range bins simultaneously, whereas the negative frequency range bins contain the RFI and noise only. Moreover, RFI is always spatially structured over a short duration. Based on the above characteristics, a new orthogonal projection filtering algorithm is proposed for RFI cancelation. The proposed method uses the information of antenna channel and sweep dimensions in negative frequency range bins to estimate the interference covariance matrix. To suppress RFI, the signals of interest in the positive range bins are projected onto the interference subspace, which is obtained by eigendecomposition of the covariance matrix. Experimental results suggest that the proposed method can achieve effective RFI cancelation by using data collected by a multifrequency HFSWR.
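
A compact numpy sketch of the orthogonal-projection idea described above: the interference covariance is estimated from negative-frequency range-bin snapshots (assumed to contain RFI plus noise only), its dominant eigenvectors span the RFI subspace, and the signals of interest are projected onto the orthogonal complement of that subspace. The subspace dimension is an assumed parameter; in practice it would be chosen from the eigenvalue spectrum.

```python
import numpy as np

def rfi_suppress(pos_bins, neg_bins, n_interference=2):
    """Project positive-range-bin snapshots away from the estimated RFI
    subspace. pos_bins, neg_bins: complex arrays of shape
    (n_channels, n_snapshots) built from antenna-channel/sweep data."""
    R = neg_bins @ neg_bins.conj().T / neg_bins.shape[1]  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                  # ascending eigenvalues
    U = eigvecs[:, -n_interference:]                      # dominant RFI subspace
    P = np.eye(R.shape[0]) - U @ U.conj().T               # orthogonal projector
    return P @ pos_bins                                   # RFI-suppressed data
```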

Bochen Zhang;Chisheng Wang;Xiaoli Ding;Wu Zhu;Songbo Wu; "Correction of Ionospheric Artifacts in SAR Data: Application to Fault Slip Inversion of 2009 Southern Sumatra Earthquake," vol.15(9), pp.1327-1331, Sept. 2018. Interferometric synthetic aperture radar (InSAR) is one of the most popular geodetic techniques for studying earthquake-related crustal displacements. Satellite SAR signals interact with the ionosphere when they travel through it during the synthetic aperture time. The condition of the ionosphere and its variation can significantly affect spaceborne InSAR measurements. In this letter, we use Advanced Land Observing Satellite Phased Array type L-band SAR data from the 2009 southern Sumatra earthquake to evaluate the effects of ionospheric artifacts on the slip distribution inversion of the earthquake. The split-spectrum method is used to estimate and correct the ionospheric artifacts in the InSAR results. This letter shows that the long-wavelength ionospheric artifacts in the coseismic interferograms can be effectively mitigated. The slip distribution of the earthquake derived from the interferograms corrected for the ionospheric artifacts is presented. The slip distribution pattern and the magnitude of the slip are significantly refined after correcting the ionospheric artifacts.
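
The split-spectrum method mentioned above is commonly implemented with the standard separation of dispersive and non-dispersive phase; a hedged sketch of that textbook relation is below (inputs are the unwrapped sub-band interferogram phases and the sub-band and full-band center frequencies; this is the generic formula, not necessarily the authors' exact processing chain).

```python
def ionospheric_phase(phi_low, phi_high, f_low, f_high, f0):
    """Split-spectrum estimate of the dispersive (ionospheric) phase at
    the full-band center frequency f0 from two sub-band interferograms.
    phi_low and phi_high must be unwrapped phases (radians)."""
    return (f_low * f_high) / (f0 * (f_high**2 - f_low**2)) * (
        phi_low * f_high - phi_high * f_low)
```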

Jiangfeng Guo;Ranhong Xie; "An Inversion of NMR Echo Data Based on a Normalized Iterative Hard Thresholding Algorithm," vol.15(9), pp.1332-1336, Sept. 2018. The inversion of nuclear magnetic resonance (NMR) echo data requires solving the discrete Fredholm integral equation of the first kind, which is an ill-posed problem. In this letter, a surrogate objective function of an NMR inversion without an explicit regularization term based on least-squares fitting is introduced to avoid the process of choosing a regularization parameter, and a normalized iterative hard thresholding algorithm is proposed to solve the surrogate objective function. Furthermore, the inverted $T_{2}$ spectra of the proposed method, the truncated singular value decomposition method, the Butler–Reeds–Dawson method, and the least-squares QR decomposition method are compared using numerical simulation examples. The results show that the proposed method is superior to the other methods because the peaks of the inverted $T_{2}$ spectra with a shorter relaxation time are the most similar to the model at a low signal-to-noise ratio and the root-mean-square errors of the inverted $T_{2}$ spectra are the lowest. Finally, we process the NMR experimental data of tight sandstone using the four methods and verify the effectiveness of the proposed method for solving the NMR echo data inversion problem.
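
A generic numpy sketch of normalized iterative hard thresholding in the Blumensath–Davies style (adaptive step size, hard thresholding to the s largest entries). The step-size safeguard and any NMR-specific constraints such as nonnegativity of the $T_{2}$ spectrum are omitted, so this illustrates the algorithm family rather than the letter's exact solver.

```python
import numpy as np

def niht(A, y, sparsity, n_iter=200):
    """Normalized iterative hard thresholding (basic sketch)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (y - A @ x)              # gradient of 0.5 * ||y - Ax||^2
        # Support for the adaptive step: the current support, or the s
        # largest gradient entries at the first iteration.
        S = np.flatnonzero(x) if x.any() else np.argsort(np.abs(g))[-sparsity:]
        AgS = A[:, S] @ g[S]
        mu = (g[S] @ g[S]) / max(AgS @ AgS, 1e-30)   # adaptive step size
        x = x + mu * g
        keep = np.argsort(np.abs(x))[-sparsity:]     # hard threshold
        mask = np.zeros(x.size, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```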

Guangtan Huang;Xiaohong Chen;Cong Luo;Xiangyang Li; "Application of Optimal Transport to Exact Zoeppritz Equation AVA Inversion," vol.15(9), pp.1337-1341, Sept. 2018. Since most information on S-wave velocity and density exists in the middle- to large-angle range of seismic data, traditional inversion methods based on the Zoeppritz approximation have difficulty obtaining satisfactory results. Therefore, as a high-accuracy amplitude variation with angle (AVA) inversion, exact Zoeppritz (EZ) equation inversion has attracted considerable attention in recent years. As for any other nonlinear inversion, iterative convergence and error are important problems. In this letter, based on a Bayesian framework, we introduce optimal transport into EZ equation AVA inversion. Then, the limited-memory Broyden–Fletcher–Goldfarb–Shanno method is adopted to solve the regularization-constrained least-squares objective function to obtain the inversion results, including P-wave velocity, S-wave velocity, and density. In the model test, we compare this method with a conventional method that uses an L2 norm or weighted L2 norm as the residual. The results show that the proposed method not only yields smaller errors than the L2 norm, but also improves the convergence rate.

Guochang Liu;Yang Liu;Chao Li;Xiaohong Chen; "Weighted Multisteps Adaptive Autoregression for Seismic Image Denoising," vol.15(9), pp.1342-1346, Sept. 2018. We devised a new filtering technique for random noise attenuation by weighted multistep adaptive autoregression (WMAAR). We first obtain a series of denoised results by means of adaptive AR with different steps, and then we sum these results with different weights. The adaptive AR coefficients are obtained by solving a global regularized least-squares problem, in which regularization is used to control the smoothness of the coefficients. The adaptive AR can estimate seismic events with varying slopes since the AR coefficients have temporal and spatial variability. We derive the weights from the normalized power of local similarity by comparing the result of the nearest step with those of the other steps. The application of these weights makes the proposed algorithm more effective in preserving fault information. The proposed WMAAR can be implemented both in the frequency–space and in the time–space domain. Multidimensional synthetic and field seismic data examples demonstrate that, compared with conventional methods in the frequency–space or time–space domain, multistep adaptive AR is more effective in suppressing random noise and preserving effective signals, especially for complex geological structures (e.g., faults).

Ruben Fernandez-Beltran;Juan M. Haut;Mercedes E. Paoletti;Javier Plaza;Antonio Plaza;Filiberto Pla; "Multimodal Probabilistic Latent Semantic Analysis for Sentinel-1 and Sentinel-2 Image Fusion," vol.15(9), pp.1347-1351, Sept. 2018. Probabilistic topic models have recently shown great potential in the remote sensing image fusion field, which is particularly helpful in land-cover categorization tasks. This letter first studies the application of probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation to unsupervised land-cover categorization from remote sensing synthetic aperture radar (SAR) and multispectral imaging (MSI) data. Then, a novel pLSA-based image fusion approach is presented, which seeks to uncover multimodal feature patterns from SAR and MSI data in order to effectively fuse and categorize Sentinel-1 and Sentinel-2 remotely sensed data. Experiments conducted over two different data sets reveal the advantages of the proposed approach for unsupervised land-cover categorization tasks.

Tao Zhan;Maoguo Gong;Xiangming Jiang;Shuwei Li; "Log-Based Transformation Feature Learning for Change Detection in Heterogeneous Images," vol.15(9), pp.1352-1356, Sept. 2018. With the rapid development of remote sensing technology, accurately detecting changes that have occurred on the land surface has become a critical task, particularly when images come from different satellite sensors. In this letter, we propose an unsupervised change detection method for heterogeneous synthetic aperture radar (SAR) and optical images based on a logarithmic transformation feature learning framework. First, the logarithmic transformation is applied to the SAR image, aiming to achieve statistical distribution properties similar to those of the optical image. Then, high-level feature representations can be learned from the transformed image pair via joint feature extraction; these are used to select reliable samples for training a neural network classifier. Once the classifier is well trained, a robust change map can be obtained, thus identifying changed regions accurately. The experimental results on three real heterogeneous data sets demonstrate the effectiveness and superiority of the proposed method compared with other existing state-of-the-art approaches.

Yeo-Sun Yoon;Yunseog Hong;Sungchol Kim; "Simple Strategies to Build Random Compressive Sensing Matrices in Step-Frequency Radars," vol.15(9), pp.1357-1361, Sept. 2018. By applying compressive sensing, we can obtain the radar range profile using only part of the frequency bins in step-frequency radars. While the probability of successfully recovering the range profile depends on which frequency bins are used, there seems to be no general and simple method for choosing the set of frequency bins that maximizes it. Although random selection is reportedly good enough, it would be better to select the bins in some strategic way. This letter deals with frequency selection strategies for cases in which deterministic approaches are not applicable and a super-resolution range profile is necessary. The proposed strategies are tested with simulations and a real measurement data set. The results show that the proposed method achieves a higher probability of exact recovery than casual random selection.
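
To make the setting concrete, here is a hedged sketch of sparse range-profile recovery from a subset of frequency bins using a partial Fourier sensing matrix and a basic orthogonal matching pursuit; the uniform random bin selection stands in for the letter's proposed strategies, and all sizes and scatterer positions are placeholders.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit for complex-valued data (basic sketch)."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

n_range, n_freq_total, n_used = 256, 128, 40
rng = np.random.default_rng(2)
used = np.sort(rng.choice(n_freq_total, n_used, replace=False))  # chosen bins
# Partial Fourier sensing matrix: rows = used frequency bins, cols = range cells.
A = np.exp(-2j * np.pi * np.outer(used, np.arange(n_range)) / n_range) / np.sqrt(n_used)
x_true = np.zeros(n_range, dtype=complex)
x_true[[30, 31, 90]] = [1.0, 0.8, 0.5]        # three point scatterers
x_hat = omp(A, A @ x_true, sparsity=3)        # recovered range profile
```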

Lihuan Huo;Guisheng Liao;Zhiwei Yang;Qingjun Zhang; "An Efficient Calibration Algorithm for Large Aperture Array Position Errors in a GEO SAR," vol.15(9), pp.1362-1366, Sept. 2018. Geosynchronous orbit synthetic aperture radar (GEO SAR) plays an important role in wide-area surveillance and the continuous coverage of areas containing targets of interest. However, the relative position errors of large aperture arrays will distort the antenna pattern, which significantly degrades the target detection performance of a GEO SAR system. Most conventional calibration methods focus on the independent errors, without considering a parametric error model, which may increase the position estimation errors. To solve this problem, an efficient calibration algorithm for position errors in a GEO SAR is proposed in this letter. For the large antenna arrays, the parametric error model is first established. Then, the calibration method is performed to estimate the parameters of the position error model. Based on this, the accurate position errors can be obtained by the least-squares algorithm. Compared with the conventional methods, the target detection performance of a GEO SAR system can be significantly improved after the precise array position error compensation by the proposed algorithm. Moreover, the proposed algorithm transforms the estimation from 3-D positions to the finite parameters of the error model, which considerably decreases the computational complexity and simultaneously yields a more accurate estimation of the position errors. Several simulation results are presented to validate the proposed algorithm for position error correction in a GEO SAR system.

Alessandro Parizzi;Wael Abdel Jaber; "Estimating Strain and Rotation From Wrapped SAR Interferograms," vol.15(9), pp.1367-1371, Sept. 2018. This letter aims to discuss a general framework that allows the direct interpretation of the wrapped differential synthetic aperture radar interferometry phase in terms of surface strain $S$ and rotation $R$ components. The methodology is demonstrated showing the estimation of strain and rotation components of a glacier flow using three TerraSAR-X interferometric geometries (ascending right-looking, descending right-looking, and descending left-looking). Finally, since the left-looking geometry can be difficult to obtain on a regular basis, the surface parallel flow assumption is extended to the phase gradients inversion in order to reduce the amount of necessary geometries from three to two.

Meng Yang;Chunsheng Guo; "Ship Detection in SAR Images Based on Lognormal $\rho$-Metric," vol.15(9), pp.1372-1376, Sept. 2018. Information geometry emerged from the study of the geometrical structure of a manifold of probability distributions. It uniquely defines a Riemannian metric, the Fisher information metric. However, the classical Fisher metric method has the limitation that it does not overcome the problem of inhomogeneous and nonstationary clutter for ship detection in synthetic aperture radar (SAR) images. By combining the lognormal model and Riemannian geometry, this letter presents a modified Fisher metric (the lognormal $\rho$-metric) based on information geometry. Experiments show that the lognormal $\rho$-metric for ship detection in SAR images can be tuned to improve the contrast between object and background and to reduce false alarms.
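
For background only, the classical Fisher information metric of the lognormal family, which the letter's $\rho$-metric modifies: since $\ln X \sim \mathcal{N}(\mu, \sigma^2)$ and the log map does not involve the parameters, the metric coincides with that of the Gaussian family. This standard result is stated here as context, not as the paper's derivation.

```latex
g(\mu,\sigma) = \begin{pmatrix} 1/\sigma^{2} & 0 \\ 0 & 2/\sigma^{2} \end{pmatrix},
\qquad
ds^{2} = \frac{d\mu^{2} + 2\,d\sigma^{2}}{\sigma^{2}} .
```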

Chengliang Zhong;Xiaodong Mu;Xiangchen He;Bichao Zhan;Ben Niu; "Classification for SAR Scene Matching Areas Based on Convolutional Neural Networks," vol.15(9), pp.1377-1381, Sept. 2018. The selection of scene matching areas is a difficult problem in the field of matching guidance. Compared with the traditional methods of matching feature extraction and pattern classification, this letter applies convolutional neural networks (CNNs) to the extraction of synthetic aperture radar (SAR) scene matching regions for the first time. First, we match SAR images of the same area taken by satellites from different angles and in different phases, and then automatically label the matching suitability of the images as the output of the network according to the matching results. Next, the digital elevation model data reflecting the elevation information and the SAR image grayscale information are fused as the input to the network. Finally, the CNN is used to automatically extract the matching features and classify the suitability of the SAR images. The proposed method avoids the steps of extracting features manually and improves the classification performance for SAR scene matching areas. Compared with the support vector machine method, the classification accuracy increases from 86.1% to 93.3%.

Qian-Ru Wei;Da-Zheng Feng; "Antistretch Edge Detector for SAR Images," vol.15(9), pp.1382-1386, Sept. 2018. In this letter, an antistretch edge detector is proposed for synthetic aperture radar (SAR) images. Traditional detectors using anisotropic edge detection filters often incur severe edge stretch. If isotropic edge detection filters are used, the detectors usually have poor edge resolution. Hence, we fuse an anisotropic edge detection filter with an isotropic one. By embedding the fused filter into the routine ratio-based SAR edge detector, an antistretch edge detector is obtained. Benefiting from the fused edge detection filter, the proposed edge detector achieves good antistretch ability while keeping a high edge resolution. A theoretical analysis indicates that the computational complexity of the proposed edge detector is close to that of conventional ratio-based edge detectors. The fusion operation does not affect the constant false alarm rate property. Receiver-operating-characteristic curves are used to objectively evaluate the proposed detector. Experimental studies show that the proposed detector has a lower false positive rate than the majority of detectors using only anisotropic or isotropic filters. Furthermore, the experimental results on simulated and real-world SAR images show that the proposed antistretch edge detector can obtain an accurate edge map.

Sourabh Paul;Umesh C. Pati; "A Block-Based Multifeature Extraction Scheme for SAR Image Registration," vol.15(9), pp.1387-1391, Sept. 2018. In this letter, a block-based multifeature extraction scheme is proposed to register synthetic aperture radar (SAR) images. With appropriate modifications, the scale-invariant feature transform (SIFT) and SAR-SIFT operators are used to extract two types of features, texture points and corner points, from the SAR images. The input images are divided into a certain number of blocks, and the two types of features are extracted from each block to obtain a uniform distribution of features. A novel scheme is presented to obtain these features in the same proportion from the input images. The proposed method has the advantages of proper control over the number of extracted features and a uniform feature distribution. A correct-match identification method based on a local searching algorithm is proposed to significantly increase the number of correct matches between the SAR images. Experiments on three pairs of multimodal and multitemporal SAR images demonstrate the effectiveness of the proposed method.

Weiping Ni;Long Ma;Weidong Yan;Han Zhang;Junzheng Wu; "Background Context-Aware-Based SAR Image Saliency Detection," vol.15(9), pp.1392-1396, Sept. 2018. Saliency, which captures the distinctive parts of an image, has shown good potential for many applications (e.g., image interpretation and target detection). In this letter, a novel saliency detection method based on background context awareness is proposed for synthetic aperture radar (SAR) images. According to a statistical analysis of SAR image characteristics, several reference background patches (RBPs) are selected. Then, the dissimilarities between the current patch and the RBPs are used to calculate the local saliency, which is further enhanced into the final saliency under a multiscale framework. Experimental results demonstrate the effectiveness of the proposed method, which outperforms some state-of-the-art methods.

Odysseas Pappas;Alin Achim;David Bull; "Superpixel-Level CFAR Detectors for Ship Detection in SAR Imagery," vol.15(9), pp.1397-1401, Sept. 2018. Synthetic aperture radar (SAR) is one of the most widely employed remote sensing modalities for large-scale monitoring of maritime activity. Ship detection in SAR images is a challenging task due to inherent speckle, discernible sea clutter, and the little exploitable shape information the targets present. Constant false alarm rate (CFAR) detectors, utilizing various sea clutter statistical models and thresholding schemes, are near ubiquitous in the literature. Very few of the proposed CFAR variants deviate from the classical CFAR topology; this letter proposes a modified topology, utilizing superpixels (SPs) in lieu of rectangular sliding windows to define CFAR guardbands and background. The aim is to achieve better target exclusion from the background band and reduced false detections. The performance of this modified SP-CFAR algorithm is demonstrated on TerraSAR-X and SENTINEL-1 images, achieving superior results in comparison to classical CFAR for various background distributions.
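
A sketch of the superpixel-CFAR topology under an assumed exponential clutter model: SLIC superpixels replace the sliding window, a dilated ring around each superpixel (minus a guard ring) supplies the clutter sample, and the threshold follows from the exponential tail. Segment counts, ring widths, and the clutter model are assumptions, not the paper's settings (requires scikit-image >= 0.19 for the channel_axis argument).

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic

def sp_cfar(img, pfa=1e-4, n_segments=500, guard=3, band=10):
    """Superpixel CFAR sketch: each superpixel is tested against clutter
    sampled from a ring around it; for exponential clutter with mean m,
    the threshold achieving P_fa = pfa is -m * ln(pfa)."""
    labels = slic(img, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    detections = np.zeros(img.shape, dtype=bool)
    for lab in np.unique(labels):
        mask = labels == lab
        guard_zone = ndimage.binary_dilation(mask, iterations=guard)
        background = ndimage.binary_dilation(guard_zone, iterations=band) & ~guard_zone
        threshold = -img[background].mean() * np.log(pfa)
        detections[mask] = img[mask] > threshold
    return detections
```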

Xinzheng Zhang;Yijian Wang;Dong Li;Zhiying Tan;Shujun Liu; "Fusion of Multifeature Low-Rank Representation for Synthetic Aperture Radar Target Configuration Recognition," vol.15(9), pp.1402-1406, Sept. 2018. In this letter, we propose a synthetic aperture radar (SAR) target configuration recognition algorithm based on the fusion of multifeature low-rank representations (LRRs). First, Gabor, principal component analysis, and wavelet features are extracted from the SAR training and test sets. Second, with the LRR model, each feature of the test samples is represented by those of the training set, leading to the corresponding coefficient matrix. Then, the preliminary prediction labels for all features of the test sample are obtained according to the LRR coefficients. Third, in order to further improve the confidence of recognition and reduce the instability of the algorithm, a two-stage decision fusion strategy is adopted to obtain the final prediction labels. The first stage applies vote fusion to the recognition results of multiaspect neighborhood test samples for each feature pattern, which exploits the strong correlation of these neighborhood samples. The second stage fuses the three results obtained in the first stage through Bayesian inference, which is widely used in decision fusion and improves the confidence of the results by about 3%. Experiments on the moving and stationary target acquisition and recognition data set demonstrate the effectiveness and superiority of the proposed algorithm.

Alejandro Mestre-Quereda;Juan M. Lopez-Sanchez;J. David Ballester-Berman;Pablo J. Gonzalez;Andrew Hooper;Tim J. Wright; "Evaluation of the Multilook Size in Polarimetric Optimization of Differential SAR Interferograms," vol.15(9), pp.1407-1411, Sept. 2018. The interferometric coherence is a measure of the correlation between two SAR images and constitutes a commonly used estimator of the phase quality. Its estimation requires a spatial average within a 2-D window, usually referred to as multilooking. The multilook processing reduces noise at the expense of a resolution loss. In this letter, we analyze the influence of the multilook size while applying a polarimetric optimization of the coherence. The same optimization algorithm has been carried out with different multilook sizes and also with the nonlocal SAR filter, which has the advantage of preserving the original resolution of the interferogram. Our experiments have been carried out with a single pair of quad-polarimetric RADARSAT-2 images mapping the Mount Etna volcanic eruption of May 2008. Results obtained with this particular data set show that the coherence increases notably with respect to conventional channels when small multilook sizes are employed, especially over low-vegetated areas. Conversely, very decorrelated areas benefit from larger multilook sizes but do not exhibit an additional improvement with the polarimetric optimization.
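
For reference, a minimal sketch of the boxcar multilook coherence estimator that the analysis above builds on, for two coregistered single-look complex images; the window side length is the "multilook size" whose influence is evaluated.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, looks=5):
    """Multilook (boxcar) coherence magnitude of two coregistered
    single-look complex images; `looks` is the window side length."""
    def box(x):  # boxcar average of a complex array
        return uniform_filter(x.real, looks) + 1j * uniform_filter(x.imag, looks)
    num = np.abs(box(s1 * np.conj(s2)))
    den = np.sqrt(uniform_filter(np.abs(s1)**2, looks) *
                  uniform_filter(np.abs(s2)**2, looks))
    return num / np.maximum(den, 1e-12)
```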

Raghu G. Raj;Christopher T. Rodenbeck;Ronald D. Lipps;Robert W. Jansen;Thomas L. Ainsworth; "A Multilook Processing Approach to 3-D ISAR Imaging Using Phased Arrays," vol.15(9), pp.1412-1416, Sept. 2018. This letter introduces novel processing structures enabling the formation of 3-D inverse synthetic aperture radar (ISAR) images from phased arrays. Unlike previous approaches to 3-D ISAR imaging, this approach uses a spatio-sensor multilook processing procedure that takes better advantage of both 1) the range-Doppler structure of the target and 2) the multiple phase center structure of the processing array. Simulation at both X-band and W-band demonstrates that the new 3-D imaging algorithm provides robust interferometric calculations for height estimation under noisy sensing conditions.

Bing Tu;Chengle Zhou;Wenlan Kuang;Longyuan Guo;Xianfeng Ou; "Hyperspectral Imagery Noisy Label Detection by Spectral Angle Local Outlier Factor," vol.15(9), pp.1417-1421, Sept. 2018. This letter presents hyperspectral imagery (HSI) noisy label detection using a spectral angle and local outlier factor (SALOF) algorithm. A noisy label is caused by a mislabeled training pixel; thus, noisy training samples mixing correct and incorrect labels are formed in supervised classification. The LOF algorithm is used here for the first time for noisy label detection in HSI to improve supervised classification accuracy. The proposed SALOF method mainly includes the following steps. First, the $k$ nearest neighbors of the training samples of each class are calculated based on the spectral angle mapper. Second, the reachability distance and local reachability density of all training samples are obtained. Third, the LOF is determined among the different classes of training samples. Then, a segmentation threshold of the LOF is established to obtain an abnormality probability for these training samples. Finally, support vector machines are applied to measure the detection efficiency of the proposed method. The experiments performed on the Kennedy Space Center data set demonstrate that the proposed method can effectively detect noisy labels.
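
A condensed sketch of the core of the pipeline above: pairwise spectral angles feed a precomputed-distance LOF, whose scores flag likely mislabeled training pixels. The thresholding and SVM verification stages are omitted, and the neighbor count and placeholder spectra are assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def spectral_angle_matrix(X):
    """Pairwise spectral angle (radians) between pixel spectra (rows)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    return np.arccos(cos)

def salof_scores(spectra, k=10):
    """LOF on spectral-angle distances; larger scores suggest noisy labels."""
    D = spectral_angle_matrix(spectra)
    lof = LocalOutlierFactor(n_neighbors=k, metric="precomputed")
    lof.fit(D)
    return -lof.negative_outlier_factor_   # values well above 1 are suspect

spectra = np.abs(np.random.default_rng(5).normal(size=(100, 176))) + 1e-6
scores = salof_scores(spectra)             # per-sample abnormality scores
```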

Lei Pan;Heng-Chao Li;Yong-Jian Sun;Qian Du; "Hyperspectral Image Reconstruction by Latent Low-Rank Representation for Classification," vol.15(9), pp.1422-1426, Sept. 2018. To effectively reduce the spectral variation that degrades classification performance, a novel low-rank subspace recovery method based on latent low-rank representation (LatLRR) is proposed for hyperspectral images in this letter. Different from robust principal component analysis, LatLRR explores the low-rank property from the perspective of the row space and column space simultaneously through low-rank regularization on their corresponding coefficient matrices. Following that, self-expressiveness-based reconstruction is adopted to recover the intrinsic data from the row and column spaces. A more accurate subspace structure can be preserved in both the spectral and spatial domains; meanwhile, the robustness to noise is improved. Experimental results on two hyperspectral data sets demonstrate the effectiveness of the proposed method.

Yuanchao Su;Andrea Marinoni;Jun Li;Javier Plaza;Paolo Gamba; "Stacked Nonnegative Sparse Autoencoders for Robust Hyperspectral Unmixing," vol.15(9), pp.1427-1431, Sept. 2018. As an unsupervised learning tool, the autoencoder has been widely applied in many fields. In this letter, we propose a new robust unmixing algorithm based on stacked nonnegative sparse autoencoders (NNSAEs) for hyperspectral data with outliers and a low signal-to-noise ratio. The proposed stacked autoencoder network contains two main steps. In the first step, a series of NNSAEs is used to detect the outliers in the data. In the second step, a final autoencoder performs the unmixing to obtain the endmember signatures and abundance fractions. By taking advantage of nonnegative sparse autoencoding, the proposed approach can effectively tackle problems with outliers and a low signal-to-noise ratio. The effectiveness of the proposed method is evaluated on both synthetic and real hyperspectral data. In comparison with other unmixing methods, the proposed approach demonstrates competitive performance.

Ming Yang;Changhe Li;Jing Guan;Xuesong Yan; "A Supervised-Learning $p$-Norm Distance Metric for Hyperspectral Remote Sensing Image Classification," vol.15(9), pp.1432-1436, Sept. 2018. Hyperspectral remote sensing images present rich information on the characteristics of different physical materials. Utilizing the rich information, classifiers can distinguish these different materials. The minimum distance technique, which is commonly used in classification, is sensitive to the distance metric, especially in high-dimensional space. In this letter, we study the effect of the $p$-norm distance metric on the minimum distance technique and propose a supervised-learning $p$-norm distance metric to optimize the value of $p$. In the experimental study, we take the minimum distance and the $k$-nearest neighbor classifiers as examples to test the proposed supervised-learning $p$-norm distance metric. The results suggest that the supervised-learning $p$-norm distance metric can improve the performance of a classifier for hyperspectral remote sensing image classification.
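
A toy sketch of the idea: a nearest-centroid (minimum distance) classifier under a Minkowski $p$-norm, with $p$ chosen by validation accuracy. The grid search stands in for the letter's supervised learning of $p$, and the fractional-$p$ range and placeholder data are assumptions.

```python
import numpy as np

def nearest_centroid_predict(X, centroids, p):
    """Minimum-distance labels; sum(|.|^p) is monotone in the p-norm,
    so it yields the same argmin without the final 1/p power."""
    d = (np.abs(X[:, None, :] - centroids[None, :, :]) ** p).sum(axis=2)
    return np.argmin(d, axis=1)

def learn_p(X_tr, y_tr, X_val, y_val, grid=np.linspace(0.25, 4.0, 16)):
    """Pick the p maximizing validation accuracy (grid-search stand-in)."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])
    accs = [np.mean(nearest_centroid_predict(X_val, centroids, p) == y_val)
            for p in grid]
    return grid[int(np.argmax(accs))]

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 30)); y = rng.integers(0, 3, 200)  # placeholders
p_best = learn_p(X[:100], y[:100], X[100:], y[100:])
```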

Keshav D. Singh; "Automated Spectral Mapping and Subpixel Classification in the Part of Thar Desert Using EO-1 Satellite Hyperion Data," vol.15(9), pp.1437-1440, Sept. 2018. To evaluate the application potential of hyperspectral imaging, automated spectral mapping and subpixel classification are used to classify EO-1 satellite Hyperion data. Once the endmembers are extracted through the conventional hyperspectral processing steps (namely, atmospheric correction, noise reduction, pixel purity, and n-dimensional visualization), these endmembers are resolved for constituent rock minerals. Subsequently, the image is classified for mineralogy and lithology and validated with the ground-truth data (the prevailing geological map).

Chun Wu;Qianqing Qin;Guorui Ma;Zhitao Fu;Bin Wu; "Improved Altitude Spatial Resection Algorithm for Oblique Photogrammetry," vol.15(9), pp.1441-1445, Sept. 2018. As a fundamental problem, spatial resection based on collinearity equations has been a subject of study since the beginning of photogrammetry. Owing to the nonlinearity of the collinearity equations, the standard Newton–Raphson method requires approximate values for the unknown parameters. Therefore, the method is inapplicable to modern oblique photogrammetry. In this letter, we propose a new altitude spatial resection algorithm that combines closed-form solutions and iterative solutions. In this algorithm, a set of homotopy algorithm solutions is first selected as the initial value on the basis of the minimum sum of squared errors. The quasi-Newton method is then used for nonlinear iteration. Experimental results obtained from different data sets indicate excellent performance compared with serial implementations.

Keiller Nogueira;Samuel G. Fadel;Ícaro C. Dourado;Rafael de O. Werneck;Javier A. V. Muñoz;Otávio A. B. Penatti;Rodrigo T. Calumby;Lin Tzy Li;Jefersson A. dos Santos;Ricardo da S. Torres; "Exploiting ConvNet Diversity for Flooding Identification," vol.15(9), pp.1446-1450, Sept. 2018. Flooding is the world's most costly type of natural disaster in terms of both economic losses and human casualties. A first and essential step toward flood monitoring is identifying the areas most vulnerable to flooding, which gives authorities relevant regions to focus on. In this letter, we propose several methods to perform flooding identification in high-resolution remote sensing images using deep learning. Specifically, some of the proposed techniques are based upon unique networks, such as dilated and deconvolutional ones, whereas others were conceived to exploit the diversity of distinct networks in order to extract the maximum performance from each classifier. The evaluation of the proposed methods was conducted on a high-resolution remote sensing data set. Results show that the proposed algorithms outperformed the state-of-the-art baselines, providing improvements ranging from 1% to 4% in terms of the Jaccard Index.

Grant J. Scott;Kyle C. Hagan;Richard A. Marcum;James Alex Hurt;Derek T. Anderson;Curt H. Davis; "Enhanced Fusion of Deep Neural Networks for Classification of Benchmark High-Resolution Image Data Sets," vol.15(9), pp.1451-1455, Sept. 2018. Accurate land cover classification and detection of objects in high-resolution electro-optical remote sensing imagery (RSI) have long been challenging tasks. Recently, important new benchmark data sets have been released which are suitable for land cover classification and object detection research. Here, we present state-of-the-art results for four benchmark data sets using a variety of deep convolutional neural networks (DCNNs) and multiple network fusion techniques. We achieve 99.70%, 99.66%, 97.74%, and 97.30% classification accuracies on the PatternNet, RSI-CB256, aerial image, and RESISC-45 data sets, respectively, using the Choquet integral with a novel data-driven optimization method presented in this letter. The relative reduction in classification errors achieved by this data-driven optimization is 25%–45% compared with the single best DCNN results.
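
A minimal sketch of the discrete Choquet integral used to fuse per-network confidences; the fuzzy measure is assumed given here, whereas the letter learns it with a data-driven optimization.

```python
import numpy as np

def choquet(scores, measure):
    """Discrete Choquet integral of per-network scores for one class.

    scores: 1-D array, one confidence per network.
    measure: dict mapping frozensets of network indices to fuzzy-measure
    values, with measure of the empty set 0 and of the full set 1.
    """
    order = np.argsort(scores)                 # ascending sort of scores
    result, prev = 0.0, 0.0
    for i, idx in enumerate(order):
        subset = frozenset(order[i:].tolist()) # networks scoring >= current
        result += (scores[idx] - prev) * measure[subset]
        prev = scores[idx]
    return result

# Two-network toy example with a hand-picked (assumed) fuzzy measure:
scores = np.array([0.7, 0.4])
g = {frozenset(): 0.0, frozenset({0}): 0.6, frozenset({1}): 0.5,
     frozenset({0, 1}): 1.0}
print(choquet(scores, g))   # 0.4*1.0 + (0.7-0.4)*0.6 = 0.58
```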

Zs. Koma;M. Rutzinger;M. Bremer; "Automated Segmentation of Leaves From Deciduous Trees in Terrestrial Laser Scanning Point Clouds," vol.15(9), pp.1456-1460, Sept. 2018. Recent improvements in topographic LiDAR technology allow the detailed characterization of individual trees at both branch and leaf scale, providing more accurate information to support phenological and ecological research. However, an effective methodology to map single leaves in 3-D is still missing. This letter presents a point cloud segmentation approach for single leaf detection and the derivation of selected morphological features (i.e., leaf area (LA), maximal leaf length, width, and slope) using terrestrial laser scanning. The developed approach consists of 1) filtering noise points; 2) region growing segmentation; 3) separating leaf and nonleaf segments; and 4) calculating leaf-morphological features. For the evaluation of the workflow, two deciduous trees were scanned. A selection of leaves from these trees was randomly harvested during the field campaign for comparison. A qualitative comparison analysis was carried out between the area of the harvested leaves and the LA derived from 3-D point cloud segmentation. In addition, a sensitivity analysis investigated the effect of the segmentation parameterization. This step revealed that the proposed segmentation algorithm is robust when using an optimum subset of parameter values. However, the determination of leaf outlines is limited due to the orientation of leaves relative to the scanner, shadow effects, and the inhomogeneity of the point cloud. The results underline the potential of region growing segmentation of point clouds for providing accurate information on single leaves and vegetation structure in greater detail. This facilitates improvements in applications such as estimating water balance, biomass, or the LA index.

Michal Petlicki; "Subglacial Topography of an Icefall Inferred From Repeated Terrestrial Laser Scanning," vol.15(9), pp.1461-1465, Sept. 2018. Information about ice thickness and subglacial topography is a key element for ice dynamics modeling and, consequently, for a better understanding of the current response of glaciers to climatic forcing. While there has been substantial progress in the measurement of glacial surfaces, ice thickness measurements still suffer from limited coverage and strong interpolation. Furthermore, there are some glaciated areas where typical remote sensing techniques provide only limited data due to unfavorable climatic conditions and steep topography. Perfect examples of such features are the icefalls dominating the landscapes of the maritime Antarctic. In order to close that gap, this letter presents a simple method of inferring the subglacial topography based on repeated terrestrial laser scanning of the ice surface and inverse shallow ice approximation modeling. Emerald Icefalls, King George Island, were surveyed twice within an eight-day period, allowing the ice surface velocity field to be derived by feature tracking analysis and the ice thickness and subglacial topography to be estimated. The calculated ice thickness is low, with a mean value of 72±18 m. Therefore, in line with former studies, the overall ice flux of Emerald Icefalls is small despite their relatively high surface ice flow velocities.

Preston Hartzell;Ziyue Dang;Zhigang Pan;Craig Glennie; "Radiometric Evaluation of an Airborne Single Photon Lidar Sensor," vol.15(9), pp.1466-1470, Sept. 2018. Lidar intensity is correlated with illuminated target physical properties, particularly target reflectance, making it a valuable quantity for applications such as land cover classification, data registration, structural damage detection, and qualitative point cloud interpretation. In contrast to traditional linear-mode lidar (LML) hardware, single photon lidar (SPL) detectors produce a binary response to impinging photons and therefore do not provide an intensity measure for each detected return. This is a significant drawback, but it can be addressed by computing a measure of local point cloud density for each point. Since photon arrival and detection are governed by statistics such that brighter surfaces are more likely to generate a detection event than darker surfaces, a local point cloud density metric can be used as a proxy for traditional LML intensity. We define the relationship between target reflectance and photon detection probability and compare the predicted relationship with empirical observations of ground reflectance and an estimate of detection probability generated from local point cloud density. We also examine a pulsewidth measure provided by the SPL sensor used for this letter, as well as the influence of the neighborhood radius on the variance of the probability estimates and a filtered version of the hardware-supplied pulsewidth.

Ken B. Cooper;Raquel Rodriguez Monje;Luis Millán;Matthew Lebsock;Simone Tanelli;Jose V. Siles;Choonsup Lee;Andrew Brown; "Erratum to “Atmospheric Humidity Sounding Using Differential Absorption Radar Near 183 GHz”," vol.15(9), pp.1471-1471, Sept. 2018. In [1, Fig. 5], the assumed atmospheric conditions were incorrectly stated for the curve showing the millimeter-wave propagation model of excess attenuation at frequencies with respect to 193 GHz. Rather than being for 91% relative humidity (RH) and 289 K, as was stated in [1], the model curve in [1, Fig. 5] was for 50% RH and 289 K. (The experimental measurements are unaffected.) The figure is reproduced here as Fig. 1(a), now with correct labeling.

Dongdong Zeng;Ming Zhu; "Erratum to “Multiscale Fully Convolutional Network for Foreground Object Detection in Infrared Videos”," vol.15(9), pp.1472-1472, Sept. 2018. The architecture of the proposed multiscale fully convolutional network (MFCN) in our paper [1] is mainly derived from a salient object method [2] and a semantic segmentation method [3]. We missed these two reference papers in the original paper. The MFCN is an encoder-decoder architecture; the output has the same size as the input. For the encoder part, we use the pretrained VGG16 network, and for the decoder part, we upsample the features with deconvolution operations. The contrast layer for feature extraction is derived from the salient detection method [2].

* "IEEE Geoscience and Remote Sensing Letters information for authors," vol.15(9), pp.C3-C3, Sept. 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Geoscience and Remote Sensing Letters Institutional Listings," vol.15(9), pp.C4-C4, Sept. 2018.* Presents a listing of institutional institutions relevant for this issue of the publication.

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing - new TOC (2018 September 20) [Website]

* "Frontcover," vol.11(9), pp.C1-C1, Sept. 2018.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Societys," vol.11(9), pp.C2-C2, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of Contents," vol.11(9), pp.2977-2978, Sept. 2018.* Presents the table of contents for this issue of the publication.

Lanjie Zhang;Xiaobin Yin;Zhenzhan Wang;Hao Liu;Mingsen Lin; "Preliminary Analysis of the Potential and Limitations of MICAP for the Retrieval of Sea Surface Salinity," vol.11(9), pp.2979-2990, Sept. 2018. A new payload concept has been proposed: the microwave imager combined active/passive (MICAP). MICAP combines a one-dimensional microwave interferometric radiometer operating at 1.4, 6.9, 18.7, and 23.8 GHz with an L-band (1.26 GHz) scatterometer. It has the capability to simultaneously and remotely sense sea surface salinity (SSS), sea surface temperature (SST), and wind speed. MICAP will be a candidate payload onboard the Ocean Salinity Satellite led by the State Oceanic Administration of China to monitor SSS and reduce the geophysical errors caused by surface roughness and SST. To provide an "all-weather" estimation of SSS with high accuracy from space, the errors of the simultaneous retrieval of multiple parameters using MICAP are analyzed, and the noise levels and stability requirements of the instruments are estimated. Preliminary analysis shows that MICAP can provide SSS with an accuracy of 1 psu for a single measurement, and of 0.1 psu over the global ocean for 200 × 200 km resolution pixels and one month at middle and low latitudes, with default instrument noises (0.1 K, 0.3 K, and 0.3 K for the L-, C-, and K-band radiometers, respectively, and 0.1 dB for the L-band scatterometer), while the uncertainties of the drift corrections are less than the radiometer sensitivities.

Xiaolin Bian;Yun Shao;Shiang Wang;Wei Tian;Xiaochen Wang;Chunyan Zhang; "Shallow Water Depth Retrieval From Multitemporal Sentinel-1 SAR Data," vol.11(9), pp.2991-3000, Sept. 2018. The Sentinel-1 constellation provides numerous high-resolution C-band synthetic aperture radar (SAR) data freely and with long-term continuity, offering a cost-effective solution for coastal monitoring at high or moderate spatial resolutions. The major goal is to improve estimates of shallow water depth for SAR applications. We present an algorithm that estimates shallow water depth from multitemporal SAR data with a short repeat cycle, based on the linear dispersion relation between water depth and swell parameters such as swell wavelength, direction, and period. This is accomplished via circular convolution and a Kalman filter that provides both the estimates and a measure of their uncertainty at each location. The introduced algorithm is tested on four Sentinel-1 interferometric wide swath (IW) mode SAR images over the coastal region of Fujian Province, China. The water depths retrieved both from multitemporal SAR images and from different single SAR images show general agreement with water depths from an official electronic navigational chart. All comparisons indicate that the proposed method is feasible and that multitemporal SAR data have great potential in bathymetric surveying.
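
The linear dispersion relation underlying the retrieval, w^2 = g*k*tanh(k*h), can be inverted in closed form for the depth h; a sketch, assuming swell wavelength and period have already been estimated from the SAR image spectrum:

```python
import numpy as np

def depth_from_swell(wavelength_m, period_s, g=9.81):
    """Invert w^2 = g*k*tanh(k*h) for water depth h, given swell
    wavelength (k = 2*pi/L) and period (w = 2*pi/T)."""
    k = 2.0 * np.pi / wavelength_m
    w = 2.0 * np.pi / period_s
    ratio = w**2 / (g * k)
    if ratio >= 1.0:            # deep-water limit: depth unconstrained
        return np.inf
    return np.arctanh(ratio) / k

# e.g., a 12 s swell shortened to a 140 m wavelength implies roughly 16 m depth:
print(depth_from_swell(140.0, 12.0))
```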

Onur Yuzugullu;Esra Erten;Irena Hajnsek; "Assessment of Paddy Rice Height: Sequential Inversion of Coherent and Incoherent Models," vol.11(9), pp.3001-3013, Sept. 2018. This paper investigates the evolution of the canopy height of rice fields over a complete growth cycle. For this purpose, copolar interferometric Synthetic Aperture Radar (Pol-InSAR) time series data were acquired during the large across-track baseline ($>$1 km) science phase of the TanDEM-X mission. The height of rice canopies is estimated by three different model-based approaches. The first approach evaluates the inversion of the Random Volume over Ground (RVoG) model. The second approach evaluates the inversion of a metamodel-driven electromagnetic backscattering model by including a priori morphological information. The third approach combines the previous two processes. The validation analysis was carried out using the Pol-InSAR and ground measurement data acquired between May and September 2015 over rice fields located in the Ipsala district of Edirne, Turkey. The results of the presented height estimation algorithms demonstrated the advantage of Pol-InSAR data. The combined RVoG model and EM metamodel height estimation approach provided rice canopy heights with errors of less than 20 cm over the complete growth cycle.

Hannah Joerg;Matteo Pardini;Irena Hajnsek;Konstantinos P. Papathanassiou; "Sensitivity of SAR Tomography to the Phenological Cycle of Agricultural Crops at X-, C-, and L-band," vol.11(9), pp.3014-3029, Sept. 2018. Understanding the impact of soil and plant parameter changes in agriculture on Synthetic Aperture Radar (SAR) measurements is of great interest when it comes to monitoring the temporal evolution of agricultural crops by means of SAR. In this regard, specific transitions between phenological stages in corn, barley, and wheat have been identified and associated with certain dielectric and geometric changes, based on a time series of fully polarimetric multibaseline SAR data and in situ measurements. The data have been acquired in the frame of DLR's CROPEX campaign on six dates between May and July 2014. The experiments reported in this paper address the sensitivity of X-, C-, and L-band to phenological transitions, exploiting the availability of multiple baselines on each acquisition date. The application of tomographic techniques enables the estimation of the three-dimensional (3-D) backscatter distribution and the separation of ground and volume scattering components. Tomographic parameters have been derived at the different frequencies, namely the center of mass of the profiles of the total and of the volume-only 3-D backscatter, and the ground and volume powers. Their sensitivity and ability to detect changes occurring on the ground and in the vegetation volume have been evaluated focusing on the added value provided by the 3-D resolution at the different frequencies and polarizations available.

Mohammad Rezaee;Masoud Mahdianpari;Yun Zhang;Bahram Salehi; "Deep Convolutional Neural Network for Complex Wetland Classification Using Optical Remote Sensing Imagery," vol.11(9), pp.3030-3039, Sept. 2018. The synergistic use of spatial features with the spectral properties of satellite images enhances thematic land cover information, which is of great significance for complex land cover mapping. Incorporating spatial features within the classification scheme has mainly been carried out by applying just low-level features, which have shown improvement in the classification result. By contrast, the application of high-level spatial features for classification of satellite imagery has been underrepresented. This study aims to address the lack of high-level features by proposing a classification framework based on a convolutional neural network (CNN) to learn deep spatial features for wetland mapping using optical remote sensing data. Designing a fully trained new convolutional network is infeasible due to the limited amount of training data in most remote sensing studies. Thus, we applied fine-tuning of a pre-existing CNN; specifically, AlexNet was used for this purpose. The classification results obtained by the deep CNN were compared with those of a well-known ensemble classifier, random forest (RF), to evaluate the efficiency of the CNN. Experimental results demonstrated that the CNN was superior to RF for complex wetland mapping, even though it incorporated a smaller number of input features (three) than RF (eight). The proposed classification scheme is the first attempt to investigate the potential of fine-tuning a pre-existing CNN for land cover mapping. It also serves as a baseline framework to facilitate further scientific research using state-of-the-art machine learning tools for processing remote sensing data.
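
A hedged PyTorch/torchvision sketch of the fine-tuning setup described above: an ImageNet-pretrained AlexNet with a replaced classification head. The number of wetland classes, the choice of frozen layers, and the learning rate are assumptions (torchvision >= 0.13 weights API).

```python
import torch
import torch.nn as nn
from torchvision import models

n_classes = 5                                       # assumed wetland classes
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False                     # keep convolutional filters
model.classifier[6] = nn.Linear(4096, n_classes)    # new wetland-class head
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()                   # standard training loop follows
```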

Frank Ahern;Brian Brisco;Kevin Murnaghan;Philip Lancaster;Donald K. Atwood; "Insights Into Polarimetric Processing for Wetlands From Backscatter Modeling and Multi-Incidence Radarsat-2 Data," vol.11(9), pp.3040-3050, Sept. 2018. We have observed unexpected results using the Freeman–Durden (FD) and other polarimetric decompositions in Radarsat-2 quad-pol data from many swamps in Eastern Ontario. In particular, the decompositions reported minimal backscatter from the double-bounce mechanism in a situation where there was compelling evidence that double-bounce backscatter contributed substantially to the return. This led to the hypothesis that the FD and similar models give erroneous results because of the physics of Fresnel reflection from wood, the lossy dielectric material that makes up the vertical reflecting surfaces in swamps. We found some support for this hypothesis in the literature, and now report on an extensive theoretical and observational investigation. This work has shown that the FD decomposition, and other decompositions that use the same logic, will often mistake double-bounce backscatter for single-bounce backscatter in wetlands. This is a consequence of the fundamental physics of Fresnel reflection, and it is important for users to be aware of this pitfall. Double-bounce backscatter from natural surfaces can be identified without recourse to polarimetric decomposition. The simplest, and most reliable, indicator of double-bounce backscatter is a high return in HH polarization; double-bounce backscatter will generally produce a higher return in HH than any other scattering mechanism. If both HH and VV polarizations are available, a high HH/VV intensity ratio is also a strong indicator of double-bounce backscatter. Additional modeling efforts are expected to provide further insights that can lead to improved applications of polarimetric data.
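
The simple indicator advocated above is directly computable from dual-pol intensities; a sketch, with the decision threshold being an assumption rather than a value from the paper:

```python
import numpy as np

def double_bounce_indicator(hh, vv, threshold_db=3.0):
    """Flag likely double-bounce areas by a high HH/VV intensity ratio,
    per the paper's recommendation; hh and vv are complex backscatter."""
    ratio_db = 10.0 * np.log10(np.abs(hh)**2 / np.maximum(np.abs(vv)**2, 1e-12))
    return ratio_db > threshold_db
```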

Xiaofeng Wang;Chaowei Zhou;Xiaoming Feng;Changwu Cheng;Bojie Fu; "Testing the Efficiency of Using High-Resolution Data From GF-1 in Land Cover Classifications," vol.11(9), pp.3051-3061, Sept. 2018. High-resolution remote sensing plays an important role in the study of subtle changes on the Earth's surface. The newly orbiting Chinese GF-1 satellites are designed to observe the Earth's surface on a regional scale; however, the satellites' efficiency requires further investigation. In this paper, the efficiency of using GF-1 01 satellite images to monitor a complex surface is tested by considering supplementary information and different land cover classification methods. Our work revealed that GF-1 satellite observations can efficiently detect land cover fragments. When the support vector machine method is applied, the overall classification accuracy based on multisource data reaches 90.5%. The "salt and pepper" phenomenon is effectively reduced in the classification images. These results also indicate that the accuracy of the GF-1 image classification is superior to the results obtained when using the same method with Landsat 8 and Sentinel-2A images, with the overall classification accuracy increasing by 23.6% and 13.6%, respectively. Our study suggests that GF-1 satellite observations are suitable for land cover studies of complex land surfaces. This approach can benefit various related fields such as land resource surveys, ecological assessments, and environmental evaluations.

Luciana O. Pereira;Corina C. Freitas;Sidnei J. S. Sant'Anna;Mariane S. Reis; "Evaluation of Optical and Radar Images Integration Methods for LULC Classification in Amazon Region," vol.11(9), pp.3062-3074, Sept. 2018. The main objective of this study is to evaluate different methods of integrating (fusing and combining) Synthetic Aperture Radar (SAR) Advanced Land Observing Satellite (ALOS) Phased Array L-band SAR (PALSAR-1) (Fine Beam Dual mode, FBD) and LANDSAT images, in order to identify those that lead to higher accuracy of land-use and land-cover (LULC) mapping in an agricultural frontier region in the Amazon. A method for integrating the multipolarized information in SAR images before the fusion process was also evaluated; in this method, the first principal component (PC1) of the SAR data was used. Color compositions of fused data that yielded better LULC classification were visually analyzed. Considering the proposed objective, the following fusion methods must be highlighted: Ehlers, wavelet à trous, Intensity-Hue-Saturation (IHS), and selective principal component analysis (SPC). These latter three methods gave good results when processed using the PC1 of the ALOS/PALSAR-1 FBD backscatter filtered image or three extracted and selected SAR features. These results corroborate the applicability of the proposed method for SAR data information integration. Distinct methods better discriminate different LULC classes. In general, densely forested classes were better characterized by the Ehlers_TM6 fusion method, in which at least the HV polarization was used. Intermediate and initial regeneration classes were better discriminated using SPC-fused data with the PC1 of ALOS/PALSAR-1 FBD data. Bare soil and pasture classes were better discriminated in optical features and the PC1 of ALOS/PALSAR-1 FBD data fused by the IHS method. Soybean approximately 40 days from seeding was better discriminated in image classification obtained from the ALOS/PALSAR-1 FBD image.

Declan D. G. Radford;Matthew J. Cracknell;Michael J. Roach;Grace V. Cumming; "Geological Mapping in Western Tasmania Using Radar and Random Forests," vol.11(9), pp.3075-3087, Sept. 2018. Mineral exploration and geological mapping of highly prospective areas in western Tasmania, southern Australia, is challenging due to steep topography, dense vegetation, and limited outcrop. Synthetic aperture radar (SAR) can potentially penetrate vegetation canopies and assist geological mapping in this environment. This study applies manual and automated lithological classification methods to airborne polarimetric TopSAR and geophysical data in the Heazlewood region, western Tasmania. Major discrepancies between classification results and the existing geological map generated fieldwork targets that led to the discovery of previously unmapped rock units. Manual analysis of radar image texture was essential for the identification of lithological boundaries. Automated pixel-based classification of radar data using Random Forests achieved poor results despite the inclusion of textural information derived from gray level co-occurrence matrices. This is because the majority of manually identified features within the radar imagery result from geobotanical and geomorphological relationships, rather than direct imaging of surficial lithological variations. Inconsistent relationships between geology and vegetation or geology and topography limit the reliability of TopSAR interpretations for geological mapping in this environment. However, Random Forest classifications, based on geophysical data and validated against manual interpretations, were accurate (∼90%) even when using limited training data (∼0.15% of total data). These classifications identified a previously unmapped region of mafic–ultramafic rocks, the presence of which was verified through fieldwork. This study validates the application of machine learning for geological mapping in remote and inaccessible localities but also highlights the limitations of SAR data in thickly vegetated terrain.
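A minimal sketch of the Random Forest stage the abstract reports, trained on a very small labeled fraction (~0.15% of pixels, as in the study); the feature layout, sampling scheme, and hyperparameters are illustrative assumptions.

```python
# Sketch: Random Forest lithology classification from stacked geophysical
# bands using a tiny random training subset, in the spirit of the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_lithology(geophys, labels, train_fraction=0.0015, seed=0):
    """geophys: (n_pixels, n_bands) feature array; labels: (n_pixels,) codes."""
    rng = np.random.default_rng(seed)
    n = geophys.shape[0]
    train_idx = rng.choice(n, size=max(1, int(n * train_fraction)),
                           replace=False)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(geophys[train_idx], labels[train_idx])
    return clf.predict(geophys)
```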

Hongbo Jiang;Qiang Li;Qisong Jiao;Xin Wang;Lie Wu; "Extraction of Wall Cracks on Earthquake-Damaged Buildings Based on TLS Point Clouds," vol.11(9), pp.3088-3096, Sept. 2018. Earthquakes often induce collapse or cause extreme damage to large areas of buildings. One of the most important requirements for earthquake emergency operations is staying up-to-date on the extent of structural damage in earthquake-stricken areas. Terrestrial laser scanning (TLS) technology can directly obtain the coordinates of mass points while maintaining high measurement accuracy, thereby providing the means to directly extract quantitative information from surface cracks on damaged buildings. In this paper, we present a framework for extracting wall cracks from high-density TLS point clouds. We first differentiate wall points from nonwall points in the TLS data. Then, a planar triangulation modeling method is used to construct a triangular irregular network (TIN) dataset, after which a raster surface is generated using an inverse distance weighting point cloud rasterization method based on the crack width. Finally, cracks are extracted based on their shape features. As an example of employing this method, we extract six sets of wall cracks from a damaged building wall in Beichuan County; the damage was caused by the Wenchuan earthquake. Quantitative calculations reveal that the extraction accuracy of the proposed method is greater than 91% and that the leakage (missed-extraction) rate is less than 10%. The main factor limiting the extraction accuracy is the crack width, that is, a wider crack results in a higher extraction accuracy. Moreover, the crack connectivity and leakage rate are negatively correlated, that is, a higher connectivity corresponds to a lower rate of missed extractions.

Zhiguo Meng;Shuo Hu;Tianxing Wang;Cui Li;Zhanchuan Cai;Jinsong Ping; "Passive Microwave Probing Mare Basalts in Mare Imbrium Using CE-2 CELMS Data," vol.11(9), pp.3097-3104, Sept. 2018. To evaluate their usefulness for studying lunar volcanism, data from the Chinese lunar exploration microwave sounder (CELMS) aboard the Chang'E-2 satellite were compared in this study with geologic results derived from optical and radar data of Mare Imbrium, which records the long-duration, final extensive phase of lunar volcanism. First, the normalized brightness temperature $T_B$ is generated to eliminate its strong latitude-dependent effect; the result shows a good correlation with titanium abundance. Moreover, the difference $\mathrm{d}T_B$ between $T_B$ at the same frequency at noon and at midnight is derived to evaluate the absorption features of the lunar regolith. According to the change of $\mathrm{d}T_B$ with frequency, a new geologic perspective is given for Mare Imbrium. The new interpretation map largely agrees with optical results in regions with high (FeO + TiO2) abundance (FTA), and with radar-based interpretation maps in regions with low FTA. The statistical results validate the three-phase volcanism in Mare Imbrium. This study also hints at the special importance of CELMS data for understanding the basaltic volcanism of the Moon.

Roghayeh Shamshiri;Hossein Nahavandchi;Mahdi Motagh; "Persistent Scatterer Analysis Using Dual-Polarization Sentinel-1 Data: Contribution From VH Channel," vol.11(9), pp.3105-3112, Sept. 2018. The regular acquisitions and relatively short revisit time of the Sentinel-1 satellite improve the capability of persistent scatterer interferometric synthetic aperture radar (PS-InSAR) as a geodetic method of choice for measuring ground surface deformation in space and time. The SAR instrument aboard the Sentinel-1 satellite supports operation in dual polarization (HH–HV, VV–VH), which can be used to increase the spatial density of measurement points through polarimetric optimization. This study evaluates the improvement in displacement mapping obtained by incorporating information from the VH channel of Sentinel-1 data into the PS-InSAR analysis. The method, which has shown great success with different polarimetric datasets, searches the available polarimetric space for a linear combination of polarization states that optimizes the amplitude dispersion index (ADI) PS selection criterion. We applied the method to a dataset of 50 dual-polarized (VV–VH) Sentinel-1 images over the city of Trondheim in Norway. The results show an overall increase of about 186% and 78% in the number of PS points with respect to the conventional VH and VV channels, respectively. The study concludes that, using ADI optimization, we can incorporate information from the VH channel into the PS-InSAR analysis, information that is otherwise lost due to its low amplitude.
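The ADI criterion used for PS candidate selection is the ratio of temporal amplitude standard deviation to temporal mean over a stack of coregistered SAR images; a minimal sketch, where the 0.25 threshold is a common rule of thumb rather than a value from the paper.

```python
# Sketch: amplitude dispersion index (ADI) over a coregistered SAR stack.
import numpy as np

def amplitude_dispersion(amplitude_stack):
    """amplitude_stack: (n_images, rows, cols) array of SAR amplitudes."""
    mean = amplitude_stack.mean(axis=0)
    std = amplitude_stack.std(axis=0)
    return std / np.maximum(mean, 1e-12)  # avoid division by zero

def ps_candidates(amplitude_stack, adi_threshold=0.25):
    """Low ADI indicates a temporally stable (persistent) scatterer."""
    return amplitude_dispersion(amplitude_stack) < adi_threshold
```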

Zhongle Ren;Biao Hou;Zaidao Wen;Licheng Jiao; "Patch-Sorted Deep Feature Learning for High Resolution SAR Image Classification," vol.11(9), pp.3113-3126, Sept. 2018. Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. Traditional SAR classification methods extract shallow, handcrafted features, which cannot subtly depict the abundant modal information in high resolution SAR images. Inspired by deep learning, an effective feature learning tool, a novel method called the patch-sorted deep neural network (PSDNN) is proposed to implement unsupervised discriminative feature learning. First, randomly selected patches are measured and sorted by a meticulously designed patch-sorting strategy that adopts instance-based prototype learning. Then the sorted patches are delivered to a well-designed dual-sparse autoencoder to obtain the desired weights in each layer. A convolutional neural network then extracts high-level spatial and structural features. Finally, the features are fed to a linear support vector machine to generate predicted labels. Experimental results on three large SAR images from different satellites confirm the effectiveness and generalization of our method. Compared with three traditional feature descriptors and four unsupervised deep feature descriptors, the features learned by the PSDNN show powerful discrimination, and the PSDNN achieves the desired classification accuracy and a good visual appearance.

Zhaocheng Wang;Lan Du;Hongtao Su; "Superpixel-Level Target Discrimination for High-Resolution SAR Images in Complex Scenes," vol.11(9), pp.3127-3143, Sept. 2018. Traditional synthetic aperture radar (SAR) target discrimination methods are implemented at the chip level, which may perform well in simple scenes but lose effectiveness in complex scenes. To improve discrimination performance in complex scenes, this paper proposes a superpixel-level target discrimination method that operates directly on high-resolution SAR images. The proposed method mainly contains three stages. First, based on the superpixel-level target detection results, we describe each superpixel via a multilevel and multidomain feature descriptor, which comprehensively reflects the differences between targets and clutter. Second, we employ a support vector machine as the discriminator to obtain the discriminated target superpixels. Finally, we cluster the discriminated target superpixels and extract the target chips from the original SAR image based on the clustering results. Experimental results on the miniSAR real SAR data show that the proposed discrimination method achieves an F1-score about 25% higher than traditional discrimination methods.

Shasha Mo;Jianwei Niu;Yanfei Wang; "A Novel Approach Based on the Instrumental Variable Method With Application to Airborne Synthetic Aperture Radar Imagery," vol.11(9), pp.3144-3154, Sept. 2018. The airborne synthetic aperture radar (SAR) system is an essential tool for modern remote sensing applications. The aircraft is easily affected by atmospheric turbulence, leading to deviations from the ideal track. To enable high-resolution imagery, a navigation system is usually mounted on the aircraft. Due to the limited accuracy of the navigation system, motion errors must also be estimated from the SAR raw data. In this paper, a novel motion compensation algorithm based on the instrumental variables (IV) method, called the IVA algorithm, is proposed. In this algorithm, double-derivative motion errors are estimated without assuming that the random disturbances follow a zero-mean Gaussian distribution or arise from mutually independent noise, which makes the estimation more robust and accurate for focusing SAR images. Before motion error estimation, a Savitzky–Golay filter is applied to reduce the phase estimation errors, where the phase is obtained by the phase gradient autofocus algorithm. Finally, the estimated motion errors are used to compensate the received signal with a range-dependent model. The IVA algorithm is validated using real airborne SAR data, and experimental results show that the proposed algorithm achieves excellent performance in airborne SAR systems.
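The Savitzky–Golay smoothing step mentioned above maps directly onto SciPy; a minimal sketch, where the window length and polynomial order are illustrative assumptions, not the authors' settings.

```python
# Sketch: smoothing a 1-D azimuth phase-error estimate (e.g., from the
# phase gradient autofocus algorithm) with a Savitzky-Golay filter
# before motion-error estimation.
import numpy as np
from scipy.signal import savgol_filter

def smooth_phase_estimate(phase, window_length=51, polyorder=3):
    """phase: 1-D array; window_length must be odd and <= len(phase)."""
    return savgol_filter(phase, window_length, polyorder)
```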

Aifang Liu;Fan Wang;Hui Xu;Liechen Li; "N-SAR: A New Multichannel Multimode Polarimetric Airborne SAR," vol.11(9), pp.3155-3166, Sept. 2018. Recent years have seen a surging interest in several novel tools pertaining to SAR, such as multichannel high-resolution and wide-swath (HRWS) SAR, multibaseline interferometric SAR (InSAR), multisubband SAR, polarimetric SAR (PolSAR), and polarimetric SAR interferometry (PolInSAR). We believe that these new approaches to SAR have valuable scientific applications. Here, we present a new experimental airborne SAR system named "N-SAR" (SAR of the Nanjing Research Institute of Electronic Technology) that can fulfill new requirements and is scalable to allow rapid development of modern SARs. As a dual-antenna airborne SAR system, the N-SAR system will be used to test new technologies and signal processing algorithms such as PolSAR and PolInSAR, multibaseline SAR interferometry, multichannel HRWS SAR/InSAR, and SAR ground moving target indication. It will play a key role in evaluating the performance of current engineering-oriented SAR systems by using several new operational modes for scientific purposes. In this paper, we provide a conceptual description of the general system design features, instrument design, and capabilities of the N-SAR system. To meet the requirements of different experiments, a novel operational mode based on the N-SAR system, named the alternating bistatic multipolarized mode, is presented. A series of flight tests that started in April 2017 and will continue over the next few years is described. Several preliminary experimental results pertaining to the calibration of PolSAR, multichannel SAR imaging, and interferometry are presented in this paper as an early validation of the capabilities of the N-SAR system.

Huajian Xu;Zhiwei Yang;Pengyuan He;Guisheng Liao;Min Tian;Penghui Huang; "A Multifeature Autosegmentation-Based Approach for Inshore Ambiguity Identification and Suppression With Azimuth Multichannel SAR Systems," vol.11(9), pp.3167-3178, Sept. 2018. To address the problems of locating and suppressing inshore azimuth ambiguous clutter in azimuth multichannel synthetic aperture radar (SAR) systems, a multifeature autosegmentation-based approach is developed in this paper. The proposed method can segment a SAR image automatically according to the distinctions among main land clutter, ambiguous land clutter, and sea clutter in the interferogram's phase and magnitude features. First, a finite mixture clutter model for the multilook covariance matrix (MLCM) is built, where the off-diagonal elements of the MLCM contain the magnitude and interferometric phase information between azimuth channels. Then, SAR image autosegmentation is carried out by applying the expectation maximization algorithm to the aforementioned mixture model, and isolated points that are segmented incorrectly are eliminated by exploiting a Markov random field smoothing technique. Finally, azimuth ambiguous clutter is suppressed by means of the clutter covariance matrix, which is constructed from training samples of the segmented ambiguities. Experiments on simulated data and real data measured by TerraSAR-X demonstrate that the proposed approach can obtain more accurate position information and good cancellation performance for the azimuth ambiguous clutter, without requiring accurate system parameters or information about the sources of the azimuth ambiguities.

Alim Samat;Claudio Persello;Sicong Liu;Erzhu Li;Zelang Miao;Jilili Abuduwaili; "Classification of VHR Multispectral Images Using ExtraTrees and Maximally Stable Extremal Region-Guided Morphological Profile," vol.11(9), pp.3179-3195, Sept. 2018. Pixel-based contextual classification methods, including morphological profiles (MPs), extended MPs, attribute profiles (APs), and MPs with partial reconstruction (MPPR), have shown the benefits of using geometrical features extracted from very-high resolution (VHR) images. However, the structural element sequence or the attribute filters that are necessarily adopted in the above solutions always result in computationally inefficient and redundant high-dimensional features. To solve the second problem, we introduce maximally stable extremal regions (MSER) guided MPs (MSER_MPs) and MSER_MPs(M), which contains mean pixel values within regions, to foster effective and efficient spatial feature extraction. In addition, the extremely randomized decision tree (ERDT) and its ensemble version, ExtraTrees, are introduced and investigated. An extremely randomized rotation forest (ERRF) is proposed by simply replacing the conventional C4.5 decision tree in a rotation forest (RoF) with an ERDT. Finally, the proposed spatial feature extractors, ERDT, ExtraTrees, and ERRF are evaluated for their ability to classify three VHR multispectral images acquired over urban areas, and compared against C4.5, Bagging(C4.5), random forest, support vector machine, and RoF in terms of classification accuracy and computational efficiency. The experimental results confirm the superior performance of MSER_MPs(M) and MSER_MPs compared to MPPR and MPs, respectively, and ExtraTrees is better for spectral-spatial classification of VHR multispectral images using the original spectra stacked with MSER_MPs(M) features.
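A minimal sketch of the ExtraTrees stage the abstract evaluates, assuming the spectra have already been stacked with the MSER-guided morphological-profile features; the hyperparameters are illustrative, not the authors' settings.

```python
# Sketch: ExtraTrees classification of stacked spectral + spatial features.
from sklearn.ensemble import ExtraTreesClassifier

def fit_extratrees(X_train, y_train, n_trees=100, seed=0):
    """X_train: (n_pixels, n_features) spectra stacked with MSER_MPs(M)
    features; y_train: (n_pixels,) class labels."""
    clf = ExtraTreesClassifier(n_estimators=n_trees, random_state=seed,
                               n_jobs=-1)  # parallelize over trees
    clf.fit(X_train, y_train)
    return clf
```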

Yong Yang;Lei Wu;Shuying Huang;Yingjun Tang;Weiguo Wan; "Pansharpening for Multiband Images With Adaptive Spectral–Intensity Modulation," vol.11(9), pp.3196-3208, Sept. 2018. The pansharpening algorithm often faces an imbalance between spatial sharpness and spectral preservation, resulting in spectral and intensity inhomogeneities in the fused image. In this paper, to overcome this problem, we present a robust pansharpening method for multiband images with adaptive spectral–intensity modulation. In this method, we propose an adaptive spectral modulation coefficient (ASMC) and an adaptive intensity modulation coefficient (AIMC) to modulate the spectral and spatial information in the fused image, respectively. Among these coefficients, the ASMC is constructed based on two aspects: first, the details extracted from the panchromatic (PAN) and multispectral (MS) images; and second, the spectral relationship between each MS band. The AIMC is calculated by assessing the correlation and standard deviation between the PAN image and each MS band. Finally, we propose a mathematically linear model to combine ASMC and AIMC to achieve the fused image. Various remote-sensing satellite images were used in the evaluations. Experimental results indicate that the proposed method achieves outstanding performance in balancing spatial and spectral information and outperforms several state-of-the-art fusion methods in terms of both full-reference and no-reference metrics, and on visual inspection.

Huifang Shen;Biao Hou;Zaidao Wen;Licheng Jiao; "Structural-Correlated Self-Examples Based Superresolution of Single Remote Sensing Image," vol.11(9), pp.3209-3223, Sept. 2018. Image superresolution methods are of great importance to image analysis and interpretation and have been intensively studied and widely applied. The main research issues in single-image superresolution are how to construct the training image database and how to learn the mapping between low- and high-resolution images. In this paper, a novel superresolution method based on self-examples learning is proposed that uses only the single input image, without depending on any external training images. The training self-examples are extracted from gradually degraded versions of the test image and their corresponding interpolated counterparts to build internal high- and low-resolution training databases. Inspired by the concept of "coarse-to-fine," the upscaling is likewise performed gradually. The algorithm includes two steps in each upscaling procedure. For each low-resolution patch, the first step finds structural-correlated patches by sparse representation throughout the training database to learn a global linear mapping function between low- and high-resolution image patches, without any assumption on the data, and the second step takes advantage of sparse representation as a local constraint on the superresolution result. In each upscaling procedure, iterative back projection is applied to guarantee the consistency of the estimated image. Moreover, the internal training database is updated with each newly generated upscaled image. Experiments show that the proposed algorithm achieves good performance in peak signal-to-noise ratio and structural similarity index and produces excellent visual effects compared with other superresolution methods.

Ya'nan Zhou;Yuehong Chen;Li Feng;Xin Zhang;Zhanfeng Shen;Xiaocheng Zhou; "Supervised and Adaptive Feature Weighting for Object-Based Classification on Satellite Images," vol.11(9), pp.3224-3234, Sept. 2018. The object-based image analysis (OBIA) technique has come to represent an evolving paradigm in remote sensing applications as more high-resolution satellite images become available. However, the many features derived from segmented objects also present a new challenge for OBIA applications. In this paper, we present a supervised and adaptive method for ranking and weighting features for object-based classification. The core of this method is the feature weight map for each land type, derived from prior thematic maps and the corresponding satellite images of the study areas. Specifically, first, the satellite images to be classified are segmented using an adaptive multiscale algorithm, and multiple (spectral, shape, and texture) features of the segmented objects are calculated. Second, we extract distance maps and feature weight vectors for each land type from the prior thematic maps and corresponding satellite images to generate the feature weight maps. Third, a feature-weighted classifier using the feature weight maps is applied to the segmented objects to generate classification maps. Finally, the classification result is evaluated. This approach is applied to a Sentinel-2 multispectral satellite image and a Google Map image to produce object-based classification maps, and it is compared with traditional feature selection algorithms. The experimental results illustrate that the proposed method efficiently selects important features and improves classification performance.

Li Yan;Ruixi Zhu;Yi Liu;Nan Mo; "TrAdaBoost Based on Improved Particle Swarm Optimization for Cross-Domain Scene Classification With Limited Samples," vol.11(9), pp.3235-3251, Sept. 2018. Scene classification is usually based on supervised learning, but collecting instances is expensive and time-consuming. TrAdaBoost has achieved great success in transferring source instances to target images, but it has problems such as an excessive focus on instances that are harder to classify, the rapid convergence of the source-instance weights, and the weight mismatch caused by the large gap between the numbers of source and target instances, all of which decrease classification accuracy. In this paper, to address these problems, classical particle swarm optimization (PSO) is modified to select the optimal feature subspace for classifying the "harder" and "easier" instances by reducing unimportant dimensions. A modified correction factor is proposed that considers the classification accuracy of instances from both domains, to decrease the convergence speed. Iterative selective TrAdaBoost is also proposed to reduce the weight mismatch by removing indiscriminate source instances. The experimental results obtained on three benchmark data sets confirm that the proposed method outperforms most previous methods of scene classification with limited target samples. It is also shown that modified PSO for optimal feature subspace selection, the modified correction factor, and iterative selective TrAdaBoost are all effective in improving the classification accuracy, giving improvements of 3.6%, 4.3%, and 2.7%, respectively; together, these three contributions increase the classification accuracy by about 8%.

Bo Yu;Lu Yang;Fang Chen; "Semantic Segmentation for High Spatial Resolution Remote Sensing Images Based on Convolution Neural Network and Pyramid Pooling Module," vol.11(9), pp.3252-3261, Sept. 2018. Semantic segmentation provides a practical way to segment remotely sensed images into multiple ground objects simultaneously, and it can potentially be applied to many remote sensing tasks. Current classification algorithms for remotely sensed images are mostly limited by differing imaging conditions; multiple ground objects are difficult to separate from each other due to high intraclass spectral variance and interclass spectral similarity. In this study, we propose an end-to-end framework to semantically segment high-resolution aerial images without postprocessing to refine the segmentation results. The framework provides a pixel-wise segmentation result and comprises a convolutional neural network structure and a pyramid pooling module, which aims to extract feature maps at multiple scales. The proposed model is applied to the ISPRS Vaihingen benchmark dataset from the ISPRS 2D Semantic Labeling Challenge. Its segmentation results are compared with the previous state-of-the-art methods UZ_1 and UPB, and with three other methods that segment images into objects of all classes (including clutter/background) based on true orthophoto tiles; to the best of our knowledge, it achieves the highest overall accuracy, 87.8%, among the published performances. The results validate the efficiency of the proposed model in segmenting multiple ground objects from remotely sensed images simultaneously.
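A minimal sketch of a pyramid pooling module in the spirit of the one the abstract pairs with a CNN backbone; the bin sizes and channel arithmetic are illustrative assumptions rather than the authors' architecture.

```python
# Sketch: pyramid pooling module that pools features at several grid
# sizes, projects them, and concatenates the upsampled results with the
# original feature map to capture multiscale context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_channels, bins=(1, 2, 3, 6)):
        super().__init__()
        out_c = in_channels // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),            # pool to a b x b grid
                nn.Conv2d(in_channels, out_c, 1),   # reduce channels
                nn.ReLU(inplace=True),
            )
            for b in bins
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(stage(x), size=(h, w),
                          mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        # concatenate multiscale context with the original feature map
        return torch.cat([x] + pooled, dim=1)
```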

Renlong Hang;Qingshan Liu; "Dimensionality Reduction of Hyperspectral Image Using Spatial Regularized Local Graph Discriminant Embedding," vol.11(9), pp.3262-3271, Sept. 2018. Dimensionality reduction (DR) is an important preprocessing step for hyperspectral image (HSI) classification. Recently, graph-based DR methods have been widely used. Among various graph-based models, the local graph discriminant embedding (LGDE) model has shown its effectiveness due to the complete use of label information. Besides spectral information, an HSI also contains rich spatial information. In this paper, we propose a regularization method to incorporate the spatial information into the LGDE model. Specifically, an oversegmentation method is first employed to divide the original HSI into nonoverlapping superpixels. Then, based on the observation that pixels in a superpixel often belong to the same class, intraclass graphs are constructed to describe such spatial information. Finally, the constructed superpixel-level intraclass graphs are used as a regularization term, which can be naturally incorporated into the LGDE model. Besides, to sufficiently capture the nonlinear property of an HSI, the linear LGDE model is further extended into its kernel counterpart. To demonstrate the effectiveness of the proposed method, experiments have been established on three widely used HSIs acquired by different hyperspectral sensors. The obtained results show that the proposed method can achieve higher classification performance than many state-of-the-art graph embedding models, and the kernel extension model can further improve the classification performance.

Zeynep Gündoğar;Behçet Uğur Töreyin;Metin Demiralp; "Tridiagonal Folmat Enhanced Multivariance Products Representation Based Hyperspectral Data Compression," vol.11(9), pp.3272-3278, Sept. 2018. Hyperspectral imaging is an important topic in remote sensing and its applications. The requirement to collect high volumes of hyperspectral data in remote sensing algorithms poses a compression problem. To this end, many techniques and algorithms have been developed and continue to be improved in the scientific literature. In this paper, we propose a recently developed lossy compression method called tridiagonal folded matrix enhanced multivariance products representation (TFEMPR). This is a specific multidimensional array decomposition method that uses a new mathematical concept called the "folded matrix" and provides a binary decomposition for multidimensional arrays. Besides the method, a comparative analysis of compression algorithms in terms of compression performance is presented in this paper. The compression performance of TFEMPR is compared with state-of-the-art methods such as compressive-projection principal component analysis, matching pursuit, and block compressed sensing, via the average peak signal-to-noise ratio. Experiments with the AVIRIS data set indicate a superior reconstructed image quality for the proposed technique in comparison to state-of-the-art hyperspectral data compression methods.
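The average peak signal-to-noise ratio used for the comparison above can be computed per band and averaged; a minimal sketch, assuming the cube is organized as (bands, rows, cols) and the peak value is taken from the original data.

```python
# Sketch: average PSNR across the bands of a hyperspectral cube.
import numpy as np

def average_psnr(original, reconstructed, peak=None):
    """original, reconstructed: (bands, rows, cols) hyperspectral cubes."""
    if peak is None:
        peak = float(original.max())  # assume data peak as reference
    psnrs = []
    for band_o, band_r in zip(original, reconstructed):
        mse = np.mean((band_o.astype(float) - band_r.astype(float)) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / max(mse, 1e-12)))
    return float(np.mean(psnrs))
```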

Mark Berman;Zhipeng Hao;Glenn Stone;Yi Guo; "An Investigation Into the Impact of Band Error Variance Estimation on Intrinsic Dimension Estimation in Hyperspectral Images," vol.11(9), pp.3279-3296, Sept. 2018. There have been a significant number of recent papers about hyperspectral imaging, which propose various methods for estimating the number of materials/endmembers in hyperspectral images. This is sometimes called the “intrinsic” dimension (ID) of the image. Estimation of the error variance in each spectral band is a critical first step in ID estimation. The estimated error variances can then be used to preprocess (e.g., whiten) the data, prior to ID estimation. A range of variance estimation methods have been advocated in the literature. We investigate the impact of five variance estimation methods (three using spatial information and two using spectral information) on five ID estimation methods, with the aid of four different, but semirealistic, sets of simulated hyperspectral images. Our findings are as follows: first, for all four sets, the two spectral variance estimation methods significantly outperform the three spatial methods; second, when used with the spectral variance estimation methods, two of the ID estimation methods (called random matrix theory and NWHFC) consistently outperform the other three ID estimation methods; third, the better spectral variance estimation method sometimes gives negative variance estimates; fourth, we introduce a simple correction that guarantees positivity; and fifth, we give a fast algorithm for its computation.

Jie Feng;Liguo Liu;Xianghai Cao;Licheng Jiao;Tao Sun;Xiangrong Zhang; "Marginal Stacked Autoencoder With Adaptively-Spatial Regularization for Hyperspectral Image Classification," vol.11(9), pp.3297-3311, Sept. 2018. The stacked autoencoder (SAE) provides excellent performance for image processing given sufficient training samples. However, the collection of training samples is difficult for hyperspectral images. Insufficient training samples easily make an SAE overfit and limit the application of SAEs to hyperspectral images. To address this problem, a novel marginal SAE with adaptively-spatial regularization (ARMSAE) is proposed for hyperspectral image classification. First, a superpixel segmentation method is used to divide the image into many homogeneous regions. Then, at the pretraining stage, an adaptively-shaped spatial regularization is introduced to extract contextual information from samples in the homogeneous regions; it sufficiently utilizes unlabeled adjacent samples to alleviate the lack of training samples. At the fine-tuning stage, marginal samples selected by geometrical properties are used to tune the ARMSAE network. The fine-tuning exploits a margin strategy to alleviate the inaccurate statistical estimation caused by insufficient training samples. Finally, the label of each test sample is determined by all samples located in the same homogeneous region. Experimental results on hyperspectral images demonstrate that the proposed method provides encouraging classification performance compared with several related state-of-the-art methods.

Shutao Li;Qiaobo Hao;Xudong Kang;Jón Atli Benediktsson; "Gaussian Pyramid Based Multiscale Feature Fusion for Hyperspectral Image Classification," vol.11(9), pp.3312-3324, Sept. 2018. In this paper, we propose a segmented principal component analysis (SPCA) and Gaussian pyramid decomposition based multiscale feature fusion method for the classification of hyperspectral images. First, considering the band-to-band cross correlations of objects, the SPCA method is utilized for the spectral dimension reduction of the hyperspectral image. Then, the dimension-reduced image is decomposed into several Gaussian pyramids to extract the multiscale features. Next, the SPCA method is performed again to compute the fused SPCA based Gaussian pyramid features (SPCA-GPs). Finally, the performance of the SPCA-GPs is evaluated using the support vector machine classifier. Experiments performed on three widely used hyperspectral images show that the proposed SPCA-GPs method outperforms several compared classification methods in terms of classification accuracies and computational cost.
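A minimal sketch of the Gaussian-pyramid stage of the pipeline above: each dimension-reduced band is repeatedly blurred and downsampled, then resized back so the scales can be stacked as multiscale features. The number of levels is an illustrative assumption.

```python
# Sketch: multiscale features from a Gaussian pyramid of one SPCA component.
import cv2
import numpy as np

def gaussian_pyramid_features(band, levels=3):
    """band: 2-D float32 array. Returns a (levels + 1, H, W) feature stack."""
    h, w = band.shape
    features = [band]
    current = band
    for _ in range(levels):
        current = cv2.pyrDown(current)           # blur + 2x downsample
        restored = cv2.resize(current, (w, h),   # back to original size
                              interpolation=cv2.INTER_LINEAR)
        features.append(restored)
    return np.stack(features)
```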

Yuan Fang;Linlin Xu;Junhuan Peng;Honglei Yang;Alexander Wong;David A. Clausi; "Unsupervised Bayesian Classification of a Hyperspectral Image Based on the Spectral Mixture Model and Markov Random Field," vol.11(9), pp.3325-3337, Sept. 2018. Typical unsupervised classification of hyperspectral imagery (HSI) uses a Gaussian mixture model to determine intensity similarity of pixels. However, the existence of mixed pixels in HSI tends to reduce the effectiveness of the similarity measure and leads to large classification errors. Since a semantic class is always dominated by a particular endmember, a mixed pixel can be better classified by identifying the dominant endmember. By exploiting the spectral mixture model (SMM) that describes the endmember-abundance pattern of mixed pixels, the discriminative ability of HSI can be enhanced. A Bayesian classification approach is presented for spatial–spectral HSI classification, where the data likelihood is built upon the SMM, and the label prior is based on a Markov random field (MRF). The new approach has three key characteristics. First, instead of using intensity similarity, the new approach uses the abundance-endmember pattern of each pixel and classifies a pixel by its dominant endmember. Second, to integrate the SMM into a Bayesian framework, a data likelihood is designed based on the SMM to reflect the influence of the dominant endmember on the conditional distribution of the mixed pixel given the class label. Third, the resulting maximum a posteriori problem is solved by the expectation–maximization (EM) algorithm, in which the E-step adopts a graph-cut approach to estimate the class labels, and the M-step adopts a purified-means approach to estimate the endmembers. Experiments on both simulated and real HSIs demonstrate that the proposed method can exploit the spatial–spectral information of HSI to achieve high accuracy in unsupervised classification of HSI.

Artem V. Nikonorov;Maksim V. Petrov;Sergei A. Bibikov;Pavel Y. Yakimov;Viktoriya V. Kutikova;Yuriy V. Yuzifovich;Andrey A. Morozov;Roman V. Skidanov;Nikolay L. Kazanskiy; "Toward Ultralightweight Remote Sensing With Harmonic Lenses and Convolutional Neural Networks," vol.11(9), pp.3338-3348, Sept. 2018. In this paper, we describe our advances in manufacturing a 256-layer 7-μm thick harmonic lens with 150 and 300 mm focal distances combined with color correction, deconvolution, and a feedforwarding deep learning neural network capable of producing images approaching photographic visual quality. While reconstruction of images taken with diffractive optics was presented in previous works, this paper is the first to use deep neural networks during the restoration step. The level of imaging quality we achieved with our imaging system can facilitate the emergence of ultralightweight remote sensing cameras for nano- and pico-satellites, and for aerial remote sensing systems onboard small UAVs and solar-powered airplanes.

Wei Li;Cheng Wang;Dawei Zai;Pengdi Huang;Weiquan Liu;Chenglu Wen;Jonathan Li; "A Volumetric Fusing Method for TLS and SFM Point Clouds," vol.11(9), pp.3349-3357, Sept. 2018. A terrestrial laser scanning (TLS) point cloud acquired from a given ground view is incomplete because of severe occlusion and self-occlusion. The models reconstructed by aligning the cross-source point clouds [TLS and structure-from-motion (SFM) point clouds] provide a more complete large-scale outdoor scene. However, because of differences in nonrigid deformation, stratified redundancy of alignment is inevitable and ubiquitous. Therefore, this paper presents a volumetric fusing method for cross-source three-dimensional reconstructions. To eliminate the stratification of aligned cross-source point clouds, we propose a graph-cuts method with boundary constraints for blending the two cross-source point clouds. Then, to reduce the gaps that exist in the blending results, we develop a progressive migration method combined with the local average direction of normal vectors to smooth the unconnected boundary. Finally, experimental results demonstrate the effectiveness of eliminating stratification with the proposed blending algorithm, and the progressive migration method achieves a smooth connection in the boundary of the blended point clouds.

Xinqu Chen;Chengming YE;Jonathan Li;Michael A. Chapman; "Quantifying the Carbon Storage in Urban Trees Using Multispectral ALS Data," vol.11(9), pp.3358-3365, Sept. 2018. This paper presents a new method for quantifying the carbon storage in urban trees using multispectral airborne laser scanning (ALS) data. The method takes full advantage of multispectral ALS range and intensity data and shows the feasibility of quantifying the carbon storage in urban trees. Our method consists of four steps: multispectral ALS data processing, vegetation isolation, dendrometric parameter estimation, and carbon storage modeling. Our results suggest that ALS-based dendrometric parameter estimation and allometric models can yield consistent performance and accurate estimates. A citywide carbon storage estimate is derived in this paper for the Town of Whitchurch–Stouffville, Ontario, Canada, by extrapolating the values within the study area to the entire town based on the specific proportion of each land cover type. The proposed method reveals the potential of multispectral ALS data for land cover mapping and carbon storage estimation at the individual-tree level.
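The final modeling step converts ALS-derived dendrometric parameters to carbon via allometry; a minimal sketch using a generic power-law biomass model, where the coefficients and the 0.5 biomass-to-carbon ratio are illustrative assumptions, not the paper's species-specific models.

```python
# Sketch: per-tree carbon from ALS-derived diameter and height estimates.
def tree_carbon_kg(dbh_cm, height_m, a=0.05, b=2.0, c=1.0):
    """dbh_cm: diameter at breast height; a, b, c: hypothetical coefficients."""
    biomass_kg = a * (dbh_cm ** b) * (height_m ** c)  # generic allometry
    return 0.5 * biomass_kg  # carbon commonly taken as ~50% of dry biomass
```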

Vyron Christodoulou;Yaxin Bi;George Wilkie; "A Fuzzy Shape-Based Anomaly Detection and Its Application to Electromagnetic Data," vol.11(9), pp.3366-3379, Sept. 2018. The problem of data analytics in real-world electromagnetic (EM) applications poses many algorithmic constraints. The processing of big datasets, the requirement for prior knowledge, the unknown location of anomalies, and variable-length patterns are all issues that need to be addressed. We address these issues by proposing a fuzzy shape-based anomaly detection method. The method is evaluated against 12 benchmark datasets containing different kinds of anomalies and provides promising results based on a new performance metric that takes into account the distance between predicted and actual anomalies. Real-world EM data on the Earth's magnetic field are provided by the SWARM satellite constellation for regions in China, Greece, and Peru. The seismic events that occurred in those regions are compared against the SWARM data. Moreover, three other methods (GrammarViz, HOT-SAX, and CUSUM-EWMA) are also applied to further investigate the possible linkage of EM anomalies with seismic events. The findings further our understanding of real-world data analytics on EM data and seismicity. Some proposals regarding the limitations of the available real-world datasets are also presented.

* "Become a published author in 4 to 6 weeks," vol.11(9), pp.3380-3380, Sept. 2018.* Advertisement, IEEE.

* "IEEE Geoscience and Remote Sensing Societys," vol.11(9), pp.C3-C3, Sept. 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Institutional Listings," vol.11(9), pp.C4-C4, Sept. 2018.* Presents a listing of institutional institutions relevant for this issue of the publication.

IEEE Geoscience and Remote Sensing Magazine - new TOC (2018 September 20) [Website]

* "Front Cover," vol.6(2), pp.C1-C1, June 2018.* Presents the front cover for this issue of the publication.

* "GRSM Call for Papers," vol.6(2), pp.C2-C2, June 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Table of Contents," vol.6(2), pp.1-2, June 2018.* Presents the table of contents for this issue of the publication.

* "Staff Listing," vol.6(2), pp.2-2, June 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

James L. Garrison; "In This Issue [From the Editor]," vol.6(2), pp.3-3, June 2018. Presents the introductory editorial for this issue of the publication.

Adriano Camps; "Overview of Current Activities: China and Elsewhere [President's Message]," vol.6(2), pp.4-6, June 2018. Presents the President's message for this issue of the publication.

Luiz Gonzaga;Mauricio Roberto Veronez;Gabriel Lanzer Kannenberg;Demetrius Nunes Alves;Leonardo Gomes Santana;Jean Luca de Fraga;Leonardo Campos Inocencio;Lais Vieira de Souza;Fernando Marson;Fabiane Bordin;Francisco M.W. Tognoli;Kim Senger;Caroline Lessio Cazarin; "A Multioutcrop Sharing and Interpretation System: Exploring 3-D Surface and Subsurface Data," vol.6(2), pp.8-16, June 2018. Over the last two decades, rapid technological evolution has brought several advanced solutions for applications in various knowledge areas. Geotechnologies have allowed geoscientists to acquire vast amounts of spatialized digital data, producing equally large data sets. Light detection and ranging (lidar) technology, especially terrestrial laser scanners (TLSs), and the more recent reconstruction techniques that use multiple digital images have enabled geoscientists to improve the quality of their analyses and interpretations through digital outcrop models (DOMs) [1], [18]-[20]. The DOMs, also called virtual outcrops (VOs) [2], are a digital representation of data collected from surfaces that can be inspected, handled, and interpreted. From a geological point of view, outcrops provide an intermediary approach between the kilometric and millimetric work scales, and field data acquisition is often underexplored. Therefore, DOMs have become an intriguing way of integrating surface and subsurface data. This intermediary work scale allows for the acquisition of considerably more data using remote sensors, such as laser scanners, and photogrammetry algorithms, which produce a quality that is similar if not superior to standard field equipment. The high accuracy of the DOMs facilitates the recognition and interpretation of structures and features within a realistic three-dimensional (3-D) scenario [1], [3], [4].

Paolo de Matthaeis;Roger Oliva;Yan Soldo;Sandra Cruz-Pol; "Spectrum Management and Its Importance for Microwave Remote Sensing [Technical Committees]," vol.6(2), pp.17-25, June 2018. Fair and efficient management of the radio-frequency (RF) spectrum as used by different scientific and commercial services is becoming more and more challenging, and this can have a considerable impact on microwave remote sensing. Here, we provide an overview of those spectrum management aspects relevant to the remote-sensing community, along with an introduction to the actors and processes involved in spectrum regulation, including the World Radiocommunication Conference (WRC).

David Le Vine; "Distinguished Lecturers Available for 2018 [Distinguished Lecturer Program]," vol.6(2), pp.26-27, June 2018. Presents information on the GRSS 2018 Distinguished Lecturers Program.

* "BGDDS 2018," vol.6(2), pp.27-27, June 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "CISS 2018," vol.6(2), pp.27-27, June 2018.* Presents information on the CISS 2018 Conference.

* "[Calendar[Name:_blank]]," vol.6(2), pp.28-30, June 2018.* Presents the GRSS society's calendar of upcoming events and meetings.

* "PIERS 2018," vol.6(2), pp.30-30, June 2018.* Presents information on the PIERS 2018 Conference.

* "Call for GRSL EiC," vol.6(2), pp.30-30, June 2018.* Presents a call for a new Editor-in-Chief for GRSL.

* "RSCL," vol.6(2), pp.C3-C3, June 2018.* Presents information on the Remote Sensing Code Library.
