Relevant TOCs

IEEE Transactions on Image Processing - new TOC (2018 July 16)

Jingtian Zhang;Hubert P. H. Shum;Jungong Han;Ling Shao; "Action Recognition From Arbitrary Views Using Transferable Dictionary Learning," vol.27(10), pp.4709-4723, Oct. 2018. Human action recognition is crucial to many practical applications, ranging from human-computer interaction to video surveillance. Most approaches either recognize the human action from a fixed view or require knowledge of the view angle, which is usually not available in practical applications. In this paper, we propose a novel end-to-end framework to jointly learn a view-invariance transfer dictionary and a view-invariant classifier. The result of the process is a dictionary that can project real-world 2D video into a view-invariant sparse representation, and a classifier that recognizes actions from an arbitrary view. The main feature of our algorithm is the use of synthetic data to extract view-invariance between 3D and 2D videos during the pre-training phase. This guarantees the availability of training data and removes the hassle of obtaining real-world videos at specific viewing angles. Additionally, to better describe the actions in 3D videos, we introduce a new feature set, called 3D dense trajectories, to effectively encode extracted trajectory information in 3D videos. Experimental results on the IXMAS, N-UCLA, i3DPost, and UWA3DII data sets show improvements over existing algorithms.

Wenliang Qiu;Xinbo Gao;Bing Han; "Eye Fixation Assisted Video Saliency Detection via Total Variation-Based Pairwise Interaction," vol.27(10), pp.4724-4739, Oct. 2018. As human visual attention is naturally biased toward foreground objects in a scene, it can be used to extract salient objects in video clips. In this paper, we propose a weakly supervised learning-based video saliency detection algorithm utilizing eye fixation information from multiple subjects. Our main idea is to extend eye fixations to saliency regions step by step. First, visual seeds are collected using multiple color space geodesic distance-based seed region mapping with filtered and extended eye fixations. This operation helps raw fixation points spread to the most likely salient regions, namely, visual seed regions. Second, in order to capture the essential scene structure of video sequences, we introduce a total variation-based pairwise interaction model to learn the potential pairwise relationships between foreground and background within a frame or across video frames. In this vein, visual seed regions eventually grow into salient regions. Compared with previous approaches, the generated saliency maps have two outstanding properties, integrity and purity, which are conducive to segmenting the foreground and significant for follow-up tasks. Extensive quantitative and qualitative experiments on various video sequences demonstrate that the proposed method outperforms the state-of-the-art image and video saliency detection algorithms.

Tianyi Zhao;Baopeng Zhang;Ming He;Wei Zhang;Ning Zhou;Jun Yu;Jianping Fan; "Embedding Visual Hierarchy With Deep Networks for Large-Scale Visual Recognition," vol.27(10), pp.4740-4755, Oct. 2018. In this paper, a layer-wise mixture model (LMM) is developed to support hierarchical visual recognition, where a Bayesian approach is used to automatically adapt the visual hierarchy to the progressive improvements of the deep network over time. Our LMM algorithm provides an end-to-end approach for jointly learning: 1) the deep network, for achieving more discriminative deep representations of object classes and their inter-class visual similarities; 2) the tree classifier, for recognizing large numbers of object classes hierarchically; and 3) the visual hierarchy adaptation, for achieving more accurate assignment and organization of large numbers of object classes. By learning the tree classifier, the deep network, and the visual hierarchy adaptation jointly in an end-to-end manner, our LMM algorithm achieves higher accuracy rates on hierarchical visual recognition. Our experiments are carried out on the ImageNet1K and ImageNet10K image sets and demonstrate that our LMM algorithm achieves very competitive accuracy rates compared with the baseline methods.

Luo Jiang;Juyong Zhang;Bailin Deng;Hao Li;Ligang Liu; "3D Face Reconstruction With Geometry Details From a Single Image," vol.27(10), pp.4756-4770, Oct. 2018. 3D face reconstruction from a single image is a classical and challenging problem with wide applications in many areas. Inspired by recent works on face animation from RGB-D or monocular video inputs, we develop a novel method for reconstructing 3D faces from unconstrained 2D images using a coarse-to-fine optimization strategy. First, a smooth coarse 3D face is generated from an example-based bilinear face model by aligning the projection of 3D face landmarks with 2D landmarks detected from the input image. Afterward, the coarse 3D face is refined with local corrective deformation fields under photometric consistency constraints, resulting in a medium face shape. Finally, a shape-from-shading method is applied on the medium face to recover fine geometric details. Our method outperforms the state-of-the-art approaches in terms of accuracy and detail recovery, which is demonstrated in extensive experiments using real-world models and publicly available data sets.

Junhong Min;Kyong Hwan Jin;Michael Unser;Jong Chul Ye; "Grid-Free Localization Algorithm Using Low-Rank Hankel Matrix for Super-Resolution Microscopy," vol.27(10), pp.4771-4786, Oct. 2018. Localization microscopy, such as STORM/PALM, can reconstruct super-resolution images with a nanometer resolution through the iterative localization of fluorescence molecules. Recent studies in this area have focused mainly on the localization of densely activated molecules to improve temporal resolutions. However, higher density imaging requires an advanced algorithm that can resolve closely spaced molecules. Accordingly, sparsity-driven methods have been studied extensively. One of the major limitations of existing sparsity-driven approaches is the need for a fine sampling grid or for Taylor series approximation which may result in some degree of localization bias toward the grid. In addition, prior knowledge of the point-spread function (PSF) is required. To address these drawbacks, here we propose a true grid-free localization algorithm with adaptive PSF estimation. Specifically, based on the observation that sparsity in the spatial domain implies a low rank in the Fourier domain, the proposed method converts source localization problems into Fourier-domain signal processing problems so that a truly grid-free localization is possible. We verify the performance of the newly proposed method with several numerical simulations and a live-cell imaging experiment.
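The central observation above, that sparsity in the spatial domain implies low rank in the Fourier domain, can be checked numerically in a few lines. This is a toy sketch with invented frequencies and matrix sizes, not the paper's algorithm: the Fourier samples of k point sources form a sum of k complex exponentials, and the Hankel matrix built from such a signal has rank exactly k.

```python
import numpy as np

def hankel(x, L):
    """Build an L x (N-L+1) Hankel matrix from signal x."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

# Fourier-domain samples of 2 point sources = sum of 2 complex exponentials.
n = np.arange(64)
y = np.exp(2j * np.pi * 0.11 * n) + 0.7 * np.exp(2j * np.pi * 0.34 * n)

H = hankel(y, 20)                      # 20 x 45 Hankel matrix
rank = np.linalg.matrix_rank(H)        # equals the number of sources: 2
```

Because the rank equals the number of sources regardless of any sampling grid, enforcing low rank on this Hankel matrix localizes the sources in a grid-free fashion.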

Ismael Serrano;Oscar Deniz;Jose Luis Espinosa-Aranda;Gloria Bueno; "Fight Recognition in Video Using Hough Forests and 2D Convolutional Neural Network," vol.27(10), pp.4787-4797, Oct. 2018. While action recognition has become an important line of research in computer vision, the recognition of particular events, such as aggressive behaviors or fights, has been relatively less studied. These tasks may be extremely useful in several video surveillance scenarios, such as psychiatric wards and prisons, or even on personal smartphone cameras. Their potential usability has led to a surge of interest in developing fight or violence detectors. One of the key aspects in this case is efficiency, that is, these methods should be computationally fast. "Handcrafted" spatio-temporal features that account for both motion and appearance information can achieve high accuracy rates, albeit the computational cost of extracting some of those features is still prohibitive for practical applications. The deep learning paradigm has recently been applied to this task for the first time, in the form of a 3D convolutional neural network that processes the whole video sequence as input. However, findings on human perception of others' actions suggest that, in this specific task, motion features are crucial. This means that using the whole video as input may add both redundancy and noise to the learning process. In this paper, we propose a hybrid "handcrafted/learned" feature framework which provides better accuracy than the previous feature learning method, with similar computational efficiency. The proposed method is evaluated on three related benchmark data sets and outperforms the state-of-the-art methods on two of the three.

Yehuda Dar;Michael Elad;Alfred M. Bruckstein; "Optimized Pre-Compensating Compression," vol.27(10), pp.4798-4809, Oct. 2018. In imaging systems, following acquisition, an image/video is transmitted or stored and eventually presented to human observers using different and often imperfect display devices. While the resulting quality of the output image may be severely affected by the display, this degradation is usually ignored in the preceding compression. In this paper, we model the sub-optimality of the display device as a known degradation operator applied on the decompressed image/video. We assume the use of a standard compression path and augment it with a suitable pre-processing procedure, providing a compressed signal intended to compensate for the degradation without any post-filtering. Our approach originates from an intricate rate-distortion problem, optimizing the modifications to the input image/video for the best end-to-end performance. We address this seemingly computationally intractable problem using the alternating direction method of multipliers approach, leading to a procedure in which a standard compression technique is iteratively applied. We demonstrate the proposed method for adjusting HEVC image/video compression to compensate for post-decompression visual effects due to a common type of display. In particular, we use our method to reduce motion blur perceived while viewing video on LCD devices. The experiments establish our method as a leading approach for preprocessing high bit-rate compression to counterbalance a post-decompression degradation.

Guangming Shi;Tao Huang;Weisheng Dong;Jinjian Wu;Xuemei Xie; "Robust Foreground Estimation via Structured Gaussian Scale Mixture Modeling," vol.27(10), pp.4810-4824, Oct. 2018. Recovering the background and foreground parts from video frames has important applications in video surveillance. Under the assumption that the background parts are stationary and the foreground parts are sparse, most existing methods are based on the framework of robust principal component analysis (RPCA), i.e., modeling the background and foreground parts as low-rank and sparse matrices, respectively. However, in realistic complex scenarios, the conventional ℓ1-norm sparse regularizer often fails to characterize the varying sparsity of the foreground components well. How to select the sparsity regularizer parameters adaptively according to the local statistics is critical to the success of the RPCA framework for the background subtraction task. In this paper, we propose to model the sparse component with a Gaussian scale mixture (GSM) model. Compared with the conventional ℓ1 norm, the GSM-based sparse model has the advantage of jointly estimating the variances of the sparse coefficients (and hence the regularization parameters) and the unknown sparse coefficients themselves, leading to significant estimation accuracy improvements. Moreover, considering that the foreground parts are highly structured, a structured extension of the GSM model is further developed. Specifically, the input frame is divided into many homogeneous regions using superpixel segmentation. By characterizing the set of sparse coefficients in each homogeneous region with the same GSM prior, the local dependencies among the sparse coefficients can be effectively exploited, leading to further improvements in background subtraction. Experimental results on several challenging scenarios show that the proposed method performs much better than most existing background subtraction methods in terms of both performance and speed.
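The contrast between a fixed ℓ1 penalty and variance-adaptive shrinkage can be sketched in a few lines. This is a toy stand-in with an invented window size and noise level, not the paper's GSM model: soft-thresholding applies one global threshold to every coefficient, while a Wiener-style rule shrinks each coefficient according to a locally estimated variance.

```python
import numpy as np

rng = np.random.default_rng(2)
s = np.zeros(100)
s[:5] = rng.normal(scale=5.0, size=5)          # sparse "foreground" coefficients
y = s + rng.normal(scale=0.5, size=100)        # noisy observation

def soft(x, t):
    """Soft-thresholding: the proximal operator of a fixed l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

fixed = soft(y, 1.0)    # one global threshold for every coefficient

# Variance-adaptive shrinkage: estimate a per-coefficient signal variance
# from a local window, then shrink less where the estimated power is high.
noise_var = 0.25
local_var = np.maximum(np.convolve(y**2, np.ones(5) / 5, mode="same") - noise_var, 0.0)
adaptive = y * local_var / (local_var + noise_var)
```

The adaptive rule plays the role of per-coefficient regularization parameters; the paper estimates them jointly with the coefficients under a GSM prior rather than from a fixed local window.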

Weixiang Hong;Junsong Yuan; "Fried Binary Embedding: From High-Dimensional Visual Features to High-Dimensional Binary Codes," vol.27(10), pp.4825-4837, Oct. 2018. Most existing binary embedding methods prefer compact binary codes (b-dimensional) to avoid the high computational and memory cost of projecting high-dimensional visual features (d-dimensional, b < d). We argue that long binary codes (b ~ O(d)) are critical to fully utilize the discriminative power of high-dimensional visual features, and can achieve better results in various tasks such as approximate nearest neighbor search. Generating long binary codes involves a large projection matrix and high-dimensional matrix-vector multiplication, and is thus memory and compute intensive. We propose Fried Binary Embedding (FBE) and Supervised Fried Binary Embedding (SuFBE) to tackle these problems. FBE is suitable for most practical applications, in which the labels of training data are not given, while SuFBE can significantly boost the accuracy when training labels are available. The core idea is to decompose the projection matrix using the adaptive Fastfood transform, which is the product of several structured matrices. As a result, FBE and SuFBE reduce the computational complexity from O(d^2) to O(d log d), and the memory cost from O(d^2) to O(d), respectively. More importantly, by using structured matrices, FBE and SuFBE can well regularize the projection matrix by reducing its tunable parameters, and achieve even better accuracy than using either an unconstrained projection matrix (as in ITQ) or a sparse matrix (as in SP and SSP) with the same long code length. Experimental comparisons with state-of-the-art methods over various visual applications demonstrate both the efficiency and performance advantages of FBE and SuFBE.
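A minimal sketch of the structured-matrix idea follows, using a plain fast Walsh-Hadamard transform combined with random sign, scaling, and permutation factors. The dimension and the exact factorization here are illustrative, not the paper's adaptive Fastfood construction: the point is that the projection costs O(d log d) time and O(d) memory, because only diagonal and permutation factors need to be stored.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in O(d log d); d must be a power of two."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            a = x[i:i + h].copy()
            x[i:i + h] = a + x[i + h:i + 2 * h]
            x[i + h:i + 2 * h] = a - x[i + h:i + 2 * h]
        h *= 2
    return x

rng = np.random.default_rng(0)
d = 256                               # power of two for the Hadamard transform
B = rng.choice([-1.0, 1.0], d)        # random sign-flip diagonal
G = rng.normal(size=d)                # random Gaussian scaling diagonal
P = rng.permutation(d)                # random permutation

def structured_embed(x):
    """Fastfood-style projection (H G Pi H B x) followed by sign binarization."""
    z = fwht(B * x)                   # first Hadamard mixing
    z = fwht(G * z[P])                # permute, scale, second Hadamard mixing
    return np.sign(z)

code = structured_embed(rng.normal(size=d))   # a d-bit binary code
```

Only B, G, and P (three length-d arrays) are stored, versus d^2 entries for a dense projection matrix.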

Xing Wei;Qingxiong Yang;Yihong Gong;Narendra Ahuja;Ming-Hsuan Yang; "Superpixel Hierarchy," vol.27(10), pp.4838-4849, Oct. 2018. Superpixel segmentation has been one of the most important tasks in computer vision. In practice, an object can be represented by a number of segments at finer levels with consistent details or included in a surrounding region at coarser levels. Thus, a superpixel segmentation hierarchy is of great importance for applications that require different levels of image details. However, there is no method that can generate all scales of superpixels accurately in real time. In this paper, we propose the superhierarchy algorithm which is able to generate multi-scale superpixels as accurately as the state-of-the-art methods but with one to two orders of magnitude speed-up. The proposed algorithm can be directly integrated with recent efficient edge detectors to significantly outperform the state-of-the-art methods in terms of segmentation accuracy. Quantitative and qualitative evaluations on a number of applications demonstrate that the proposed algorithm is accurate and efficient in generating a hierarchy of superpixels.

Yaqing Wang;Quanming Yao;James T. Kwok;Lionel M. Ni; "Scalable Online Convolutional Sparse Coding," vol.27(10), pp.4850-4859, Oct. 2018. Convolutional sparse coding (CSC) improves sparse coding by learning a shift-invariant dictionary from the data. However, most existing CSC algorithms operate in batch mode and are computationally expensive. In this paper, we alleviate this problem by online learning. The key is a reformulation of the CSC objective so that convolution can be handled easily in the frequency domain and much smaller history matrices are needed. To solve the resultant optimization problem, we use the alternating direction method of multipliers (ADMM), whose subproblems have efficient closed-form solutions. Theoretical analysis shows that the learned dictionary converges to a stationary point of the optimization problem. Extensive experiments are performed on both the standard CSC benchmark data sets and much larger data sets such as ImageNet. Results show that the proposed algorithm outperforms the state-of-the-art batch and online CSC methods: it is more scalable, converges faster, and achieves better reconstruction performance.
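The reformulation's core trick, handling convolution in the frequency domain, rests on the convolution theorem: circular convolution becomes elementwise multiplication of Fourier transforms, which makes the per-filter subproblems cheap. A minimal check, with an arbitrary signal length and filter size unrelated to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 32, 5
x = rng.normal(size=d)                 # a 1D signal (e.g., a flattened patch)
f = np.zeros(d)
f[:k] = rng.normal(size=k)             # small filter, zero-padded to length d

# Circular convolution computed directly from the definition ...
direct = np.array([sum(f[j] * x[(i - j) % d] for j in range(d)) for i in range(d)])

# ... and via the convolution theorem: elementwise product in frequency domain.
freq = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(x)))
```

The FFT route costs O(d log d) per filter instead of O(d k) spatial sliding, and, as in the paper's reformulation, lets the dictionary update work on diagonalized (elementwise) systems.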

Jing Liu;Guangtao Zhai;Anan Liu;Xiaokang Yang;Xibin Zhao;Chang Wen Chen; "IPAD: Intensity Potential for Adaptive De-Quantization," vol.27(10), pp.4860-4872, Oct. 2018. Display devices with a bit depth of 10 or higher have matured, but mainstream media sources are still at a bit depth of eight. To bridge this gap, the most economical solution is to render a low-bit-depth source on a high-bit-depth display, which is essentially a de-quantization procedure. Traditional methods, such as zero-padding or bit replication, introduce annoying false contour artifacts. To better estimate the least-significant bits, later works use filtering or interpolation approaches which, exploiting only limited neighborhood information, cannot thoroughly remove the false contours. In this paper, we propose a novel intensity potential (IP) field to model the complicated relationships among pixels. The potential value decreases as the spatial distance to the field source increases, and the potentials from different field sources are additive. Based on the proposed IP field, an adaptive de-quantization procedure is then proposed to convert low-bit-depth images to high-bit-depth ones. To the best of our knowledge, this is the first attempt to apply a potential field to natural images. The proposed potential field preserves local consistency and models complicated contexts well. Extensive experiments on natural, synthetic, and high-dynamic-range image data sets validate the efficiency of the proposed IP field. Significant improvements have been achieved over the state-of-the-art methods in both peak signal-to-noise ratio and structural similarity.
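For reference, the two traditional baselines mentioned above (zero-padding and bit replication) fit in a few lines; this sketch expands 8-bit values to 10 bits, while the paper's IP-field method itself is more involved and is not reproduced here. Both baselines are pointwise, which is exactly why they leave false contours: every pixel of one 8-bit level maps to the same 10-bit level.

```python
import numpy as np

def zero_pad(img8, extra=2):
    """Naive de-quantization: shift left, fill the new LSBs with zeros."""
    return img8.astype(np.uint16) << extra

def bit_replicate(img8, extra=2):
    """Bit replication: refill the new LSBs with the value's own MSBs,
    so that 8-bit white (255) maps to full 10-bit white (1023)."""
    v = img8.astype(np.uint16)
    return (v << extra) | (v >> (8 - extra))

x = np.array([0, 1, 128, 255], dtype=np.uint8)
```

Note that zero_pad maps 255 to 1020, leaving the top of the 10-bit range unused, while bit_replicate reaches 1023; neither uses any neighborhood information.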

Byeong-Ju Han;Jae-Young Sim; "Glass Reflection Removal Using Co-Saliency-Based Image Alignment and Low-Rank Matrix Completion in Gradient Domain," vol.27(10), pp.4873-4888, Oct. 2018. Images taken through glass often capture a target transmitted scene as well as undesired reflected scenes. In this paper, we propose a novel reflection removal algorithm using multiple glass images taken from slightly different camera positions. We first find co-saliency maps for the input glass images based on the center prior assumption, and then align the multiple images reliably with respect to the transmitted scene by selecting feature points with high co-saliency values. The gradients of the transmission images are consistent across the aligned glass images, while the gradients of the reflection images vary. Based on this observation, we compute gradient reliability such that the pixels belonging to consistent salient edges of the transmission image are assigned high reliability values. We restore the gradients of the transmission images and suppress the gradients of the reflection images by formulating a low-rank matrix completion problem in the gradient domain. Finally, we reconstruct the desired transmission images from the restored transmission gradients. Experimental results show that the proposed algorithm removes the reflection artifacts from glass images faithfully and outperforms the existing methods on challenging glass images with diverse characteristics.

Jie Chen;Junhui Hou;Yun Ni;Lap-Pui Chau; "Accurate Light Field Depth Estimation With Superpixel Regularization Over Partially Occluded Regions," vol.27(10), pp.4889-4900, Oct. 2018. Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a very effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. A series of shrinkage/reinforcement operations is then applied to the label confidence map and edge strength weights over the POBR. We show that after these weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimates than the state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.

Qing Song;Pamela C. Cosman; "Luminance Enhancement and Detail Preservation of Images and Videos Adapted to Ambient Illumination," vol.27(10), pp.4901-4915, Oct. 2018. When images and videos are displayed on a mobile device under bright ambient illumination, fewer details can be perceived than in the dark, and the detail loss in dark areas of the images/videos is usually more severe. The major factors are the reflected ambient light and the reduced sensitivity of the viewer's eyes. We propose two tone mapping operators to enhance the contrast and details in images/videos. One is content independent and can thus be applied to any image/video for a given device and ambient illumination. The other tone mapping operator uses simple statistics of the content. Display contrast and human visual adaptation are considered in constructing the tone mapping operators, and both can be solved efficiently. Subjective tests and objective measurements show the improved quality achieved by the proposed methods.
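The shape of a content-independent operator can be illustrated with a toy global tone curve; the exponent and values below are invented for illustration, not the paper's operators, which are derived from display contrast and visual adaptation models. The idea is simply that under bright ambient light the curve lifts the dark end more than the bright end while preserving ordering.

```python
import numpy as np

def global_tmo(y, gamma):
    """Toy content-independent tone curve: gamma < 1 lifts dark regions,
    compensating for shadow detail lost under bright ambient light."""
    return np.clip(y, 0.0, 1.0) ** gamma

frame = np.array([0.04, 0.25, 0.64, 1.0])     # normalized luminances
bright_ambient = global_tmo(frame, 0.5)       # stronger lift at the dark end
```

A content-dependent operator would instead pick the curve per image, e.g., from its luminance statistics.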

Chun-Rong Huang;Wei-Cheng Wang;Wei-An Wang;Szu-Yu Lin;Yen-Yu Lin; "USEAQ: Ultra-Fast Superpixel Extraction via Adaptive Sampling From Quantized Regions," vol.27(10), pp.4916-4931, Oct. 2018. We present a novel and highly efficient superpixel extraction method called ultra-fast superpixel extraction via adaptive sampling from quantized regions (USEAQ) to generate regular and compact superpixels in an image. To reduce the computational cost of iterative optimization procedures adopted in most recent approaches, the proposed USEAQ for superpixel generation works in a one-pass fashion. It first performs joint spatial and color quantizations and groups pixels into regions. It then takes into account the variations between regions, and adaptively samples one or a few superpixel candidates for each region. It finally employs maximum a posteriori estimation to assign pixels to the most spatially consistent and perceptually similar superpixels. It turns out that the proposed USEAQ is quite efficient, and the extracted superpixels can precisely adhere to boundaries of objects. Experimental results show that USEAQ achieves better or equivalent performance compared with the state-of-the-art superpixel extraction approaches in terms of boundary recall, undersegmentation error, achievable segmentation accuracy, the average miss rate, average undersegmentation error, and average unexplained variation, and it is significantly faster than these approaches. The source code of USEAQ is available at

Yongwei Nie;Tan Su;Zhensong Zhang;Hanqiu Sun;Guiqing Li; "Corrections to “Dynamic Video Stitching via Shakiness Removing”," vol.27(10), pp.4932-4932, Oct. 2018. In [1], the biographies of Hanqiu Sun and Guiqing Li included incorrect information. The correct biographies are as follows.

Youjiang Xu;Yahong Han;Richang Hong;Qi Tian; "Sequential Video VLAD: Training the Aggregation Locally and Temporally," vol.27(10), pp.4933-4944, Oct. 2018. As characterizing videos simultaneously from spatial and temporal cues has been shown to be crucial for video analysis, the combination of convolutional neural networks and recurrent neural networks, i.e., recurrent convolution networks (RCNs), is a natural framework for learning spatio-temporal video features. In this paper, we develop a novel sequential vector of locally aggregated descriptors (VLAD) layer, named SeqVLAD, to combine a trainable VLAD encoding process and the RCN architecture into a single framework. In particular, sequential convolutional feature maps extracted from successive video frames are fed into the RCNs to learn soft spatio-temporal assignment parameters, so as to aggregate not only detailed spatial information in separate video frames but also fine motion information across successive video frames. Moreover, we improve the gated recurrent unit (GRU) of RCNs by sharing the input-to-hidden parameters and propose an improved GRU-RCN architecture named shared GRU-RCN (SGRU-RCN). Thus, our SGRU-RCN has fewer parameters and a lower risk of overfitting. In experiments, we evaluate SeqVLAD on the tasks of video captioning and video action recognition. Experimental results on the Microsoft Research Video Description Corpus, the Montreal Video Annotation Dataset, UCF101, and HMDB51 demonstrate the effectiveness and good performance of our method.

Yue Lv;Wengang Zhou;Qi Tian;Shaoyan Sun;Houqiang Li; "Retrieval Oriented Deep Feature Learning With Complementary Supervision Mining," vol.27(10), pp.4945-4957, Oct. 2018. Deep convolutional neural networks (CNNs) have been widely and successfully applied in many computer vision tasks, such as classification, detection, and semantic segmentation. As for image retrieval, while off-the-shelf CNN features from models trained for the classification task have been demonstrated to be promising, it remains a challenge to learn specific features oriented toward instance retrieval. Witnessing the great success of the low-level SIFT feature in image retrieval and its complementary nature to the semantic-aware CNN feature, in this paper we propose to embed the SIFT feature into the CNN feature with a Siamese structure in a learning-based paradigm. The learning objective consists of two kinds of loss: similarity loss and fidelity loss. The first loss embeds the image-level nearest-neighborhood structure of the SIFT feature into CNN feature learning, while the second loss imposes that the CNN feature from the updated CNN model preserves the fidelity of that from the original CNN model solely trained for classification. After learning, the generated CNN feature inherits the property of the SIFT feature, which is well suited for image retrieval. We evaluate our approach on public data sets, and comprehensive experiments demonstrate the effectiveness of the proposed method.

Ke Nai;Zhiyong Li;Guiji Li;Shanquan Wang; "Robust Object Tracking via Local Sparse Appearance Model," vol.27(10), pp.4958-4970, Oct. 2018. In this paper, we propose a novel local sparse representation-based framework for visual tracking. To deeply mine the appearance characteristics of different local patches, the proposed method divides all local patches of a candidate target into three categories: stable patches, valid patches, and invalid patches. These patches are assigned different weights to reflect their different importance. For stable patches, we introduce a local sparse score to identify them, and discriminative local sparse coding is developed to decrease the weights of background patches among the stable patches. For valid and invalid patches, we adopt local linear regression to distinguish the former from the latter. Furthermore, we propose a weight shrinkage method to determine weights for different valid patches, making our patch weight computation more reasonable. Experimental results on public tracking benchmarks with challenging sequences demonstrate that the proposed method performs favorably against other state-of-the-art tracking methods.

Xiaohong Liu;Lei Chen;Wenyi Wang;Jiying Zhao; "Robust Multi-Frame Super-Resolution Based on Spatially Weighted Half-Quadratic Estimation and Adaptive BTV Regularization," vol.27(10), pp.4971-4986, Oct. 2018. Multi-frame image super-resolution focuses on reconstructing a high-resolution image from a set of low-resolution images with high similarity. Combining image prior knowledge with fidelity model, the Bayesian-based methods have been considered as an effective technique in super-resolution. The minimization function derived from maximum a posteriori probability (MAP) is composed of a fidelity term and a regularization term. In this paper, based on the MAP estimation, we propose a novel initialization method for super-resolution imaging. For the fidelity term in our proposed method, the half-quadratic estimation is used to choose error norm adaptively instead of using fixed L1 and L2 norms. Besides, a spatial weight matrix is used as a confidence map to scale the estimation result. For the regularization term, we propose a novel regularization method based on adaptive bilateral total variation (ABTV). Both the fidelity term and the ABTV regularization guarantee the robustness of our framework. The fidelity term is mainly responsible for dealing with misregistration, blur, and other kinds of large errors, while the ABTV regularization aims at edge preservation and noise removal. The proposed scheme is tested on both synthetic data and real data. The experimental results illustrate the superiority of our proposed method in terms of edge preservation and noise removal over the state-of-the-art algorithms.
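The bilateral total variation (BTV) regularizer that the adaptive variant builds on has a compact standard definition: an ℓ1 penalty on differences between the image and its shifted copies, geometrically down-weighted with shift distance. A minimal sketch follows; the window radius and decay factor are chosen arbitrarily, and circular shifts are used for brevity.

```python
import numpy as np

def btv(X, p=2, alpha=0.7):
    """Bilateral total variation: sum over shifts (l, m) within radius p of
    alpha^(|l|+|m|) * ||X - shift(X, l, m)||_1, using circular shifts."""
    val = 0.0
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(X, l, axis=0), m, axis=1)
            val += alpha ** (abs(l) + abs(m)) * np.abs(X - shifted).sum()
    return val
```

A flat image incurs zero penalty while noise and isolated edges are penalized, which is what makes BTV favor piecewise-smooth, edge-preserving reconstructions; the paper's ABTV additionally adapts the weights spatially.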

Jing Cui;Ruiqin Xiong;Xinfeng Zhang;Shiqi Wang;Shanshe Wang;Siwei Ma;Wen Gao; "Hybrid All Zero Soft Quantized Block Detection for HEVC," vol.27(10), pp.4987-5001, Oct. 2018. Transform and quantization account for a considerable amount of computation time in the video encoding process. However, a large number of discrete cosine transform coefficients are finally quantized to zeros. In essence, blocks whose quantized coefficients are all zero do not transmit any information, but still occupy substantial unnecessary computational resources. As such, detecting all-zero blocks (AZBs) before transform and quantization has been recognized as an efficient approach to speed up the encoding process. Instead of considering hard-decision quantization (HDQ) only, in this paper we incorporate the properties of soft-decision quantization into AZB detection. In particular, we categorize AZBs into genuine AZBs (G-AZBs) and pseudo AZBs (P-AZBs) to distinguish their origins. For G-AZBs directly generated from HDQ, a sum of absolute transformed differences (SATD)-based approach is adopted for early termination. Regarding the classification of P-AZBs, which are generated in the sense of rate-distortion optimization, rate-distortion models established from the transform coefficients, together with adaptive searching of the maximum transform coefficient, are jointly employed for the discrimination. Experimental results show that our algorithm can achieve up to 24.16% transform and quantization time savings with less than 0.06% RD performance loss. The total encoder time saving is about 5.18% on average, with a maximum of 9.12%. Moreover, the detection accuracy for larger TU sizes, such as 16x16 and 32x32, can reach 95% on average.
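The SATD-based early termination for G-AZBs can be sketched as follows; the threshold value here is a placeholder (real encoders derive it from the quantization parameter), and the Hadamard-based SATD is the standard cheap stand-in for the full transform.

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (n a power of two), built by doubling."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def satd(residual):
    """Sum of absolute transformed differences of a square residual block."""
    H = hadamard(residual.shape[0])
    return np.abs(H @ residual @ H.T).sum()

def is_genuine_azb(residual, threshold=64.0):
    """Early termination: skip transform/quantization when the residual's
    SATD falls below a (QP-dependent) threshold."""
    return satd(residual) < threshold

flat_block = np.zeros((4, 4))   # no residual energy: a sure all-zero block
```

Blocks passing this test skip transform and quantization entirely, which is where the reported time savings come from; P-AZB detection requires the additional rate-distortion models described above.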

Trung-Nghia Le;Akihiro Sugimoto; "Video Salient Object Detection Using Spatiotemporal Deep Features," vol.27(10), pp.5002-5015, Oct. 2018. This paper presents a method for detecting salient objects in videos, where temporal information in addition to spatial information is fully taken into account. Following recent reports on the advantage of deep features over conventional handcrafted features, we propose a new set of spatiotemporal deep (STD) features that utilize local and global contexts over frames. We also propose new spatiotemporal conditional random field (STCRF) to compute saliency from STD features. STCRF is our extension of CRF to the temporal domain and describes the relationships among neighboring regions both in a frame and over frames. STCRF leads to temporally consistent saliency maps over frames, contributing to accurate detection of salient objects’ boundaries and noise reduction during detection. Our proposed method first segments an input video into multiple scales and then computes a saliency map at each scale level using STD features with STCRF. The final saliency map is computed by fusing saliency maps at different scale levels. Our experiments, using publicly available benchmark datasets, confirm that the proposed method significantly outperforms the state-of-the-art methods. We also applied our saliency computation to the video object segmentation task, showing that our method outperforms existing video object segmentation methods.
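The final fusion step, combining saliency maps computed at different scale levels, can be sketched as a weighted average. Uniform weights and same-resolution maps are assumptions here; the paper's actual fusion may differ:

```python
import numpy as np

def fuse_scales(saliency_maps, weights=None):
    """Fuse per-scale saliency maps (all resampled to the input
    resolution) by weighted averaging, then renormalize to [0, 1]."""
    maps = np.stack(saliency_maps, axis=0)
    if weights is None:
        weights = np.ones(len(saliency_maps)) / len(saliency_maps)
    fused = np.tensordot(np.asarray(weights), maps, axes=1)
    lo, hi = fused.min(), fused.max()
    # avoid division by zero for a constant fused map
    return (fused - lo) / (hi - lo) if hi > lo else fused
```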

Hai Min;Wei Jia;Yang Zhao;Wangmeng Zuo;Haibin Ling;Yuetong Luo; "LATE: A Level-Set Method Based on Local Approximation of Taylor Expansion for Segmenting Intensity Inhomogeneous Images," vol.27(10), pp.5016-5031, Oct. 2018. Intensity inhomogeneity is common in real-world images and inevitably leads to many difficulties for accurate image segmentation. Numerous level-set methods have been proposed to segment images with intensity inhomogeneity. However, most of these methods are based on linear approximation, such as locally weighted mean, which may cause problems when handling images with severe intensity inhomogeneities. In this paper, we view segmentation of such images as a nonconvex optimization problem, since the intensity variation in such an image follows a nonlinear distribution. Then, we propose a novel level-set method named local approximation of Taylor expansion (LATE), which is a nonlinear approximation method to solve the nonconvex optimization problem. In LATE, we use the statistical information of the local region as a fidelity term and the differentials of intensity inhomogeneity as an adjusting term to model the approximation function. In particular, since the first-order differential is represented by the variation degree of intensity inhomogeneity, LATE can improve the approximation quality and enhance the local intensity contrast of images with severe intensity inhomogeneity. Moreover, LATE solves the optimization of function fitting by relaxing the constraint condition. In addition, LATE can be viewed as a constraint relaxation of classical methods, such as the region-scalable fitting model and the local intensity clustering model. Finally, the level-set energy functional is constructed based on the Taylor expansion approximation. To validate the effectiveness of our method, we conduct thorough experiments on synthetic and real images. Experimental results show that the proposed method clearly outperforms other solutions in comparison.

Si Liu;Zhen Wei;Yao Sun;Xinyu Ou;Junyu Lin;Bin Liu;Ming-Hsuan Yang; "Composing Semantic Collage for Image Retargeting," vol.27(10), pp.5032-5043, Oct. 2018. Image retargeting has been applied to display images of any size via devices with various resolutions (e.g., cell phone and TV monitors). To fit an image with the target resolution, certain unimportant regions need to be deleted or distorted, and the key problem is to determine the importance of each pixel. Existing methods predict pixel-wise importance in a bottom-up manner via eye fixation estimation or saliency detection. In contrast, the proposed algorithm estimates the pixel-wise importance based on a top-down criterion where the target image maintains the semantic meaning of the original image. To this end, several semantic components corresponding to foreground objects, action contexts, and background regions are extracted. The semantic component maps are integrated by a classification guided fusion network. Specifically, the deep network classifies the original image as object or scene oriented, and fuses the semantic component maps according to classification results. The network output, referred to as the semantic collage with the same size as the original image, is then fed into any existing optimization method to generate the target image. Extensive experiments are carried out on the RetargetMe data set and S-Retarget database developed in this paper. Experimental results demonstrate the merits of the proposed algorithm over the state-of-the-art image retargeting methods.

Mai Xu;Tianyi Li;Zulin Wang;Xin Deng;Ren Yang;Zhenyu Guan; "Reducing Complexity of HEVC: A Deep Learning Approach," vol.27(10), pp.5044-5059, Oct. 2018. High efficiency video coding (HEVC) significantly reduces bit rates over the preceding H.264 standard, but at the expense of extremely high encoding complexity. In HEVC, the quad-tree partition of the coding unit (CU) consumes a large proportion of the HEVC encoding complexity, due to the brute-force search for rate-distortion optimization (RDO). Therefore, this paper proposes a deep learning approach to predict the CU partition for reducing the HEVC complexity at both intra- and inter-modes, based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network. First, we establish a large-scale database including substantial CU partition data for the HEVC intra- and inter-modes. This enables deep learning on the CU partition. Second, we represent the CU partition of an entire coding tree unit in the form of a hierarchical CU partition map (HCPM). Then, we propose an early-terminated hierarchical CNN (ETH-CNN) for learning to predict the HCPM. Consequently, the encoding complexity of intra-mode HEVC can be drastically reduced by replacing the brute-force search with ETH-CNN to decide the CU partition. Third, an ETH-LSTM is proposed to learn the temporal correlation of the CU partition. Then, we combine the ETH-LSTM and the ETH-CNN to predict the CU partition for reducing the HEVC complexity at inter-mode. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in reducing the HEVC complexity at both intra- and inter-modes.
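The HCPM can be read as a hierarchy of split decisions: thresholding each predicted split probability replaces the RDO search. A hypothetical sketch, with `split_prob` standing in for ETH-CNN outputs keyed by the candidate CU's `(x, y, size)`:

```python
def cu_blocks(split_prob, x=0, y=0, size=64, threshold=0.5):
    """Recursively turn per-CU split probabilities into the list of leaf
    CU blocks of a 64x64 CTU. A CU splits into four quadrants when its
    predicted probability exceeds the threshold; 8x8 is the minimum CU
    size in HEVC, so 8x8 blocks are always leaves."""
    if size > 8 and split_prob.get((x, y, size), 0.0) >= threshold:
        half = size // 2
        blocks = []
        for dy in (0, half):
            for dx in (0, half):
                blocks += cu_blocks(split_prob, x + dx, y + dy, half, threshold)
        return blocks
    return [(x, y, size)]
```

An empty probability map yields a single unsplit 64x64 CU; a split at the root yields four 32x32 CUs, and so on down the quad-tree.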

Weiming Hu;Baoxin Wu;Pei Wang;Chunfeng Yuan;Yangxi Li;Stephen Maybank; "Context-Dependent Random Walk Graph Kernels and Tree Pattern Graph Matching Kernels With Applications to Action Recognition," vol.27(10), pp.5060-5075, Oct. 2018. Graphs are effective tools for modeling complex data. Setting out from two basic substructures, random walks and trees, we propose a new family of context-dependent random walk graph kernels and a new family of tree pattern graph matching kernels. In our context-dependent graph kernels, context information is incorporated into primary random walk groups. A multiple kernel learning algorithm with a proposed <inline-formula> <tex-math notation="LaTeX">$l_{1,2}$ </tex-math></inline-formula>-norm regularization is applied to combine context-dependent graph kernels of different orders. This improves the similarity measurement between graphs. In our tree-pattern graph matching kernel, a quadratic optimization with a sparse constraint is proposed to select the correctly matched tree-pattern groups. This augments the discriminative power of the tree-pattern graph matching. We apply the proposed kernels to human action recognition, where each action is represented by two graphs which record the spatiotemporal relations between local feature vectors. Experimental comparisons with state-of-the-art algorithms on several benchmark data sets demonstrate the effectiveness of the proposed kernels for recognizing human actions. It is shown that our kernel based on tree-pattern groups, which have more complex structures and exploit more local topologies of graphs than random walks, yields more accurate results but requires more runtime than the context-dependent walk graph kernel.
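The primary substructure in the first kernel family is the random walk, and the classical context-free baseline is the geometric random-walk kernel on the direct-product graph, which the paper extends with context information. A NumPy sketch for two graphs given as adjacency matrices, assuming `lam` is small enough for the geometric series to converge:

```python
import numpy as np

def random_walk_kernel(a1, a2, lam=0.1):
    """Geometric random-walk kernel: counts all common walks of the two
    graphs, weighting a walk of length k by lam^k. Uses the identity
    sum_k (lam * Ax)^k = (I - lam * Ax)^{-1} on the direct-product
    graph, valid when lam < 1 / spectral_radius(Ax)."""
    ax = np.kron(a1, a2)              # adjacency of the direct-product graph
    n = ax.shape[0]
    inv = np.linalg.inv(np.eye(n) - lam * ax)
    return inv.sum()
```

Comparing a graph with an edgeless graph reduces the kernel to counting length-0 walks only, so any shared structure strictly increases the value.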

Xi Peng;Jiashi Feng;Shijie Xiao;Wei-Yun Yau;Joey Tianyi Zhou;Songfan Yang; "Structured AutoEncoders for Subspace Clustering," vol.27(10), pp.5076-5086, Oct. 2018. Existing subspace clustering methods typically employ shallow models to estimate underlying subspaces of unlabeled data points and cluster them into corresponding groups. However, due to the limited representative capacity of the employed shallow models, those methods may fail in handling realistic data without the linear subspace structure. To address this issue, we propose a novel subspace clustering approach by introducing a new deep model, the Structured AutoEncoder (StructAE). The StructAE learns a set of explicit transformations to progressively map input data points into nonlinear latent spaces while preserving the local and global subspace structure. In particular, to preserve local structure, the StructAE learns representations for each data point by minimizing reconstruction error with respect to itself. To preserve global structure, the StructAE incorporates prior structured information by encouraging the learned representation to preserve specified reconstruction patterns over the entire data set. To the best of our knowledge, StructAE is one of the first deep subspace clustering approaches. Extensive experiments show that the proposed StructAE significantly outperforms 15 state-of-the-art subspace clustering approaches in terms of five evaluation metrics.

Dongdong Hou;Weiming Zhang;Yang Yang;Nenghai Yu; "Reversible Data Hiding Under Inconsistent Distortion Metrics," vol.27(10), pp.5087-5099, Oct. 2018. Recursive code construction (RCC), based on the optimal transition probability matrix (OTPM), approaching the rate-distortion bound of reversible data hiding (RDH) has been proposed. Using the existing methods, OTPM can be effectively estimated only for a consistent distortion metric, i.e., if the host elements at different positions share the same distortion metric. However, in many applications, the distortion metrics are position dependent and should thus be inconsistent. Inconsistent distortion metrics can usually be quantified as a multi-distortion metric. In this paper, we first formulate the rate-distortion problem of RDH under a multi-distortion metric and subsequently propose a general framework to estimate the corresponding OTPM, with which RCC is extended to approach the rate-distortion bound of RDH under the multi-distortion metric. We apply the proposed framework to two examples of inconsistent distortion metrics: RDH in color image and reversible steganography. The experimental results show that the proposed method can efficiently improve upon the existing techniques.

IEEE Transactions on Medical Imaging - new TOC (2018 July 16) [Website]

* "Table of contents," vol.37(7), pp.C1-C4, July 2018.* Presents the table of contents for this issue of the publication.

* "IEEE Transactions on Medical Imaging publication information," vol.37(7), pp.C2-C2, July 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Victor Solo;Jean-Baptiste Poline;Martin A. Lindquist;Sean L. Simpson;F. DuBois Bowman;Moo K. Chung;Ben Cassidy; "Connectivity in fMRI: Blind Spots and Breakthroughs," vol.37(7), pp.1537-1550, July 2018. In recent years, driven by scientific and clinical concerns, there has been an increased interest in the analysis of functional brain networks. The goal of these analyses is to better understand how brain regions interact, how this depends upon experimental conditions and behavioral measures, and how anomalies (disease) can be recognized. In this paper, we provide, first, a brief review of some of the main existing methods of functional brain network analysis. But rather than compare them, as a traditional review would, we draw attention to their significant limitations and blind spots. Second, relevant experts sketch a number of emerging methods that can break through these limitations. In particular, we discuss five such methods. The first two, stochastic block models and exponential random graph models, provide an inferential basis for network analysis lacking in the exploratory graph analysis methods. The other three address network comparison via persistent homology, time-varying connectivity that distinguishes sample fluctuations from neural fluctuations, and network system identification that draws inferential strength from temporal autocorrelation.

Heng Huang;Xintao Hu;Yu Zhao;Milad Makkie;Qinglin Dong;Shijie Zhao;Lei Guo;Tianming Liu; "Modeling Task fMRI Data Via Deep Convolutional Autoencoder," vol.37(7), pp.1551-1561, July 2018. Task-based functional magnetic resonance imaging (tfMRI) has been widely used to study functional brain networks under task performance. Modeling tfMRI data is challenging due to at least two problems: the lack of the ground truth of underlying neural activity and the highly complex intrinsic structure of tfMRI data. To better understand brain networks based on fMRI data, data-driven approaches have been proposed, for instance, independent component analysis (ICA) and sparse dictionary learning (SDL). However, both ICA and SDL only build shallow models, and they are under the strong assumption that the original fMRI signal could be linearly decomposed into time series components with their corresponding spatial maps. As growing evidence shows that human brain function is hierarchically organized, new approaches that can infer and model the hierarchical structure of brain networks are widely called for. Recently, the deep convolutional neural network (CNN) has drawn much attention, in that it has proven to be a powerful method for learning high-level and mid-level abstractions from low-level raw data. Inspired by the power of deep CNN, in this paper, we developed a new neural network structure based on CNN, called the deep convolutional auto-encoder (DCAE), in order to take advantage of both the data-driven approach and CNN's hierarchical feature abstraction ability for the purpose of learning mid-level and high-level features from complex, large-scale tfMRI time series in an unsupervised manner. The DCAE has been applied and tested on the publicly available Human Connectome Project tfMRI data sets, and promising results are achieved.

Guotai Wang;Wenqi Li;Maria A. Zuluaga;Rosalind Pratt;Premal A. Patel;Michael Aertsen;Tom Doel;Anna L. David;Jan Deprest;Sébastien Ourselin;Tom Vercauteren; "Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning," vol.37(7), pp.1562-1573, July 2018. Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.

Jérôme Baranger;Bastien Arnal;Fabienne Perren;Olivier Baud;Mickael Tanter;Charlie Demené; "Adaptive Spatiotemporal SVD Clutter Filtering for Ultrafast Doppler Imaging Using Similarity of Spatial Singular Vectors," vol.37(7), pp.1574-1586, July 2018. Singular value decomposition of ultrafast imaging ultrasonic data sets has recently been shown to build a vector basis far more adapted to the discrimination of tissue and blood flow than the classical Fourier basis, improving clutter filtering and blood flow estimation by a large factor. However, the question of optimally estimating the boundary between the tissue subspace and the blood flow subspace remained unanswered. Here, we introduce an efficient estimator for automatic thresholding of subspaces and compare it to an exhaustive list of thirteen estimators that could achieve this task based on the main characteristics of the singular components, namely the singular values, the temporal singular vectors, and the spatial singular vectors. The performance of those fourteen estimators was tested in vitro in a large set of controlled experimental conditions with different tissue motion and flow speeds on a phantom. The estimator based on the degree of resemblance of spatial singular vectors outperformed all others. Apart from solving the thresholding problem, the additional benefit of this estimator was its denoising capability, strongly increasing the contrast-to-noise ratio and lowering the noise floor by at least 5 dB. This confirms that, contrary to conventional clutter filtering techniques that are almost exclusively based on temporal characteristics, efficient clutter filtering of ultrafast Doppler imaging cannot overlook space. Finally, this estimator was applied in vivo on various organs (human brain, kidney, carotid, and thyroid) and showed efficient clutter filtering and noise suppression, greatly improving the dynamic range of the obtained ultrafast power Doppler images.
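The underlying SVD clutter filter is compact to state: stack the frames into a Casorati matrix, discard the leading (tissue) singular components, and rebuild. The paper's contribution is estimating the threshold automatically from the similarity of spatial singular vectors; in this sketch `n_tissue` is simply given:

```python
import numpy as np

def svd_clutter_filter(frames, n_tissue):
    """Spatiotemporal SVD clutter filter for an ultrafast acquisition of
    shape (nz, nx, nt): reshape to a Casorati matrix (pixels x time),
    zero the first n_tissue singular components (the tissue/clutter
    subspace), and reconstruct the blood-flow signal."""
    nz, nx, nt = frames.shape
    casorati = frames.reshape(nz * nx, nt)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:n_tissue] = 0.0          # suppress the tissue subspace
    filtered = (u * s_filtered) @ vt
    return filtered.reshape(nz, nx, nt)
```

On data dominated by a slowly varying tissue component, removing only the first singular component already strips most of the clutter energy.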

Seungeon Kim;Yongjin Chang;Jong Beom Ra; "Cardiac Motion Correction for Helical CT Scan With an Ordinary Pitch," vol.37(7), pp.1587-1596, July 2018. Cardiac X-ray computed tomography (CT) imaging is still challenging due to the cardiac motion during CT scanning, which leads to the presence of motion artifacts in the reconstructed image. In response, many cardiac X-ray CT imaging algorithms have been proposed, based on motion estimation (ME) and motion compensation (MC), to improve the image quality by alleviating the motion artifacts in the reconstructed image. However, these ME/MC algorithms are mainly based on an axial scan or a low-pitch helical scan. In this paper, we propose a ME/MC-based cardiac imaging algorithm for the data set acquired from a helical scan with an ordinary pitch of around 1.0 so as to obtain the whole cardiac image within a single scan of short time without ECG gating. In the proposed algorithm, a sequence of partial angle reconstructed (PAR) images is generated by using consecutive parts of the sinogram, each of which has a small angular span. Subsequently, an initial 4-D motion vector field (MVF) is obtained using multiple pairs of conjugate PAR images. The 4-D MVF is then refined based on an image quality metric so as to improve the quality of the motion-compensated image. Finally, a time-resolved cardiac image is obtained by performing motion-compensated image reconstruction by using the refined 4-D MVF. Using digital XCAT phantom data sets and a human data set commonly obtained via a helical scan with a pitch of 1.0, we demonstrate that the proposed algorithm significantly improves the image quality by alleviating motion artifacts.

Huazhu Fu;Jun Cheng;Yanwu Xu;Damon Wing Kee Wong;Jiang Liu;Xiaochun Cao; "Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation," vol.37(7), pp.1597-1605, July 2018. Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately and rely on hand-crafted visual features from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of a multi-scale input layer, a U-shape convolutional network, a side-output layer, and a multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. To further improve the segmentation performance, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation results on the ORIGA data set. Simultaneously, the proposed method also obtains satisfactory glaucoma screening performance with the calculated CDR values on both the ORIGA and SCES datasets.
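The polar transformation itself is a resampling from Cartesian to (radius, angle) coordinates. A nearest-neighbour sketch assuming the optic disc sits at the image centre (M-Net would presumably use a detected disc centre, and interpolation rather than rounding):

```python
import numpy as np

def polar_transform(img, n_radii=64, n_angles=64):
    """Sample a 2-D image along rays from its centre, producing an
    (n_radii, n_angles) polar representation by nearest-neighbour
    lookup. Row index = radius, column index = angle."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    radii = np.linspace(0.0, r_max, n_radii)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]
```

In this representation the roughly circular disc and cup boundaries become roughly horizontal lines, which is what makes segmentation easier for the network.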

Hasan H. Eroğlu;Mehdi Sadighi;B. Murat Eyüboğlu; "Induced Current Magnetic Resonance Electrical Conductivity Imaging With Oscillating Gradients," vol.37(7), pp.1606-1617, July 2018. In this paper, induced current magnetic resonance electrical impedance tomography (ICMREIT) by means of current induction due to time-varying gradient fields of magnetic resonance imaging (MRI) systems is proposed. Eddy current and secondary magnetic flux density distributions are calculated for a numerical model composed of a z-gradient coil and a cylindrical conductor. An MRI pulse sequence is developed for the experimental evaluation of ICMREIT on a 3T MRI scanner. A relationship between the secondary magnetic flux density and the low-frequency (LF) MR phase is formulated. Characteristics of the LF phase, the eddy current, and the reconstructed conductivity distributions based on the simulated and the physical measurements are in agreement. Geometric shifts, which may contaminate the LF phase measurements, are not observed in the MR magnitude images. Low sensitivity of the LF phase measurements is a major limitation of ICMREIT towards clinical applications. The reconstructed conductivity images are rough estimates of true conductivity distribution of the experimental phantoms. Although the experimental results show that ICMREIT is safe and potentially applicable, its measurement sensitivity and reconstruction accuracy need to be optimized in order to improve the technique towards clinical applications.

Martin Villiger;Kenichiro Otsuka;Antonios Karanasos;Pallavi Doradla;Jian Ren;Norman Lippok;Milen Shishkov;Joost Daemen;Roberto Diletti;Robert-Jan van Geuns;Felix Zijlstra;Jouke Dijkstra;Gijs van Soest;Evelyn Regar;Seemantini K. Nadkarni;Brett E. Bouma; "Repeatability Assessment of Intravascular Polarimetry in Patients," vol.37(7), pp.1618-1625, July 2018. Intravascular polarimetry with polarization sensitive optical frequency domain imaging (PS-OFDI) measures polarization properties of the vessel wall and offers characterization of coronary atherosclerotic lesions beyond the cross-sectional image of arterial microstructure available to conventional OFDI. A previous study of intravascular polarimetry in cadaveric human coronary arteries found that tissue birefringence and depolarization provide valuable insight into key features of atherosclerotic plaques. In addition to various tissue components, catheter and sample motion can also influence the polarization of near-infrared light as used by PS-OFDI. This paper aimed to evaluate the robustness and repeatability of imaging tissue birefringence and depolarization in a clinical setting. Thirty patients scheduled for percutaneous coronary intervention at the Erasmus Medical Center underwent repeated PS-OFDI pullback imaging, using commercial imaging catheters in combination with a custom-built PS-OFDI console. We identified 274 matching cross sections among the repeat pullbacks to evaluate the reproducibility of the conventional backscatter intensity, the birefringence, and the depolarization signals at each spatial location across the vessel wall. Bland-Altman analysis revealed best agreement for the birefringence measurements, followed by backscatter intensity and depolarization, when limiting the analysis to areas of meaningful birefringence. Pearson correlation analysis confirmed the highest correlation for birefringence (0.86), preceding backscatter intensity (0.83) and depolarization (0.78). Our results demonstrate that intravascular polarimetry generates robust maps of tissue birefringence and depolarization in a clinical setting. This outcome motivates the use of intravascular polarimetry for future clinical studies that investigate polarization properties of arterial atherosclerosis.
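The two repeatability analyses used here are standard and compact enough to sketch: Bland-Altman limits of agreement and the Pearson correlation between matched measurements from repeat pullbacks:

```python
import numpy as np

def bland_altman_limits(x, y):
    """Bland-Altman agreement between paired repeated measurements:
    returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    return float(np.corrcoef(x, y)[0, 1])
```

Perfectly repeatable measurements give zero bias, zero-width limits of agreement, and a correlation of 1.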

Yonghyun Ha;Chang-Hoon Choi;N. Jon Shah; "Development and Implementation of a PIN-Diode Controlled, Quadrature-Enhanced, Double-Tuned RF Coil for Sodium MRI," vol.37(7), pp.1626-1631, July 2018. Sodium (23Na) MRI provides complementary cellular and metabolic information. However, the intrinsic MR sensitivity of 23Na is considerably lower compared with that of the proton, making it difficult to measure MR-detectable sodium signals. It is therefore important to keep the signal-to-noise ratio (SNR) of the sodium signal as high as possible. Double-tuned coils are often employed in combination with a 1H coil, providing structural images and B0 shimming capability. The double-tuned coil design can be achieved with the use of two geometrically decoupled coils whose B1 field directions are perpendicular to each other. This can be used to design quadrature-driven, single-nucleus coils to improve SNR; additionally, this coil can also be utilized in a linear-driven, double-resonant mode. Here, we have developed and evaluated a quadrature-enhanced, double-tuned coil. The novel coil uses PIN-diode switches, inserted only in the loop coil, to shift the resonance frequency between 1H and 23Na, so that 23Na signals can be acquired in quadrature while the 1H function is retained. Consequently, the 23Na SNR values obtained with the double-tuned coil are nearly 33% and 17% higher in comparison with geometrically identical single-tuned coils. SNR plots also show the superiority of the double-tuned coil for 23Na imaging.

Won-Joon Do;Seung Hong Choi;Sung-Hong Park; "Simultaneous Variable-Slab Dual-Echo TOF MR Angiography and Susceptibility-Weighted Imaging," vol.37(7), pp.1632-1640, July 2018. In this paper, we propose a new 3-D dual-echo method for simultaneous multislab time-of-flight MR angiography (TOF MRA) and single-slab susceptibility-weighted imaging (SWI). The previous echo-specific k-space reordering scheme for compatible dual-echo arteriovenography (CODEA) was advanced to applying excitation RF pulses for multiple thin slabs and a single thick slab to the first (TOF MRA) and second (SWI) echoes, respectively. Single-slab CODEA and multislab CODEA (fixed-slab CODEAs) were additionally acquired as comparison reference to the proposed variable-slab CODEA. Parallel imaging was also tested for feasibility of accelerating the proposed method. TOF MRA and SWI from the proposed variable-slab CODEA were visually and quantitatively comparable to multislab TOF MRA and single-slab SWI, respectively, separately acquired from the fixed-slab CODEAs. The parallel imaging reduced the scan time from 10.3 to 5.6 min. Furthermore, the proposed variable-slab approach improved the vessel continuities at slab boundaries of TOF MRA for CODEA as well as for the conventional single echo method. The proposed variable-slab CODEA provided multislab TOF MRA and single-slab SWI simultaneously in a clinically reasonable scan time of ~5 min with minimal impact on image qualities, while suppressing slab boundary artifacts in TOF MRA.

Yushan Zheng;Zhiguo Jiang;Haopeng Zhang;Fengying Xie;Yibing Ma;Huaqiang Shi;Yu Zhao; "Histopathological Whole Slide Image Analysis Using Context-Based CBIR," vol.37(7), pp.1641-1652, July 2018. Histopathological image classification (HIC) and content-based histopathological image retrieval (CBHIR) are two promising applications for the histopathological whole slide image (WSI) analysis. HIC can efficiently predict the type of lesion involved in a histopathological image. In general, HIC can aid pathologists in locating high-risk cancer regions from a WSI by providing a cancerous probability map for the WSI. In contrast, CBHIR was developed to allow searches for regions with similar content for a region of interest (ROI) from a database consisting of historical cases. Sets of cases with similar content are accessible to pathologists, which can provide more valuable references for diagnosis. A drawback of the recent CBHIR framework is that a query ROI needs to be manually selected from a WSI. An automatic CBHIR approach for a WSI-wise analysis needs to be developed. In this paper, we propose a novel aided-diagnosis framework of breast cancer using whole slide images, which shares the advantages of both HIC and CBHIR. In our framework, CBHIR is automatically processed throughout the WSI, based on which a probability map regarding the malignancy of breast tumors is calculated. Through the probability map, the malignant regions in WSIs can be easily recognized. Furthermore, the retrieval results corresponding to each sub-region of the WSIs are recorded during the automatic analysis and are available to pathologists during their diagnosis. Our method was validated on fully annotated WSI data sets of breast tumors. The experimental results certify the effectiveness of the proposed method.

Ilwoo Lyu;Sun Hyung Kim;Neil D. Woodward;Martin A. Styner;Bennett A. Landman; "TRACE: A Topological Graph Representation for Automatic Sulcal Curve Extraction," vol.37(7), pp.1653-1663, July 2018. A proper geometric representation of the cortical regions is a fundamental task for cortical shape analysis and landmark extraction. However, a significant challenge has arisen due to the highly variable, convoluted cortical folding patterns. In this paper, we propose a novel topological graph representation for automatic sulcal curve extraction (TRACE). In practice, the reconstructed surface suffers from noise influences introduced during image acquisition/surface reconstruction. In the presence of noise on the surface, TRACE determines stable sulcal fundic regions by employing the line simplification method that prevents the sulcal folding pattern from being significantly smoothed out. The sulcal curves are then traced over the connected graph in the determined regions by Dijkstra's shortest path algorithm. For validation, we used the state-of-the-art surface reconstruction pipelines on a reproducibility data set. The experimental results showed higher reproducibility and robustness to noise in TRACE than in the existing method (Li et al. 2010), with over 20% relative improvement in error for both surface reconstruction pipelines. In addition, the sulcal curves extracted by TRACE were well-aligned with manually delineated primary sulcal curves. We also provided a choice of parameters to control the quality of the extracted sulcal curves and showed the influence of the parameter selection on the resulting curves.
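The curve-tracing step relies on Dijkstra's shortest path over the graph built from the detected sulcal fundic regions; the graph construction and stable-region detection are the paper's contribution, but the path search itself is standard:

```python
import heapq

def dijkstra_path(adj, src, dst):
    """Dijkstra's shortest path over a weighted graph given as
    {node: [(neighbor, weight), ...]}. Returns the node list from src
    to dst, or None when dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    # walk the predecessor chain back from dst to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```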

Saeed Rezajoo;Ahmad R. Sharafat; "Robust Estimation of Displacement in Real-Time Freehand Ultrasound Strain Imaging," vol.37(7), pp.1664-1677, July 2018. We present a novel and efficient approach for robust estimation of displacement in real-time strain imaging for freehand ultrasound elastography by utilizing pre- and post-deformation ultrasound images. We define a quality factor for image lines and find the line with the highest value of quality factor to serve as the seed line for generating the displacement map. We also develop an analytical framework for coarse-to-fine displacement estimation, obtain an initial estimate of the seed line's displacement with subsample precision, and propagate it to the entire image to obtain a high quality strain image. Our fast strategy for estimating the seed line's displacement enables us to enhance the robustness without sacrificing the speed by identifying a new seed line when the quality falls below a given threshold. This is more efficient than the existing approaches that utilize multiple seed lines to improve robustness. Simulations, phantom experiments, and clinical studies show high signal-to-noise-ratio and contrast-to-noise-ratio values in our method for a wide range of average strain levels (1%-10%). Phantom experiments also demonstrate that our method is robust against corrupt and decorrelated data. Our method is superior to the existing real-time methods as it can produce high-quality strain images for up to 10% average strain levels at the rate of 20 frames/s on conventional CPUs.
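The displacement estimation described above is, at heart, a matching problem between pre- and post-deformation signals. A generic integer-lag normalised cross-correlation sketch of that matching step, on synthetic 1D signals; the paper's quality factor, seed-line selection, and subsample refinement are not modelled:

```python
import numpy as np

def best_shift(pre, post, max_lag):
    """Integer lag maximising the normalised cross-correlation between
    pre[i] and post[i + lag] over a bounded search window."""
    n = len(pre)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a = pre[max(0, -lag): n - max(0, lag)]
        b = post[max(0, lag): n + min(0, lag)]
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        score = float(a @ b) / denom
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A synthetic "RF line" displaced by 3 samples between acquisitions.
pre = np.sin(np.linspace(0, 6 * np.pi, 200))
post = np.roll(pre, 3)            # post[i] == pre[i - 3]
print(best_shift(pre, post, 10))  # 3
```

A real elastography pipeline would run this per window down each image line and refine the integer lag to subsample precision, as the abstract describes.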

Alexis Arnaud;Florence Forbes;Nicolas Coquery;Nora Collomb;Benjamin Lemasson;Emmanuel L. Barbier; "Fully Automatic Lesion Localization and Characterization: Application to Brain Tumors Using Multiparametric Quantitative MRI Data," vol.37(7), pp.1678-1689, July 2018. When analyzing brain tumors, two tasks are intrinsically linked: spatial localization and physiological characterization of the lesioned tissues. Automated data-driven solutions exist, based on image segmentation techniques or physiological parameter analysis, but for each task separately, the other being performed manually or with user tuning operations. In this paper, the availability of quantitative magnetic resonance (MR) parameters is combined with advanced multivariate statistical tools to design a fully automated method that jointly performs both localization and characterization. Non-trivial interactions between relevant physiological parameters are captured thanks to recent generalized Student distributions that provide a larger variety of distributional shapes compared to the more standard Gaussian distributions. Probabilistic mixtures of the former distributions are then considered to account for the different tissue types and potential heterogeneity of lesions. Discriminative multivariate features are extracted from this mixture modeling and turned into individual lesion signatures. The signatures are subsequently pooled together to build a statistical fingerprint model of the different lesion types that captures lesion characteristics while accounting for inter-subject variability. The potential of this generic procedure is demonstrated on a data set of 53 rats, with 36 rats bearing 4 different brain tumors, for which 5 quantitative MR parameters were acquired.

Antonio R. Porras;Beatriz Paniagua;Scott Ensel;Robert Keating;Gary F. Rogers;Andinet Enquobahrie;Marius George Linguraru; "Locally Affine Diffeomorphic Surface Registration and Its Application to Surgical Planning of Fronto-Orbital Advancement," vol.37(7), pp.1690-1700, July 2018. Metopic craniosynostosis is a condition caused by the premature fusion of the metopic cranial suture. If untreated, it can result in brain growth restriction, increased intra-cranial pressure, visual impairment, and cognitive delay. Fronto-orbital advancement is the widely accepted surgical approach to correct cranial shape abnormalities in patients with metopic craniosynostosis, but the outcome of the surgery remains very dependent on the expertise of the surgeon because of the lack of objective and personalized cranial shape metrics to target during the intervention. We propose in this paper a locally affine diffeomorphic surface registration framework to create an optimal interventional plan personalized to each patient. Our method calculates the optimal surgical plan by minimizing cranial shape abnormalities, which are quantified using objective metrics based on a normative model of cranial shapes built from 198 healthy cases. It is guided by clinical osteotomy templates for fronto-orbital advancement, and it automatically calculates how much and in which direction each bone piece needs to be translated, rotated, and/or bent. Our locally affine framework models separately the transformation of each bone piece while ensuring the consistency of the global transformation. We used our method to calculate the optimal surgical plan for 23 patients, obtaining a significant reduction of malformations (p < 0.001) between 40.38% and 50.85% in the simulated outcome of the surgery using different osteotomy templates. In addition, malformation values were within healthy ranges (p > 0.01).

Siyuan Zhang;Shaoqiang Shang;Yuqiang Han;Chunming Gu;Shan Wu;Sihao Liu;Gang Niu;Ayache Bouakaz;Mingxi Wan; "Ex Vivo and In Vivo Monitoring and Characterization of Thermal Lesions by High-Intensity Focused Ultrasound and Microwave Ablation Using Ultrasonic Nakagami Imaging," vol.37(7), pp.1701-1710, July 2018. The feasibility of ultrasonic Nakagami imaging to evaluate thermal lesions by high-intensity focused ultrasound and microwave ablation was explored in ex vivo and in vivo liver models. Dynamic changes of the ultrasonic Nakagami parameter in thermal lesions were calculated, and ultrasonic B-mode and Nakagami images were reconstructed simultaneously. The contrast-to-noise ratio (CNR) between thermal lesions and normal tissue was used to estimate the contrast resolution of the monitoring images. After thermal ablation, a bright hyper-echoic region appeared in the ultrasonic B-mode and Nakagami images, identifying the thermal lesion. During thermal ablation, mean values of the Nakagami parameter showed an increasing trend, from 0.72 to 1.01 for the ex vivo model and from 0.54 to 0.72 for the in vivo model. After thermal ablation, mean CNR values of the ultrasonic Nakagami images were 1.29 dB (ex vivo) and 0.80 dB (in vivo), significantly higher (p < 0.05) than those for B-mode images. Thermal lesion size, assessed using ultrasonic Nakagami images, shows a good correlation with that obtained from the gross-pathology images (for the ex vivo model: length, r = 0.96; width, r = 0.90; for the in vivo model: length, r = 0.95; width, r = 0.85). This preliminary study suggests that the ultrasonic Nakagami parameter may have a potential use in evaluating the formation of thermal lesions with better image contrast. Moreover, ultrasonic Nakagami imaging combined with B-mode imaging may be utilized as an alternative modality in developing monitoring systems for image-guided thermal ablation treatments.
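The Nakagami parameter tracked above is commonly estimated from the ultrasound envelope by the moment-based estimator m = (E[R²])² / Var(R²). A minimal sketch on a synthetic envelope (not the authors' data or imaging chain); a Rayleigh-distributed envelope, i.e. fully developed speckle, should give m ≈ 1:

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based Nakagami shape parameter: m = E[R^2]^2 / Var(R^2)."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return float(r2.mean() ** 2 / r2.var())

# Rayleigh envelope (fully developed speckle) has a true m of 1.
rng = np.random.default_rng(0)
rayleigh = rng.rayleigh(scale=1.0, size=200_000)
print(round(nakagami_m(rayleigh), 2))  # close to 1.0
```

In Nakagami imaging this estimator is evaluated in a sliding window over the envelope data to form a parametric map, with m rising as ablated tissue scatters less like pure speckle.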

Daoqiang Zhang;Jiashuang Huang;Biao Jie;Junqiang Du;Liyang Tu;Mingxia Liu; "Ordinal Pattern: A New Descriptor for Brain Connectivity Networks," vol.37(7), pp.1711-1722, July 2018. Brain connectivity networks based on magnetic resonance imaging (MRI) or functional MRI (fMRI) data provide a straightforward way to quantify the structural or functional systems of the brain. Currently, there are several network descriptors developed for representing and analyzing brain connectivity networks. However, most of them are designed for unweighted networks, disregarding the valuable weight information of edges, or do not take advantage of the ordinal relationship of weighted edges (even though they are designed for weighted networks). In this paper, we propose a new network descriptor (i.e., the ordinal pattern, which contains a sequence of weighted edges) for brain connectivity network analysis. Compared with previous network properties, the proposed ordinal patterns not only take advantage of the weight information of edges but also explicitly model the ordinal relationship of weighted edges in brain connectivity networks. We further develop an ordinal pattern-based learning framework for brain disease diagnosis using resting-state fMRI data. Specifically, we first construct a set of brain functional connectivity networks, where each network corresponds to a particular subject. We then develop an algorithm to identify ordinal patterns that frequently appear in brain connectivity networks of patients and normal controls. We further perform discriminative ordinal pattern selection and extract feature representations for subjects based on the selected ordinal patterns, followed by a learning model for automated brain disease diagnosis. Experimental results on both the Alzheimer's Disease Neuroimaging Initiative and attention deficit hyperactivity disorder-200 data sets demonstrate that our method outperforms several state-of-the-art approaches in the tasks of disease classification and clinical score regression.
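As a toy illustration of the descriptor's core idea (one simplified reading, not the authors' implementation), an ordinal pattern of length 2 can be taken as a pair of connected edges whose weights decrease:

```python
import itertools

# Toy weighted "brain network": edges as (node_a, node_b): weight.
edges = {
    (0, 1): 0.9, (1, 2): 0.7, (2, 3): 0.8, (0, 2): 0.4,
}

def ordinal_pairs(edges):
    """Connected edge pairs (e1, e2) with weight(e1) > weight(e2):
    a length-2 'ordinal pattern' over the weighted network."""
    pairs = []
    for e1, e2 in itertools.permutations(edges, 2):
        shared = set(e1) & set(e2)
        if shared and edges[e1] > edges[e2]:
            pairs.append((e1, e2))
    return pairs

print(len(ordinal_pairs(edges)))  # 5
```

The framework in the paper then mines such patterns that occur frequently across subjects' networks and selects the discriminative ones as features.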

Yang Gao;Weidao Chen;Xiaotong Zhang; "Investigating the Influence of Spatial Constraints on Ultimate Receive Coil Performance for Monkey Brain MRI at 7 T," vol.37(7), pp.1723-1732, July 2018. The RF receive coil array has become increasingly vital in current MR imaging practice due to its extended spatial coverage, maintained high SNR, and improved capability of accelerating data acquisition. The performance of a coil array is intrinsically determined by the current patterns generated in coil elements as well as by the induced electromagnetic fields inside the object. Investigations of the ultimate performance constrained by a specific coil space, which defines all possible current patterns flowing within, offer the opportunity to evaluate coil-space parameters (i.e., coverage, coil-to-object distance, layer thickness, and coil element type) without the necessity of considering the realistic coil element geometry, coil element layout, and number of receive channels in modeling. In this paper, to mimic 7-T monkey RF head coil design, seven hypothetical ultimate coil arrays with different coil-space configurations were mounted over a numerical macaque head model; by using Huygens's surface approximation method, the influences of coil-space design parameters were systematically investigated through evaluating the spatially constrained ultimate intrinsic SNR and ultimate g-factor. Moreover, simulations were also conducted by using four coil arrays with a limited number of loop-only elements, in order to explore to what extent the ultimate coil performance can be achieved by using practical coil designs, and hence several guidelines in RF coil design for monkey brain imaging at 7 T are tentatively drawn. It is believed that the present analysis will offer important implications for novel receive array design for monkey brain MR imaging at ultra-high field.

Yuan Gao;Kun Wang;Shixin Jiang;Yuhao Liu;Ting Ai;Jie Tian; "Corrections to “Bioluminescence Tomography Based on Gaussian Weighted Laplace Prior Regularization for Morphological Imaging of Glioma”," vol.37(7), pp.1733-1733, July 2018. The correct affiliation for Yuan Gao, Kun Wang, and Jie Tian is as follows:

* "EMBS Micro and Nanotechnology in Medicine Conference," vol.37(7), pp.1734-1734, July 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE Life Sciences Conference," vol.37(7), pp.1735-1735, July 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE International Symposium on Biomedical Imaging," vol.37(7), pp.1736-1736, July 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE Transactions on Medical Imaging information for authors," vol.37(7), pp.C3-C3, July 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

IET Image Processing - new TOC (2018 July 16) [Website]

Jian Ji;Fen Ren;Hua-Feng Ji;Ya-Feng Yao;Guo-Fei Hou; "Generalised non-locally centralised image de-noising using sparse dictionary," vol.12(7), pp.1072-1078, 7 2018. Recently, image de-noising algorithms based on sparse representation have received an increasing amount of attention. Such algorithms propose a comprehensive sparse representation model, solving the sparse coding problem and choosing a proper method for dictionary updating to achieve better de-noising results. Therefore, the construction of the learning dictionary has become one of the key problems that limit de-noising effectiveness. The non-locally centralised sparse representation de-noising algorithm uses the principal component analysis method to achieve dictionary updating. Nevertheless, the instability of a single complete dictionary in sparse coding leads to erratic results when restoring the original image. In this study, the authors present a new method named the generalised non-locally centralised sparse representation algorithm. In the proposed method, the authors cluster the training patches extracted from a set of example images into subspaces, and then train dictionaries for the subspaces by sparse analysis k-singular value decomposition, which is utilised to construct a coded sub-block dictionary to avoid the unstable results caused by a single dictionary. Experiments show that the improved method has better signal-to-noise ratio and de-noising effect compared with other methods.

Xianpeng Liang;De-Shuang Huang; "Image segmentation fusion using weakly supervised trace-norm multi-task learning method," vol.12(7), pp.1079-1085, 7 2018. In this study, the authors propose a new method to fuse multiple segmentations generated by different methods or by the same method with different parameters. The proposed method makes several contributions. First, they convert the image segmentation fusion problem into a weakly supervised learning problem, so that the information of superpixels can be used to guide the fusion process. Second, they treat the multiple segmentations as multiple closely related tasks and utilise a multi-task learning method to evaluate the reliability of the segmentations. Third, they design a strategy to ensemble the evaluated segmentation maps to obtain the final segmentation. The experiment on the benchmark dataset MSRC demonstrates the superior performance of the proposed method on image foreground and background segmentation.

Weiling Cai;Ming Yang;Fengyi Song; "Image filtering method using trimmed statistics and edge preserving," vol.12(7), pp.1086-1094, 7 2018. Image filtering aims to retain the details of the image as much as possible while suppressing noise to a great extent. This study presents an image filtering method using trimmed statistics and edge preserving. In the first step of our method, the alpha-trimmed filter is utilized to remove a variety of types of noise; in the second step, taking the image after alpha-trimmed filtering as a guide image, the local linear model between the guide image and the target image is established; in the third step, the obtained local linear model is further simplified to reduce the time complexity; and finally, using the relationship between the local variance and the global variance of the image, the local linear model is modified to enhance the details of the image and meanwhile remove the halo phenomenon. This method has three advantages: (i) it is flexible to deal with images stained by various types of high-intensity noise; (ii) it is effective to keep the image details and profile information, and remove the halo phenomenon; and (iii) it runs in time linear in the image size, thus its computational complexity is low. Experimental results show that the proposed filter is robust and efficient.
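The first step above, alpha-trimmed filtering, can be sketched directly (a naive, unoptimised reference implementation):

```python
import numpy as np

def alpha_trimmed(image, ksize=3, trim=1):
    """Alpha-trimmed mean filter: in each ksize x ksize window, drop the
    `trim` smallest and `trim` largest values and average the rest."""
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            win = np.sort(padded[i:i + ksize, j:j + ksize].ravel())
            out[i, j] = win[trim:len(win) - trim].mean()
    return out

# An impulse ("salt") spike is rejected entirely by the trimming.
img = np.zeros((5, 5))
img[2, 2] = 255.0
print(alpha_trimmed(img)[2, 2])  # 0.0
```

With trim = 0 this degenerates to a plain mean filter; with maximal trimming it approaches a median filter, which is why the method handles both impulsive and Gaussian-like noise.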

Thangarajah Akilan;Qingming Jonathan Wu;Hui Zhang; "Effect of fusing features from multiple DCNN architectures in image classification," vol.12(7), pp.1102-1110, 7 2018. Automatic image classification has become a necessary task to handle the rapidly growing digital image usage. It has branched out into many algorithms and adopted new techniques. Among them, feature fusion-based image classification methods traditionally rely on hand-crafted features. However, it has been proven that the bottleneck features extracted through pre-trained convolutional neural networks (CNNs) can improve classification accuracy. Hence, this study analyses the effect of fusing such cues from multiple architectures without being tied to any hand-crafted features. First, the CNN features are extracted from three different pre-trained models, namely AlexNet, VGG-16, and Inception-V3. Then, a generalised feature space is formed by employing principal component reconstruction and energy-level normalisation, where the features from each individual CNN are mapped into a common subspace and embedded using arithmetic rules to construct fused feature vectors (FFVs). This transformation plays a vital role in creating an appearance-invariant representation by capturing complementary information of different high-level features. Finally, a multi-class linear support vector machine is trained. The experimental results demonstrate that such multi-modal CNN feature fusion is well suited for image/object classification tasks, yet, surprisingly, it has so far not been explored extensively by the computer vision research community.
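The fusion pipeline above (per-network projection into a common subspace, energy normalisation, arithmetic combination) can be sketched with random stand-ins for the CNN features; the dimensions and the element-wise-sum rule below are illustrative, not necessarily the paper's exact choices:

```python
import numpy as np

def pca_project(feats, dim):
    """Project features onto their top `dim` principal components."""
    centred = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:dim].T

def fuse(feat_a, feat_b, dim):
    """Map two feature sets into a common subspace, normalise each row
    to unit energy, and fuse by element-wise addition."""
    a = pca_project(feat_a, dim)
    b = pca_project(feat_b, dim)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return a + b

rng = np.random.default_rng(1)
fa = rng.normal(size=(10, 4096))   # stand-in for "AlexNet-like" features
fb = rng.normal(size=(10, 2048))   # stand-in for "Inception-like" features
print(fuse(fa, fb, 8).shape)  # (10, 8)
```

The fused vectors would then feed a linear SVM, as in the abstract; the common subspace is what lets features of different native dimensionality be combined arithmetically.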

Shagufta Yasmin;Stephen J. Sangwine; "Multi-directional colour edge detector using LQS convolution," vol.12(7), pp.1111-1116, 7 2018. A new linear colour image filter based on linear quaternion systems (LQSs) is introduced. It detects horizontal, vertical, left- and right-diagonal edges with a single LQS convolution mask. The proposed filter is a canonic minimal filter of four LQS filters, each with a different angle of rotation, combined in parallel. The different angles of rotation are a key feature of the new filter: the horizontal, vertical, left-, and right-diagonal LQS filter masks rotate pixels through angles π/2, 5π/2, 3π/2, and 7π/2, respectively. Although the four LQS masks are combined in parallel into a single LQS mask derived from four quaternion convolutions, one for each edge direction, the LQS filter produces its result without combining the outputs of four separate edge detectors. This methodology could be generalised to design more elaborate LQS filters to perform other geometric operations on colour image pixels. The proposed filter translates smoothly changing colours to different shades of grey and produces coloured edges in multiple directions where there is a sudden change of colour in the original image. Another key property of the proposed filter is that it is linear, because it operates in homogeneous coordinates.

Asad Munir;Shafiullah Soomro;Chang Ha Lee;Kwang Nam Choi; "Adaptive active contours based on variable kernel with constant initialisation," vol.12(7), pp.1117-1123, 7 2018. In this paper, a novel active contour method based on a partial differential equation (PDE) formulation is proposed for image segmentation. The evolution equation incorporates a force term that pushes the contour towards the object boundary, a regularisation term which accounts for the smoothness of the level set function, and an edge term which helps to stop the contour at the required boundaries. The proposed method integrates an image convolved with a variable kernel into the energy formulation, where the width of the kernel varies at each iteration. Therefore, it uses local region information when the kernel width is small, while for larger kernel widths the method considers global region information across the regions. Due to the use of both local and global image information, the method easily detects objects in complex backgrounds and also segments objects whose intensity changes internally. Moreover, the proposed method eliminates the need for contour initialisation by using a constant initialisation scheme. Experimental results on real and medical images prove the robustness of the proposed method. Finally, the authors validate their method on the PH2 database for skin lesion segmentation.

Hongmei Wang;Jiayi Shi; "SAR image segmentation algorithm based on Contourlet domain AFMRF model," vol.12(7), pp.1124-1130, 7 2018. Combining the advantage of the Contourlet transform with an adaptive fuzzy Markov random field (MRF) model, a novel segmentation algorithm is proposed in this study to achieve precise and continuous division of synthetic aperture radar (SAR) images. The classical MRF model is modified to obtain an edge- and process-adaptive label factor. The adaptive fuzzy MRF (AFMRF) model is proposed to balance smooth-region segmentation against texture-region segmentation. At the same time, a Contourlet domain hidden Markov tree (HMT) model is introduced to perform multi-scale directional filtering and intra-scale training on the coefficients of SAR images to achieve precise texture segmentation at each scale. Finally, the AFMRF model is integrated with the inter-scale and intra-scale HMT training results and the segmented image can be obtained. To verify the validity of the proposed algorithm, experiments are conducted on several SAR images and compared with state-of-the-art algorithms. The experimental results and analysis show that the proposed algorithm achieves better results on noise suppression, smoothness of target regions, and precise, continuous segmentation of fuzzy texture.

Kan Wu;Yizhou Yu; "Automatic object extraction from images using deep neural networks and the level-set method," vol.12(7), pp.1131-1141, 7 2018. The authors propose an automatic method for extracting objects with fine quality from photographs. The authors' method starts with finding bounding boxes that enclose potential objects, which is achievable by state-of-the-art object proposal methods. To further segment objects within obtained bounding boxes, the authors propose a new multi-pass level-set method based on saliency detection and foreground pixel classification. The level-set function is initially constructed with respect to the automatically detected salient parts within the bounding box, which eliminates potential user interaction and predicts an initial set of pixels on the object. The input features for foreground pixel classifiers are constructed as a combination of classical texture features from the Gabor filter banks and convolutional features from a pre-trained deep neural network. Through multi-pass evolution of the level-set function and re-training of the foreground pixel classifier, the authors' method is able to overcome possible inaccuracies in the initial level-set function and converge to the real object boundary.

Kumar Rahul;Anil Kumar Tiwari; "Saliency enabled compression in JPEG framework," vol.12(7), pp.1142-1149, 7 2018. Under low bit-rate requirements, the JPEG baseline causes compression artifacts in the image. In this paper, a novel region-of-interest (ROI) dependent quantization method in the JPEG framework is proposed. The proposed method judiciously quantizes DCT coefficients belonging to salient and non-salient regions of the image. In this work, multiple ROIs are optimally identified and ranked by using variances. The number of classes is adaptively calculated using goodness-of-segmentation. After the number of classes and their ranks are obtained, the image is divided into blocks of size 8 × 8. These blocks may belong to more than one class, and hence they are ranked based on their membership in the various classes. The 2D-DCT coefficients of each block are obtained and then quantized adaptively based on the block's rank. The overhead for the rank information of blocks is minimized by applying delta-encoding. Results are analyzed in terms of objective quality parameters and visual perception, and it is found that the blocking artifacts in the proposed method are significantly lower than in JPEG. The efficiency of the proposed method is demonstrated by comparison with recently published similar methods, and it is found superior in terms of the quality of the reconstructed image.
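The rank-dependent quantisation idea can be illustrated on a single 8 × 8 DCT block; the flat base table and scale factors below are hypothetical stand-ins, not the paper's tables:

```python
import numpy as np

# Hypothetical flat base quantisation table (real JPEG tables vary
# per coefficient); higher rank = less salient = coarser quantisation.
BASE_Q = np.full((8, 8), 16.0)

def quantise(dct_block, rank, scales=(0.5, 1.0, 2.0)):
    """Quantise an 8x8 DCT block with a rank-dependent scale factor."""
    q = BASE_Q * scales[rank]
    return np.round(dct_block / q)

block = np.full((8, 8), 100.0)
print(quantise(block, 0)[0, 0], quantise(block, 2)[0, 0])  # 12.0 3.0
```

Salient blocks (rank 0) keep more coefficient precision, while background blocks (rank 2) are quantised four times more coarsely, which is where the bit-rate savings come from.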

Debolina Chakraborty;Anirban Chakraborty;Ayan Banerjee;Sekhar R. Bhadra Chaudhuri; "Automated spectral domain approach of quasi-periodic denoising in natural images using notch filtration with exact noise profile," vol.12(7), pp.1150-1163, 7 2018. Noise removal from digital images, owing to its great popularity among researchers, has stood out in the field of image processing over the last few decades. Periodic noises are unintended spurious signals which often corrupt an image during acquisition/transmission, thereby producing repetitive, spatially dependent patterns that extensively degrade the visual quality of the image. However, high-amplitude noisy spectral components are clearly distinguishable from the remaining uncorrupted ones in the Fourier spectrum of the corrupted image. Hence, it is easier to identify and minimise those noisy components using an appropriate thresholding and filtration technique. Therefore, to start with, a simple yet elegant model of the noise-free natural image is developed from the corrupted one, followed by a proper thresholding method to obtain the noisy bitmap. Finally, an adaptive sinc restoration filter built on the concept of extracting the exact shape of the noise spectrum profile is applied in the filtration phase. The performance of the proposed algorithm is assessed both visually and statistically against other state-of-the-art algorithms in the literature in terms of various performance measurement attributes, providing evidence of more effective restoration with considerably lower computational time.
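The core spectral mechanism described above, thresholding high-amplitude Fourier components and notching them out, can be sketched as follows (a crude global threshold, not the paper's adaptive sinc restoration filter with exact noise-profile extraction):

```python
import numpy as np

def notch_denoise(img, thresh_factor=5.0):
    """Suppress quasi-periodic noise: zero out Fourier components whose
    magnitude exceeds `thresh_factor` times the median magnitude
    (the DC term and its immediate neighbourhood are preserved)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    mag = np.abs(f)
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask = mag > thresh_factor * np.median(mag)
    mask[cy - 2:cy + 3, cx - 2:cx + 3] = False   # keep low frequencies
    f[mask] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# Flat image plus a strong horizontal sinusoid (periodic "stripes").
x = np.arange(64)
stripes = 4.0 * np.sin(2 * np.pi * 8 * x / 64)
img = 10.0 + stripes[None, :] * np.ones((64, 1))
clean = notch_denoise(img)
print(float(clean.std()) < 1e-6)  # True: the periodic term is removed
```

A real periodic-noise filter would shape each notch to the measured noise-peak profile rather than hard-zeroing bins, which is precisely the refinement the abstract claims.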

Frédéric Bousefsaf;Mohamed Tamaazousti;Souheil Hadj Said;Rémi Michel; "Image completion using multispectral imaging," vol.12(7), pp.1164-1174, 7 2018. Here, the authors explore the potential of multispectral imaging applied to image completion. Snapshot multispectral cameras correspond to breakthrough technologies that are suitable for everyday use. Therefore, they correspond to an interesting alternative to digital cameras. In their experiments, multispectral images are acquired using an ultracompact snapshot camera-recorder that senses 16 different spectral channels in the visible spectrum. Direct exploitation of completion algorithms by extension of the spectral channels exhibits only minimum enhancement. A dedicated method that consists in a prior segmentation of the scene has been developed to address this issue. The segmentation derives from an analysis of the spectral data and is employed to constrain research area of exemplar-based completion algorithms. The full processing chain takes benefit from standard methods that were developed by both hyperspectral imaging and computer vision communities. Results indicate that image completion constrained by spectral presegmentation ensures better consideration of the surrounding materials and simultaneously improves rendering consistency, in particular for completion of flat regions that present no clear gradients and little structure variance. The authors validate their method with a perceptual evaluation based on 20 volunteers. This study shows for the first time the potential of multispectral imaging applied to image completion.

Kwangjin Yoon;Young-min Song;Moongu Jeon; "Multiple hypothesis tracking algorithm for multi-target multi-camera tracking with disjoint views," vol.12(7), pp.1175-1184, 7 2018. In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. The authors' method forms track-hypothesis trees, and each branch of them represents a multi-camera track of a target that may move within a camera as well as across cameras. Furthermore, multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating a status of each track hypothesis. Each status represents one of three different stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means targets are tracked by a single-camera tracker. In the searching status, disappeared targets are examined to see whether they reappear in other cameras. The end-of-track status indicates that the target has exited the camera network due to its lengthy invisibility. These three statuses assist MHT in forming the track-hypothesis trees for multi-camera tracking. Furthermore, a gating technique which eliminates unlikely observation-to-track associations using space-time information has been introduced. In the experiments, the proposed method has been tested using two datasets, DukeMTMC and NLPR_MCT, which demonstrates that the method outperforms the state-of-the-art method in terms of accuracy. In addition, the real-time, online performance of the proposed method is also shown in this study.

Bin Liu;Weijie Liu; "Factoring two-dimensional two-channel non-separable stripe filter banks into lifting steps," vol.12(7), pp.1185-1194, 7 2018. Since division with remainder cannot be implemented in multivariable polynomials, the two-dimensional non-separable wavelet transform cannot be lifted in the same way as univariate wavelet transforms. To solve this problem, a general lifting factoring method for two-dimensional two-channel non-separable stripe filter banks is presented. The constructing form of the polyphase matrices of the stripe filter banks is deduced and the general factoring of the polyphase matrices is given. Compared with the separable lifting wavelet transform, the proposed lifting factoring method can extract better texture information. The lifting form is more succinct than that of the tensor-product lifting wavelet transform. The computational cost of the proposed factoring method for image decomposition is a quarter of that of the two-dimensional two-channel non-separable stripe filter bank, and the original two-dimensional two-channel non-separable wavelet system is accelerated. Moreover, the proposed lifting factoring method is faster than the traditional two-dimensional two-channel non-separable wavelet transform based on the Fourier transform framework in which the size of each filter is greater than N + 1. The proposed lifting factoring method has better sparsity than the original wavelet transform and the well-known two-dimensional two-channel biorthogonal symmetric non-separable wavelet transform.
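For readers unfamiliar with lifting, the univariate case that the paper generalises can be shown with the Haar wavelet written as predict/update lifting steps (the standard textbook construction, not the authors' 2D non-separable factoring):

```python
def haar_lift(signal):
    """One level of the Haar transform written as lifting steps:
    predict (detail = odd - even), then update (approx = even + detail/2)."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps exactly, in reverse order."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

a, d = haar_lift([2.0, 4.0, 6.0, 8.0])
print(a, d)               # [3.0, 7.0] [2.0, 2.0]
print(haar_unlift(a, d))  # [2.0, 4.0, 6.0, 8.0]
```

Each lifting step is trivially invertible by subtracting what was added, which is why lifting gives in-place, perfectly reconstructing transforms; the paper's contribution is obtaining such a factorisation for 2D non-separable polyphase matrices, where polynomial division with remainder is unavailable.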

Peyman Rahmani;Gholamhossein Dastghaibyfard; "Two reversible data hiding schemes for VQ-compressed images based on index coding," vol.12(7), pp.1195-1203, 7 2018. This study proposes two reversible data hiding (RDH) schemes for vector quantisation (VQ)-compressed images based on switching-tree coding (STC) and the dynamic tree-coding scheme (DTCS). Most developed VQ-based RDH schemes produce non-legitimate codes as output. In order to preserve the legitimacy of the embedded VQ code, some schemes embed data into VQ indices by employing an index replacement mechanism, and other schemes perform embedding by adopting one of the possible ways of encoding each index when multiple encodings are possible. In the current research, two schemes are proposed based on the second mechanism. The output code of the proposed schemes is a legitimate STC/DTCS code, and a conventional STC/DTCS decoder can decode it to the original VQ index table. The experimental results show that the proposed schemes are feasible and, in comparison with some previous RDH schemes, the first provides higher embedding capacity while the second embeds a substantial amount of data while providing a lower bit rate than most of the previous schemes. In addition, the embedding efficiency of both proposed schemes is higher than that of the previous schemes.

Venkata Udaya Sameer;Sugumaran S;Ruchira Naskar; "K-unknown models detection through clustering in blind source camera identification," vol.12(7), pp.1204-1213, 7 2018. Source camera identification (SCI) is a forensic problem of mapping an image back to its source, often in relation to cybercrime. In this digital era, this problem needs to be addressed with the utmost care as a falsely identified source might implicate an innocent person. A very practical problem in this study is the presence of unknown models in the set of cameras under question. In other words, the images under question might not have originated from any of the camera models that are accessible to the forensic analyst, but from a different inaccessible source. Under such a circumstance, the conventional source detection techniques fail to identify the correct source, and falsely map the image to one of the accessible camera models. To address this problem, here the authors propose an (N + K) SCI scheme which is capable of identifying N known (accessible) as well as K unknown (inaccessible) camera models. The authors' experimental results prove that the proposed scheme efficiently separates the known and unknown models, and helps to achieve considerably high source identification accuracy as compared to the state-of-the-art.
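A minimal sketch of open-set source attribution in the spirit of the (N + K) scheme, assuming nearest-centroid matching with a distance threshold. The feature vectors, centroids, and threshold below are invented for illustration; the authors' method uses clustering over camera fingerprint features:

```python
def nearest_centroid_open_set(sample, centroids, threshold):
    """Map a feature vector to the closest known camera-model centroid,
    or to 'unknown' if every centroid is farther than `threshold`.

    `centroids` is a dict: model name -> feature vector (list of floats).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_model, best_d = None, float("inf")
    for model, c in centroids.items():
        d = dist(sample, c)
        if d < best_d:
            best_model, best_d = model, d
    # Open-set decision: far from all known models means a K-unknown source.
    return best_model if best_d <= threshold else "unknown"
```

The key design point is the rejection branch: a closed-set classifier would always return one of the N known models, which is exactly the failure mode the paper addresses.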

Bo Dai;Zhiqiang Hou;Wangsheng Yu;Feng Zhu;Xin Wang;Zefenfen Jin; "Visual tracking via ensemble autoencoder," vol.12(7), pp.1214-1221, 7 2018. The authors present a novel online visual tracking algorithm via ensemble autoencoder (AE). In contrast to other existing deep model based trackers, the proposed algorithm is based on the theory that the image resolution has an influence on vision procedures. When the authors employ a deep neural network to represent the object, the resolution corresponds to the network size. The authors apply a small network to represent the pattern in a relatively lower resolution and search the object in a relatively larger area of the neighbourhood. After roughly estimating the location of the object, the authors apply a large network, which can provide more detailed information, to estimate the state of the object more accurately. Thus, the authors employ a small AE mainly for position searching and a larger one mainly for scale estimating. When tracking an object, the two networks interact to operate under the framework of particle filtering. Extensive experiments on the benchmark dataset show that the proposed algorithm performs favourably compared with some state-of-the-art methods.

Aswini Kumar Samantaray;Priyadarshi Kanungo;Bibhuprasad Mohanty; "Neighbourhood decision based impulse noise filter," vol.12(7), pp.1222-1227, 7 2018. A novel impulse noise filter that preserves image details and effectively suppresses high-density noise is proposed in this work. The proposed filter works in two phases: (i) a noise pixel detection phase and (ii) a noise pixel restoration phase. In the detection phase, the impulse-noise-corrupted pixels are detected using a neighbourhood decision approach. In the second phase, the true values of the corrupted pixels are restored using a first-order neighbourhood decision approach. Experiments are carried out with both grey-scale and colour images of various resolutions, textures and structures. The proposed scheme achieves a higher peak signal-to-noise ratio and better visual quality than the standard median filter, the modified decision based unsymmetrical trimmed median filter and the improved fast peer-group filter for noise densities varying from 10 to 90%.
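A simplified two-phase filter in the same spirit, assuming salt-and-pepper noise at the extreme grey levels. This is a generic detector/restorer sketch, not the authors' neighbourhood decision rule:

```python
def denoise_impulse(img, low=0, high=255):
    """Simplified two-phase impulse-noise filter (illustrative sketch).

    Phase 1: flag pixels at the extreme values `low`/`high` as noisy.
    Phase 2: replace each noisy pixel with the median of its noise-free
    3x3 neighbours (falling back to all neighbours if none are clean).
    """
    rows, cols = len(img), len(img[0])
    noisy = [[p in (low, high) for p in row] for row in img]
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if not noisy[r][c]:
                continue
            window = [(rr, cc)
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))
                      if (rr, cc) != (r, c)]
            neigh = [img[rr][cc] for rr, cc in window if not noisy[rr][cc]]
            if not neigh:  # every neighbour is noisy: use them anyway
                neigh = [img[rr][cc] for rr, cc in window]
            neigh.sort()
            out[r][c] = neigh[len(neigh) // 2]
    return out
```

Restoring only flagged pixels is what preserves detail: clean pixels pass through untouched, unlike a plain median filter that smooths everything.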

Souad Lahrache;Rajae El Ouazzani;Abderrahim El Qadi; "Rules of photography for image memorability analysis," vol.12(7), pp.1228-1236, 7 2018. Photos are becoming more widespread in the digital age. Cameras, smart phones and the Internet provide large datasets of images available to a wide audience. Assessing the memorability of these photos is becoming a challenging task, and finding the best representative model for memorable images will enable memorability prediction. The authors develop a new approach based on rules of photography to evaluate image memorability. In fact, they use three groups of features: basic image features, layout features and image composition features. In addition, they introduce a diversified panel of classifiers based on data mining techniques for memorability analysis. They evaluate their proposed approach and compare its results to state-of-the-art approaches dealing with image memorability. The experimental results show that the models used in their approach are encouraging predictors of image memorability.

Jianlei Liu; "Visibility distance estimation in foggy situations and single image dehazing based on transmission computation model," vol.12(7), pp.1237-1244, 7 2018. The existing visibility distance estimation algorithms in foggy situations use the region growing method to extract the vertical position of the inflection point of image intensity change. These algorithms have lower inflection point location accuracy for an image with a non-homogeneous road surface. To deal with these problems, this study presents a novel visibility distance measuring technique under foggy weather conditions. This method combines two major models: an inflection point estimation (IPE) model and a transmission refining (TR) model. The proposed IPE model, based on a transmission computation model, derives a very useful relation between the transmission value of inflection points and the constant $\mathrm{e}^{-2}$. In order to acquire a more accurate transmission map and the vertical position of each inflection point, this study establishes an effective TR model. This model exploits the edge information of input images in order to significantly reduce the effects of artefacts. The proposed algorithm provides more accurate visibility distance estimation for an image with a non-homogeneous road surface than the well-known algorithm, as shown through qualitative evaluations in experiments. The experimental results also show that the TR model has better outcomes than the guided filter approach through qualitative and quantitative evaluations.

Xianyan Wu;Qi Han;Xiamu Niu;Hongli Zhang;Siu-Ming Yiu;Junbin Fang; "JPEG image width estimation for file carving," vol.12(7), pp.1245-1252, 7 2018. Image width is an important factor for making partially recovered data perceptually meaningful in image file carving. The authors conduct a comprehensive comparison of the performance of representative methods for estimating JPEG image width. Experimental results show that the best methods based on pixels are always better than the best methods based on quantised discrete cosine transform (DCT) coefficients. To keep the good performance of the pixel-based methods when the correct quantisation tables are unavailable, the authors replace the correct quantisation tables with the standard ones. Experimental results confirm that such a replacement has only a small effect on the performance of the pixel-based methods, the best of which still outperform the best methods based on quantised DCT coefficients. These two results indicate that it may be enough to focus only on pixel-based methods in future work. Finally, they propose a pixel-based method, which derives the candidate image widths from the most likely adjacent minimum coded unit (MCU) pairs in the vertical direction. The candidate width which appears most frequently is chosen as the estimated image width. Experimental results show that the proposed method usually has the best performance when most MCUs of an image are recovered.
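The underlying intuition, that the correct width makes vertically adjacent pixels similar when a flat byte buffer is reshaped, can be sketched as follows. This is a generic pixel-based estimator, not the authors' MCU-pair method:

```python
def estimate_width(pixels, candidates):
    """Pick the candidate width at which vertically adjacent pixels are
    most similar when the flat buffer is reshaped to that width.

    Assumes the buffer holds at least two full rows at every candidate
    width; trailing pixels that do not fill a row are ignored.
    """
    def vertical_roughness(w):
        rows = len(pixels) // w
        total = n = 0
        for r in range(rows - 1):
            for c in range(w):
                total += abs(pixels[r * w + c] - pixels[(r + 1) * w + c])
                n += 1
        return total / n
    return min(candidates, key=vertical_roughness)
```

A wrong width shears the image columns, inflating the vertical gradient; the true width minimizes it.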

Xiang-Xia Li;Bin Li;Lian-Fang Tian;Li Zhang; "Automatic benign and malignant classification of pulmonary nodules in thoracic computed tomography based on RF algorithm," vol.12(7), pp.1253-1264, 7 2018. Classification of benign and malignant pulmonary nodules can provide useful indicators for estimating the risk of lung cancer. In this study, an improved random forest (RF) algorithm is proposed for classification of benign and malignant pulmonary nodules in thoracic computed tomography images. First, an improved random walk algorithm is proposed to automatically segment pulmonary nodules. Then, intensity, geometric and texture features based on the grey-level co-occurrence matrix, rotation invariant uniform local binary pattern and Gabor filter methods are combined to generate an effective and discriminative feature vector. Mutual information is employed to reduce the dimensionality. Finally, an improved RF classifier is trained to classify benign and malignant nodules. An appropriate feature subset is selected by the bootstrap method and an effective combination method is introduced to predict a class label. The proposed classification method achieves a sensitivity of 0.92 and an area under the receiver-operating-characteristic curve of 0.95 on the Lung Image Database Consortium dataset. An additional evaluation is performed on another dataset from the General Hospital of Guangzhou Military Command. The mean sensitivity and mean specificity of the proposed method are 0.85 and 0.82, respectively. Experimental results demonstrate that the proposed method achieves satisfactory classification performance.

Ranya Al Darwich;Laurent Babout; "Investigating local orientation methods to segment microstructure with 3D solid texture," vol.12(7), pp.1265-1272, 7 2018. This study investigates local orientation-based approaches to the complex problem of pattern segmentation in three-dimensional (3D) texture images. The current problem focuses on the extraction of so-called lamellar colonies in titanium alloy, which, from the materials science and engineering point of view, are microstructural features that play a fundamental role in crack propagation and bifurcation during mechanical loading. Methods based on local orientation estimation extend the notion of using local gradient to reveal variation of semi-planar pattern orientation in the 3D image. The study introduces a computational approach that accelerates the calculation of the eigenvectors from the local matrices of inertia of all voxels composing the 3D image. Then different paths are proposed to segment colonies or inter-colony boundaries, i.e. polar orientation map and minimum scalar product map, in order to delimit regions of similar orientations. The investigated segmentation methods have been compared with other methods that are mainly based on the popular solution of filter banks. Tests, which have been performed on both synthetic and real 3D images, show that the proposed local orientation-based methods better delineate object boundaries than their counterparts.

Manjit Kaur;Vijay Kumar; "Colour image encryption technique using differential evolution in non-subsampled contourlet transform domain," vol.12(7), pp.1273-1283, 7 2018. The main challenges of image encryption are robustness against attacks, key space, key sensitivity, and diffusion. To deal with these challenges, a differential evolution-based image encryption technique is proposed. In the proposed technique, two concepts are utilised to encrypt the images in an efficient manner. The first one is the Arnold transform, which is utilised to permute the pixel positions of an input image to generate a scrambled image. The second one is differential evolution, which is used to tune the parameters required by a beta chaotic map, since the beta chaotic map suffers from a parameter-tuning issue; the entropy of an encrypted image is used as the fitness function. The proposed technique is compared with seven well-known image encryption techniques over five well-known images. The experimental results reveal that the proposed technique outperforms the other existing techniques in terms of security and better visual quality.
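The Arnold transform scrambling step can be sketched directly; the map (x, y) -> (x + y, x + 2y) mod N permutes the pixels of an N x N image and is periodic, so iterating it enough times recovers the original. The differential-evolution tuning of the beta chaotic map is not shown:

```python
def arnold_scramble(img, iterations=1):
    """Scramble a square image with the Arnold cat map:
    (x, y) -> (x + y, x + 2y) mod N.

    The map is a bijection on the N x N grid, so it permutes pixels
    without losing any; the permutation is periodic in N.
    """
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img
```

For a 2x2 grid the map has period 3, so three iterations return the original image; larger N have longer, N-dependent periods, which is why the iteration count can act as part of the key.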

Ichraf Lahouli;Evangelos Karakasis;Robby Haelterman;Zied Chtourou;Geert De Cubber;Antonios Gasteratos;Rabah Attia; "Hot spot method for pedestrian detection using saliency maps, discrete Chebyshev moments and support vector machine," vol.12(7), pp.1284-1291, 7 2018. The increasing risks of border intrusions or attacks on sensitive facilities and the growing availability of surveillance cameras lead to extensive research efforts for robust detection of pedestrians using images. However, the surveillance of borders or sensitive facilities poses many challenges, including the need to set up many cameras to cover the whole area of interest, the high bandwidth requirements for data streaming and the high processing requirements. Driven by the day and night capabilities of thermal sensors and the distinguished thermal signature of humans, the authors propose a novel and robust method for the detection of pedestrians using thermal images. The method is composed of three steps: a detection step based on a saliency map in conjunction with a contrast-enhancement technique, a shape description based on discrete Chebyshev moments and a classification step using a support vector machine classifier. The performance of the method is tested using two different thermal datasets and is compared with the conventional maximally stable extremal regions detector. The obtained results prove the robustness and the superiority of the proposed framework in terms of true and false positive rates and computational costs, making it suitable for low-performance processing platforms and real-time applications.

IEEE Transactions on Signal Processing - new TOC (2018 July 16) [Website]

Clayton G. Davis;Kevin S. Lorenz;Joel Goodman;George Stantchev;Luciano Boglione;Bryan D. Nousain; "Alias-Free Products of Signals Near Nyquist Rate," vol.66(16), pp.4151-4159, Aug.15, 15 2018. Products of time-series signals have found wide-spread application in many fields of signal processing. For example, they are often used in modeling and compensating distortions generated by analog and mixed-signal components. Without excess bandwidth, the products of time-series signals will produce aliased artifacts that do not represent the physical phenomenology of the components being modeled. To ameliorate the effects of these potentially unwanted, aliased distortion products, we compare two multidimensional filters: the first is a simple one-dimensional (1-D) polyphase filter that is used to generate excess bandwidth; the second is a multidimensional (M-D) filter that is convolved with the time-series product. We demonstrate that the 1-D polyphase filter, although computationally simple, still produces unwanted aliasing when applied to signals near Nyquist rate, which can adversely affect modeling and/or compensation performance. By contrast, the M-D filter is computationally expensive, but is capable of most closely matching the ideal antialias filter response.
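A minimal numerical illustration of the aliasing the paper addresses, assuming unit sampling rate: squaring a tone at 0.4 cycles/sample creates a component at 0.8, which folds back to 0.2 and is indistinguishable from a genuine low-frequency tone. This toy demonstration is not the authors' filter design:

```python
import math

def sample_cos(freq, n_samples, rate=1.0):
    """Sample cos(2*pi*freq*t) at the given sampling rate."""
    return [math.cos(2 * math.pi * freq * n / rate) for n in range(n_samples)]

# A tone at 0.4 cycles/sample is below the Nyquist limit (0.5), but
# squaring it creates a component at 0.8 via cos^2(t) = 0.5 + 0.5*cos(2t),
# and 0.8 folds back to 0.2 cycles/sample at this sampling rate.
tone = sample_cos(0.4, 16)
squared = [x * x for x in tone]
aliased = [0.5 + 0.5 * c for c in sample_cos(0.2, 16)]

# The squared samples are numerically identical to the folded-down tone:
max_err = max(abs(a - b) for a, b in zip(squared, aliased))
```

Generating excess bandwidth before forming the product (e.g. by upsampling) is what the 1-D polyphase filter in the paper provides, at the cost of the residual aliasing the authors quantify.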

Murat Torlak;Won Namgoong; "Sub-Nyquist Sampling Receiver for Overlay Cognitive Radio Users," vol.66(16), pp.4160-4169, Aug.15, 15 2018. The secondary users (SUs) in cognitive radio (CR) networks exploit the underutilized radio spectrum of the primary networks. In overlay CR systems, transmit waveforms can be designed either in orthogonal frequency-division multiplexing or in multicarrier code division multiple access modulation formats by deactivating subcarriers corresponding to primary users’ frequency bands to avoid interference. Transmissions of these overlay CR waveforms are expected to be extremely wideband, occupying up to 1 GHz of bandwidth. As reception of such wideband signals requires very high Nyquist sampling rates, we propose to employ a sub-Nyquist sampling front end. The proposed sub-Nyquist sampling receiver collects as coherently as possible all the SU signals spread across the entire frequency span of interest. Our theoretical analysis and simulation results demonstrate that the proposed receiver performs very close to the Nyquist rate receiver with significantly reduced complexity.

Maksim Butsenko;Johan Swärd;Andreas Jakobsson; "Estimating Sparse Signals Using Integrated Wideband Dictionaries," vol.66(16), pp.4170-4181, Aug.15, 15 2018. In this paper, we introduce a wideband dictionary framework for estimating sparse signals. By formulating integrated dictionary elements spanning bands of the considered parameter space, one may efficiently find and discard large parts of the parameter space not active in the signal. After each iteration, the zero-valued parts of the dictionary may be discarded to allow a refined dictionary to be formed around the active elements, resulting in a zoomed dictionary to be used in the following iterations. Implementing this scheme allows for more accurate estimates, at a much lower computational cost, as compared to directly forming a larger dictionary spanning the whole parameter space or performing a zooming procedure using standard dictionary elements. Different from traditional dictionaries, the wideband dictionary allows for the use of dictionaries with fewer elements than the number of available samples without loss of resolution. The technique may be used on both one- and multidimensional signals, and may be exploited to refine several traditional sparse estimators, here illustrated with the LASSO and the SPICE estimators. Numerical examples illustrate the improved performance.

Siamak Zamani Dadaneh;Edward R. Dougherty;Xiaoning Qian; "Optimal Bayesian Classification With Missing Values," vol.66(16), pp.4182-4192, Aug.15, 15 2018. Missing values can be an impediment to designing and applying classifiers. Missing values are common in biomedical studies due to various reasons, including missing tests or complex profiling technologies for different omics measurements in modern biomedicine. Many procedures have been proposed to impute values that are missing. This paper considers missing feature values in the context of optimal Bayesian classification, which selects a classifier that minimizes the expected error with respect to the posterior distribution governing an uncertainty class of feature-label distributions. The missing-value problem fits neatly into the overall framework of optimal Bayesian classification by marginalizing out the missing-value process from the feature-label distribution, and then updating the prior distribution of class-conditional parameters to posterior distributions using new observations. Generally, an optimal Bayesian classifier is defined via the effective class-conditional densities, which are averages of the parameterized feature-label distributions in the uncertainty class relative to the posterior distribution. Hence, once the posterior distribution incorporating the missing value process is found, the optimal Bayesian classifier pertaining to the features with missing values can be derived from the corresponding effective class-conditional densities. This paper presents the general theory, derives a closed-form decision rule for the optimal Bayesian classifier in a Gaussian model with independent features, and utilizes Hamiltonian Monte Carlo for the Gaussian model with arbitrary covariance matrices. 
Superior performance is demonstrated when compared to linear discriminant analysis, quadratic discriminant analysis, and support vector machines in conjunction with Gibbs sampling imputation using synthetic and real-world omics data.

Sissi Xiaoxiao Wu;Hoi-To Wai;Anna Scaglione; "Estimating Social Opinion Dynamics Models From Voting Records," vol.66(16), pp.4193-4206, Aug.15, 15 2018. This paper aims at modeling and inferring the influence among individuals from voting data (or more generally from actions that are selected by choosing one of <inline-formula><tex-math notation="LaTeX">$m$</tex-math></inline-formula> different options). The voting data are modeled as outcomes of a discrete random process, which we refer to as the discuss-then-vote model, whose evolution is governed by the DeGroot opinion dynamics with stubborn nodes. Based on the proposed model, we formulate the maximum a posterior estimator for the opinions and influence matrix (or the transition matrix) and derive a tractable approximation that results in a convex optimization problem. In the paper, the identifiability of the network dynamics’ parameters and the vote prediction procedure based on the influence matrix are discussed in depth. Our methodology is tested through numerical simulations as well as through its application to a set of the U.S. Senate roll call data. Interestingly, in spite of the relatively small data record available, the influence matrix inferred from the real data is consistent with the common intuition about the influence structure in the U.S. Senate.
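The DeGroot dynamics with stubborn nodes that drive the discuss-then-vote model can be simulated in a few lines. The trust weights and stubborn set below are invented for illustration; the paper's task is the inverse problem of estimating the influence matrix from votes:

```python
def degroot_step(opinions, weights, stubborn):
    """One round of DeGroot opinion averaging.

    weights[i][j] is the trust agent i places in agent j (rows sum to 1);
    agents in `stubborn` never change their opinion.
    """
    new = []
    for i, row in enumerate(weights):
        if i in stubborn:
            new.append(opinions[i])
        else:
            new.append(sum(w * o for w, o in zip(row, opinions)))
    return new

def run_degroot(opinions, weights, stubborn, rounds):
    """Iterate the averaging dynamics for a fixed number of rounds."""
    for _ in range(rounds):
        opinions = degroot_step(opinions, weights, stubborn)
    return opinions
```

With a single stubborn agent, every connected non-stubborn agent is eventually pulled to the stubborn opinion, which is the mechanism by which stubborn senators anchor the inferred dynamics.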

Wen Fan;Junli Liang;Jian Li; "Constant Modulus MIMO Radar Waveform Design With Minimum Peak Sidelobe Transmit Beampattern," vol.66(16), pp.4207-4222, Aug.15, 15 2018. This paper addresses the problem of minimum peak sidelobe transmit beampattern design for colocated multiple-input multiple-output radar systems. Two new methods are proposed to achieve it via designing constant modulus (CM) waveforms directly: One is to maximize the ratio of the minimum mainlobe level to the peak sidelobe level (PSL) without specifying pattern masks, and another is to minimize PSL under the mainlobe ripple and unspecified waveform modulus constraints. The resultant optimization problems are difficult to solve due to the coupled numerator and denominator of the quadratic fractional programming problem formulation in the former and double-sided quadratic constraints and unknown waveform modulus in the latter. For the former, we decouple the numerator and denominator by equivalent transformation and introduction of boundary-type auxiliary variables in order that feasible methods are derived to tackle them independently and efficiently. For the latter, we simplify the coupled double-sided quadratic constraints on the same waveform vector into those on different single auxiliary variables to tackle the corresponding nonconvex optimization problem. Numerical examples show that the proposed algorithms can attain sufficient low peak sidelobe beampattern levels under the CM constraints.

Licheng Zhao;Daniel P. Palomar; "A Markowitz Portfolio Approach to Options Trading," vol.66(16), pp.4223-4238, Aug.15, 15 2018. In this paper, we study the problem of option portfolio design under the Markowitz mean-variance framework. We extend the common practice of a pure-stock portfolio and include options in the design. The options returns are modeled statistically with first- and second-order moments, enriching the conventional delta-gamma approximation. The naive mean-variance formulation allows for a zero-risk design that, in a practical scenario with parameter estimation errors, is totally misleading and leads to bad results. This zero-risk fallacy can be circumvented with a more realistic robust formulation. Transaction cost is also considered in the formulation for a proper practical design. We propose an efficient BSUM-M-based algorithm to solve the optimization problem. The proposed algorithm can perform as well as the off-the-shelf solvers but with a much lower computational time—up to one order of magnitude lower. Numerical experiments based on real data are conducted and the performance is presented in terms of Sharpe ratio, cumulative profit and loss, drawdown, overall return over turnover, value at risk, expected shortfall, and certainty equivalent.
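As background, the unconstrained Markowitz solution maximizing mu^T w - (gamma/2) w^T Sigma w is w = (1/gamma) Sigma^{-1} mu. A two-asset sketch with invented moments follows; the paper's robust, option-aware formulation requires the BSUM-M solver instead of this closed form:

```python
def mean_variance_weights(mu, sigma, risk_aversion=1.0):
    """Unconstrained Markowitz weights w = (1/gamma) * Sigma^{-1} * mu
    for two assets, using the closed-form 2x2 matrix inverse.

    mu: expected returns [mu1, mu2]; sigma: 2x2 covariance matrix.
    """
    (a, b), (c, d) = sigma
    det = a * d - b * c  # assumed nonzero (positive-definite covariance)
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [sum(inv[i][j] * mu[j] for j in range(2)) / risk_aversion
            for i in range(2)]
```

The paper's "zero-risk fallacy" arises precisely because plugging estimated (error-laden) moments into such closed forms can report near-zero variance that the true portfolio does not have.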

Paulo S. R. Diniz; "On Data-Selective Adaptive Filtering," vol.66(16), pp.4239-4252, Aug.15, 15 2018. The current trend of acquiring data pervasively calls for some data-selection strategy, particularly when a subset of the data does not bring enough innovation. In this paper, we present some extensions of the existing adaptive filtering algorithms enabling data selection, which also address the censorship of outliers measured through unexpectedly high estimation errors. The resulting algorithms allow the prescription of how often the acquired data are expected to be incorporated in the learning process based on some a priori assumptions regarding the environment data. A detailed derivation of how to implement the data selection in a computationally efficient way is provided along with the proper choice of the parameters inherent to the data-selective affine projection algorithms. Similar discussions lead to the proposal of the data-selective least mean square and data-selective recursive least squares algorithms. Simulation results show the effectiveness of the proposed algorithms for selecting the innovative data without sacrificing the estimation accuracy, while reducing the computational cost.
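The data-selection idea can be sketched for a scalar LMS weight: update only when the a priori error exceeds an innovation bound, and censor samples whose error is implausibly large. The thresholds and signals below are invented; the paper develops affine-projection and RLS variants with principled parameter choices:

```python
def ds_lms(samples, desired, mu=0.5, gamma=0.1, outlier=100.0):
    """Data-selective LMS for a single scalar weight w (sketch only).

    The weight is updated only when the a priori error magnitude
    exceeds `gamma` (otherwise the sample brings no innovation) and
    is below `outlier` (otherwise it is censored as an outlier).
    Returns the final weight and the number of updates performed.
    """
    w, updates = 0.0, 0
    for x, d in zip(samples, desired):
        e = d - w * x  # a priori estimation error
        if gamma < abs(e) < outlier:
            w += mu * e * x  # standard LMS update
            updates += 1
    return w, updates
```

For a noiseless system d = 2x with constant input, the weight converges geometrically and the update counter stops growing once the error falls inside the innovation bound, which is the claimed computational saving.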

Gordana Drašković;Frédéric Pascal; "New Insights Into the Statistical Properties of <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>-Estimators," vol.66(16), pp.4253-4263, Aug.15, 15 2018. This paper proposes an original approach to better understanding the behavior of robust scatter matrix <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>-estimators. Scatter matrices are of particular interest for many signal processing applications since the resulting performance strongly relies on the quality of the matrix estimation. In this context, <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>-estimators appear as very interesting candidates, mainly due to their flexibility to the statistical model and their robustness to outliers and/or missing data. However, the behavior of such estimators still remains unclear and not well understood since they are described by fixed-point equations that make their statistical analysis very difficult. To fill this gap, the main contribution of this work is to prove that these estimators distribution is more accurately described by a Wishart distribution than by the classical asymptotical Gaussian approximation. To that end, we propose a new “Gaussian-core” representation for complex elliptically symmetric distributions and we analyze the proximity between <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>-estimators and a Gaussian-based sample covariance matrix, unobservable in practice and playing only a theoretical role. To confirm our claims, we also provide results for a widely used function of <inline-formula><tex-math notation="LaTeX">$M$</tex-math></inline-formula>-estimators, the Mahalanobis distance. Finally, Monte Carlo simulations for various scenarios are presented to validate theoretical results.

Fan Liu;Longfei Zhou;Christos Masouros;Ang Li;Wu Luo;Athina Petropulu; "Toward Dual-functional Radar-Communication Systems: Optimal Waveform Design," vol.66(16), pp.4264-4279, Aug.15, 15 2018. We focus on a dual-functional multi-input-multi-output (MIMO) radar-communication (RadCom) system, where a single transmitter with multiple antennas communicates with downlink cellular users and detects radar targets simultaneously. Several design criteria are considered for minimizing the downlink multiuser interference. First, we consider both omnidirectional and directional beampattern design problems, where the closed-form globally optimal solutions are obtained. Based on the derived waveforms, we further consider weighted optimizations targeting a flexible tradeoff between radar and communications performance and introduce low-complexity algorithms. Moreover, to address the more practical constant modulus waveform design problem, we propose a branch-and-bound algorithm that obtains a globally optimal solution, and derive its worst-case complexity as a function of the maximum iteration number. Finally, we assess the effectiveness of the proposed waveform design approaches via numerical results.

Andrew K. Bolstad; "Identification of Generalized Memory Polynomials Using Two-Tone Signals," vol.66(16), pp.4280-4290, Aug.15, 15 2018. This paper shows that the coefficients of a generalized memory polynomial model of a nonlinear device can be estimated by examining the output when the input is a series of two-tone signals. There are several benefits to using two-tone signals rather than noise-like waveforms for system identification, including the relative ease of generating two-tone signals with a high signal-to-noise ratio and high dynamic range. The two-tone approach described here results in a block upper triangular linear system of equations that can be solved for the unknown coefficients. The block upper triangular structure provides a sufficient condition for the matrix to be nonsingular: if each diagonal submatrix has full column rank, so does the matrix. In addition to showing that certain sets of two-tone signals satisfy this sufficient condition, methods for controlling the condition number are also given. Simulations verify the two-tone estimation method and illustrate the effects of noise and matrix conditioning. Some useful results regarding certain Khatri–Rao products can be found in the proofs.

Guanyu Wang;Jiang Zhu;Rick S. Blum;Peter Willett;Stefano Marano;Vincenzo Matta;Paolo Braca; "Signal Amplitude Estimation and Detection From Unlabeled Binary Quantized Samples," vol.66(16), pp.4291-4303, Aug.15, 15 2018. Signal amplitude estimation and detection from unlabeled quantized binary samples are studied, assuming that the order of the time indexes is completely unknown. First, maximum likelihood (ML) estimators are utilized to estimate both the permutation matrix and unknown signal amplitude under arbitrary but known signal shape and quantizer thresholds. Sufficient conditions are provided, under which an ML estimator can be found in polynomial time, and an alternating maximization algorithm is proposed to solve the general problem via good initialization. In addition, the statistical identifiability of the model is studied. Furthermore, an approximation of the generalized likelihood ratio test detector is adopted to detect the presence of the signal. In addition, an accurate approximation of the probability of successful permutation matrix recovery is derived, and explicit expressions are provided to reveal the relationship between the signal length and the number of quantizers. Finally, numerical simulations are performed to verify the theoretical results.

Tales Imbiriba;José Carlos M. Bermudez;Jean-Yves Tourneret;Neil J. Bershad; "A New Decision-Theory-Based Framework for Echo Canceler Control," vol.66(16), pp.4304-4314, Aug.15, 15 2018. A control logic has a central role in many echo cancellation systems for optimizing the performance of adaptive filters, while estimating the echo path. For reliable control, accurate double-talk and channel change detectors are usually incorporated to the echo canceler. This work expands the usual detection strategy to define a classification problem characterizing four possible states of the echo canceler operation. The new formulation allows the use of decision theory to continuously control the transitions among the different modes of operation. The classification rule reduces to a low-cost statistic, for which it is possible to determine the probability of error under all hypotheses, allowing the classification performance to be assessed analytically. Monte Carlo simulations using synthetic and real data illustrate the reliability of the proposed method.

David Cohen;Deborah Cohen;Yonina C. Eldar;Alexander M. Haimovich; "SUMMeR: Sub-Nyquist MIMO Radar," vol.66(16), pp.4315-4330, Aug.15, 15 2018. Multiple-input multiple-output (MIMO) radar exhibits several advantages with respect to the traditional radar array systems in terms of flexibility and performance. However, MIMO radar poses new challenges for both hardware design and digital processing. In particular, achieving high azimuth resolution requires a large number of transmit and receive antennas. In addition, digital processing is performed on samples of the received signal, from each transmitter to each receiver, at its Nyquist rate, which can be prohibitively large when high resolution is needed. Overcoming the rate bottleneck, sub-Nyquist sampling methods have been proposed that break the link between radar signal bandwidth and sampling rate. In this paper, we extend these methods to MIMO configurations and propose a sub-Nyquist MIMO radar (SUMMeR) system that performs both time and spatial compression. We present a range-azimuth-Doppler recovery algorithm from sub-Nyquist samples obtained from a reduced number of transmitters and receivers, that exploits the sparsity of the recovered targets’ parameters. This allows us to achieve reduction in the number of deployed antennas and the number of samples per receiver, without degrading the time and spatial resolutions. Simulations illustrate the detection performance of SUMMeR for different compression levels and show that both time and spatial resolution are preserved, with respect to classic Nyquist MIMO configurations. We also examine the impact of design parameters, such as antennas’ locations and carrier frequencies, on the detection performance, and provide guidelines for their choice.

Shenglong Zhou;Naihua Xiu;Hou-Duo Qi; "A Fast Matrix Majorization-Projection Method for Penalized Stress Minimization With Box Constraints," vol.66(16), pp.4331-4346, Aug. 15, 2018. Kruskal's stress minimization, though nonconvex and nonsmooth, has been a major computational model for dissimilarity data in multidimensional scaling. Semidefinite programming (SDP) relaxation (by dropping the rank constraint) would lead to a high number of SDP cone constraints. This has rendered the SDP approach computationally challenging even for problems of small size. In this paper, we reformulate the stress optimization as a Euclidean distance matrix (EDM) optimization with box constraints. A key element in our approach is the conditional positive-semidefinite cone with rank cut. Although nonconvex, this geometric object allows a fast computation of the projection onto it, and it naturally leads to a majorization-minimization algorithm with the minimization step having a closed-form solution. Moreover, we prove that our EDM optimization follows a continuously differentiable path, which greatly facilitates the analysis of convergence to a stationary point. The superior performance of the proposed algorithm is demonstrated against some of the state-of-the-art solvers in the field of sensor network localization and molecular conformation.

Hanshen Xiao;Yufeng Huang;Yu Ye;Guoqiang Xiao; "Robustness in Chinese Remainder Theorem for Multiple Numbers and Remainder Coding," vol.66(16), pp.4347-4361, Aug. 15, 2018. The Chinese remainder theorem (CRT) has been widely studied, with applications in frequency estimation, phase unwrapping, coding theory, and distributed data storage. Since traditional CRT is highly sensitive to errors in the residues caused by noise, the problem of robustly reconstructing integers from erroneous residues has been intensively studied in the literature. There are basically two approaches to robust reconstruction: one is to introduce common divisors in the moduli, and the other is to directly decrease the dynamic range. In this paper, we gain further insight into the geometry of the linear space associated with CRT. Echoing both ways of introducing redundancy, we propose a pseudometric as a uniform framework for analyzing the tradeoff between the error bound and the dynamic range for robust CRT. Furthermore, we present the first robust CRT for multiple numbers, solving the problem raised by CRT-based undersampling frequency estimation in general. Based on the proposed symmetric polynomials, we prove that in most cases the problem can be solved efficiently in polynomial time.
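
As background for the robustness problem, the classic CRT reconstruction that breaks down under residue errors can be sketched with the standard library; the function name and example values below are illustrative, assuming pairwise coprime moduli:

```python
from functools import reduce

def crt_reconstruct(residues, moduli):
    """Classic Chinese remainder theorem: recover x mod prod(moduli)
    from its residues, assuming pairwise coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli, 1)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m                    # product of the other moduli
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

# x = 23 has residues (2, 3, 2) mod (3, 5, 7)
print(crt_reconstruct([2, 3, 2], [3, 5, 7]))  # -> 23
```

A single erroneous residue can shift the reconstruction anywhere in the dynamic range [0, M), which is the sensitivity that robust CRT formulations trade against dynamic range.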

Zhichao Sheng;Hoang Duong Tuan;Trung Q. Duong;H. Vincent Poor;Yong Fang; "Low-Latency Multiuser Two-Way Wireless Relaying for Spectral and Energy Efficiencies," vol.66(16), pp.4362-4376, Aug. 15, 2018. This paper considers two possible approaches that enable multiple pairs of users to exchange information via multiple multiantenna relays within one time slot, saving communication bandwidth in low-latency communications. The first approach deploys full-duplex transceivers at both the users and the relays, making simultaneous signal transmission and reception possible. In the second approach, the users use a fraction of a time slot to send their information to the relays, and the relays use the remaining complementary fraction of the time slot to send the beamformed signals to the users. The inherent loop self-interference in the full-duplex transceivers and the inter-full-duplexing-user interference of the first approach are absent in the second approach. Under both approaches, the joint design of the users’ power allocation and relays’ beamformers, to either optimize the users’ exchange of information or maximize the energy efficiency subject to user quality-of-service (QoS) constraints in terms of minimal rate thresholds, leads to complex nonconvex optimization problems. Path-following algorithms are developed for their computational solution. Numerical examples show the advantages of the second approach over the first.

IEEE Signal Processing Letters - new TOC (2018 July 09) [Website]

Bin Wang;Hongyu Jiang;Jun Fang;Huiping Duan; "A Proximal ADMM for Decentralized Composite Optimization," vol.25(8), pp.1121-1125, Aug. 2018. In this letter, we propose a proximal alternating direction method of multiplier (ADMM) to solve the composite optimization problem over a decentralized network. Compared with existing methods, such as PG-EXTRA and IC-ADMM, the proposed decentralized proximal ADMM method does not rely on assuming a smooth + nonsmooth structure on the objective functions, thus covering a wider range of composite optimization problems. Simulation results show that the proposed proximal ADMM presents a considerable performance advantage over existing state-of-the-art algorithms for both nonsmooth + nonsmooth and smooth + nonsmooth composite optimization problems.
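
For orientation, the centralized version of such a composite problem, e.g. the lasso's classic smooth + nonsmooth split, is solved by a textbook ADMM like the NumPy sketch below; this is not the letter's decentralized proximal ADMM, and all names and parameters are illustrative:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Textbook ADMM for 0.5*||Ax - b||^2 + lam*||x||_1
    (a smooth + nonsmooth composite objective)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # x-update system, fixed across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))       # smooth part
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold prox
        u = u + x - z                                            # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[[3, 7]] = [2.0, -1.5]
x_hat = admm_lasso(A, A @ x_true, lam=0.1)
# x_hat recovers the two nonzero entries of x_true
```

The letter's contribution is precisely that its proximal ADMM does not require this smooth + nonsmooth structure, and runs over a decentralized network rather than with a single solver.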

Xiaoming Huang;Yin Zheng;Junzhou Huang;Yu-Jin Zhang; "A Minimum Barrier Distance Based Saliency Box for Object Proposals Generation," vol.25(8), pp.1126-1130, Aug. 2018. Object proposals generation plays an important role in computer vision. A good object proposals generation model should assign obviously high and low objectness scores to windows that contain complete objects and incomplete objects, respectively. However, some existing methods, such as local contrast based models, usually fail to satisfy this requirement. In this letter, we propose MBDSalBox, a minimum barrier distance (MBD) based saliency box for locating object proposals. MBDSalBox consists of three components: first, a window saliency computation module that calculates the MBD saliency of each sliding window; second, a window refinement module that provides more accurate bounding boxes via a marker-based watershed algorithm; and third, a window scoring module that combines multiple features to compute the final objectness score. Experimental results on the PASCAL VOC 2007 and Microsoft COCO 2014 datasets show that our model achieves better performance than the state-of-the-art models with competitive speed.

Baogang Li;Yuanbin Yao;He Chen;Yonghui Li;Shuqiang Huang; "Wireless Information Surveillance and Intervention Over Multiple Suspicious Links," vol.25(8), pp.1131-1135, Aug. 2018. This letter investigates the proactive eavesdropping for multiple suspicious links either through interfering or assisting the links. Considering the power constraint at eavesdropper, our objective is to maximize weighted sum eavesdropping rate of multiple suspicious links via jointly optimizing their intervention strategies (jamming or relaying) and the corresponding transmit power at eavesdropper. The formulated problem is shown to be a mixed-integer nonlinear programming (MINLP) problem, which is NP-hard in general. By identifying the separable structure of the formulated problem, we decouple the complex MINLP problem into two subproblems: 1) a jamming subproblem; and 2) a relaying subproblem. These two subproblems are then solved by further recasting them into a combinational problem and a typical concave optimization problem, respectively. Numerical simulations show that our proposed approach can achieve higher eavesdropping rate than conventional eavesdropping approaches.

Christian Kahindo;Mounim A. El-Yacoubi;Sonia Garcia-Salicetti;Anne-Sophie Rigaud;Victoria Cristancho-Lacroix; "Characterizing Early-Stage Alzheimer Through Spatiotemporal Dynamics of Handwriting," vol.25(8), pp.1136-1140, Aug. 2018. We propose an original approach for characterizing early Alzheimer, based on the analysis of online handwritten cursive loops. Unlike the literature, we model the loop velocity trajectory (full dynamics) in an unsupervised way. Through a temporal clustering based on K-medoids, with dynamic time warping as dissimilarity measure, we uncover clusters that give new insights on the problem. For classification, we consider a Bayesian formalism that aggregates the contributions of the clusters, by probabilistically combining the discriminative power of each. On a dataset consisting of two cognitive profiles, early-stage Alzheimer disease and healthy persons, each comprising 27 persons collected at Broca Hospital in Paris, our classification performance significantly outperforms the state-of-the-art, based on global kinematic features.

Chunlei Xie;Yujuan Sun; "Constructions of Even-Period Binary Z-Complementary Pairs With Large ZCZs," vol.25(8), pp.1141-1145, Aug. 2018. The lengths of binary Golay complementary pairs are limited to <inline-formula><tex-math notation="LaTeX">$2^{\alpha}10^{\beta}26^{\gamma}$</tex-math></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$\alpha,\beta,\gamma \in \lbrace 0,1,2,\ldots\rbrace$</tex-math></inline-formula>. This limitation is overcome by Z-complementary pairs (ZCPs). In this letter, by concatenating several different complementary pairs of length <inline-formula><tex-math notation="LaTeX">$2^m$</tex-math></inline-formula>, we present a construction of even-period binary ZCPs of length <inline-formula><tex-math notation="LaTeX">$N=2^{m+3}+2^{m+2}+2^{m+1}$</tex-math></inline-formula> with zero-correlation zone (ZCZ) width <inline-formula><tex-math notation="LaTeX">$Z=2^{m+3}$</tex-math></inline-formula>. We also propose another construction of ZCPs of length <inline-formula><tex-math notation="LaTeX">$N^{\prime}$</tex-math></inline-formula> with larger ZCZ width <inline-formula><tex-math notation="LaTeX">$Z=6N^{\prime}/7$</tex-math></inline-formula>.
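
The complementary property these pairs generalize is easy to check numerically: a Golay pair's aperiodic autocorrelations sum to a delta. The sketch below uses the classic length-doubling recursion (an assumption chosen for illustration; the letter's concatenation-based ZCP construction differs):

```python
def acorr(seq, shift):
    """Aperiodic autocorrelation of a +/-1 sequence at a given shift."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

def golay_pair(m):
    """Length-2^m binary Golay pair via the classic doubling recursion:
    a' = a || b,  b' = a || (-b)."""
    a, b = [1], [1]
    for _ in range(m):
        a, b = a + b, a + [-x for x in b]
    return a, b

a, b = golay_pair(3)  # a length-8 pair
# complementary property: the autocorrelations cancel at every nonzero shift
sums = [acorr(a, s) + acorr(b, s) for s in range(len(a))]
print(sums)  # -> [16, 0, 0, 0, 0, 0, 0, 0]
```

A ZCP relaxes this: the sums need only vanish inside the zero-correlation zone, which is what frees the lengths from the <inline-formula><tex-math notation="LaTeX">$2^{\alpha}10^{\beta}26^{\gamma}$</tex-math></inline-formula> restriction.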

Amirhossein Javaheri;Hadi Zayyani;Mario A. T. Figueiredo;Farrokh Marvasti; "Robust Sparse Recovery in Impulsive Noise via Continuous Mixed Norm," vol.25(8), pp.1146-1150, Aug. 2018. This letter investigates the problem of sparse signal recovery in the presence of additive impulsive noise. Heavy-tailed impulsive noise is well modeled with stable distributions. Since there is no explicit formula for the probability density function of the <inline-formula><tex-math notation="LaTeX">$S\alpha S$</tex-math></inline-formula> distribution, alternative approximations are used, such as the generalized Gaussian distribution, which imposes an <inline-formula><tex-math notation="LaTeX">$\ell _{p}$</tex-math></inline-formula>-norm fidelity on the residual error. In this letter, we exploit a continuous mixed norm (CMN) for robust sparse recovery instead of the <inline-formula><tex-math notation="LaTeX">$\ell _{p}$</tex-math></inline-formula>-norm. We show that in blind conditions, i.e., when the parameters of the noise distribution are unknown, incorporating CMN can lead to near-optimal recovery. We apply the alternating direction method of multipliers to solve the problem induced by utilizing CMN for robust sparse recovery. In this approach, CMN is replaced with a surrogate function and the majorization–minimization technique is incorporated to solve the problem. Simulation results confirm the efficiency of the proposed method compared to some recent algorithms for robust sparse recovery in impulsive noise.

Sajad Daei;Farzan Haddadi;Arash Amini; "Sample Complexity of Total Variation Minimization," vol.25(8), pp.1151-1155, Aug. 2018. This letter considers the use of total variation (TV) minimization in the recovery of a given gradient sparse vector from Gaussian linear measurements. It has been shown in recent studies that there exists a sharp phase transition behavior in TV minimization for the number of measurements necessary to recover the signal in asymptotic regimes. The phase-transition curve specifies the boundary of success and failure of TV minimization for a large number of measurements. It is a challenging task to obtain a theoretical bound that reflects this curve. In this letter, we present a novel upper bound that suitably approximates this curve and is asymptotically sharp. Numerical results show that our bound is closer to the empirical TV phase-transition curve than the previously known bound obtained by Kabanava.

Hsueh-Wei Liao;Li Su; "Monaural Source Separation Using Ramanujan Subspace Dictionaries," vol.25(8), pp.1156-1160, Aug. 2018. Most source separation algorithms are implemented as spectrogram decomposition. In contrast, time-domain source separation is less investigated since there is a lack of an efficient signal representation that facilitates decomposing oscillatory components of a signal directly in the time domain. In this letter, we utilize the Ramanujan subspace and the nested periodic subspace to address this issue, by constructing a parametric dictionary that emphasizes period information with less redundancy. Methods including iterative subspace projection and convolutional sparse coding can decompose a mixture into signals with distinct oscillation periods according to the dictionary. Experiments on score-informed source separation show that the proposed method is competitive to the state-of-the-art, frequency-domain approaches when the provided pitch information and the signal parameters are the same.
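
The period information such a dictionary encodes comes from Ramanujan sums, the building blocks of the Ramanujan subspaces. A minimal stdlib sketch (illustrative only, not the authors' full dictionary construction):

```python
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    """Ramanujan sum c_q(n): sum of cos(2*pi*k*n/q) over k coprime to q.
    Always integer-valued, so we round away floating-point error."""
    s = sum(cos(2 * pi * k * n / q) for k in range(1, q + 1) if gcd(k, q) == 1)
    return round(s)

# c_q(n) peaks (equals Euler's phi(q)) exactly when q divides n, so
# projecting a signal onto Ramanujan atoms reveals its hidden periods
print([ramanujan_sum(5, n) for n in range(10)])
# -> [4, -1, -1, -1, -1, 4, -1, -1, -1, -1]
```

Stacking such atoms for a range of periods q gives a dictionary whose coefficients separate components by oscillation period, which is the time-domain decomposition principle the letter builds on.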

Zhongling Wang;Zhenzhong Chen;Feng Wu; "Thermal to Visible Facial Image Translation Using Generative Adversarial Networks," vol.25(8), pp.1161-1165, Aug. 2018. Thermal cameras can capture images invariant to illumination conditions. However, thermal facial images are difficult to be recognized by human examiners. In this letter, an end-to-end framework, which consists of a generative network and a detector network, is proposed to translate thermal facial images into visible ones. The generative network aims at generating visible images given the thermal ones. The detector can locate important facial landmarks on visible faces and help the generative network to generate more realistic images that are easier to be recognized. As demonstrated in the experiments, the faces generated by our method have good visual quality and maintain identity preserving features.

Zhi Lin;Min Lin;Jian Ouyang;Wei-Ping Zhu;Symeon Chatzinotas; "Beamforming for Secure Wireless Information and Power Transfer in Terrestrial Networks Coexisting With Satellite Networks," vol.25(8), pp.1166-1170, Aug. 2018. This letter proposes a beamforming (BF) scheme to enhance wireless information and power transfer in terrestrial cellular networks coexisting with satellite networks. By assuming that the energy receivers are the potential eavesdroppers overhearing signals intended for information receivers (IRs), we first formulate a constrained optimization problem to maximize the minimal achievable secrecy rate of the IRs subject to the constraints of energy harvest requirement, interference threshold, and transmit power budget. Through exploiting the sequential convex approximation method, we convert the original problem into a linear one with a series of linear matrix inequality and second-order cone constraints. An iterative algorithm is then proposed to obtain the BF weight vectors. Finally, simulation results demonstrate the effectiveness and superiority of the proposed scheme.

S. M. Zafaruddin;Surendra Prasad; "GMRES Algorithm for Large-Scale Vectoring in DSL Systems," vol.25(8), pp.1171-1175, Aug. 2018. We propose an iterative crosstalk cancellation scheme based on the generalized minimal residual (GMRES) algorithm for large-scale digital subscriber line (DSL) systems. The proposed scheme does not require channel inversion and stores fewer vectors for crosstalk cancellation. We analyze the convergence of the GMRES algorithm and derive computable bounds on the residual error and signal-to-noise ratio in terms of system parameters at each iteration for upstream DSL systems. We show that the GMRES algorithm typically requires a single iteration for very large vectored systems to achieve crosstalk-free performance for the very high-speed DSL (VDSL) frequencies and only a few more in the highest frequency bands of the spectrum. This yields significant complexity savings and reduction in memory storage, compared to the zero forcing scheme under certain conditions.
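
The single-iteration behavior reported here stems from the diagonal dominance of vectored DSL channels: the direct path is strong and the crosstalk is weak. A minimal unrestarted GMRES (a NumPy sketch with illustrative names, not the authors' implementation) makes this concrete:

```python
import numpy as np

def gmres(A, b, iters=5):
    """Minimal unrestarted GMRES (zero initial guess): build a Krylov basis
    by Arnoldi iteration, then minimize the residual over that subspace."""
    n = len(b)
    Q = np.zeros((n, iters + 1))
    H = np.zeros((iters + 1, iters))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(iters):
        v = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(iters + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # least-squares residual minimization
    return Q[:, :iters] @ y

# A diagonally dominant "crosstalk" matrix: strong direct path (identity),
# weak off-diagonal coupling -> GMRES converges in very few iterations
rng = np.random.default_rng(1)
n = 30
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
x_hat = gmres(A, A @ x_true, iters=2)
print(np.linalg.norm(x_hat - x_true))  # already small after 2 iterations
```

Because GMRES avoids forming or inverting the channel matrix, it trades the cubic cost of zero forcing for a few matrix-vector products per tone, which is where the claimed complexity and memory savings come from.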

Gouchol Pok;Keun Ho Ryu; "Efficient Block Matching for Removing Impulse Noise," vol.25(8), pp.1176-1180, Aug. 2018. A number of block-based image-denoising methods have been presented in the literature. Those methods, however, are generally adapted to denoising Gaussian noise, and consequently do not perform well on random-valued impulse and salt-and-pepper noise. We propose an efficient block-based image-denoising method devised specially for fast denoising of impulse noise. The method first constructs a set of array pointers to image blocks containing a specific pixel value at a specific location. With this scheme, blocks similar to a given block can be found by considering only the blocks pointed to by the pointers corresponding to the pixel values of the block, without comparing all the blocks in the input image. The experimental results show that the proposed method achieves superior denoising performance in terms of computational time and signal-to-noise ratio.
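
The pointer-array idea can be sketched as an inverted index from (position, pixel value) to block ids, so candidate similar blocks are looked up rather than found by scanning every block; names and the match threshold below are illustrative:

```python
from collections import defaultdict

def build_index(blocks):
    """Map each (position, pixel value) pair to the ids of blocks
    containing that value at that position -- the 'array pointers'."""
    index = defaultdict(set)
    for bid, block in enumerate(blocks):
        for pos, val in enumerate(block):
            index[(pos, val)].add(bid)
    return index

def candidates(index, query, min_matches=2):
    """Blocks sharing at least min_matches exact (position, value) pairs
    with the query block, found via the pointers instead of a full scan."""
    hits = defaultdict(int)
    for pos, val in enumerate(query):
        for bid in index.get((pos, val), ()):
            hits[bid] += 1
    return {bid for bid, c in hits.items() if c >= min_matches}

blocks = [[10, 20, 30], [10, 20, 99], [50, 60, 70]]
idx = build_index(blocks)
print(candidates(idx, [10, 20, 30]))  # -> {0, 1}
```

Exact pixel-value matching is what makes this work for impulse noise: uncorrupted pixels agree exactly across similar blocks, unlike under Gaussian noise where every pixel is perturbed.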

Juntae Kim;Minsoo Hahn; "Voice Activity Detection Using an Adaptive Context Attention Model," vol.25(8), pp.1181-1185, Aug. 2018. Voice activity detection (VAD) classifies incoming signal segments into speech or background noise; its performance is crucial in various speech-related applications. Although speech-signal context is a relevant VAD asset, its usefulness varies in unpredictable noise environments. Therefore, its usage should be adaptively adjustable to the noise type. This letter improves the use of context information by using an adaptive context attention model (ACAM) with a novel training strategy for effective attention, which weights the most crucial parts of the context for proper classification. Experiments in real-world scenarios demonstrate that the proposed ACAM-based VAD outperforms the other baseline VAD methods.

Moonsoo Ra;Whoi-Yul Kim; "Parallelized Tube Rearrangement Algorithm for Online Video Synopsis," vol.25(8), pp.1186-1190, Aug. 2018. Video synopsis allows us to analyze security videos efficiently by condensing or shortening a long video into a short one. To generate a condensed video, moving objects (a.k.a. object tubes) in the video are rearranged in the temporal domain using a predefined objective function. The objective function consists of several energy terms which play important roles in making a visually appealing condensed video. One of the energy terms, collision energy, creates a bottleneck in the computation because it requires two object tubes to calculate the degree of collision between them. Existing approaches try to reduce the computation time of the collision energy calculation by reducing the number of tubes processed at once. However, those approaches are not sufficient to generate a condensed video when the number of object tubes becomes large.

In this letter, we propose a fast Fourier transform (FFT)-based parallelized tube rearrangement algorithm. To take advantage of both parallel processing and FFT, we represent object tubes as three-dimensional binary matrices (occupation matrices). An objective function of the tube rearrangement problem is defined on the occupation matrix, and a starting position for each tube in the temporal domain is then determined by optimizing the objective function. Throughout the experiments, the proposed algorithm took a much shorter time to condense the video than existing algorithms, while other performance metrics were similar.
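
The FFT trick can be illustrated in one dimension: the overlap between two tubes' binary occupancy masks at every temporal shift is a cross-correlation, so all shifts are evaluated in a single FFT pass (a simplified NumPy sketch, not the letter's 3-D occupation-matrix formulation):

```python
import numpy as np

def collisions_all_shifts(a, b):
    """Overlap count between binary occupancy masks a and b for every
    temporal shift of b, via FFT cross-correlation (no per-shift loop)."""
    n = len(a) + len(b) - 1
    fa = np.fft.rfft(a, n)
    fb = np.fft.rfft(b[::-1], n)          # correlation = convolution with reversed b
    return np.rint(np.fft.irfft(fa * fb, n)).astype(int)

a = np.array([1, 1, 1, 0, 0])  # tube active in frames 0-2
b = np.array([0, 1, 1])        # tube active in frames 1-2
print(collisions_all_shifts(a, b))  # -> [1 2 2 1 0 0 0]
```

Evaluating the collision energy for all candidate starting positions at once is what removes the pairwise bottleneck described above.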

Abhinav Goel;Adarsh Patel;Kyatsandra G. Nagananda;Pramod K. Varshney; "Robustness of the Counting Rule for Distributed Detection in Wireless Sensor Networks," vol.25(8), pp.1191-1195, Aug. 2018. We consider the problem of energy-efficient distributed detection to infer the presence of a target in a wireless sensor network and analyze its robustness to modeling uncertainties. The sensors make noisy observations of the target's signal power, which follows the isotropic power-attenuation model. Binary local decisions of the sensors are transmitted to a fusion center, where a global inference regarding the target's presence is made, based on the counting rule. We consider uncertain knowledge of: 1) the signal decay exponent of the wireless medium; 2) the power attenuation constant; and 3) the distance between the target and the sensors. For a given degree of uncertainty, we show that there exists a limit on the target's signal power below which the distributed detector fails to achieve the desired performance regardless of the number of sensors deployed. Simulation results are presented to determine the level of sensitivity of the detector to uncertainty in these parameters. The results throw light on the limits of robustness for distributed detection, akin to “SNR walls” for classical detection.
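
Under i.i.d. local decisions, the counting rule's global false-alarm probability is simply a binomial tail, so the fusion threshold can be chosen with a few lines of stdlib code (an illustrative sketch, separate from the paper's robustness analysis):

```python
from math import comb

def counting_rule_pfa(n, k, p_fa_local):
    """Probability that at least k of n sensors raise a false alarm,
    assuming i.i.d. binary local decisions (binomial tail)."""
    return sum(comb(n, j) * p_fa_local**j * (1 - p_fa_local)**(n - j)
               for j in range(k, n + 1))

# 20 sensors, each with a 5% local false-alarm rate: declaring a target
# only when at least 5 agree keeps the global false-alarm rate tiny
print(counting_rule_pfa(20, 5, 0.05))  # roughly 0.0025
```

The paper's question is the complementary one: how the detection side of this tradeoff degrades when the signal-power model feeding the local decisions is itself uncertain.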

Tong Wu;Waheed U. Bajwa; "A Low Tensor-Rank Representation Approach for Clustering of Imaging Data," vol.25(8), pp.1196-1200, Aug. 2018. This letter proposes an algorithm for clustering of two-dimensional data. Instead of “flattening” data into vectors, the proposed algorithm keeps samples as matrices and stores them as lateral slices in a third-order tensor. It is then assumed that the samples lie near a union of free submodules and their representations under this model are obtained by imposing a low tensor-rank constraint and a structural constraint on the representation tensor. Clustering is carried out using an affinity matrix calculated from the representation tensor. Effectiveness of the proposed algorithm and its superiority over existing methods are demonstrated through experiments on two image datasets.

Abderrahmane Mayouche;Adel Metref;Junil Choi; "Downlink Training Overhead Reduction Technique for FDD Massive MIMO Systems," vol.25(8), pp.1201-1205, Aug. 2018. In this letter, a novel minimum mean square error (MSE) based channel estimation framework is proposed to reduce the downlink channel training overhead in frequency division duplexing massive multiple-input multiple-output systems, where the overhead reduction is achieved through training only a subset of antennas and by exploiting the spatial correlation between the antennas at the base station. Closed-form expressions of the analytical MSE and the asymptotic MSE of the system are obtained. Furthermore, a perfect match between theoretical and simulation results is observed, where the channel training overhead can be reduced by half with an acceptable performance.

Heng Wang;Daijin Xiong;Liuqing Chen;Ping Wang; "A Consensus-Based Time Synchronization Scheme With Low Overhead for Clustered Wireless Sensor Networks," vol.25(8), pp.1206-1210, Aug. 2018. For clustered wireless sensor networks, this letter presents a time synchronization scheme with low overhead that is based on the maximum consensus approach. The synchronization process is initiated by the cluster head and includes three steps: 1) threshold-based intracluster time synchronization (TITS); 2) forwarding-based intercluster time synchronization (FITS); and 3) one-way intracluster time synchronization (OITS). Specifically, TITS first synchronizes the logical clock of the cluster head to the largest logical clock among the intracluster nodes, using three point-to-point message exchanges over two cycles. In particular, by comparing the logical skew of each intracluster node with that of the cluster head, member nodes with smaller logical skew do not reply to the cluster head, which reduces the number of message exchanges. Then, in FITS, the cluster heads synchronize with each other, their message exchanges being realized through forwarding by the gateway nodes. Finally, OITS uses one-way communication to synchronize all the intracluster nodes with the cluster head via two broadcasts from the cluster head. Theoretical analysis and simulation results demonstrate that the proposed scheme greatly reduces communication traffic and improves the convergence rate.
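
The maximum-consensus principle behind the scheme can be sketched in a toy model where each node repeatedly adopts the largest clock in its neighborhood (the letter's skew compensation, thresholding, and clustering are omitted; names are illustrative):

```python
def max_consensus(clocks, edges):
    """Each node repeatedly adopts the max clock among itself and its
    neighbors; converges to the global max within network-diameter rounds."""
    neighbors = {i: set() for i in range(len(clocks))}
    for a, b in edges:
        neighbors[a].add(b); neighbors[b].add(a)
    rounds = 0
    while len(set(clocks)) > 1:
        clocks = [max([clocks[i]] + [clocks[j] for j in neighbors[i]])
                  for i in range(len(clocks))]
        rounds += 1
    return clocks, rounds

# a 5-node line network: the largest clock (42) propagates end to end
clocks, rounds = max_consensus([10, 42, 7, 3, 25],
                               [(0, 1), (1, 2), (2, 3), (3, 4)])
print(clocks, rounds)  # all nodes agree on 42 after 3 rounds
```

Because only nodes holding a smaller value ever need to update, the "don't reply if your clock is smaller" rule of TITS falls out naturally and is what cuts the message overhead.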

Vincent Schellekens;Laurent Jacques; "Quantized Compressive K-Means," vol.25(8), pp.1211-1215, Aug. 2018. The recent framework of compressive statistical learning proposes to design tractable learning algorithms that use only a heavily compressed representation—or sketch—of massive datasets. Compressive K-Means (CKM) is such a method: It aims at estimating the centroids of data clusters from pooled, nonlinear, and random signatures of the learning examples. While this approach significantly reduces computational time on very large datasets, its digital implementation wastes acquisition resources because the learning examples are compressed only after the sensing stage. The present work generalizes the CKM sketching procedure to a large class of periodic nonlinearities including hardware-friendly implementations that compressively acquire entire datasets. This idea is exemplified in a quantized CKM procedure, a variant of CKM that leverages 1-bit universal quantization (i.e., retaining the least significant bit of a standard uniform quantizer) as the periodic sketch nonlinearity. Trading for this resource-efficient signature (standard in most acquisition schemes) has almost no impact on the clustering performance, as illustrated by numerical experiments.
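
The 1-bit universal quantizer retains the least significant bit of a uniform quantizer, i.e., a periodic square-wave nonlinearity. A NumPy sketch of pooling quantized random projections into a signature (the parameters and projection choice are illustrative, not the authors' exact sketch operator):

```python
import numpy as np

def universal_quantize(x, delta=1.0):
    """1-bit universal quantization: the least significant bit of a uniform
    quantizer with step delta -- a square wave of period 2*delta."""
    return np.floor(x / delta).astype(int) % 2

def sketch(X, W, delta=1.0):
    """Pool the quantized random projections of all samples into a single
    fixed-size signature (mean over the dataset)."""
    return universal_quantize(X @ W, delta).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2)) + np.array([5.0, 5.0])  # one cluster near (5, 5)
W = rng.standard_normal((2, 64))                           # random projection directions
print(sketch(X, W).shape)  # -> (64,)
```

The signature's size depends only on the number of projections, not on the dataset size, which is what lets the sketch be accumulated at acquisition time with 1-bit hardware.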

Cristóvão Cruz;Alessandro Foi;Vladimir Katkovnik;Karen Egiazarian; "Nonlocality-Reinforced Convolutional Neural Networks for Image Denoising," vol.25(8), pp.1216-1220, Aug. 2018. We introduce a paradigm for nonlocal sparsity reinforced deep convolutional neural network denoising. It is a combination of a local multiscale denoising by a convolutional neural network (CNN) based denoiser and a nonlocal denoising based on a nonlocal filter (NLF), exploiting the mutual similarities between groups of patches. CNN models are leveraged with noise levels that progressively decrease at every iteration of our framework, while their output is regularized by a nonlocal prior implicit within the NLF. Unlike complicated neural networks that embed the nonlocality prior within the layers of the network, our framework is modular, and it uses standard pretrained CNNs together with standard nonlocal filters. An instance of the proposed framework, called NN3D, is evaluated over large grayscale image datasets showing state-of-the-art performance.

Xinwu Liu; "A New TGV-Gabor Model for Cartoon-Texture Image Decomposition," vol.25(8), pp.1221-1225, Aug. 2018. Integrating the advantages of two recently developed tools, total generalized variation (TGV) and Gabor wavelets, this letter presents a new weighted TGV-Gabor model for the challenging problem of cartoon-texture image decomposition. Computationally, by introducing two dual variables, we formulate a highly efficient numerical method based on the primal-dual framework in detail. Finally, in comparison with several existing advanced variational models, experimental simulations clearly illustrate the outstanding performance of our proposed edge-preserving model, especially in completely separating the larger structural features from the smaller textural details while simultaneously maintaining sharp edges and weak contours.

Hongru Sun;Fengchao Zhu;Hai Lin;Feifei Gao; "Robust Magnetic Resonant Beamforming for Secured Wireless Power Transfer," vol.25(8), pp.1226-1230, Aug. 2018. Wireless power transfer (WPT) is an emerging and promising technique for supplying power to mobile and portable devices. Among all approaches, magnetic resonant coupling (MRC) is an excellent one for midrange WPT, providing high mobility, flexibility, and convenience due to its simple hardware implementation and longer transmission distances. In this letter, we consider an MRC-WPT system with multiple power transmitters, one intended power receiver, and multiple unintended power receivers. The optimal robust beamforming design of the complex transmit currents is investigated to achieve the minimal total source power under worst-case mutual inductance measurements, while the unintended receiving powers are constrained by certain bounds. Numerical results demonstrate that the proposed algorithm can significantly improve the performance and robustness of MRC-WPT systems.

Yu Zhang;Xinchao Wang;Xiaojun Bi;Dacheng Tao; "A Light Dual-Task Neural Network for Haze Removal," vol.25(8), pp.1231-1235, Aug. 2018. Single-image dehazing is a challenging problem due to its ill-posed nature. Existing methods rely on a suboptimal two-step approach, where an intermediate product like a depth map is estimated, based on which the haze-free image is subsequently generated using an artificial prior formula. In this paper, we propose a light dual-task Neural Network called LDTNet that restores the haze-free image in one shot. We use transmission map estimation as an auxiliary task to assist the main task, haze removal, in feature extraction and to enhance the generalization of the network. In LDTNet, the haze-free image and the transmission map are produced simultaneously. As a result, the artificial prior is reduced to the smallest extent. Extensive experiments demonstrate that our algorithm achieves superior performance against the state-of-the-art methods on both synthetic and real-world images.

IEEE Journal of Selected Topics in Signal Processing - new TOC (2018 July 16) [Website]

* "Front cover," vol.12(3), pp.C1-C1, June 2018.* Presents the front cover for this issue of the publication.

* "IEEE Journal of Selected Topics in Signal Processing publication information," vol.12(3), pp.C2-C2, June 2018.* Provides a listing of current staff, committee members and society officers.


* "Table of Contents," vol.12(3), pp.417-418, June 2018.* Presents the table of contents/splash page of the proceedings record.

C. Masouros;M. Sellathurai;C. B. Papadias;L. Dai;W. Yu;T. Sizer; "Introduction to the Issue on Hybrid Analog–Digital Signal Processing for Hardware-Efficient Large Scale Antenna Arrays (Part II)," vol.12(3), pp.419-421, June 2018. The papers in this special section focus on hybrid analog-digital signal processing for hardware-efficient large scale antenna arrays. Hybrid analog-digital (HAD) processing provides a key technology for the coming generations of wireless networks, as a means of obtaining hardware-efficient transceivers. The principle behind HAD is that the transceiver processing is divided into the analog and digital domains, where networks of analog components implement large-dimensional processing at the transceiver front end, allowing for low-dimensional digital processing which necessitates only a few RF chains. This technology has recently been brought to the forefront of research, motivated by the proliferation of millimeter-wave (mmWave) communications, as a solution to circumvent the use of large numbers of expensive mmWave RF components. Its scope, however, is not limited solely to mmWave, as hardware-efficient transmission is key for small cell deployments in the microwave frequencies and also in emerging applications such as the internet of things (IoT) involving massive connectivity. All these applications still rely on transceivers capable of beamforming, using cheap, low-power, and physically small devices. Accordingly, the aim of this Special Issue (SI) has been to gather the relevant contributions focusing on the practical challenges of hybrid analog-digital transmission.

Xiwen Jiang;Florian Kaltenberger; "Channel Reciprocity Calibration in TDD Hybrid Beamforming Massive MIMO Systems," vol.12(3), pp.422-431, June 2018. A hybrid analog-digital (AD) beamforming structure is a very attractive solution for building low-cost massive multiple-input multiple-output systems. Typically, these systems use a set of fixed beams for transmission and reception to avoid the need to obtain channel state information at the transmitter (CSIT) for each antenna element individually. However, such a method cannot fully exploit the potential of hybrid AD beamforming systems. Alternatively, CSIT can be estimated by assuming a model for the propagation channel, but such models are validated only in the millimeter-wave band, owing to its poor scattering nature. In this paper, we focus on time division duplex systems with a hybrid beamforming structure and propose a reciprocity calibration scheme that allows full CSIT to be acquired. Unlike existing CSIT acquisition methods, our approach does not require any assumption on the channel model and can estimate full CSIT.

An Liu;Vincent K. N. Lau;Min-Jian Zhao; "Stochastic Successive Convex Optimization for Two-Timescale Hybrid Precoding in Massive MIMO," vol.12(3), pp.432-444, June 2018. Hybrid precoding, which consists of an RF precoder and a baseband precoder, is a popular precoding architecture for massive multiple-input multiple-output (MIMO) due to its low hardware cost and power consumption. In conventional hybrid precoding, both RF and baseband precoders are adaptive to the real-time channel state information. As a result, an individual RF precoder is required for each subcarrier in wideband systems, leading to high implementation cost. To overcome this issue, two-timescale hybrid precoding (THP), which adapts the RF precoder to the channel statistics, has been proposed. Since the channel statistics are approximately the same over different subcarriers, only a single RF precoder is required in THP. Despite the advantages of THP, a unified and efficient algorithm for its optimization has been lacking due to the nonconvex and stochastic nature of the problem. Based on stochastic successive convex approximation (SSCA), we propose an online algorithmic framework called SSCA-THP for general THP optimization problems, in which the hybrid precoder is updated by solving a quadratic surrogate optimization problem whenever a new channel sample is obtained. Then, we prove the convergence of SSCA-THP to stationary points. Finally, we apply SSCA-THP to solve three important THP optimization problems and verify its advantages over existing solutions.

Mahmoud Abdelaziz;Lauri Anttila;Alberto Brihuega;Fredrik Tufvesson;Mikko Valkama; "Digital Predistortion for Hybrid MIMO Transmitters," vol.12(3), pp.445-454, June 2018. This paper investigates digital predistortion (DPD) linearization of hybrid beamforming large-scale antenna transmitters. We propose a novel DPD processing and learning technique for an antenna subarray, which utilizes a combined signal of the individual power amplifier (PA) outputs in conjunction with a decorrelation-based learning rule. In effect, the proposed approach results in minimizing the nonlinear distortions in the direction of the intended receiver. This feature is highly desirable, since emissions in other directions are naturally weak due to beamforming. The proposed parameter learning technique requires only a single observation receiver, and therefore supports simple hardware implementation. It is also shown to clearly outperform the current state-of-the-art technique that utilizes only a single PA for learning. Analysis of the feedback network amplitude and phase imbalances reveals that the technique is robust even to high levels of such imbalances. Finally, we also show that the array system out-of-band emissions are well-behaving in all spatial directions, and essentially below those of the corresponding single-antenna transmitter, due to the combined effects of the DPD and beamforming.

Qingjiang Shi;Mingyi Hong; "Spectral Efficiency Optimization For Millimeter Wave Multiuser MIMO Systems," vol.12(3), pp.455-468, June 2018. As a key enabling technology for 5G wireless, millimeter wave (mmWave) communication motivates the utilization of large-scale antenna arrays for achieving highly directional beamforming. However, the high cost and power consumption of RF chains stand in the way of adoption of the optimal fully digital precoding in large-array systems. To reduce the number of RF chains while still maintaining the spatial multiplexing gain of a large array, a hybrid precoding architecture has been proposed for mmWave systems and has received considerable interest in both industry and academia. However, the optimal hybrid precoding design has not been fully understood, especially for the multiuser MIMO case. This paper is the first work that directly addresses the nonconvex hybrid precoding problem of mmWave multiuser MIMO systems (without any approximation) by using the penalty dual decomposition (PDD) method. The proposed PDD method has guaranteed convergence to KKT solutions of the hybrid precoding problem under a mild assumption. Simulation results show that, even when both the transmitter and the receivers are equipped with the fewest RF chains that are required to support multistream transmission, hybrid precoding can still approach the performance of fully digital precoding in both the infinite resolution phase shifter case and the finite resolution phase shifter case with several bits of quantization.

Hengtao He;Chao-Kai Wen;Shi Jin; "Bayesian Optimal Data Detector for Hybrid mmWave MIMO-OFDM Systems With Low-Resolution ADCs," vol.12(3), pp.469-483, June 2018. Hybrid analog-digital precoding architectures and low-resolution analog-to-digital converter (ADC) receivers are two solutions to reduce hardware cost and power consumption for millimeter wave (mmWave) multiple-input multiple-output (MIMO) communication systems with large antenna arrays. In this study, we consider a mmWave MIMO-orthogonal frequency division multiplexing (OFDM) receiver with a generalized hybrid architecture in which a small number of radio frequency (RF) chains and low-resolution ADCs are employed simultaneously. Owing to the strong nonlinearity introduced by low-resolution ADCs, the task of data detection is challenging, particularly achieving Bayesian optimal data detection. This study aims to fill this gap. By using a generalized expectation consistent signal recovery technique, we propose a computationally efficient data detection algorithm that provides a minimum mean-square error estimate on data symbols and is extended to a mixed-ADC architecture. Considering the particular structure of the MIMO-OFDM channel matrix, we provide a low-complexity realization in which only fast Fourier transform (FFT) operations and matrix-vector multiplications are required. Furthermore, we present an analytical framework to study the theoretical performance of the detector in the large-system limit, which can precisely evaluate the performance expressions, such as mean-square error and symbol error rate. Based on this optimal detector, the potential of adding a few low-resolution RF chains and high-resolution ADCs for a mixed-ADC architecture is investigated. Simulation results confirm the accuracy of our theoretical analysis, which can thus be used for rapid system design. The results reveal that adding a few low-resolution RF chains to original unquantized systems can obtain significant gains.

Kilian Roth;Hessam Pirzadeh;A. Lee Swindlehurst;Josef A. Nossek; "A Comparison of Hybrid Beamforming and Digital Beamforming With Low-Resolution ADCs for Multiple Users and Imperfect CSI," vol.12(3), pp.484-498, June 2018. For 5G, it will be important to leverage the available millimeter wave spectrum. To achieve an approximately omnidirectional coverage with a similar effective antenna aperture compared to state-of-the-art cellular systems, an antenna array is required at both the mobile and base station. Due to the large bandwidth and inefficient amplifiers available in CMOS for mmWave, the analog front end of the receiver with a large number of antennas becomes especially power hungry. Two main solutions exist to reduce the power consumption: hybrid beamforming and digital beamforming with low-resolution analog-to-digital converters (ADCs). In this paper, we compare the spectral and energy efficiency of both systems under practical system constraints. We consider the effects of channel estimation, transmitter impairments, and multiple simultaneous users for a wideband multipath model. Our power consumption model considers components reported in the literature at 60 GHz. In contrast to many other works, we also consider the correlation of the quantization error, and generalize its modeling to nonuniform quantizers and different quantizers at each antenna. The results show that as the signal-to-noise ratio (SNR) increases, the ADC resolution that achieves the optimal energy efficiency also increases. The energy efficiency peaks at 5-b resolution at high SNR, since, due to other limiting factors, the achievable rate almost saturates at this resolution. We also show that in the multiuser scenario digital beamforming is in any case more energy efficient than hybrid beamforming. In addition, we show that if mixed ADC resolutions are used, we can achieve any desired tradeoff between power consumption and rate close to those achieved with only one ADC resolution.

Yacong Ding;Sung-En Chiu;Bhaskar D. Rao; "Bayesian Channel Estimation Algorithms for Massive MIMO Systems With Hybrid Analog-Digital Processing and Low-Resolution ADCs," vol.12(3), pp.499-513, June 2018. We address the problem of channel estimation in massive multiple-input multiple-output (Massive MIMO) systems where both hybrid analog-digital processing and low-resolution analog-to-digital converters (ADCs) are utilized. The hardware-efficient architecture is attractive from a power and cost point of view, but poses two significant channel estimation challenges. One is due to the smaller dimension of the measurement signal obtained from the limited number of radio frequency chains, and the other is the coarser measurements from the low-resolution ADCs. We address this problem by utilizing two sources of information. First, by exploiting the sparse nature of the channel in the angular domain, the channel estimate is enhanced and the required number of pilots is reduced. Second, by utilizing the transmitted data symbols as the “virtual pilots,” the channel estimate is further improved without adding more pilot symbols. The constraints imposed by the architecture, the sparsity of the channel and the data aided channel estimation are treated in a unified manner by employing a Bayesian formulation. The quantized sparse channel estimation is formulated into a sparse Bayesian learning framework, and solved using the variational Bayesian method. Simulation results show that the proposed algorithm can efficiently estimate the channel even with the architectural constraints, and that significant improvements are enabled by leveraging the transmitted data symbols.

Yu Han;Shi Jin;Jun Zhang;Jiayi Zhang;Kai-Kit Wong; "DFT-Based Hybrid Beamforming Multiuser Systems: Rate Analysis and Beam Selection," vol.12(3), pp.514-528, June 2018. This paper considers the discrete-Fourier-transform-based hybrid beamforming multiuser system and studies the use of analog beam selection schemes. We first analyze the uplink ergodic achievable rates of the zero-forcing (ZF) receiver and the maximum-ratio combining receiver under Ricean fading conditions. We then examine the downlink ergodic achievable rates for the ZF and maximum-ratio transmitting precoders. The long-term and short-term normalization methods are introduced, which utilize long-term and instantaneous channel state information to implement the downlink power normalization, respectively. Also, approximations and asymptotic expressions of both the uplink and downlink rates are obtained, which facilitate the analog beam selection solutions to maximize the achievable rates. An exhaustive search provides the optimal results, but to reduce the time consumption, we resort to the derived rate limits and propose the second selection scheme based on the projected power of the line-of-sight paths. We then combine the advantages of the two schemes and propose a two-step scheme that achieves near-optimal performances with much less time consumption than exhaustive search. Numerical results confirm the analytical results of the ergodic achievable rate and reveal the effectiveness of the proposed two-step method.

José Luis Gómez-Tornero;David Cañete-Rebenaque;Jose Antonio López-Pastor;Alejandro Santos Martínez-Sala; "Hybrid Analog-Digital Processing System for Amplitude-Monopulse RSSI-Based MiMo WiFi Direction-of-Arrival Estimation," vol.12(3), pp.529-540, June 2018. We present a cost-effective hybrid analog-digital system to estimate the Direction of Arrival (DoA) of WiFi signals. The processing in the analog domain is based on simple, well-known RADAR amplitude monopulse antenna techniques. Then, using the received signal strength indicator (RSSI) delivered by commercial MiMo WiFi cards, the DoA is estimated using the so-called digital monopulse function. Due to the hybrid analog-digital architecture, the digital processing is extremely simple, so that DoA estimation is performed without using IQ data from specific hardware. The simplicity and robustness of the proposed hybrid analog-digital MiMo architecture are demonstrated for the ISM 2.45 GHz WiFi band. Also, the limitations with respect to multipath effects are studied in detail. As a proof of concept, an array of two MiMo WiFi DoA monopulse readers is distributed to localize the two-dimensional position of WiFi devices. This cost-effective hybrid solution can be applied to all WiFi standards and other Internet of Things narrowband radio protocols, such as Bluetooth Low Energy or Zigbee.
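The abstract does not spell out the digital monopulse function, but the amplitude-monopulse principle behind it can be sketched in a few lines. This is an illustrative approximation, not the authors' implementation: near boresight the dB difference of two squinted beams is roughly linear in the arrival angle, and `slope_db_per_deg` stands in for a one-time calibration constant that a real reader would measure.

```python
def monopulse_doa(rssi_left_db, rssi_right_db, slope_db_per_deg=0.5):
    """Estimate the direction of arrival from the RSSI (dB) of two
    squinted beams.  Near boresight the dB difference of the beam
    powers is approximately linear in the arrival angle; the slope
    is a calibration constant (the value here is hypothetical)."""
    delta_db = rssi_left_db - rssi_right_db
    return delta_db / slope_db_per_deg  # degrees off boresight

# equal RSSI on both beams -> target on boresight
assert monopulse_doa(-40.0, -40.0) == 0.0
```

In a deployment, the slope (and the angular range over which the linear model holds) would come from calibrating each monopulse reader against known angles.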

Cheng Zhang;Yindi Jing;Yongming Huang;Luxi Yang; "Interleaved Training and Training-Based Transmission Design for Hybrid Massive Antenna Downlink," vol.12(3), pp.541-556, June 2018. In this paper, we study the beam-based training design jointly with the transmission design for hybrid massive antenna single-user (SU) and multiple-user (MU) systems where outage probability is adopted as the performance measure. For SU systems, we propose an interleaved training design to concatenate the feedback and training procedures, thus making the training length adaptive to the channel realization. Exact analytical expressions are derived for the average training length and the outage probability of the proposed interleaved training. For MU systems, we propose a joint design for the beam-based interleaved training, beam assignment, and MU data transmissions. Two solutions for the beam assignment are provided with different complexity-performance tradeoff. Analytical results and simulations show that for both SU and MU systems, the proposed joint training and transmission designs achieve the same outage performance as the traditional full-training scheme but with significant saving in the training overhead.

Foad Sohrabi;Ya-Feng Liu;Wei Yu; "One-Bit Precoding and Constellation Range Design for Massive MIMO With QAM Signaling," vol.12(3), pp.557-570, June 2018. The use of low-resolution digital-to-analog converters (DACs) for transmit precoding provides crucial energy efficiency advantage for massive multiple-input multiple-output (MIMO) implementation. This paper formulates a quadrature amplitude modulation (QAM) constellation range and a one-bit symbol-level precoding design problem for minimizing the average symbol error rate (SER) in downlink massive MIMO transmission. A tight upper bound for the SER with low-resolution DAC precoding is first derived. The derived expression suggests that the performance degradation of one-bit precoding can be interpreted as a decrease in the effective minimum distance of the QAM constellation. Using the obtained SER expression, we propose a QAM constellation range design for the single-user case. It is shown that in the massive MIMO limit, a reasonable choice for constellation range with one-bit precoding is that of the infinite-resolution precoding with per-symbol power constraint, but reduced by a factor of √2/π or about 0.8. The corresponding minimum distance reduction translates to about a 2 dB gap between the performance of one-bit precoding and infinite-resolution precoding. This paper further proposes a low-complexity heuristic algorithm for the one-bit precoder design. Finally, the proposed QAM constellation range and precoder design are generalized to the multiuser downlink. We propose to scale the constellation range for the infinite-resolution zero-forcing (ZF) precoding with per-symbol power constraint by the same factor of √2/π for one-bit precoding. The proposed one-bit precoding scheme is shown to be within 2 dB of infinite-resolution ZF. In terms of the number of antennas, one-bit precoding requires about 50% more antennas to achieve the same performance as infinite-resolution precoding.
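The √(2/π) ≈ 0.8 factor above has a well-known Bussgang-type interpretation that a few lines of NumPy can reproduce numerically. The sketch below shows only the effective linear gain of a per-rail one-bit quantizer for circular Gaussian input, which matches the constellation-range reduction factor; it is not the paper's precoder design, and the unit-variance Gaussian input is an assumption standing in for the massive-MIMO limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_precode(x):
    """One-bit DAC per I/Q rail: keep only the sign of each component."""
    return (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)

# unit-variance circular Gaussian input (massive-MIMO-limit assumption)
n = 10000
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
xq = one_bit_precode(x)

# Bussgang-type linear gain of the quantizer for Gaussian input;
# it approaches sqrt(2/pi) ~ 0.797, the factor cited in the abstract
gain = np.vdot(x, xq).real / np.vdot(x, x).real
```

The 2 dB gap quoted in the abstract is consistent with this gain: 20·log10(√(2/π)) ≈ -1.96 dB of effective minimum-distance loss.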

* "IEEE Journal of Selected Topics in Signal Processing information for authors," vol.12(3), pp.571-572, June 2018.* Provides instructions and guidelines to prospective authors who wish to submit manuscripts.

* "IEEE Signal Processing Society Information," vol.12(3), pp.C3-C3, June 2018.* Provides a listing of current committee members and society officers.

* "[Blank page]," vol.12(3), pp.C4-C4, June 2018.* This page or pages intentionally left blank.

IEEE Signal Processing Magazine - new TOC (2018 July 16) [Website]

* "Front Cover," vol.35(4), pp.C1-C1, July 2018.* Presents the front cover for this issue of the publication.

* "ICASSP 2019," vol.35(4), pp.C2-C2, July 2018.* Presents information on ICASSP 2019.

* "Table of Contents," vol.35(4), pp.1-2, July 2018.* Presents the table of contents for this issue of the publication.

* "Masthead," vol.35(4), pp.2-2, July 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Robert W. Heath; "Highlights from the IEEE SPM's Editorial Board Meeting [From the Editor]," vol.35(4), pp.3-4, July 2018. Presents information on Editorial board meetings for this issue of the publication.

* "IEEE Foundation filler," vol.35(4), pp.4-4, July 2018.* Advertisement, IEEE

Ali H. Sayed; "Intelligent Machines and Planet of the Apes [President's Message]," vol.35(4), pp.5-7, July 2018. Presents the President's message for this issue of the publication.

Changshui Zhang; "Top Downloads in IEEE Xplore [Reader's Choice]," vol.35(4), pp.8-10, July 2018. Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles.

* "IEEE Feedback filler," vol.35(4), pp.10-10, July 2018.* Advertisement, IEEE

* "IEEE Collabratec filler," vol.35(4), pp.11-11, July 2018.* Advertisement, IEEE

Namrata Vaswani; "A Feature Article Cluster on Exploiting Structure in Data Analytics: Low-Rank and Sparse Structures [From the Guest Editor]," vol.35(4), pp.12-13, July 2018. The four articles in this special section focus on the exploitation of structure in Big Data analytics, with emphasis on sparse and low-rank structures.

* "IEEE USA," vol.35(4), pp.13-13, July 2018.* Advertisement, IEEE

Yudong Chen;Yuejie Chi; "Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation: Recent Theory and Fast Algorithms via Convex and Nonconvex Optimization," vol.35(4), pp.14-31, July 2018. Low-rank modeling plays a pivotal role in signal processing and machine learning, with applications ranging from collaborative filtering, video surveillance, and medical imaging to dimensionality reduction and adaptive filtering. Many modern high-dimensional data and interactions thereof can be modeled as lying approximately in a low-dimensional subspace or manifold, possibly with additional structures, and their proper exploitation leads to significant cost reduction in sensing, computation, and storage. In recent years, there has been a plethora of progress in understanding how to exploit low-rank structures using computationally efficient procedures in a provable manner, including both convex and nonconvex approaches. On one side, convex relaxations such as nuclear norm minimization often lead to statistically optimal procedures for estimating low-rank matrices, where first-order methods are developed to address the computational challenges; on the other side, there is emerging evidence that properly designed nonconvex procedures, such as projected gradient descent, often provide globally optimal solutions with a much lower computational cost in many problems. This survey article provides a unified overview of these recent advances in low-rank matrix estimation from incomplete measurements. Attention is paid to rigorous characterization of the performance of these algorithms and to problems where the low-rank matrix has additional structural properties that require new algorithmic designs and theoretical analysis.
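A concrete instance of the convex side of this story is singular value thresholding, the proximal operator of the nuclear norm, which is the core step inside the first-order methods the survey discusses. The sketch below is a minimal illustration on a synthetic rank-2 matrix; the matrix size, noise level, and threshold are arbitrary choices for demonstration, not values from the article.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M
    by tau and clip at zero -- the proximal operator of tau * ||.||_*"""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
noisy = L + 0.01 * rng.standard_normal((30, 30))

# one thresholding step zeroes out the small noise directions,
# leaving an exactly low-rank estimate
denoised = svt(noisy, tau=0.5)
```

Inside an iterative solver (e.g., for matrix completion), this step alternates with a data-consistency step such as a gradient update on the observed entries.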

Namrata Vaswani;Thierry Bouwmans;Sajid Javed;Praneeth Narayanamurthy; "Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery," vol.35(4), pp.32-55, July 2018. Principal component analysis (PCA) is one of the most widely used dimension reduction techniques. A related easier problem is termed subspace learning or subspace estimation. Given relatively clean data, both are easily solved via singular value decomposition (SVD). The problem of subspace learning or PCA in the presence of outliers is called robust subspace learning (RSL) or robust PCA (RPCA). For long data sequences, if one tries to use a single lower-dimensional subspace to represent the data, the required subspace dimension may end up being quite large. For such data, a better model is to assume that it lies in a low-dimensional subspace that can change over time, albeit gradually. The problem of tracking such data (and the subspaces) while being robust to outliers is called robust subspace tracking (RST). This article provides a magazine-style overview of the entire field of RSL and tracking.

Piya Pal; "Correlation Awareness in Low-Rank Models: Sampling, Algorithms, and Fundamental Limits," vol.35(4), pp.56-71, July 2018. The role of correlation awareness in low-rank compressive inverse problems is studied in this article. In such inverse problems, the ultimate goal is to estimate certain physically meaningful parameters from measurements collected across space and time. The spatiotemporal correlation structure of the data can be judiciously exploited to design highly efficient samplers that allow reliable parameter estimation from compressed measurements. For a large class of spectrum estimation problems (including source localization and line spectrum estimation), certain structured samplers, based on the idea of difference sets, will be shown to be optimal and outperform random samplers. Using these samplers, it is even possible to localize more sources than the number of physical sensors. The underlying principles are also extended to sparse estimation problems under the Bayesian framework. Fundamental performance limits in terms of Cramér-Rao bounds (CRBs) are studied in a new underdetermined regime, and certain saturation effects that only occur in this regime are discussed.

Vardan Papyan;Yaniv Romano;Jeremias Sulam;Michael Elad; "Theoretical Foundations of Deep Learning via Sparse Representations: A Multilayer Sparse Model and Its Connection to Convolutional Neural Networks," vol.35(4), pp.72-89, July 2018. Modeling data is the way we, scientists, believe that information should be explained and handled. Indeed, models play a central role in practically every task in signal and image processing and machine learning. Sparse representation theory (we shall refer to it as Sparseland) puts forward an emerging, highly effective, and universal model. Its core idea is the description of data as a linear combination of a few atoms taken from a dictionary of such fundamental elements.

Wayes Tushar;Chau Yuen;Hamed Mohsenian-Rad;Tapan Saha;H. Vincent Poor;Kristin L. Wood; "Transforming Energy Networks via Peer-to-Peer Energy Trading: The Potential of Game-Theoretic Approaches," vol.35(4), pp.90-111, July 2018. Peer-to-peer (P2P) energy trading has emerged as a next-generation energy-management mechanism for the smart grid that enables each prosumer (i.e., an energy consumer who also produces electricity) of the network to participate in energy trading with other prosumers and the grid. This poses a significant challenge in terms of modeling the decision-making process of the participants' conflicting interests and motivating prosumers to participate in energy trading and cooperate, if necessary, in achieving different energy-management goals. Therefore, such a decision-making process needs to be built on solid mathematical and signal processing principles that can ensure an efficient operation of the electric power grid. This article provides an overview of the use of game-theoretic approaches for P2P energy trading as a feasible and effective means of energy management. Various game- and auction-theoretic approaches are discussed by following a systematic classification to provide information on the importance of game theory for smart energy research. This article also focuses on the key features of P2P energy trading and gives an introduction to an existing P2P testbed. Furthermore, the article gives specific game- and auction-theoretic models that have recently been used in P2P energy trading and discusses important findings arising from these approaches.

Maria S. Greco;Fulvio Gini;Pietro Stinco;Kristine Bell; "Cognitive Radars: On the Road to Reality: Progress Thus Far and Possibilities for the Future," vol.35(4), pp.112-125, July 2018. This article describes some key ideas and applications of cognitive radars, highlighting the limits and the path forward. Cognitive radars are systems based on the perception-action cycle of cognition that senses the environment, learns relevant information from it about the target and the background, and then adapts the radar sensor to optimally satisfy the needs of the mission according to a desired goal. The concept of cognitive radar was originally introduced only for active radar. In this article, we explain how this paradigm can also be applied to passive radar (PR).

Behnam Shahrrava; "Closed-Form Impulse Responses of Linear Time-Invariant Systems: A Unifying Approach [Lecture Notes]," vol.35(4), pp.126-132, July 2018. In many signal processing applications, filtering is accomplished through linear time-invariant (LTI) systems described by linear constant-coefficient differential and difference equations since they are conveniently implemented using either analog or digital hardware [1]. An LTI system can be completely characterized in the time domain by its impulse response or in the frequency domain by its frequency response, which is the Fourier transform of the system's impulse response. Equivalently, using the Laplace transform [or the z-transform in the case of discrete-time (DT) systems] as a generalization of the Fourier transform, any continuous-time (CT) or DT LTI system can be characterized by its transfer function (or system function) in the s-domain or the z-domain, respectively. In this article, we explain how to find impulse responses of LTI systems described by differential and difference equations directly in the time domain without resorting to any transform methods or recursive procedures.
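As a numerical baseline for the closed-form approach the article advocates, the standard direct-form recursion (exactly the kind of recursive procedure the article avoids) can cross-check any closed-form answer. For example, y[k] - 0.5 y[k-1] = x[k] has the closed-form impulse response h[k] = (1/2)^k, which the recursion below reproduces; the helper is an illustrative sketch, not code from the article.

```python
def impulse_response(b, a, n):
    """First n samples of the impulse response of the DT LTI system
    a[0]*y[k] + a[1]*y[k-1] + ... = b[0]*x[k] + b[1]*x[k-1] + ...
    obtained by driving the standard recursion with a unit impulse."""
    x = [1.0] + [0.0] * (n - 1)   # unit impulse
    y = [0.0] * n
    for k in range(n):
        acc = sum(b[i] * x[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[i] * y[k - i] for i in range(1, len(a)) if k - i >= 0)
        y[k] = acc / a[0]
    return y

# y[k] - 0.5*y[k-1] = x[k]  ->  h[k] = (1/2)**k
h = impulse_response(b=[1.0], a=[1.0, -0.5], n=5)
```

The closed-form method in the article gets the same sequence analytically, without running this recursion sample by sample.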

Rui Zhu;Fei Zhou;Wenming Yang;Jing-Hao Xue; "On Hypothesis Testing for Comparing Image Quality Assessment Metrics [Tips & Tricks]," vol.35(4), pp.133-136, July 2018. In developing novel image quality assessment (IQA) metrics, researchers should compare their proposed metrics with state-of-the-art metrics. A commonly adopted approach is by comparing two residuals between the nonlinearly mapped scores of two IQA metrics and the difference mean opinion score, which are assumed from Gaussian distributions with zero means. An F-test is then used to test the equality of variances of the two sets of residuals. If the variances are significantly different, then we conclude that the residuals are from different Gaussian distributions and that the two IQA metrics are significantly different. The F-test assumes that the two sets of residuals are independent. However, given that the IQA metrics are calculated on the same database, the two sets of residuals are paired and may be correlated. We note this improper usage of the F-test by practitioners, which can result in misleading comparison results of two IQA metrics. To solve this practical problem, we introduce the Pitman test to investigate the equality of variances for two sets of correlated residuals. Experiments on the Laboratory for Image and Video Engineering (LIVE) database show that the two tests can provide different conclusions.
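The Pitman (often Pitman-Morgan) test for paired samples reduces to a t-test on the correlation between pairwise sums and differences: under the null hypothesis of equal variances, x+y and x-y are uncorrelated. A minimal sketch, using illustrative toy residuals rather than real IQA data:

```python
import math

def pitman_test(res1, res2):
    """Pitman-Morgan test for equal variances of two *paired* samples.
    Under H0 the pairwise sums and differences are uncorrelated, so we
    t-test (with n-2 degrees of freedom) their sample correlation r."""
    n = len(res1)
    s = [x + y for x, y in zip(res1, res2)]
    d = [x - y for x, y in zip(res1, res2)]
    ms, md = sum(s) / n, sum(d) / n
    cov = sum((a - ms) * (b - md) for a, b in zip(s, d))
    var_s = sum((a - ms) ** 2 for a in s)
    var_d = sum((b - md) ** 2 for b in d)
    r = cov / math.sqrt(var_s * var_d)
    t = r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)
    return r, t  # reject H0 when |t| exceeds the t(n-2) critical value
```

Unlike the F-test, this construction never assumes the two residual sets are independent, which is exactly the pairing issue the article raises.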

* "[Dates Ahead]," vol.35(4), pp.137-137, July 2018.* Presents SPS society upcoming events and meetings.

* "Advertiser Index/Sales Mast," vol.35(4), pp.139-139, July 2018.* Presents a listing of advertisers who were included in this issue of the publication.

Anna Scaglione;Rashka Ramakrishna; "Extreme Whitening [Humor]," vol.35(4), pp.139-139, July 2018. Various puzzles, quizzes, games, humorous definitions, or mathematical curiosities intended to engage the interest of readers.

Pavel Loskot; "Automation Is Coming to Research [In the Spotlight]," vol.35(4), pp.140-138, July 2018. The rapid advancement and proliferation of information and communication technologies in the past two decades significantly impacted how we do research. The research process has been digitalized and is increasingly relying on growing computing power and storage capacity to gather and process a constant production of data—our observations of systems and phenomena we would like to understand, control, and improve. To turn these observations into useful knowledge, findings, discoveries, and better decisions, the data needs to be intelligently processed, and the results of such processing suitably visualized. In research communities, the process of obtaining, processing, and visualizing data to yield new insights is usually captured and shared through scientific papers, which normally cite other papers to connect new and previous findings. This creates an intricate web of interlinked papers comprising most of our scientific knowledge. This web is growing exponentially, so it is increasingly more difficult to search and navigate. It is also not easy to validate all published results, and we may often rediscover things that are already known.

* "IEEE Moving filler," vol.35(4), pp.138-138, July 2018.* Advertisement, IEEE

* "ICIP 2019 CFP," vol.35(4), pp.C3-C3, July 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "Mathworks," vol.35(4), pp.C4-C4, July 2018.* Advertisement.

IET Signal Processing - new TOC (2018 July 16) [Website]

Jiang Zhu;Daxiong Ji;Zhiwei Xu;Bailu Si; "Combined optimisation of waveform and quantisation thresholds for multistatic radar systems," vol.12(5), pp.559-565, 7 2018. The problem of designing waveform and quantisation thresholds is studied in a multistatic radar setting, where distributed receivers are connected to a fusion centre via capacity constraints. Different from the previous cloud radio-multistatic radar system which utilises an additive quantisation Gaussian noise model, a real quantisation system is designed. The authors first optimise the waveform without quantisation. Then they compress the received signal at receivers into a scalar without any performance degradation. Furthermore, the scalar quantiser is adopted and quantisation thresholds are designed. Numerical simulations are performed and the effectiveness of combined waveform and thresholds optimisation strategy is demonstrated.

Chahrazed Zekkari;Mohamed Djendi;Abderrezak Guessoum; "Efficient adaptive filtering algorithm for IQ imbalance compensation Tx/Rx systems," vol.12(5), pp.566-573, 7 2018. This study addresses the problem of in-phase and quadratic-phase (IQ) imbalance in digital transmitter and receiver communication systems. Recently, several adaptive and non-adaptive solutions to this problem have been proposed. In this study, an adaptive solution to this problem is given. The authors propose a new robust two-channel adaptive algorithm to compensate the IQ imbalance problem of quadratic receivers. The new algorithm is based on the combination of the backward blind source separation (BBSS) structure and the fast Newton transversal filter (FNTF) technique to form a two-channel algorithm that cancels the noisy IQ signal adaptively. The combination of the Newton recursion of the FNTF algorithm with the BBSS principle shows the efficiency of the new algorithm in enhancing the output signal to interference ratio and allows distortion correction at the output. Simulation results show the efficiency of the new algorithm in compensating the IQ imbalance problem in comparison with the conventional two-channel normalised least mean square algorithm.

Zhi-Chao Zhang;Tao Yu;Mao-Kang Luo;Ke Deng; "Estimating instantaneous frequency based on phase derivative and linear canonical transform with optimised computational speed," vol.12(5), pp.574-580, 7 2018. The authors introduce a new instantaneous frequency (IF) estimator by investigating the IF estimation process within the framework of the signal's phase derivative and the linear canonical transform. They define a quantitative performance index for computational speed and show that the new estimator outperforms others within that framework in optimising computational speed (minimising the performance index). The authors investigate the relationship between the new estimator and the previous one proposed by Li et al., and compare their estimation accuracy and computational speed. They further verify the theoretical analyses by applying the new estimator and Li's estimator to capture the IF of some linear and non-linear frequency-modulated signals.
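
The phase-derivative part of such IF estimators is easy to illustrate: differentiate the unwrapped phase of the analytic signal. The sketch below applies this to a linear chirp (the linear canonical transform machinery of the paper is omitted; the sampling rate and chirp parameters are arbitrary choices):

```python
import numpy as np

def estimate_if(signal, fs):
    """IF in Hz from the derivative of the unwrapped phase of an analytic signal."""
    phase = np.unwrap(np.angle(signal))
    return np.gradient(phase) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f0, rate = 5.0, 10.0                            # IF(t) = f0 + rate * t
chirp = np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t ** 2))
f_hat = estimate_if(chirp, fs)                  # ~10 Hz at t = 0.5 s
```

Because the chirp's phase is quadratic in time, the central-difference derivative recovers the linear IF law almost exactly at interior samples.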

Shibendu Mahata;Suman Kumar Saha;Rajib Kar;Durbadal Mandal; "Accurate integer-order rational approximation of fractional-order low-pass Butterworth filter using a metaheuristic optimisation approach," vol.12(5), pp.581-589, 7 2018. This study presents a new approach to designing fractional-order low-pass Butterworth filters (FOLPBF) as integer-order rational approximations with an accurate magnitude response. A parameter-independent metaheuristic optimisation algorithm called colliding bodies optimisation (CBO) is used for this purpose. The CBO-based optimisation routine determines the optimal coefficient values of the proposed integer-order models for the (1 + α)-order FOLPBFs, where 0 < α < 1. The performance of the proposed filter is examined with respect to the passband and stopband characteristics, solution quality robustness, and the convergence rate. The generic nature of the proposed design approach is also demonstrated. The roll-off characteristics of the proposed higher-order FOLPBFs exhibit accurate stopband attenuation behaviour. The proposed designs also achieve the best magnitude responses as compared with state-of-the-art designs published in the recent literature. The proposed models can be practically implemented without using any fractance devices.

Predrag B. Petrović;Dimitrije Rozgić; "Computational effective modified Newton–Raphson algorithm for power harmonics parameters estimation," vol.12(5), pp.590-598, 7 2018. This study proposes a new algorithm for power harmonics parameter estimation based on the modified Newton-Raphson method. The main modification is achieved through reconfiguration of the Jacobian matrix and direct calculation of its characteristic coefficients without the need for inversion. The zero-crossing method is used for the initial frequency estimation. With additional digital filtering, the parameters can be initialised properly and the updating steps can be supervised to ensure fast quadratic convergence of the Newton-Raphson iterations. This combined approach yields high accuracy and good tracking speed, significantly facilitating both the computation and the programming. The reliability and effectiveness of the proposed method were confirmed through simulation tests and results.
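
The zero-crossing initialisation step can be sketched in a few lines (an illustration only, not the authors' full pipeline; the digital-filtering refinement and the Newton-Raphson iterations are omitted, and the test tone is a made-up example):

```python
import numpy as np

def zero_crossing_freq(x, fs):
    """Estimate frequency from the mean spacing of upward zero crossings."""
    neg = np.signbit(x)
    up = np.flatnonzero(neg[:-1] & ~neg[1:])    # negative -> non-negative steps
    return fs / np.mean(np.diff(up))            # fs / mean period in samples

fs = 5000.0
t = np.arange(0, 0.2, 1 / fs)
f0_hat = zero_crossing_freq(np.sin(2 * np.pi * 50.0 * t), fs)   # ~50 Hz
```

Such a coarse estimate is typically good enough to seed an iterative estimator, which is exactly the role it plays in the algorithm above.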

Jialiang Gu;Peiru Lin;Bingo Wing-Kuen Ling;Chuqi Yang;Peihua Feng; "Grouping and selecting singular spectral analysis components for denoising based on empirical mode decomposition via integer quadratic programming," vol.12(5), pp.599-604, 7 2018. This study proposes an integer quadratic programming method for grouping and selecting singular spectral analysis components, based on the empirical mode decomposition, for denoising. Here, the total number of grouped singular spectral analysis components is equal to the total number of intrinsic mode functions. Each singular spectral analysis component is assigned to the group indexed by the intrinsic mode function for which the ℓ2-norm error between that intrinsic mode function and the sum of the grouped singular spectral analysis components is minimal. This assignment of the singular spectral analysis components to particular groups is an integer quadratic programming problem. However, the computational power required to solve the integer quadratic programming problem directly is high. By instead representing it as an integer linear programming problem and employing an existing numerical optimisation computer-aided design tool, the solution can be found efficiently. Computer numerical simulation results are presented.
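
The grouping objective can be illustrated on a toy problem: each "SSA component" is assigned to one "IMF"-indexed group so that the total squared error between each IMF and the sum of its assigned components is minimised. The sketch below solves this tiny integer programme by exhaustive search, which is feasible only at this toy scale (the synthetic components and IMFs below are made up for illustration; they are not SSA or EMD outputs):

```python
import itertools
import numpy as np

t = np.arange(64) / 64
# Hypothetical "SSA components": four narrowband parts.
comps = np.stack([np.sin(2 * np.pi * k * t) for k in (2, 3, 11, 13)])
# Hypothetical "IMFs": a low-frequency and a high-frequency mode.
imfs = np.stack([comps[0] + comps[1], comps[2] + comps[3]])

def cost(assignment):
    """Total squared error between each IMF and the sum of its assigned components."""
    return sum(
        np.sum((imf - comps[[i for i, g in enumerate(assignment) if g == j]]
                .sum(axis=0)) ** 2)
        for j, imf in enumerate(imfs))

# Exhaustive search over all 2^4 assignments stands in for the integer programme.
best = min(itertools.product(range(len(imfs)), repeat=len(comps)), key=cost)
# The exact partition (components 0,1 -> IMF 0; components 2,3 -> IMF 1) has zero cost.
```

At realistic sizes the search space is exponential, which is why the paper reformulates the problem as an integer linear programme for an off-the-shelf solver.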

Ashkan Esmaeili;Ehsan Asadi Kangarshahi;Farokh Marvasti; "Iterative null space projection method with adaptive thresholding in sparse signal recovery," vol.12(5), pp.605-612, 7 2018. Adaptive thresholding methods have proved to yield a high signal-to-noise ratio (SNR) and fast convergence in sparse signal recovery. A class of iterative sparse recovery algorithms, such as the iterative method with adaptive thresholding, has been found to outperform state-of-the-art methods in reconstruction quality, convergence speed, and sensitivity to noise. In this study, the authors introduce a new method for compressed sensing recovery using the sensing matrix and the measurements. In this method, they iteratively threshold the signal and project the thresholded signal onto the translated null space of the sensing matrix, with the threshold level assigned adaptively. Simulation results reveal that the proposed method outperforms other methods in signal reconstruction (in terms of the SNR). This performance advantage is most noticeable when the number of available measurements approaches twice the sparsity number.
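
The core iteration described above, threshold then project back onto the solution set {x : Ax = y} (a translate of the null space of A), can be sketched as follows. Note that the fixed keep-the-s-largest threshold used here is a simplification, not the paper's adaptive schedule, and all problem sizes and coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 64, 52, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = np.array([2.0, -3.0, 2.5, -2.0])   # made-up sparse signal
y = A @ x_true

A_pinv = np.linalg.pinv(A)
x = A_pinv @ y                                   # least-norm starting point
for _ in range(200):
    thr = np.sort(np.abs(x))[-s]                 # keep the s largest entries
    z = np.where(np.abs(x) >= thr, x, 0.0)       # thresholding step
    x = z + A_pinv @ (y - A @ z)                 # projection onto {x : Ax = y}

snr_db = 20 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x - x_true))
```

Every iterate after the projection is feasible (Ax = y), so the thresholding step alone drives the estimate towards sparsity.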

Shaddrack Yaw Nusenu;Hui Chen;Wen-Qin Wang; "OFDM chirp radar for adaptive target detection in low grazing angle," vol.12(5), pp.613-619, 7 2018. Low-altitude target detection is of great importance in radar applications, especially for military radars countering targets that penetrate from low grazing angles. However, detecting targets in the low grazing angle region is challenging due to severe multipath reflection effects. In this study, the authors propose an orthogonal frequency division multiplexing (OFDM) chirp waveform for target detection in the low grazing angle region. The OFDM chirp waveform has a better peak-to-average power ratio and provides a larger time-bandwidth product than the conventional OFDM waveform, which is beneficial for target detection. Two different radar schemes, namely OFDM chirp radar and OFDM chirp multiple-input multiple-output (MIMO) radar, are studied under an Earth-curvature geometry model that accounts for realistic multipath reflection effects. A generalised likelihood ratio test algorithm is developed by jointly exploiting the advantages of the OFDM chirp waveform and the increased degrees of freedom of the MIMO configuration. In addition, an adaptive algorithm is applied to further enhance the target detection performance. All proposed schemes are verified by numerical results.

Rodolfo Gomes;Joao Reis;Zaid Al-Daher;Akram Hammoudeh;Rafael F.S. Caldeirinha; "5G: performance and evaluation of FS-FBMC against OFDM for high data rate applications at 60 GHz," vol.12(5), pp.620-628, 7 2018. This study presents a frequency spreading filter bank multi-carrier (FS-FBMC) waveform as a potential candidate for fifth generation (5G) network applications at millimetre-waves (mmWaves). The proposed model is developed based on the orthogonal frequency division multiplexing (OFDM) waveform standardised by IEEE 802.15.3c for 60 GHz high data-rate applications. The effects of power amplifier (PA) non-linearities at 60 GHz on both OFDM and FS-FBMC are presented and compared using a realistic PA model. In particular, the sensitivity of both transmission schemes to the non-linear effects is investigated, and its impact on performance degradation, in terms of output power backoff, is characterised over typical indoor line-of-sight (kiosk and residential) 60 GHz IEEE channel models. The assessment metrics considered are bit error rate and out-of-band emissions. The study concludes that radio access schemes for future communication generations, i.e. 5G, may employ OFDM for the up-link and FS-FBMC for the down-link, owing to the differing sensitivities of the two waveforms to PA non-linearities.
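
The PA sensitivity issue behind the output-power-backoff analysis is easy to reproduce at a basic level: an OFDM symbol formed from many independent subcarriers exhibits a high peak-to-average power ratio (PAPR), so the amplifier must be backed off. A minimal sketch (the subcarrier count and modulation are arbitrary; no PA model or FBMC synthesis is included):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sc = 256                                      # number of subcarriers
# Random QPSK symbols with unit average power per subcarrier.
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk) * np.sqrt(n_sc)        # time-domain symbol, unit mean power
papr_db = 10 * np.log10(np.max(np.abs(ofdm) ** 2) / np.mean(np.abs(ofdm) ** 2))
```

For hundreds of subcarriers the PAPR typically lands around 8-13 dB, far above the 0 dB of a constant-envelope signal, which is why non-linear PA behaviour dominates the waveform comparison above.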

Hangting Cao;Jiang Zhu;Zhiwei Xu; "Adaptive one-bit quantisation via approximate message passing with nearest neighbour sparsity pattern learning," vol.12(5), pp.629-635, 7 2018. In this study, the problem of recovering structured sparse signals whose a priori distribution and structure patterns are unknown is studied from one-bit adaptive (AD) quantised measurements. A generalised approximate message passing (GAMP) algorithm is utilised, and an expectation maximisation (EM) method is embedded in the algorithm to iteratively estimate the unknown a priori distribution. In addition, the nearest neighbour sparsity pattern learning (NNSPL) method is adopted to further improve the recovery performance for structured sparse signals. Numerical results demonstrate the effectiveness of the GAMP-EM-AD-NNSPL method on both simulated and real data.

Xiao-Feng Gong;Jia-Cheng Jiang;Hui Li;You-Gen Xu;Zhi-Wen Liu; "Spatially spread dipole/loop quint for vector-cross-product-based direction finding and polarisation estimation," vol.12(5), pp.636-642, 7 2018. The authors propose a spatially spread quint (SS-quint) of only dipoles or loops for direction of arrival (DOA) and polarisation estimation. The proposed SS-quint is spatially centrosymmetric, and based on this centrosymmetry, the authors develop a computationally low-cost DOA and polarisation estimator via the vector cross product. Compared with the spatially spread electromagnetic vector-sensor, the proposed SS-quint consists of only dipoles or loops, and thus its components have more consistent responses. Compared with a previously proposed SS-quint configuration, which is required to be strictly uniformly L-shaped, the proposed SS-quint has a more flexible array configuration in the sense that it is not restricted to any particular shape. The Cramér-Rao bounds are derived and simulation results are provided to demonstrate the performance of the proposed SS-quint array configuration.

Reza Mansoori;Reza Mohseni; "Inverse second-order polynomial time–frequency transform-based detection of moving targets in the stepped-frequency radars," vol.12(5), pp.643-651, 7 2018. One of the most common methods of achieving high range resolution in radar is based on the stepped-frequency waveform. In stepped-frequency radar, if the high-range resolution profile (HRRP) is created by means of the inverse discrete Fourier transform, distortions can arise that reduce the detectability of moving targets. A new algorithm based on the inverse second-order polynomial time-frequency transform is proposed to construct the HRRP. It is shown that this algorithm efficiently compensates the undesirable effects created by target motion in the HRRP and improves the detection probability.

Yonggu Lee;Jinho Choi; "Energy-efficient scheme using multiple antennas in secure distributed detection," vol.12(5), pp.652-658, 7 2018. Here, the authors propose an energy-efficient strategy using multiple antennas for secure distributed detection in wireless sensor networks (WSNs). In a multiple access channel, it is possible to communicate between the sensors and the ally fusion centre with perfect secrecy by encryption with channel state information (CSI). However, the sensors in an active group may need to report constantly under this encryption as long as the CSI remains unchanged, which deteriorates the energy efficiency in terms of the lifetime of the WSN. The authors solve this problem by using random beamforming (e.g. antenna subset modulation). Furthermore, they investigate the impact of the number of active antennas on the network lifetime. Through analysis and simulation, the authors demonstrate that the proposed strategy has a higher energy efficiency than the conventional strategy, while maintaining the same reliability and security.

Muhammad Khalil;Stevan Barber;Kevin Sworby; "High energy efficiency for low error rate in wireless relay networks," vol.12(5), pp.659-665, 7 2018. A new method to increase energy efficiency (EE) and decrease the bit error rate (BER) in amplify-and-forward (AF) relay networks is presented in this study. In this network, the source transmits its signal through a flat fading channel to a set of intermediate relay nodes, which amplify and then forward the signal to the destination through a set of independent flat fading channels. The proposed EE is derived in a scheme balanced with spectral efficiency (SE), with the relay node locations taken into consideration. This balance between EE and SE is achieved by minimising power consumption. The BER under this balance is analysed and the relationship between BER and EE is derived. All analytical expressions are validated by numerical analysis and simulation. It is found that even a small increase in EE leads to a reduction in BER.

Haifeng Li;Jing Zhang;Jian Zou; "Improving the bound on the restricted isometry property constant in multiple orthogonal least squares," vol.12(5), pp.666-671, 7 2018. By allowing multiple (L) indices to be chosen per iteration, multiple orthogonal least squares (MOLS) has been proposed as an extension of the classical greedy algorithm OLS. Wang and Li (2017) demonstrated that MOLS can successfully reconstruct K-sparse signals from compressed measurements y = Ax in at most K iterations if the sensing matrix A has unit ℓ2-norm columns satisfying the restricted isometry property (RIP) of order LK with δ_LK < 1/(√(K/L) + 2). In this study, by increasing the RIP order by just one (i.e. LK + 1 instead of LK), the authors refine this condition to δ_{LK+1} < 1/(√(K/L) + 2). In the noisy case, they also propose a stopping criterion for the MOLS algorithm, with which MOLS can successfully recover the support of the sparse signal.
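
A sketch of the MOLS recursion itself (not of the paper's RIP analysis): L columns are added to the support per iteration, followed by a least-squares refit of the coefficients. For brevity the candidates are ranked by residual correlation, which is a common simplification of the exact OLS selection rule; all problem sizes and coefficients below are arbitrary:

```python
import numpy as np

def mols(A, y, K, L):
    """MOLS sketch: add L columns per iteration, then least-squares refit."""
    n = A.shape[1]
    support, x_s = [], np.zeros(0)
    r = y.copy()
    for _ in range(K):                          # at most K iterations
        if np.linalg.norm(r) < 1e-10:
            break
        scores = np.abs(A.T @ r)
        scores[support] = -np.inf               # never re-select a column
        support.extend(np.argsort(scores)[-L:].tolist())
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s             # refit residual
    x = np.zeros(n)
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
m, n, K, L = 40, 80, 4, 2
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                  # unit ell-2-norm columns, as in the text
x_true = np.zeros(n)
idx = rng.choice(n, K, replace=False)
x_true[idx] = np.array([3.0, -2.5, 2.0, -2.0])  # made-up sparse coefficients
x_hat = mols(A, A @ x_true, K, L)
```

Because up to LK columns can enter the support, conditions on the RIP constant are naturally stated at order LK (or LK + 1), as in the bounds above.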

IEEE Transactions on Geoscience and Remote Sensing - new TOC (2018 July 16) [Website]

* "Front Cover," vol.56(7), pp.C1-C1, July 2018.* Presents the front cover for this issue of the publication.

* "IEEE Transactions on Geoscience and Remote Sensing publication information," vol.56(7), pp.C2-C2, July 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of contents," vol.56(7), pp.3629-4234, July 2018.* Presents the table of contents for this issue of the publication.

Huan Luo;Cheng Wang;Chenglu Wen;Ziyi Chen;Dawei Zai;Yongtao Yu;Jonathan Li; "Semantic Labeling of Mobile LiDAR Point Clouds via Active Learning and Higher Order MRF," vol.56(7), pp.3631-3644, July 2018. Using mobile Light Detection and Ranging point clouds to accomplish road scene labeling tasks shows promise for a variety of applications. Most existing methods for semantic labeling of point clouds require a huge number of fully supervised point cloud scenes, where each point needs to be manually annotated with a specific category. Manually annotating each point in point cloud scenes is labor intensive and hinders practical usage of those methods. To alleviate this huge burden of manual annotation, in this paper, we introduce an active learning method that avoids annotating whole point cloud scenes by iteratively annotating a small portion of unlabeled supervoxels and creating a minimal manually annotated training set. To avoid the biased sampling of traditional active learning methods, a neighbor-consistency prior is exploited to select potentially misclassified samples into the training set and improve the accuracy of the statistical model. Furthermore, many methods consider only short-range contextual information in semantic labeling tasks and ignore the long-range contexts among local variables. In this paper, we use a higher order Markov random field model to take more contexts into account when refining the labeling results, despite the lack of fully supervised scenes. Evaluations on three data sets show that our proposed framework achieves high labeling accuracy although only a small portion of labels is provided. Moreover, comparative experiments demonstrate that our proposed framework is superior to traditional sampling methods and exhibits performance comparable to fully supervised models.

Zefa Yang;Zhiwei Li;Jianjun Zhu;Axel Preusse;Jun Hu;Guangcai Feng;Huiwei Yi;Markus Papst; "An Alternative Method for Estimating 3-D Large Displacements of Mining Areas from a Single SAR Amplitude Pair Using Offset Tracking," vol.56(7), pp.3645-3656, July 2018. Measuring 3-D mining-induced displacements is essential to understand mining deformation mechanisms and assess mining-related geohazards. In our previous work, we proposed a method for estimating 3-D mining-induced large displacements with the surface deformation along the radar line-of-sight (LOS) direction derived from a single amplitude pair (SAP) of synthetic aperture radar (SAR) using the offset tracking (OT) procedure (hereafter referred to as OT-SAP). The OT-SAP method effectively reduces the strict requirements on SAR data of the previous OT-based methods for 3-D mining-induced displacement retrieval. However, OT-SAP is not robust to errors in the LOS deformation, due to the lack of redundant observations. In this paper, we present an alternative approach (hereafter called AOT-SAP) to OT-SAP. The AOT-SAP method involves estimating the 3-D mining-induced large displacements with OT-derived 2-D deformation observations along the LOS and azimuth directions from an SAP of SAR, instead of just the LOS deformation in the OT-SAP method. Consequently, more redundant observations are incorporated in the AOT-SAP method compared with the previous OT-SAP method. The theoretical analysis and experiments based on both simulated and real data sets suggest that AOT-SAP can effectively improve the accuracies of the estimated 3-D displacements compared with the OT-SAP-estimated ones.

Xiaojun Liu;Shinan Lang;Bo Zhao;Feng Zhang;Qing Liu;Chuanjun Tang;Dezhi Li;Guangyou Fang; "High-Resolution Ice-Sounding Radar Measurements of Ice Thickness Over East Antarctic Ice Sheet as a Part of Chinese National Antarctic Research Expedition," vol.56(7), pp.3657-3666, July 2018. This paper presents the ice thickness, fine resolution internal reflecting horizons (IRHs), and distinct bottom topography measurements of Chinese Kunlun Station and Grove Mountains, Antarctica, derived from sounding these glaciers with a high-resolution radar. To enable the development of next-generation ice-sheet models, we need information on IRHs, bottom topography, and basal conditions. To this end, we performed measurements with the progressively improved ice-sounding radar system, currently known as the high-resolution ice-sounding radar developed by the Key Laboratory of Electromagnetic Radiation and Sensing Technology of Institute of Electronics, Chinese Academy of Sciences, Beijing, China. We processed the collected data using focused synthetic aperture radar (SAR) algorithm named the modified range migration algorithm using curvelets and the modified nonlinear chirp scaling algorithm to improve radar sensitivity and reduce along-track surface clutter. Representative results from selected transects indicate that we successfully sounded 3-km-thick ice with a fine resolution of 0.75 m. In this paper, we provide a brief description of the radar system, discuss the focused SAR processing algorithms, and provide sample results to demonstrate the successful sounding of the ice sheet in Antarctica.

Xinyu Wang;Yanfei Zhong;Yao Xu;Liangpei Zhang;Yanyan Xu; "Saliency-Based Endmember Detection for Hyperspectral Imagery," vol.56(7), pp.3667-3680, July 2018. This paper focuses on the endmember extraction (EE) technique for analyzing hyperspectral images. We first prove that the reconstruction errors (REs) and abundance anomalies (AAs) (abundances that fail to satisfy the abundance constraints) are effective in extracting undetected endmembers. Then, given the spatial continuity of endmember objects, and in contrast to noise or outliers, which have a sparse distribution, the endmembers are assumed to be located in salient areas of the RE and AA maps. A novel EE algorithm termed saliency-based endmember detection (SED) is proposed, where the visual saliency model is introduced to explore and analyze the spatial information contained in the AA and RE maps. Specifically, the AA and RE maps are regarded as the visual inputs, whereas the endmembers are treated as the visual stimuli. In SED, we assume that the pure pixel assumption holds. Based on the characteristics of the human visual system, the proposed method can not only extract endmembers in homogenous areas, but also highlight small targets whose abundances may be spatially varied. In addition, since the spatial information is exploited in the reconstruction, the capability of the endmembers to represent the hyperspectral scene is automatically considered in the process of EE, and the detected endmembers are both accurate and reliable. The experimental results obtained on both simulated and real hyperspectral data confirm the merits and viability of the proposed algorithm.

Min Chen;Danyang Liu;Kejian Qian;Jun Li;Mengling Lei;Yi Zhou; "Lunar Crater Detection Based on Terrain Analysis and Mathematical Morphology Methods Using Digital Elevation Models," vol.56(7), pp.3681-3692, July 2018. Lunar impact craters are the most typical geomorphic feature on the moon and are of great importance in studies of lunar terrain features. This paper presents a crater detection algorithm (CDA) that is based on terrain analysis and mathematical morphology methods. The proposed CDA is applied to digital elevation models (DEMs) to identify the boundaries of impact craters. The topographic and morphological characteristics of impact craters are discussed, and detailed steps are presented to detect different types of craters, such as dispersal craters, connective craters, and con-craters. The DEM from the Lunar Reconnaissance Orbiter, which has a resolution of 100 m, is used to verify the proposed CDA. The results show that the boundaries of impact craters can be detected. The results enable increased understanding of surface processes through the characterization of crater morphometry and the use of crater size-frequency distributions to estimate the ages of planetary surfaces.
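
The morphological part of such a CDA can be illustrated on a 1-D terrain profile: a grey-scale closing fills depressions narrower than the structuring element, so the difference closing − DEM highlights crater-like pits. This is a toy sketch (1-D instead of a full DEM, with made-up sizes), not the paper's complete terrain-analysis pipeline:

```python
import numpy as np

def grey_close_1d(a, size):
    """Flat grey-scale closing: running max (dilation) then running min (erosion)."""
    pad = size // 2
    dil = np.lib.stride_tricks.sliding_window_view(
        np.pad(a, pad, mode='edge'), size).max(axis=1)
    return np.lib.stride_tricks.sliding_window_view(
        np.pad(dil, pad, mode='edge'), size).min(axis=1)

profile = np.zeros(128)                         # flat terrain (elevation in m)
profile[30:46] -= 50.0                          # a 16-sample-wide, 50 m deep pit
closed = grey_close_1d(profile, 21)             # SE wider than the pit fills it
detected = (closed - profile) > 10.0            # depth threshold in metres
```

Because the structuring element is wider than the pit, the closing restores the surrounding terrain level over it, and the thresholded difference marks exactly the depression; on a real DEM the same idea runs in 2-D with additional shape tests for the different crater types.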

Yong Wang;Xuefei Chen; "3-D Interferometric Inverse Synthetic Aperture Radar Imaging of Ship Target With Complex Motion," vol.56(7), pp.3693-3708, July 2018. A novel algorithm for 3-D interferometric inverse synthetic aperture radar (InISAR) imaging of a ship target with complex motion via an orthogonal double baseline is presented. For a ship target with a certain translational velocity and 3-D rotation, the distance between any scatterer on the target and the radar is analyzed in detail, and the keystone transform is used to reduce the impact of migration through resolution cells for large ship targets. Then, the fractional Fourier transform is adopted to obtain the 2-D ISAR image of the target, and the mismatch among the ISAR images obtained by the three radars is resolved by an image coregistration method based on the 1-D range profile. Finally, the 3-D InISAR image of the ship target is obtained by interferometric operation on the three ISAR images. The effectiveness of the proposed method is demonstrated by simulation results, and the influence of different motion parameters on the 3-D imaging of the ship target is analyzed as well.

Takao Kobayashi;Jung-Ho Kim;Seung Ryeol Lee; "HF (5 MHz) Imaging of the Moon by Kaguya Lunar Radar Sounder Off Nadir Echo Data," vol.56(7), pp.3709-3714, July 2018. HF (5 MHz) imaging of the lunar surface was attempted using off-nadir echo data of the Kaguya Lunar Radar Sounder (LRS). LRS observation data from multiple orbits were processed and mapped onto a surface defined from the Kaguya Digital Elevation Model. The transmitting/receiving antenna of LRS is a dipole antenna that illuminates the lunar surface on both sides of the orbit; consequently, in a single-orbit observation, a detected target location is ambiguous with respect to the side of the orbit. However, using multiple-orbit data makes it possible to resolve this ambiguity, i.e., the radar illumination can be controlled. We demonstrated this HF imaging technique by reconstructing surface images of the Rupes Recta region using LRS observation data from 61 orbits. Control of the radar illumination was confirmed by the presence/absence of the Rupes Recta image in the reconstructed surface images. As anticipated, the reconstructed surface images contained false images, which were identified as mirror images of major surface features. We also simulated these 61-orbit LRS observations over the Rupes Recta site using the Kirchhoff-approximation Surface Scattering (KiSS) simulation code. Comparison of the LRS observation images with the KiSS simulation images revealed some discrepancies. Our interpretation is that the discrepancies are attributable to shallow subsurface scattering, which the KiSS simulation does not take into account. This implies the possibility of imaging shallow lava tubes with LRS, although we did not find one in this particular site of the Rupes Recta region.

Suman Singha;Malin Johansson;Nicholas Hughes;Sine Munk Hvidegaard;Henriette Skourup; "Arctic Sea Ice Characterization Using Spaceborne Fully Polarimetric L-, C-, and X-Band SAR With Validation by Airborne Measurements," vol.56(7), pp.3715-3734, July 2018. In recent years, spaceborne synthetic aperture radar (SAR) polarimetry has become a valuable tool for sea ice analysis. Here, we employ an automatic sea ice classification algorithm on two sets of spatially and temporally near coincident fully polarimetric acquisitions from the ALOS-2, Radarsat-2, and TerraSAR-X/TanDEM-X satellites. Overlapping coincident sea ice freeboard measurements from airborne laser scanner data are used to validate the classification results. The automated sea ice classification algorithm consists of two steps. In the first step, we perform a polarimetric feature extraction procedure. Next, the resulting feature vectors are ingested into a trained neural network classifier to arrive at a pixelwise supervised classification. Coherency matrix-based features that require an eigendecomposition are found to be either of low relevance or redundant to other covariance matrix-based features, which makes coherency matrix-based features dispensable for the purpose of sea ice classification. Among the most useful features for classification are matrix invariant-based features (geometric intensity, scattering diversity, and surface scattering fraction). Classification results show that 100% of the open water is separated from the surrounding sea ice and that the sea ice classes have at least 96.9% accuracy. This analysis reveals analogous results for both X-band and C-band frequencies and slightly different for the L-band. The subsequent classification produces similarly promising results for all four acquisitions. In particular, the overlapping image portions exhibit a reasonable congruence of detected sea ice when compared with high-resolution airborne measurements.

Richard K. Martin;Christian Keyser;Luke Ausley;Michael Steinke; "LADAR System and Algorithm Design for Spectropolarimetric Scene Characterization," vol.56(7), pp.3735-3746, July 2018. We present a new active imaging architecture that enables rapid spectropolarimetric imaging in a compact system. The transmitter laser produces a synchronous cascade of closely spaced wavelengths, each of which is modulated with a unique amplitude pattern. Temporal multiplexing of the optical return signals is used to reduce system cost, size, weight, and power. This architecture enables a single laser and a single detector for all wavelengths and polarization states. This in turn enables pixel-by-pixel scene characterization, which could be used for target identification in future work. The basic hardware and software architecture will be presented in addition to technologies that are being investigated to develop this system architecture. We will introduce analytical expressions for the temporally multiplexed transmitted and detected signals based on the proposed hardware configuration. We then derive optimal and computationally efficient algorithms for the estimation of the overall range per pixel and the reflectivity per wavelength per pixel, as well as the Cramer–Rao lower bound on estimator variance. The bound and simulated performance yield guidelines for the system parameters required to achieve a desired level of fidelity of the spectral and polarimetric reflectivity. The proposed system is validated with laboratory data and explored via simulations.

Shizhen Chang;Bo Du;Liangpei Zhang; "BASO: A Background-Anomaly Component Projection and Separation Optimized Filter for Anomaly Detection in Hyperspectral Images," vol.56(7), pp.3747-3761, July 2018. Many hyperspectral anomaly detectors are designed based on the traditional Mahalanobis distance-based RX algorithm, which is usually considered an inverse operation of principal components analysis. Such detectors include the uniform target detector (UTD) algorithm, the RX-UTD algorithm, and so on. However, the possibility of background statistical contamination caused by anomalies still exists. To alleviate this problem, in this paper, we propose a spectral matched filter (the background-anomaly component projection and separation optimized filter) that minimizes the average output energy of the separate image components and the output values of the weighted background regular term for hyperspectral image anomaly detection, which strengthens the separation between anomalies and backgrounds. By calculating the optimal solution of the background-anomaly component projection and separation function, we obtain the optimal projection, which effectively suppresses the background while highlighting the anomalies. The proposed algorithm has two main advantages: 1) it creates a novel collaborative component projection and robust background optimization function to separate the background and anomalies and 2) it analyzes the intrinsic statistical distribution of pixels and applies an appropriate iterative shrinkage-thresholding algorithm to solve the ℓ1-min problem. Experiments were conducted on three real hyperspectral data sets. The detection results demonstrate that the proposed algorithm is superior to other state-of-the-art anomaly detection algorithms.
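
For context, the Mahalanobis-distance RX baseline that such detectors build on can be written in a few lines. This is a sketch on synthetic data, not the proposed BASO filter; the cube size, band count, and implanted anomaly are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
h, w, bands = 32, 32, 10
cube = rng.standard_normal((h, w, bands))       # synthetic background clutter
cube[5, 7] += 8.0                               # implant one anomalous pixel

pixels = cube.reshape(-1, bands)
mu = pixels.mean(axis=0)                        # global background mean
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
d = pixels - mu
rx = np.einsum('ij,jk,ik->i', d, cov_inv, d)    # squared Mahalanobis distance
top = np.unravel_index(np.argmax(rx), (h, w))   # highest-scoring pixel
```

Note that the anomaly itself contaminates the global mean and covariance here, which is exactly the weakness of RX-style detectors that the paper's background-anomaly separation is designed to address.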

Yuchen He;Shitao Zhu;Guoxiang Dong;Songlin Zhang;Anxue Zhang;Zhuo Xu; "Resolution Analysis of Spatial Modulation Coincidence Imaging Based on Reflective Surface," vol.56(7), pp.3762-3771, July 2018. Spatial modulation coincidence imaging (SMCI), a novel kind of microwave coincidence imaging method, is proposed in this paper. The SMCI system provides a new way to produce the time-space independent signal, instead of the multitransmitter architecture with wideband randomly modulated signals used in radar coincidence imaging. Owing to its special features, a metamaterial plate is utilized as the reflective surface to modulate the incident signal and construct a random radiation field. The resolution of the SMCI system is analyzed under a large viewing angle with two different transmitting signals, and the reflective surface is nonuniformly divided to derive the expression for the resolution. The analysis results show that the resolution of the SMCI system is mainly determined by the size of the reflective surface and the center frequency, similarly to a traditional aperture. The SMCI system is low cost and flexible in design, and it avoids the synchronization problem between subsources. Moreover, the SMCI system can resolve a space target with a single-transmitter, single-receiver radar system. A high-resolution image can be reconstructed since the tests are nonlinear. Finally, a series of simulation experiments is presented based on the proposed nondirect-viewing scene, and using an algorithm based on compressed sensing theory, we reconstructed the target image with high resolution.

Baiyuan Ding;Gongjian Wen; "Target Reconstruction Based on 3-D Scattering Center Model for Robust SAR ATR," vol.56(7), pp.3772-3785, July 2018. This paper proposes a robust synthetic aperture radar (SAR) automatic target recognition method based on the 3-D scattering center model. The 3-D scattering center model is established offline from the CAD model of the target using a forward method, which can efficiently predict the 2-D scattering centers as well as the scattering field of the target at arbitrary poses. For the SAR images to be classified, the 2-D scattering centers are extracted based on the attributed scattering center model and matched with the predicted scattering center set using a neighbor matching algorithm. The selected model scattering centers are used to reconstruct an SAR image based on the 3-D scattering center model, which is compared with the test image to reach a robust similarity. The designed similarity measure comprehensively considers the image correlation between the test image and the model-reconstructed image as well as the model redundancy in describing the test image. For target recognition, the model with the highest similarity is determined to be the target type of the test SAR image, provided the image is not rejected as an outlier. Experiments are conducted on both the data simulated by an electromagnetic code and the data measured in the moving and stationary target acquisition recognition program under standard operating condition and various extended operating conditions to validate the effectiveness and robustness of the proposed method.

Donghai Zheng;Rogier van der Velde;Jun Wen;Xin Wang;Paolo Ferrazzoli;Mike Schwank;Andreas Colliander;Rajat Bindlish;Zhongbo Su; "Assessment of the SMAP Soil Emission Model and Soil Moisture Retrieval Algorithms for a Tibetan Desert Ecosystem," vol.56(7), pp.3786-3799, July 2018. The Soil Moisture Active Passive (SMAP) satellite mission launched in January 2015 provides worldwide soil moisture (SM) monitoring based on L-band brightness temperature (TBp) measurements at horizontal (TBH) and vertical (TBV ) polarizations. This paper presents a performance assessment of the SMAP soil emission model and SM retrieval algorithms for a Tibetan desert ecosystem. It is found that the SMAP emission model largely underestimates the SMAP-measured TBH (≈ 15 K), and the TBV is underestimated during dry-down episodes. A cold bias is noted for the SMAP effective temperature due to underestimation of soil temperature, leading to the TBp underestimation (>5 K). The remaining TBH underestimation is found to be related to the surface roughness parameterization, which underestimates its effect on modulating the TBp measurements. Further, the topography and uncertainty of soil information are found to have minor impacts on the TBp simulations. The SMAP baseline SM products produced by the single-channel algorithm (SCA) using the TBV measurements capture the measured SM dynamics well, while an underestimation is noted for the dry-down periods because of the TBV underestimation. The products based on the SCA with TBH measurements underestimate the SM due to underestimation of TBH, and the dual-channel algorithm overestimates the SM. After implementing a new surface roughness parameterization and improving the soil temperature and texture information, the deficiencies noted above in TBp simulation and SM retrieval are greatly resolved.
This indicates that the SMAP SM retrievals can be enhanced by improving both the surface roughness parameterization and the adopted soil temperature and texture information for the Tibetan desert ecosystem.

Jie Mei;Liqiang Zhang;Yuebin Wang;Zidong Zhu;Huiqian Ding; "Joint Margin, Cograph, and Label Constraints for Semisupervised Scene Parsing From Point Clouds," vol.56(7), pp.3800-3813, July 2018. To parse large-scale urban scenes using supervised methods, a large amount of training data that can account for the vast visual and structural variance of the urban environment is necessary. Unfortunately, such training data are mostly obtained by tedious and time-consuming manual work. To overcome this drawback, we propose a semisupervised learning framework that combines the margin, cograph, and label constraints into an objective function for point cloud parsing. Mathematically, the margin constraint is presented to learn a novel distance criterion that can effectively recognize points of different classes. The graph regularization is then employed to characterize the intrinsic geometric structure of the data manifold and explore relationships among points. The label consistency regularization is introduced to ensure the category consistency of the clustered points and single points. To classify the out-of-sample data, the framework transforms the semisupervised classification results into a linear classifier by adopting linear regression. An iterative algorithm is utilized to efficiently and effectively optimize the objective function, which involves multiple variables and is highly nonlinear. The point clouds of four urban scenes are used to validate our method. The experimental results show that our method outperforms the state-of-the-art algorithms.

Jincheng Li;Jie Chen;Pengbo Wang;Otmar Loffeld; "A Coarse-to-Fine Autofocus Approach for Very High-Resolution Airborne Stripmap SAR Imagery," vol.56(7), pp.3814-3829, July 2018. An autofocus operation is an indispensable procedure to obtain well-focused images for synthetic-aperture radar (SAR) systems without precise navigation devices. Three challenges arise in very high-resolution (VHR) airborne SAR autofocus due to the long cumulative time: the varying along-track velocity, the residual range cell migration (RCM), and the range-dependent phase errors with higher order components. When it comes to the stripmap mode, the autofocus becomes more complicated, since a scenario with only a few strong scatterers is more likely to be encountered with a moving beam. Combining the merits of parametric and nonparametric autofocus algorithms, a robust motion error estimation method is proposed in this paper. First, we perform a stripmap multiaperture mapdrift autofocus operation to extract the along-track velocity and the most range-invariant errors, removing the residual RCM at a subaperture scale. Second, one referential center block is selected to retrieve the residual range-invariant error, which can eliminate the residual RCM globally in the range dimension. Finally, with a global high-quality input, the residual range-variant phase errors can be retrieved precisely utilizing a center-to-edge local maximum-likelihood weighted phase gradient autofocus kernel. Experiments on real VHR airborne stripmap SAR data are performed to demonstrate the robustness of the proposed method.

Shunichiro Fujinami;Ryo Natsuaki;Kazuhide Ichikawa;Akira Hirose; "Experimental Analysis on the Mechanisms of Singular Point Generation in InSAR by Employing Scaled Optical Interferometry," vol.56(7), pp.3830-3837, July 2018. Interferograms obtained in interferometric synthetic aperture radar (InSAR) often suffer from decorrelation and singular points (SPs) originating from thermal noise and interference. To analyze the phenomenon, this paper first presents the results of a scaled optical experiment free from thermal noise, where the SP origin is interference. We find that the amplitude of the SP-constructing pixels, namely, the singular unit, and of nearby pixels is lower than that of other pixels. This amplitude reduction is enhanced by the multilooking process. These results suggest that the number of effective scatterers in a single pixel has decreased to such an extent that individual interference has become visible. We also conduct the same analysis on SAR data. We find that plain areas show the same features as the optical experiment, implying the same mechanisms of SP generation. In contrast, sea areas present no localization, indicating thermal noise in the electronics as the major cause. It is widely known that interference among many incoherent scattered waves produces a Rayleigh or similar distribution in its amplitude as a result of the central limit theorem. As the number of scatterers decreases, the amplitude becomes log-normal or follows another distribution. However, no analysis has been reported on the local properties in cases where the central limit theorem does not hold. Investigation of such local properties will also be useful in designing SP filters. Noncentral-limit-theorem situations will become increasingly important in the use of SAR data, whose resolution will become even higher in the near future.

Jiaojiao Li;Qian Du;Yunsong Li;Wei Li; "Hyperspectral Image Classification With Imbalanced Data Based on Orthogonal Complement Subspace Projection," vol.56(7), pp.3838-3851, July 2018. Conventional classification algorithms have shown great success for balanced classes. In remote sensing applications, it is often the case that classes are imbalanced. This paper proposes a novel solution to solve the problem of imbalanced training samples in hyperspectral image classification. It consists of two parts: one is for large-size sample sets and the other is for small-size sets. Specifically, an algorithm based on the orthogonal complement subspace projection (OCSP) is proposed to select samples from large-size classes, and an algorithm also based on OCSP is proposed to create artificial samples for small-size ones. The impact on representation-based classifiers, i.e., sparse and collaborative representation classifiers and traditional classifiers (e.g., support vector machine), is investigated. Experimental results demonstrate that the proposed solution can outperform other existing solutions in the literature.
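To illustrate the orthogonal complement subspace projection (OCSP) idea behind the sample selection above, the following is a minimal greedy selection sketch: each new sample is the one with the largest residual after projecting onto the span of the samples already chosen, using the complement projector P = I - U(UᵀU)⁻¹Uᵀ. The function name, the greedy seeding rule, and the stopping criterion are our own assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def ocsp_select(X, k):
    """Greedily pick k representative samples via orthogonal complement
    subspace projection.

    X: (n_samples, n_bands) array. Returns indices of the selected samples.
    """
    n, d = X.shape
    # Seed with the sample of largest norm (an illustrative choice).
    selected = [int(np.argmax(np.linalg.norm(X, axis=1)))]
    for _ in range(k - 1):
        U = X[selected].T                              # (d, m) current basis
        # Projector onto the orthogonal complement: P = I - U (U^T U)^+ U^T
        P = np.eye(d) - U @ np.linalg.pinv(U.T @ U) @ U.T
        residuals = np.linalg.norm(X @ P, axis=1)      # P is symmetric
        residuals[selected] = -np.inf                  # never re-pick a sample
        selected.append(int(np.argmax(residuals)))
    return selected
```

Samples lying close to the span of those already selected contribute little new information, which is why residual magnitude is a natural selection score for shrinking large classes.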

Motofumi Arii;Hiroyoshi Yamada;Masato Ohki; "Characterization of L-Band MIMP SAR Data From Rice Paddies at Late Vegetative Stage," vol.56(7), pp.3852-3860, July 2018. Insufficient validation of polarimetric decomposition techniques has limited their popularity for more than 20 years. Knowing the true composition ratio of scattering mechanisms within a radar backscatter is essential for operational polarimetric synthetic aperture radar (SAR) applications. To achieve this, a novel comprehensive approach that accurately identifies the contribution of each scattering mechanism through multi-incidence angle and multipolarimetric (MIMP) SAR observation combined with a theoretical model simulation is newly applied to L-band SAR data. Rice paddies in Niigata, Japan, which have a simple vegetation structure without topography, were observed by Polarimetric and Interferometric Airborne SAR L-band 2, gradually varying the flight path in terms of incidence angle. In addition to the MIMP SAR observation, the dominant scattering mechanism is reliably isolated through the theoretical characterization of the data by a discrete scatterer model. Avoiding the unnecessary Bragg scattering effect caused by the methodically distributed rice paddies, the volume scattering from grains is identified as the dominant scattering mechanism over all incidence angles. In addition, HH and VV are strongly affected by the double-bounce scattering between stalks and the ground surface at only small incidence angles, whereas the contribution of the double-bounce scattering to HV is not obvious. As our next step, the results at the L-band will be compared with other MIMP SAR data at the X-band obtained over the same rice paddies in 2014, so that a multifrequency MIMP SAR data analysis can be conducted.

Xiangrong Zhang;Jingyan Zhang;Chen Li;Cai Cheng;Licheng Jiao;Huiyu Zhou; "Hybrid Unmixing Based on Adaptive Region Segmentation for Hyperspectral Imagery," vol.56(7), pp.3861-3875, July 2018. Unmixing is an important issue for hyperspectral images. Most unmixing methods adopt linear mixing models for simplicity. However, multiple scattering usually occurs between vegetation and soil in a bilinear scene. Thus, nonlinear mixing problems, which are difficult to solve, should be taken into consideration under this circumstance. In practice, both linear and nonlinear spectral mixtures exist in hyperspectral scenes. Considering the characteristics of different regions in images, we propose a hybrid unmixing algorithm for hyperspectral images based on adaptive region segmentation. Our method uses a standard K-means clustering algorithm to obtain different regions, including homogeneous regions and detailed regions. The model of the homogeneous regions is assumed to be linear and is solved using sparse-constrained nonnegative matrix factorization (NMF), whereas the mixing in the detailed regions is assumed to follow a nonlinear model. We also propose a new nonlinear unmixing method, called graph-regularized semi-NMF, which considers the manifold structure of hyperspectral data, to deal with the detailed regions. Finally, by combining the two regions, we obtain the abundance of the whole hyperspectral image. The proposed method can not only achieve more precise abundance estimates but also preserve the edge information of the bilinear abundance. The experimental results on both synthetic and real data also show that the proposed method is effective for improving the unmixing accuracy of hyperspectral remote-sensing images.
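The linear branch above builds on plain NMF; as background, a minimal Lee-Seung multiplicative-update NMF sketch (Frobenius loss, without the sparsity constraint or graph regularization that the paper adds) looks like this. The function name and iteration count are our own illustrative choices.

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates.

    V: (bands, pixels) nonnegative data matrix.
    Returns W (bands x r, endmember-like factors) and
            H (r x pixels, abundance-like factors) with V ~= W @ H.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The sparse-constrained variant used for the homogeneous regions adds a sparsity penalty on H to these updates, encouraging each pixel to be explained by few endmembers.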

Xiaoying Ouyang;Dongmei Chen;Yonghui Lei; "A Generalized Evaluation Scheme for Comparing Temperature Products from Satellite Observations, Numerical Weather Model, and Ground Measurements Over the Tibetan Plateau," vol.56(7), pp.3876-3894, July 2018. Ground surface temperature (GST) measurements are scarce on the Tibetan plateau (TP), whereas satellite observations and numerical weather model outputs are good alternatives to fill the spatial gaps among ground stations. However, the evaluation of different temperature products is challenging due to the distinct temporal and spatial dimensions of their acquisition methods. This paper develops an evaluation framework for comparing the performances of various temperature data, including the Advanced Along-Track Scanning Radiometer (AATSR) satellite land surface temperature (LST) data, the high Asia refined (HAR) analysis numerical outputs, and the GST. In the proposed framework, we introduce a diurnal temperature cycle model and an aggregated weighting method to solve the temporal and spatial mismatch problem between different data sets. The results over the TP show that the evaluation framework solves the temporal and spatial matching among different data sets. AATSR LST and HAR outputs are consistent regardless of the heterogeneous land surface and weather conditions at the Linzhi site, indicating that fully homogeneous land surface conditions are not a prerequisite for satellite/simulation validations. Our results suggest that the proposed framework of time normalization and spatial aggregation is appropriate for evaluating satellite thermal infrared retrieved data sets and numerical simulations even when proper ground measurements are insufficient. Since it performs well in the high-elevation TP region with its complex land surface conditions, it can easily be adopted in other regions with a variety of data sets.

Guanglong Xing;Hongyu Xu;Fernando L. Teixeira; "Evaluation of Eccentered Electrode-Type Resistivity Logging in Anisotropic Geological Formations With a Matrix Method," vol.56(7), pp.3895-3902, July 2018. A robust and succinct matrix-based method is developed to simulate the response of eccentered resistivity logging tools in anisotropic geological formations. This is done by first deriving the solution of the state vector describing the potential and the current component along the radial direction in a cylindrical system. Rescaled cylindrical eigenfunctions are employed to overcome the poor scaling inherent to the canonical cylindrical functions for very small or very large arguments. A Levin-type method for approximating integrals with rapidly oscillatory functions is introduced to effectively perform the numerical integrals. The influence of both the translational and rotational eccentricities of the logging tool within the borehole surrounded by an anisotropic formation is studied based on numerical results provided by the present matrix method. We found that neither translational nor rotational eccentricity can be neglected, especially in low-resistive or anisotropic geological formations.

Dalei Hao;Jianguang Wen;Qing Xiao;Shengbiao Wu;Xingwen Lin;Dongqin You;Yong Tang; "Modeling Anisotropic Reflectance Over Composite Sloping Terrain," vol.56(7), pp.3903-3923, July 2018. Heterogeneous terrain significantly complicates signals received by airborne or satellite sensors. It has been demonstrated that both solar direct beam and diffuse skylight illumination conditions are significant factors influencing the anisotropy of reflectance over mountainous areas. Several models and methods have been developed to account for topographic effects on surface reflectance at the pixel level in remote sensing. However, subtopographic effects are generally neglected for low-spatial-resolution pixels due to the complex law of radiative transfer and the limitations of higher spatial resolution digital elevation models, which can lead to deviations in reflectance estimation. Accurately estimating the subtopographic effects on anisotropic reflectance over composite sloping terrain under different illumination conditions presents a challenge for remote sensing models and applications. In this paper, the diffused equivalent slope model (dESM) was developed, which is an anisotropic reflectance simulation model coupled with diffuse skylight over composite sloping terrain. The corresponding subtopographic impact factor was also proposed to show how microslope topography affects reflectance over composite sloping terrain under different illumination conditions. Reflectance data sets simulated by the radiosity method and Moderate Resolution Imaging Spectroradiometer reflectance data were used to evaluate the performance of the dESM model. The results reveal that the dESM model can accurately capture the reflectance anisotropy over composite sloping terrain under different illumination conditions, and the subtopographic impact factor can account for the effects of microslope topography, shadow, and illumination conditions.

N. Acito;M. Diani; "Atmospheric Column Water Vapor Retrieval From Hyperspectral VNIR Data Based on Low-Rank Subspace Projection," vol.56(7), pp.3924-3940, July 2018. The knowledge of atmospheric column water vapor concentration is crucial for compensating water absorption effects in remote sensing data. Several algorithms for the estimation of such a parameter were proposed in the past. One of the most effective algorithms is the atmospheric precorrected differential absorption (APDA) technique. APDA relies on a simplified radiative transfer model (RTM) that does not account for the spatial variability of the adjacency effects. In this paper, we study the impact of the simplified RTM assumption on the performance of the algorithm by exploiting a more realistic and well-established RTM. Starting from such a model, we derive a new water retrieval algorithm called low-rank subspace projection-based water estimator. It exploits the high degree of spectral correlation experienced in the reflectances of most of the existing materials. An extensive experimental analysis is carried out on simulated data in order to assess and compare the performance of the two algorithms. Simulation results allow the critical analysis of the two algorithms by highlighting their strengths and drawbacks.

Baptist Vandersmissen;Nicolas Knudde;Azarakhsh Jalalvand;Ivo Couckuyt;André Bourdoux;Wesley De Neve;Tom Dhaene; "Indoor Person Identification Using a Low-Power FMCW Radar," vol.56(7), pp.3941-3952, July 2018. Contemporary surveillance systems mainly use video cameras as their primary sensor. However, video cameras possess fundamental deficiencies, such as the inability to handle low-light environments, poor weather conditions, and concealing clothing. In contrast, radar devices are able to sense in pitch-dark environments and to see through walls. In this paper, we investigate the use of micro-Doppler (MD) signatures retrieved from a low-power radar device to identify a set of persons based on their gait characteristics. To that end, we propose a robust feature learning approach based on deep convolutional neural networks. Given that we aim at providing a solution for a real-world problem, people are allowed to walk around freely in two different rooms. In this setting, the IDentification with Radar data data set is constructed and published, consisting of 150 min of annotated MD data equally spread over five targets. Through experiments, we investigate the effectiveness of both the Doppler and time dimension, showing that our approach achieves a classification error rate of 24.70% on the validation set and 21.54% on the test set for the five targets used. When experimenting with larger time windows, we are able to further lower the error rate.

Lei Liu;Feng Zhou;Xueru Bai;John Paisley;Hongbing Ji; "A Modified EM Algorithm for ISAR Scatterer Trajectory Matrix Completion," vol.56(7), pp.3953-3962, July 2018. The anisotropy of radar cross section of scatterers makes the scatterer trajectory matrix incomplete in sequential inverse synthetic aperture radar images. As a result, factorization methods cannot be directly applied to reconstruct the 3-D geometry of scatterers without additional consideration. We propose a modified expectation-maximization (EM) algorithm to retrieve the complete scatterer trajectory matrix. First, we derive the motion dynamics of the projected scatterer, which approximates an ellipse. Then, based on the estimated ellipse parameters using the known data of each scatterer trajectory, we use the Kalman filter to initialize the missing data. To address the limitations of a traditional EM, which only considers the rank-deficient characteristics of the scatterer trajectory matrix, we propose to augment EM by using both the known rank-deficient and elliptical motion characteristics. Experimental results on simulated data verify the effectiveness of the proposed method.
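The rank-deficiency constraint exploited by the modified EM above can be illustrated with a generic EM-style low-rank matrix completion (iterative SVD imputation): the E-step fills missing trajectory entries with the current low-rank estimate, and the M-step refits a rank-r approximation. This sketch deliberately omits the paper's Kalman-filter initialization and elliptical-motion constraint, so it is only a baseline for the rank-deficient part.

```python
import numpy as np

def lowrank_complete(M, mask, rank, n_iter=200):
    """EM-style low-rank matrix completion via iterative SVD imputation.

    M:    matrix whose unobserved entries may hold arbitrary values.
    mask: boolean array, True where M is observed.
    rank: assumed rank of the underlying complete matrix.
    """
    X = np.where(mask, M, 0.0)                     # crude initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # M-step: rank-r fit
        X = np.where(mask, M, L)                   # E-step: re-impute gaps
    return X
```

For ISAR trajectories the complete matrix is rank-deficient because all scatterer projections share a common 3-D motion, which is what makes a low-rank fit a sensible E-step model.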

Jianlai Chen;Mengdao Xing;Guang-Cai Sun;Yuexin Gao;Wenkang Liu;Liang Guo;Yang Lan; "Focusing of Medium-Earth-Orbit SAR Using an ASE-Velocity Model Based on MOCO Principle," vol.56(7), pp.3963-3975, July 2018. The available focusing algorithms for medium-Earth-orbit (MEO) SAR are all based on the complex nonhyperbolic range equation, which complicates imaging processing. In this paper, we model the range equation in the standard hyperbolic form based on the motion compensation (MOCO) principle. However, the conventional two-step MOCO may introduce azimuth spectrum expansion due to the potentially large motion error, which can lead to severe azimuth ambiguity. To resolve this problem, we develop an omega-K algorithm based on a modified two-step MOCO and an adaptively straight equivalent (ASE)-velocity model. The algorithm is implemented through three-step processing: 1) the modified two-step MOCO does not compensate for the quadratic motion error (the main factor in the spectrum expansion); 2) an ASE-velocity model is introduced to compensate for the quadratic motion error; and 3) an extended Stolt mapping is proposed to perform the accurate range cell migration correction, and the tandem singular value decomposition-nonlinear chirp scaling algorithm is used to correct the azimuth-variant phase error and to perform the azimuth compression. Processing of simulated data and airborne SAR real data validates the effectiveness of the proposed algorithm.

Hannah Joerg;Matteo Pardini;Irena Hajnsek;Konstantinos P. Papathanassiou; "3-D Scattering Characterization of Agricultural Crops at C-Band Using SAR Tomography," vol.56(7), pp.3976-3989, July 2018. The aim of this paper is to interpret and characterize the changes of the 3-D polarimetric scattering signatures of agricultural crops at C-band and to relate them to temporal changes of the soil and plant parameters. For this, a time series of multibaseline (MB) synthetic aperture radar (SAR) data acquired at C-band by the airborne F-SAR system of the German Aerospace Center over the Wallerfing test site in Germany was analyzed. The availability of MB SAR data enables the resolution of scattering contributions in height by means of SAR tomography. The tomographic profiles at different polarizations were analyzed regarding temporal changes for different crop types. First, it was investigated whether the center of mass (CoM) of the vertical reflectivity profiles, as a single parameter, enables the tracking of changes in soil and vegetation. The results show that the vertical reflectivity profiles and their CoM do not allow resolving the ambiguity of whether a change originates from soil or vegetation dynamics, as expected. Thus, the scattering contributions from ground and volume were separated in height, using a filtering approach, and used for the estimation of the ground and volume scattering powers by means of covariance matching. Comparing the outputs with coincident ground measurements showed that dielectric as well as geometric changes in the vegetation are traceable through the separated ground and volume powers. Finally, the estimated powers were analyzed with respect to orientation effects, i.e., polarimetric anisotropic behavior. These effects were found to be not significant for the crops under study at C-band.

Sayyed Hamed Alizadeh Moghaddam;Mehdi Mokhtarzade;Amin Alizadeh Naeini;AliReza Amiri-Simkooei; "A Statistical Variable Selection Solution for RFM Ill-Posedness and Overparameterization Problems," vol.56(7), pp.3990-4001, July 2018. Parameters of a rational function model (RFM), known as rational polynomial coefficients, are commonly redundant and highly correlated, leading to the problems of overparameterization and ill-posedness, respectively. In this paper, an innovative two-stage statistical method, called an uncorrelated and statistically significant RFM (USS-RFM), is presented to deal directly with these two problems. In the first stage, the proposed method employs a novel correlation analysis, which aims to exclude highly correlated coefficients. In the second stage, a new iterative significance test is applied to detect and remove unnecessary coefficients from the RFM. The proposed method is implemented on eight real data sets captured by Cartosat-1, GeoEye-1, Pleiades, Spot-3, and WorldView-3 platforms. The results are evaluated in terms of the positioning accuracy, model degrees of freedom, processing time, and figure condition analysis. Experimental results prove the efficiency of the proposed method, showing that it can achieve subpixel accuracy even for cases with five ground control points. The proposed USS-RFM is compared to an ℓ1-norm regularization (L1R) technique and a particle swarm optimization (PSO) algorithm in the terrain-dependent case of the RFM. The results demonstrate the superiority of the USS-RFM, which performs better than the alternative methods in terms of positioning accuracy by more than 50% on average. Moreover, the RFMs resulting from the USS-RFM are shown to have higher degrees of freedom and, as a result, a higher level of reliability. In terms of processing time, the USS-RFM and L1R are similar, while both are much faster than PSO.

Pengyuan Lv;Yanfei Zhong;Ji Zhao;Liangpei Zhang; "Unsupervised Change Detection Based on Hybrid Conditional Random Field Model for High Spatial Resolution Remote Sensing Imagery," vol.56(7), pp.4002-4015, July 2018. High spatial resolution (HSR) remote sensing images provide detailed geometric information about land cover. As a result, it is possible to detect more subtle changes with the help of HSR images. However, due to the increased spatial resolution and the limited spectral information, it is difficult to identify the real changes only through the spectral feature of the image. To fully explore the spectral-spatial information and improve the change detection performance for HSR images, this paper proposes the hybrid conditional random field (HCRF) model, which combines the traditional random field method with an object-based technique. In the proposed method, the spectral discriminative information of a single pixel is extracted by the unary potential, which is modeled using a soft clustering method to make an initial separation of changed and unchanged pixels. The pairwise potential then considers the contextual information of adjacent pixels to favor spatial smoothing. An object term is also introduced in the HCRF model to keep the homogeneity of changed objects. By the use of these approaches, the oversmoothing problem of the random field-based methods and the detection error caused by the segmentation strategy in the object-based methods can be relieved. The proposed method was tested on three HSR image data sets and outperformed the compared state-of-the-art techniques.

Alexander P. Trishchenko; "Reprojection of VIIRS SDR Imagery Using Concurrent Gradient Search," vol.56(7), pp.4016-4024, July 2018. The application of the gradient search method for reprojection of Visible Infrared Imaging Radiometer Suite (VIIRS) satellite data record (SDR) imagery is described. The method is an extension of the scheme developed earlier for reprojection of Moderate Resolution Imaging Spectroradiometer (MODIS) L1B imagery. The new scheme has three important improvements: 1) the interscan and intrascan search steps are combined into a single step to save computational time; 2) one-sided (left-right, up-down) gradients are utilized to improve convergence; and 3) the use of the map projection instead of the latitude-longitude coordinate system to improve performance and robustness. The scheme is computationally very fast, employing only basic arithmetic operations and precalculated matrices of spatial gradients. An average number of iteration steps for the reprojection of mid-latitude quadruple VIIRS SDR granule is less than 1.5, i.e., the scheme usually converges in less than 2 iterations. The ambiguity in the overlapping areas due to the bow-tie effect is resolved by forcing a solution located closer to the scan line center. In addition, the accuracy of VIIRS imagery geo-location was evaluated by comparison against MODIS 250 m images. Absolute geolocation biases of the VIIRS imagery over the 1-year period from June 01, 2016 to May 01, 2017 were found on average to be within 0.004 and -0.003 of the sample size (Δ line) in the along-track direction and 0.055 and 0.035 of the sample size (Δ pixel) in the along-scan direction for bands I2 and M7, respectively. These results demonstrate the excellent geometric performance of the VIIRS Suomi National Polar-orbiting Partnership sensor and are consistent with those reported by the VIIRS geolocation teams.

Robert W. Jansen;Raghu G. Raj;Luke Rosenberg;Mark A. Sletten; "Practical Multichannel SAR Imaging in the Maritime Environment," vol.56(7), pp.4025-4036, July 2018. The U.S. Naval Research Laboratory (NRL) recently developed an X-band airborne multichannel synthetic aperture radar (MSAR) test bed system that consists of 32 along-track phase centers. This system was deployed in September 2014 and again in October 2015 to perform extensive and systematic data collections on a variety of land and maritime scenes under different environmental conditions. This paper presents a detailed experimental analysis of imaging in the maritime domain using data captured by the NRL MSAR system. After presenting some of the important details of our NRL MSAR system, we demonstrate velocity-based imaging of a variety of moving backscatter sources including ships and shoaling ocean waves. Our analysis is based on the velocity SAR (VSAR) technique, which was originally conceived by Friedlander and Porat. Practical application of this algorithm in the maritime domain requires a number of pre- and postprocessing stages, which are described here in detail. Our results are then benchmarked against the traditional along-track interferometry, where it is demonstrated that VSAR processing is better able to correctly compensate motion-induced distortion.

Jianping Wang;Pascal Aubry;Alexander Yarovoy; "Wavenumber-Domain Multiband Signal Fusion With Matrix-Pencil Approach for High-Resolution Imaging," vol.56(7), pp.4037-4049, July 2018. In this paper, a wavenumber-domain matrix-pencil-based multiband signal fusion approach is proposed for multiband microwave imaging. The proposed approach is based on the Born approximation of the field scattered from a target, which implies that, in a given scattering direction, the scattered field can be represented over the whole frequency band as a sum of the same number of contributions. Exploiting the measured multiband data and taking advantage of the parametric modeling for the signals in a radial direction, a unified signal model can be estimated for a large bandwidth in the wavenumber domain. It can be used to fuse the signals at different subbands by extrapolating the missing data in the frequency gaps between them or coherently integrating the overlaps between the adjacent subbands, thus synthesizing an equivalent wideband signal spectrum. After an inverse Fourier transform, the synthesized spectrum yields a focused image with improved resolution. Compared with the space–time domain fusion methods, the proposed approach is applicable for radar imaging with the signals collected by either collocated or noncollocated arrays in different frequency bands. Its effectiveness and accuracy are demonstrated through both numerical simulations and experimental imaging results.
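The matrix-pencil step at the heart of the fusion can be illustrated with a toy one-dimensional sketch: the poles of a sum of complex exponentials are estimated from one "measured" subband and used to extrapolate samples into a gap. Everything below (signal, pencil parameter, model order) is a synthetic illustration, not the paper's formulation:

```python
import numpy as np

# Toy matrix-pencil sketch: model the wavenumber-domain signal as a sum
# of complex exponentials, estimate the poles from one subband, and
# extrapolate into the unmeasured gap.
z_true = np.exp(1j * np.array([0.3, 0.8]))   # true signal poles (illustrative)
amps = np.array([1.0, 0.6])
n = np.arange(40)
x = (amps[None, :] * z_true[None, :] ** n[:, None]).sum(axis=1)

# Matrix pencil on the first 25 samples (the "measured" subband).
L, p = 12, 2                                  # pencil parameter, model order
seg = x[:25]
Y = np.array([seg[i:i + L + 1] for i in range(len(seg) - L)])
Y1, Y2 = Y[:, :-1], Y[:, 1:]
eigs = np.linalg.eigvals(np.linalg.pinv(Y1, rcond=1e-8) @ Y2)
poles = eigs[np.argsort(-np.abs(eigs))][:p]   # keep the p dominant poles

# Solve for amplitudes on the measured subband, then extrapolate.
nm = np.arange(25)
Vm = poles[None, :] ** nm[:, None]
a_hat, *_ = np.linalg.lstsq(Vm, seg, rcond=None)
x_hat = (a_hat[None, :] * poles[None, :] ** n[:, None]).sum(axis=1)
gap_err = np.max(np.abs(x_hat[25:] - x[25:]))
print("max extrapolation error in the gap:", gap_err)
```

In the noiseless case the extrapolated samples match the true signal to numerical precision, which is the property the fusion scheme relies on when bridging frequency gaps between subbands.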

Fei Li;Xiuwei Zhang;Lei Zhang;Dongmei Jiang;Yanning Zhang; "Exploiting Structured Sparsity for Hyperspectral Anomaly Detection," vol.56(7), pp.4050-4064, July 2018. Sparse representation-based background modeling has facilitated much recent progress in hyperspectral anomaly detection (AD). The sparse representation of the background often exhibits underlying structure, which is crucial to distinguishing between background and anomaly. However, how to exploit such underlying structure is still challenging. To address this problem, we present a novel hyperspectral AD method, which can exploit the structured sparsity to model the background more accurately. With the plausible background area detected by a local RX detector, a robust background spectrum dictionary is learned via principal component analysis. A reweighted Laplace prior-based structured sparse representation model is then employed to reconstruct the spectrum of each pixel. By considering the structured sparsity in the representation, the background pixels can be reconstructed more accurately than the anomalous ones, which can thus be detected based on the reconstruction error. To further improve the detection performance, an intracluster reconstruction model is developed to exploit the spatial similarity among the background pixels in the same cluster. The anomaly pixels can then be detected based on the intracluster reconstruction error. By linearly combining these two detection results, a clear improvement in detection accuracy is achieved. Experimental results on both simulated and real-world data sets demonstrate that the proposed method outperforms several state-of-the-art hyperspectral AD methods.

Stephen L. Durden; "Relating GPM Radar Reflectivity Profile Characteristics to Path-Integrated Attenuation," vol.56(7), pp.4065-4074, July 2018. The Global Precipitation Measurement (GPM) mission was launched in February 2014; its dual-frequency precipitation radar (DPR) operates at both Ku- and Ka-bands. Attenuation in precipitation is typically not negligible, especially at Ka-band. Hence, attenuation correction is an important part of the GPM DPR retrieval algorithm. The operational algorithm uses a path-integrated attenuation (PIA) obtained by comparing the measured surface return with that expected either from a nearby, nonprecipitating area or from the same area, acquired at a previous, nonprecipitating time. This surface reference technique has worked well in most situations but can result in erroneously low estimates of the path attenuation in situations with nonuniform filling of the radar beam, especially at Ka-band due to its larger attenuation. This paper explores the existence of relationships between the Ka-band PIA and the characteristics of the measured reflectivity profiles. The author finds that PIA is, indeed, related to reflectivity profiles, with the strongest correlation occurring between the PIA and the measured rainfall dual-frequency reflectivity ratio just above the surface. This relationship could be used as an estimator in cases with severe nonuniform beam filling.
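The reported PIA-reflectivity relationship suggests a simple regression estimator. The sketch below fits a line between a synthetic near-surface dual-frequency reflectivity ratio (DFR) and a Ka-band PIA; the linear coefficients and noise level are invented for illustration, since the paper reports only a strong correlation:

```python
import numpy as np

# Synthetic illustration: relate Ka-band path-integrated attenuation
# (PIA) to the measured dual-frequency reflectivity ratio (DFR, dB)
# just above the surface.
rng = np.random.default_rng(0)

dfr_db = rng.uniform(0.0, 15.0, 200)                     # DFR = Z_Ku - Z_Ka (dB)
pia_db = 1.8 * dfr_db + 0.5 + rng.normal(0.0, 0.8, 200)  # assumed relation + noise

# Least-squares line PIA ≈ a*DFR + b, usable as a simple estimator.
a, b = np.polyfit(dfr_db, pia_db, 1)
corr = np.corrcoef(dfr_db, pia_db)[0, 1]

def estimate_pia(dfr):
    """Estimate Ka-band PIA (dB) from the near-surface DFR (dB)."""
    return a * dfr + b

print(f"fit: PIA = {a:.2f}*DFR + {b:.2f}, r = {corr:.3f}")
```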

Tran Vu La;Ali Khenchaf;Fabrice Comblet;Carole Nahum; "Assessment of Wind Speed Estimation From C-Band Sentinel-1 Images Using Empirical and Electromagnetic Models," vol.56(7), pp.4075-4087, July 2018. Surface wind speed estimation from synthetic aperture radar (SAR) data is principally based on empirical (EP) approaches, e.g., CMOD functions. However, it is important to compare radar backscattering modeling based on EP and electromagnetic (EM) approaches in order to enhance the understanding of the physical processes between the radar signal and the sea surface, which is important for the design of radar sensors (e.g., cyclone global navigation satellite system). Indeed, such comparisons show that wave-breaking scattering is not taken into account in the physical modeling of radar backscattering. Surface wind speed is selected here as a reference parameter for investigating the difference between EP and EM models, due to its important role in radar backscattering modeling. In addition, wind speed estimates can be easily compared to in situ measurements. For the EP approach, CMOD5.N and Komarov's model are selected for wind speed estimation from Sentinel-1 images. CMOD5.N can offer wind speed estimates up to 25-35 m/s, while wind speed estimation based on Komarov's model does not require wind direction input. For the EM approach, the asymptotic models, i.e., the composite two-scale model, small-slope approximation (SSA), and resonant curvature approximation (RCA), are investigated for wind speed retrieval. They are studied with two models of the surface roughness spectrum: a semi-EP spectrum and an EP model. In general, the normalized radar cross section (NRCS) calculated by CMOD5.N and SSA/RCA is quite similar for incidence angles below 40° in vertical polarization and below 30° in horizontal polarization. For larger angles, significant NRCS deviations between the two approaches are demonstrated, due to the lack of wave-breaking scattering in the EM models. As a result, wind speed estimates by CMOD5.N and SSA/RCA are very close for low and moderate incidence angles, while SSA-/RCA-based wind speeds are overestimated for larger ones.

Jake Mashburn;Penina Axelrad;Stephen T. Lowe;Kristine M. Larson; "Global Ocean Altimetry With GNSS Reflections From TechDemoSat-1," vol.56(7), pp.4088-4097, July 2018. TechDemoSat-1 (TDS-1) is an experimental Global Navigation Satellite System Reflections (GNSS-R) satellite launched in 2014. The GNSS-R receiver onboard performs real-time navigation and generates delay-Doppler correlation maps for Earth-reflected Global Positioning System (GPS) L1 C/A ranging signals. This paper investigates the performance of the TDS-1 data for ocean surface altimetry retrievals. The analysis includes consideration of the transmitter and receiver orbits, time tag corrections, models for ionospheric and tropospheric delays, zenith to nadir antenna baseline offsets, ocean and solid Earth tides, and a comparison with mean sea surface topography. An error budget is compiled to account for each error source and compared with the experimentally derived surface height retrievals. By analyzing data sets covering global ocean surfaces over ±60° latitude, the current performance of spaceborne GNSS-R altimetry with the TDS-1 data set is experimentally established. In comparison with the mean sea surface topography, the surface height residuals are found to be 6.4 m (1σ) with a 1-s integration time. A discussion of the factors limiting this performance is presented, with implications for future GNSS-R altimetry missions designed for the observation of mesoscale ocean circulation.

Tatsumi Uezato;Mathieu Fauvel;Nicolas Dobigeon; "Hyperspectral Image Unmixing With LiDAR Data-Aided Spatial Regularization," vol.56(7), pp.4098-4108, July 2018. Spectral unmixing (SU) methods incorporating spatial regularization have attracted increasing interest. Although spatial regularizers that promote smoothness of the abundance maps have been widely used, they may overly smooth these maps and, in particular, may not preserve edges present in the hyperspectral image. Existing unmixing methods usually ignore these edge structures or use edge information derived from the hyperspectral image itself. However, this information may be affected by large amounts of noise or variations in illumination, leading to erroneous spatial information being incorporated into the unmixing procedure. This paper proposes a simple yet powerful SU framework that incorporates external data [i.e., light detection and ranging (LiDAR) data]. The LiDAR measurements can be easily exploited to adjust the standard spatial regularizations applied to the unmixing process. The proposed framework is rigorously evaluated using two simulated data sets and a real hyperspectral image. It is compared with methods that rely on spatial information derived from a hyperspectral image. The results show that the proposed framework can provide better abundance estimates and, more specifically, can significantly improve the abundance estimates for the pixels affected by shadows.

Homa Ansari;Francesco De Zan;Richard Bamler; "Efficient Phase Estimation for Interferogram Stacks," vol.56(7), pp.4109-4125, July 2018. Signal decorrelation poses a limitation to multipass SAR interferometry. In pursuit of overcoming this limitation to achieve high-precision deformation estimates, different techniques have been developed, with short baseline subset, SqueeSAR, and CAESAR as the overarching schemes. These different analysis approaches raise the question of their efficiency and limitations in phase and, consequently, deformation estimation. This contribution first addresses this question and then proposes a new estimator with improved performance, called Eigendecomposition-based Maximum-likelihood-estimator of Interferometric phase (EMI). The proposed estimator combines the advantages of the state-of-the-art techniques. Identical to CAESAR, EMI is solved using eigendecomposition; it is therefore computationally efficient and straightforward to implement. Similar to SqueeSAR, EMI is a maximum-likelihood-estimator; hence, it retains estimation efficiency. The computational and estimation efficiency of EMI renders it an optimal choice for phase estimation. A further marriage of EMI with the proposed Sequential Estimator by Ansari et al. provides an efficient processing scheme tailored to the analysis of Big InSAR Data. EMI is formulated and verified in relation to the state-of-the-art approaches via mathematical formulation, simulation analysis, and experiments with time series of Sentinel-1 data over the volcanic island of Vulcano, Italy.
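The eigendecomposition idea shared by CAESAR and EMI can be sketched in a few lines: for a simulated coherent pixel stack, the principal eigenvector of the sample covariance matrix carries a consistent interferometric phase series. This toy version omits the coherence weighting that makes EMI a maximum-likelihood estimator, so it is closer in spirit to CAESAR's first component than to the full EMI:

```python
import numpy as np

# Simplified eigendecomposition-based phase linking on simulated data.
rng = np.random.default_rng(1)

n_img, n_looks = 6, 500
true_phase = np.array([0.0, 0.4, -0.7, 1.1, 0.2, -0.3])

# Correlated SAR samples: a common complex signal carrying the true
# phases, plus independent noise (partial decorrelation).
signal = rng.standard_normal(n_looks) + 1j * rng.standard_normal(n_looks)
data = np.exp(1j * true_phase)[:, None] * signal[None, :]
data += 0.3 * (rng.standard_normal((n_img, n_looks))
               + 1j * rng.standard_normal((n_img, n_looks)))

# Sample covariance matrix over the looks.
cov = data @ data.conj().T / n_looks

# The principal eigenvector carries the consistent phase series.
vals, vecs = np.linalg.eigh(cov)
phase_est = np.angle(vecs[:, -1])
phase_est = np.angle(np.exp(1j * (phase_est - phase_est[0])))  # reference image 0

err = np.angle(np.exp(1j * (phase_est - true_phase)))
print("max phase error (rad):", np.max(np.abs(err)))
```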

Hélène Sportouche;Antoine Roueff;Pascale C. Dubois-Fernandez; "Precision of Vegetation Height Estimation Using the Dual-Baseline PolInSAR System and RVoG Model With Temporal Decorrelation," vol.56(7), pp.4126-4137, July 2018. Estimating vegetation height from polarimetric interferometric synthetic aperture radar (PolInSAR) data using the Random Volume over Ground model has motivated several studies. Most of these propose estimators and apply them to real data to demonstrate their potential. In previous publications on the single-baseline system, we proposed a complementary approach, which consisted of analyzing the precision of the vegetation height estimates that can be expected depending on the considered model and on the available a priori knowledge. In this paper, we develop such an analysis for the case of a dual-baseline (DB) system. We consider the DB configuration with a PolInSAR set obtained with three PolSAR acquisitions; the extinction coefficient of the volume is assumed unknown, and the level of temporal decorrelation is assumed to be unknown. The observed high sensitivity of the vegetation height Cramer-Rao bound (CRB) with respect to the system parameters and the vegetation characteristics shows that the system optimization cannot guarantee 1-m precision for all vegetation heights, even for large estimation windows with N=2000 pixels. Nevertheless, an operating regime exists for which the vegetation height estimation precision is around 1 m for N=200 pixels. This regime is obtained for a pair of wavenumbers (0.06 and 0.25 m⁻¹), for vegetation heights ranging over [20, 50] m, and for polarimetric contrast between the ground and the volume larger than 0.3. Furthermore, we investigate the performance of a maximum-likelihood estimator and compare this to the precision given by the CRB. For the examples considered, with N=200 pixels, we observed convergence issues of the estimator when the polarimetric contrast is smaller than or equal to 0.3.

Maryam Hajebi;Ahad Tavakoli;Mojtaba Dehmollaian;Parisa Dehkhoda; "An Iterative Modified Diffraction Tomography Method for Reconstruction of a High-Contrast Buried Object," vol.56(7), pp.4138-4148, July 2018. An iterative subsurface inverse scattering algorithm has been proposed to profile high-contrast 2-D dielectric objects buried under a lossy ground. The proposed iterative modified diffraction tomography (IMDT) method is based on the combination of the traditional iterative Born method and DT technique which is well known for its simplicity and robustness. In effect, IMDT is an iterative Born algorithm that utilizes the spectral domain concept of DT for solving the inversion problem. The proposed iterative approach results in removing the Born approximation’s limitation of DT technique in dealing with high-contrast scatterers. To this end, the total field inside the reconstruction domain is renewed and then expanded into upgoing and downgoing plane waves in each iteration. Consequently, by exponential expansion of the fields and deriving a modified DT formulation, high-contrast targets are also efficiently reconstructed. To assess the proposed IMDT method, various high-contrast objects and noise conditions are studied. It is shown that the IMDT algorithm significantly outperforms the DT technique and is capable of reconstructing high-contrast objects efficiently and accurately even in noisy environments.

Wei Yao;Jan van Aardt;Martin van Leeuwen;Dave Kelbe;Paul Romanczyk; "A Simulation-Based Approach to Assess Subpixel Vegetation Structural Variation Impacts on Global Imaging Spectroscopy," vol.56(7), pp.4149-4164, July 2018. Consistent and scalable estimation of vegetation structural parameters (essential to understanding forest ecosystems) is widely investigated through remote sensing imaging spectroscopy. NASA's proposed spaceborne mission, the Hyperspectral Infrared Imager (HyspIRI), will measure spectral radiance from 380 to 2500 nm in 10-nm contiguous bands with a 60-m ground sample distance (GSD) and provide a global benchmark from which future changes can be assessed. The historic foci of spectrometers have been foliar/canopy biochemistry and species classification; however, given the relatively large GSD of a spaceborne instrument, there is uncertainty as to the effects of subpixel vegetation structure on observed radiance. This paper, therefore, evaluates the linkages between the within-pixel vegetation structure and imaging spectroscopy signals at the pixel level. We constructed a realistic virtual forest scene representing the National Ecological Observatory Network (NEON) Pacific Southwest domain site. Anticipated HyspIRI data (60-m GSD) for this site were then simulated using the physics-driven Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. Both models were first validated via comparison to overflights of the classic Airborne Visible/Infrared Imaging Spectrometer and NEON's imaging spectrometer (NIS). Then, to assess the impact of within-pixel 1) tree canopy cover (CC); 2) tree positioning; and 3) tree distribution on large-footprint HyspIRI signals, we generated variations of the baseline virtual forest scene and measured the anticipated spectral radiance using DIRSIG. Results indicate that HyspIRI is sensitive to subpixel vegetation structural variation in the visible to short-wavelength infrared spectrum. This has implications for improving the system's suitability for consistent global vegetation structural assessments by adapting calibration strategies to account for this subpixel variation.

Chen Yi;Yong-Qiang Zhao;Jonathan Cheung-Wai Chan; "Hyperspectral Image Super-Resolution Based on Spatial and Spectral Correlation Fusion," vol.56(7), pp.4165-4177, July 2018. Super-resolution image reconstruction has been utilized to overcome the spatial resolution limitation of hyperspectral (HS) imaging. To improve the spatial resolution of HS images, this paper proposes an HS-multispectral (MS) fusion method, which exploits spatial and spectral correlations and proper regularization. The high spatial correlation between the MS image and the desired high-resolution HS image is conserved via an overcomplete dictionary, and the spectral degradation between them, projected onto the sparse domain, is applied as the spectral constraint. The high spectral correlation between the high-spatial- and low-spatial-resolution HS images is preserved through linear spectral unmixing. The interactive feedback proposed in our previous work is also used to couple the spatial reconstruction and unmixing. A low-rank property is introduced in this paper to regularize the sparse coefficients of the HS patch matrix, which is utilized as the spatial constraint. Experiments on both simulated and real data sets demonstrate that the proposed fusion algorithm achieves lower spectral distortions and that the super-resolution results are superior to those of other state-of-the-art methods.

Li Yan;Ruixi Zhu;Yi Liu;Nan Mo; "Scene Capture and Selected Codebook-Based Refined Fuzzy Classification of Large High-Resolution Images," vol.56(7), pp.4178-4192, July 2018. Scene classification has been successfully applied to the semantic interpretation of large high-resolution images (HRIs). The bag-of-words (BOW) model has been proven to be effective but inadequate for HRIs because of the complex arrangement of the ground objects and the multiple types of land cover. How to define the scenes in HRIs is still a problem for scene classification. The previous methods involve selecting the scenes manually or with a fixed spatial distribution, leading to scenes with a mixture of objects from different categories. In this paper, to address these issues, a scene capture method using adjacent segmented images and a support vector machine classifier is proposed to generate scenes dominated by one category. The codebook in BOW is obtained from clustering features extracted from all the categories, which may lose the discrimination in some vocabularies. Thus, more discriminative visual vocabularies are selected by the introduced mutual information and the proposed intraclass variability balance in each category, to decrease the redundancy of the codebook. In addition, a refined fuzzy classification strategy is presented to avoid misclassification in similar categories. The experimental results obtained with three different types of HRI data sets confirm that the proposed method obtains classification results better than those obtained by most of the previous methods in all the large HRIs, demonstrating that the selection of representative vocabularies, the refined fuzzy classification, and the scene capture strategy are all effective in improving the performance of scene classification.

Zhi He;Jun Li;Kai Liu;Lin Liu;Haiyan Tao; "Kernel Low-Rank Multitask Learning in Variational Mode Decomposition Domain for Multi-/Hyperspectral Classification," vol.56(7), pp.4193-4208, July 2018. Multitask learning (MTL) has recently yielded impressive results for classification of remotely sensed data due to its ability to incorporate shared information across multiple tasks. However, it remains a challenging issue to achieve robust classification results in the case that the data are from nonlinear subspaces. In this paper, we propose a kernel low-rank MTL (KL-MTL) method to handle multiple features from the 2-D variational mode decomposition (2-D-VMD) domain for multi-/hyperspectral classification. On the one hand, a nonrecursive 2-D-VMD method is applied to extract various features [i.e., intrinsic mode functions (IMFs)] of the original data concurrently. Compared with the existing 2-D empirical mode decomposition, 2-D-VMD has much stronger mathematical foundation and does not need any recursive sifting process. On the other hand, KL-MTL is proposed for classification by taking the extracted IMFs as features of multiple tasks. In KL-MTL, the low-rank representation formulated by nuclear norm can capture global structure of multiple tasks, while the kernel tricks are utilized for nonlinear extension of the low-rank MTL. Moreover, the optimization problem in KL-MTL is solved by the inexact augmented Lagrangian method. Compared with several state-of-the-art feature extraction and classification methods, the experimental results using both multi-/hyperspectral images demonstrate that the proposed method has satisfactory classification performance.

Yu-Xin Sun;Peng Liu;Ya-Qiu Jin; "Ship Wake Components: Isolation, Reconstruction, and Characteristics Analysis in Spectral, Spatial, and TerraSAR-X Image Domains," vol.56(7), pp.4209-4224, July 2018. Based on a joint analysis of linear Kelvin wake kinematics and the water dispersion relation, the features of several components of the ship wake are identified in the spectral domain, such as the "X"-shaped Kelvin wake, the narrow "X"-shaped solitary wave packets, and the cross-shaped turbulent and near-field waves. Alternatively, these components can be separately reconstructed in the spatial domain using the inverse Fourier transformation. These relations are verified through numerical simulation of the wakes of a ship moving at different speeds. This wake decomposition is then extended to wake feature analysis of real synthetic aperture radar (SAR) images. It reveals that although the images of ship wakes have been modulated by SAR imaging mechanisms in various aspects, their spectral characteristics are closely analogous to those of the wake surface elevation. Taking advantage of the loci and shape of the wake spectrum, the transverse wave, the divergent wave, the turbulent wave, and the solitary wave packets can be isolated from the original SAR image with full wake appearance. The reconstructed images of the wake components facilitate the further estimation of the direction, speed, length, hull geometry, and propulsion system of the ship. This decomposition can also recover wake components from multiple ship wakes and provide an understanding of their roles in SAR images.
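The spectral-domain isolation step can be illustrated with a toy two-component image: masking the 2-D FFT around one component's locus and inverse-transforming recovers that component alone. The mask geometry here is a simple wavenumber box, not the Kelvin-wake loci derived in the paper:

```python
import numpy as np

# Toy spectral-masking sketch: isolate one wave component of an image
# by keeping only its region of the 2-D spectrum.
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]

# Synthetic "image": two plane-wave components at different wavenumbers.
comp_a = np.cos(2 * np.pi * (8 * x / nx))                  # low-wavenumber wave
comp_b = np.cos(2 * np.pi * (30 * x / nx + 20 * y / ny))   # high-wavenumber wave
img = comp_a + comp_b

# Mask the spectrum: keep only wavenumbers near component A's locus.
spec = np.fft.fft2(img)
kx = np.fft.fftfreq(nx) * nx
ky = np.fft.fftfreq(ny) * ny
kxg, kyg = np.meshgrid(kx, ky)
mask = (np.abs(kxg) < 15) & (np.abs(kyg) < 15)

# Inverse FFT of the masked spectrum reconstructs component A alone.
recon = np.real(np.fft.ifft2(spec * mask))
rms_err = np.sqrt(np.mean((recon - comp_a) ** 2))
print("RMS reconstruction error:", rms_err)
```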

Zhongdong Yang;Yuquan Zhen;Zenshan Yin;Chao Lin;Yanmeng Bi;Wu Liu;Qian Wang;Long Wang;Songyan Gu;Longfei Tian; "Prelaunch Radiometric Calibration of the TanSat Atmospheric Carbon Dioxide Grating Spectrometer," vol.56(7), pp.4225-4233, July 2018. TanSat is an important satellite in the Chinese Earth Observation Program which is designed to measure global atmospheric CO2 concentrations from space. The first Chinese superhigh-resolution grating spectrometer for measuring atmospheric CO2 is aboard TanSat. This spectrometer is a suite of three grating spectrometers that make coincident measurements of reflected sunlight in the near-infrared CO2 bands near 1.61 and 2.06 μm and in the molecular oxygen A-band (O2A) at 0.76 μm. Their spectral resolving power (λ/Δλ) is ~19000, ~12800, and ~12250 in the O2A band, the weak CO2 absorption band, and the strong CO2 absorption band, respectively. This paper describes the laboratory radiometric calibration of the spectrometer suite, which consists of measurements of the dark current response, gain coefficients, and signal-to-noise ratio (SNR). The SNRs of each channel meet the mission requirements for the O2A and weak CO2 bands but slightly miss the requirements in a few channels in the strong CO2 band. The gain coefficients of the three bands have a negligible random error component and achieve very good stability. Most of the R-squared values of the gain-coefficient models contain five nines after the decimal point (e.g., 0.99999), indicating excellent response linearity. The radiometric calibration results meet the requirements of an absolute calibration uncertainty of less than 5%.

* "IEEE Transactions on Geoscience and Remote Sensing information for authors," vol.56(7), pp.C3-C3, July 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Transactions on Geoscience and Remote Sensing institutional listings," vol.56(7), pp.C4-C4, July 2018.* Presents the GRSS society institutional listings.

IEEE Geoscience and Remote Sensing Letters - new TOC (2018 July 16) [Website]

* "Front Cover," vol.15(7), pp.C1-C1, July 2018.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Letters publication information," vol.15(7), pp.C2-C2, July 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of contents," vol.15(7), pp.969-1136, July 2018.* Presents the table of contents for this issue of the publication.

D. Venkata Ratnam;J. R. K. Kumar Dabbakuti;N. V. V. N. J. Sri Lakshmi; "Improvement of Indian-Regional Klobuchar Ionospheric Model Parameters for Single-Frequency GNSS Users," vol.15(7), pp.971-975, July 2018. In general, global positioning system (GPS) ranging and positioning errors caused by the ionosphere can be corrected by the Klobuchar ionospheric model. GPS satellites broadcast the model coefficients to single-frequency users based on the average solar flux and seasonal variations. In low-latitude regions, such as India and Brazil, correction of the ionospheric delay based on these coefficients is not accurate because of the large gradients and complex dynamic ionospheric behavior. The traditional practice of refining the Klobuchar model parameters with a single-shell approximation is inappropriate for equatorial/low-latitude regions. In this letter, we propose a technique to determine the ionospheric delay by using new Klobuchar parameters (coefficients) based on multishell-spherical harmonics function (MS-SHF) analysis. By using the MS-SHF model, the ionospheric delays can be modeled accurately in low-latitude regions. Furthermore, the proposed model's performance has been evaluated against the Denis Bouvet (2017) single-frequency ionospheric correction model. For single-frequency users, the proposed model improves the ionospheric delay correction by 62.69% and 77.08% on quiet and disturbed days, respectively. Preliminary results reveal that the refined Klobuchar model parameters impart enhanced ionospheric delay corrections to regional navigation satellite systems with single-frequency GPS receivers, such as the Indian Regional Navigation Satellite System.

Xinpeng Tian;Qiang Liu;Zhenwei Song;Baocheng Dou;Xiuhong Li; "Aerosol Optical Depth Retrieval From Landsat 8 OLI Images Over Urban Areas Supported by MODIS BRDF/Albedo Data," vol.15(7), pp.976-980, July 2018. This letter presents a new algorithm that allows the retrieval of the aerosol optical depth (AOD) at a high (500 m) spatial resolution from Landsat 8 Operational Land Imager (OLI) data over urban areas. Because of the complex structure over urban surfaces, the bidirectional reflectance characteristic is obvious; however, most of the current aerosol retrieval algorithms over land do not account for the anisotropic effect of the surface. This letter improves the quality of AOD retrieval by providing the surface reflectance based on the multiyear MODIS bidirectional reflectance distribution function (BRDF)/Albedo model parameters product (MCD43A1) and the RossThick-LiSparse reciprocal kernel-driven BRDF model. The ground-based Aerosol Robotic Network (AERONET) AOD measurements from five sites located in urban and suburban areas are used to validate the AOD retrievals, and the MODIS Terra Collection 6 (C6) dark target/deep blue AOD products (MOD04) at 10-km spatial resolution are obtained for comparison. The validation results show that the AOD retrievals from the OLI images are well correlated with the AERONET AOD measurements (R = 0.987), with a low root-mean-square error of 0.07, a mean absolute error of 0.036, and a relative mean bias of 1.029; approximately 95.3% of the collocations fall within the expected error. The analysis indicates that the BRDF is essential in ensuring the accuracy of AOD retrieval. Compared with the MOD04 AOD retrievals, the OLI AOD retrievals have better spatial continuity and higher accuracy. The new algorithm can provide continuous and detailed spatial distributions of the AOD over complex urban surfaces.
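The kernel-driven BRDF model underlying the MCD43A1-based surface reflectance is linear in three coefficients, R = f_iso + f_vol·K_vol + f_geo·K_geo. The sketch below computes the RossThick volumetric kernel and evaluates the model; the coefficient values and the geometric-kernel value are illustrative placeholders, not MODIS-derived, and the lengthy LiSparse-Reciprocal kernel is taken as a precomputed input:

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric kernel (angles in radians)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def brdf_reflectance(f_iso, f_vol, f_geo, k_vol, k_geo):
    """Linear kernel-driven surface reflectance."""
    return f_iso + f_vol * k_vol + f_geo * k_geo

# Example with illustrative (not MODIS-derived) parameter values:
# sun at 30°, nadir view, arbitrary geometric-kernel value.
theta_s, theta_v, phi = np.radians(30.0), np.radians(0.0), np.radians(0.0)
k_vol = ross_thick(theta_s, theta_v, phi)
r = brdf_reflectance(f_iso=0.12, f_vol=0.05, f_geo=0.02, k_vol=k_vol, k_geo=-1.2)
print(f"K_vol = {k_vol:.4f}, reflectance = {r:.4f}")
```

Given per-pixel kernel coefficients from MCD43A1, this forward model supplies the surface reflectance for the viewing geometry of each OLI observation, which is the anisotropy correction the letter exploits.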

Xianwei Wang;David M. Holland; "A Method to Calculate Elevation-Change Rate of Jakobshavn Isbrae Using Operation IceBridge Airborne Topographic Mapper Data," vol.15(7), pp.981-985, July 2018. To bridge the data gap between the Ice, Cloud, and land Elevation Satellite (ICESat) and the delayed forthcoming ICESat-2, the National Aeronautics and Space Administration launched the Operation IceBridge (OI) campaign in 2009, which has provided a valuable data set revealing snow and ice changes in the Arctic and Antarctic. In this letter, we employ the Airborne Topographic Mapper (ATM) data from OI to detect the five-year (2010 to 2015) elevation-change rate of Jakobshavn Isbrae (JI), the largest and fastest flowing outlet glacier in Greenland. Gridded OI ATM elevations are calculated, and a method to estimate the elevation-change rate from these gridded data is implemented. The comparison of repeat grid elevations indicates that our method can generate unbiased elevation data. The uncertainty of the grid elevation data using our method is approximately 1.0 m (standard deviation). The five-year average elevation-change rate of JI over regions with ice velocity ≥300 m/a between 2010 and 2015 was about -5.0 m/a (root mean square uncertainty: 0.7 m/a), with a maximum thinning rate of -18.2 m/a near the terminus of the main trunk. Compared with the previously reported elevation-change rates from earlier periods, JI has reached a record-high thinning rate.
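The per-cell elevation-change rate is, in essence, the slope of a line fitted to repeated elevations of the same grid cell. A minimal sketch with synthetic (not ATM) values:

```python
import numpy as np

# For one grid cell, fit a line to elevations from repeated campaigns;
# the slope is the elevation-change rate in m/a. Values are synthetic.
years = np.array([2010.3, 2011.4, 2012.4, 2013.4, 2014.4, 2015.4])
elev = np.array([155.0, 150.2, 145.1, 139.8, 135.3, 130.1])  # metres

rate, intercept = np.polyfit(years, elev, 1)
print(f"elevation-change rate: {rate:.2f} m/a")
```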

Zezong Chen;Fei Xie;Chen Zhao;Chao He; "Radio Frequency Interference Mitigation for High-Frequency Surface Wave Radar," vol.15(7), pp.986-990, July 2018. Radio frequency interference (RFI) is a common source of interference for high-frequency surface wave radar (HFSWR) since the HF band is shared among many radio services. The existence of RFI greatly degrades the detection performance of HFSWR. On the basis of a detailed analysis of the characteristics of RFI, a new RFI mitigation method based on joint fractional Fourier transform (FRFT) and complex empirical mode decomposition (CEMD) is proposed in this letter. In this method, the optimum transform order of the FRFT is obtained through a two-level peak search. Subsequently, echoes that contain RFI are transformed into the fractional Fourier domain and are then decomposed into a number of intrinsic mode functions (IMFs) via CEMD. The RFI is mitigated by implementing detection and localization in each IMF. The reconstructed signal is then transformed into the time domain via an inverse FRFT, and the mitigated signal is finally obtained. Processing results using experimental data indicate that the proposed method can effectively suppress RFI without losing echoes.

Henrique C. Oliveira;Vitor C. Guizilini;Israel P. Nunes;Jefferson R. Souza; "Failure Detection in Row Crops From UAV Images Using Morphological Operators," vol.15(7), pp.991-995, July 2018. The detection of failures (DF) in coffee crops is fundamental in evaluating product quality and the optimal occupation of planted areas. The use of unmanned aerial vehicles (UAVs) in precision agriculture has great potential as a tool to analyze critical parameters in cultivation, among them the detection of planting failures. This letter presents a novel methodology for DF from aerial images, obtained using a UAV capable of collecting high-resolution RGB images. The proposed approach uses mathematical morphology operators to detect failures over planted areas and returns both the individual positions of these failures and total failure length (sum of empty spaces between plants), thus facilitating decision making for further actions. Results show that the proposed DF method is reliable for accurately identifying failures over rows of planted coffee crops.
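The letter's two reported outputs (individual failure positions and total failure length) are easy to illustrate on a single detected crop row reduced to a binary plant/background mask: failures are background runs longer than a minimum length. The run-length sketch below is only an illustration of those outputs, not the authors' morphological pipeline; `row_gaps` and its parameters are hypothetical.

```python
import numpy as np

def row_gaps(mask, min_len=3):
    """Planting gaps along one crop row: runs of background (0) at least
    `min_len` pixels long. Returns (start, length) pairs and the total
    failure length (sum of gap lengths)."""
    gaps = []
    run = 0
    mask = np.asarray(mask).astype(int)
    for i, v in enumerate(mask):
        if v == 0:
            run += 1
        else:
            if run >= min_len:
                gaps.append((i - run, run))
            run = 0
    if run >= min_len:                      # gap reaching the row end
        gaps.append((len(mask) - run, run))
    return gaps, sum(length for _, length in gaps)

# 1 = plant present, 0 = empty: one 4-pixel gap, one too-short 2-pixel gap
gaps, total = row_gaps([1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1])
```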

Yuan Fang;Yunyun Hu;Qiwei Zhan;Qing Huo Liu; "Electromagnetic Forward and Inverse Algorithms for 3-D Through-Casing Induction Mapping of Arbitrary Fractures," vol.15(7), pp.996-1000, July 2018. This letter extends the axisymmetric version of the efficient forward and inverse algorithms to characterize and reconstruct arbitrary 3-D fractures in a cased borehole environment. We improve our previous hybrid distorted Born approximation and stabilized biconjugate gradient fast Fourier transform method as the 3-D forward modeling algorithm that can significantly reduce the computational cost in forward modeling. The bounding constraints and model parameter transformation functions are introduced to our previous axisymmetric variational Born iterative inverse method to effectively reconstruct 3-D fractures. Numerical results show orders of magnitude higher efficiency of the forward algorithm than the finite-element method, and the effectiveness of the inverse algorithm for through-casing 3-D fracture reconstruction.

Xiaodong Luan;Qingyun Di;Hongzhu Cai;Michael Jorgensen;Xiaojing Tang; "CSAMT Static Shift Recognition and Correction Using Radon Transformation," vol.15(7), pp.1001-1005, July 2018. The presence of shallow conductive heterogeneities can cause static shift in controlled source audio-frequency magnetotellurics (CSAMT) apparent resistivity sounding curves. This is observed as a shift along the apparent resistivity axis in double logarithmic coordinates. Such an effect can cause difficulties in CSAMT data interpretation. In this letter, we established a new method to identify and correct the CSAMT static shift based on the high-resolution Radon transformation. We took advantage of the property that the static shift of apparent resistivity curves behaves as a point in the Radon domain. We presented a 3-D synthetic study to demonstrate that the static shift can be effectively removed from the apparent resistivity curve. However, due to its low resolution and precision, the traditional Radon transform can generate a “scissor-tail” artifact, and the static shift may then not be completely removed. We proposed to use a new high-resolution Radon transform by improving the regularization matrix in the least squares inversion. Our numerical simulation shows that by using this high-resolution Radon transform, the static shift converges accurately to a point without showing the “scissor-tail.” In the application to field CSAMT data, the high-resolution Radon transform method was able to correct the static shift effectively; thus, it can improve the precision and accuracy of data interpretation.

Yuxi Li;Wei Lu;Guangyou Fang;Shaoxiang Shen; "The Imaging Method and Verification Experiment of Chang’E-5 Lunar Regolith Penetrating Array Radar," vol.15(7), pp.1006-1010, July 2018. As a main payload of the lander of Chang’E-5, the lunar regolith penetrating array radar (LRPR) is a multi-input and multioutput ground penetrating array radar system. The main characteristics of the LRPR are as follows: it works at a fixed location, the layout of the antennas is irregular, and the amount of data available for imaging is very limited. A prestack depth migration imaging method based on the Kirchhoff integral is presented, which overcomes the problems of the irregular antenna layout and the layered medium. The issue of sparse matrix imaging for the LRPR is solved successfully by this method. A laboratory experiment demonstrates that the proposed method is very suitable for LRPR imaging: the targets are clearly distinguished, and their depths are consistent with reality.

Ali Abbadi;Hamza Bouhedjeur;Ahcene Bellabas;Tarek Menni;Faouzi Soltani; "Generalized Closed-Form Expressions for CFAR Detection in Heterogeneous Environment," vol.15(7), pp.1011-1015, July 2018. The purpose of this letter is the derivation of generalized closed forms of the probability of detection (<inline-formula> <tex-math notation="LaTeX">$P_{d}$ </tex-math></inline-formula>) and the probability of false alarm (<inline-formula> <tex-math notation="LaTeX">$P_{\text{fa}}$ </tex-math></inline-formula>) governing the statistical detection of targets in a heterogeneous environment. Simplified general expressions, which can be used to evaluate the performance of a family of constant false alarm rate (CFAR) detectors, are obtained. A performance analysis is performed, which compares the results of the derived expressions with those for the cell averaging CFAR, order statistics CFAR, automatic censored mean level CFAR, and Anderson–Darling CFAR detection schemes under different signal-to-noise ratio and interfering-target scenarios. It is found that the derived expressions closely reflect the behavior of these detectors and model other potential detectors that are particularly more robust in the presence of severe interfering targets.
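As a concrete instance of the CFAR family covered by such closed forms, the classic cell-averaging CFAR fits in a few lines. The sketch below is a generic textbook CA-CFAR for exponentially distributed noise power, not the letter's generalized expressions; the window sizes and design probability of false alarm are illustrative choices.

```python
import numpy as np

def ca_cfar(x, num_train=8, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D power signal: the noise level at each
    cell under test (CUT) is the mean of `num_train` training cells on
    each side, skipping `num_guard` guard cells next to the CUT."""
    n = len(x)
    N = 2 * num_train                          # total training cells
    alpha = N * (pfa ** (-1.0 / N) - 1.0)      # scaling for exponential noise
    hits = np.zeros(n, dtype=bool)
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = x[i - num_guard - num_train : i - num_guard]
        lag = x[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = (lead.sum() + lag.sum()) / N
        hits[i] = x[i] > alpha * noise
    return hits

# Exponential clutter with one strong target at cell 100
rng = np.random.default_rng(0)
power = rng.exponential(1.0, 200)
power[100] += 40.0
hits = ca_cfar(power)
```

The threshold factor `alpha` keeps the false alarm rate constant regardless of the (unknown) clutter power, which is the defining CFAR property.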

Yajing Guo;Ming Yao;Kai Yuan;Xiaohua Deng; "Simulation and Performance Evaluation of Three Types of ISR Coding Systems," vol.15(7), pp.1016-1019, July 2018. Incoherent scatter radar can measure a variety of ionospheric plasma parameters, but the associated radar echo signals are very weak. To extract the effective ionospheric parameters, complex coding schemes are used with modern incoherent scatter measurements. Long pulse code, Barker code, alternating code, and alternating code modulated by Barker code (i.e., composite code) are some of the more frequently used coding systems. In practice, ionospheric signals are easily disturbed by external factors, and thus, the echo signals contain substantial amounts of noise. Therefore, it is important to investigate the antinoise performances of these codes. According to the basic theory of the ambiguity function, simulations of the 2-D ambiguity functions of these four codes were conducted. In addition, waveform processing simulations were performed for the four codes. According to this letter, the composite code exhibits a greater range of detection, a higher range resolution, and better antinoise performance than the other three types of codes and is more conducive to the detection of weak signals.
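Of the four coding schemes, the Barker component is the simplest to illustrate: the aperiodic autocorrelation of the Barker-13 code has a mainlobe of 13 while every sidelobe has magnitude at most 1, which is the compression property that Barker modulation contributes to the composite code. A minimal numpy check (not the letter's 2-D ambiguity-function simulation):

```python
import numpy as np

# Barker-13 code: autocorrelation peaks at 13, sidelobes bounded by 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")
peak = acf[len(barker13) - 1]                 # zero-lag term
sidelobes = np.delete(acf, len(barker13) - 1)
```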

Yipeng Ding;Yinhua Sun;Juan Zhang;Ling Wang; "Multiperspective Target Tracking Approach for Doppler Through-Wall Radar," vol.15(7), pp.1020-1024, July 2018. In this letter, a multiperspective target tracking approach is proposed for Doppler through-wall radar (TWR). To properly identify stationary and tangentially moving targets, an expansion receiver is added to the traditional cost-effective Doppler TWR, and a data fusion process is applied to track targets. Using the expansion receiver, the information on the target is acquired from another perspective, which can help to eliminate the detection dead zone. This proposed approach can also improve estimation accuracy. As a preliminary assessment, experimental results are provided to illustrate the performance of the proposed approach.

Umit Alkus;Asaf Behzat Sahin;Hakan Altan; "Stand-Off Through-the-Wall W-Band Millimeter-Wave Imaging Using Compressive Sensing," vol.15(7), pp.1025-1029, July 2018. Through-the-wall radar imaging is a research area that has gathered renewed interest with the development of powerful millimeter-wave sources and detectors. Imaging techniques that require some sort of reconstruction can suffer from slow speeds and lack of complexity in the data acquisition phase of the measurements. These reconstruction methods can be greatly improved using compressive sensing (CS)-based tools that increase speed during these processes. Here, a W-band single-pixel imaging system based on CS, which utilizes a mechanically controlled spatial light modulator to rapidly acquire the image, is demonstrated for metallic targets placed behind drywall. The system uses a frequency-modulated continuous wave W-band transmitter to illuminate the wall and the target. The image reflected off the target's field of view is spatially modulated by 10 × 10 array patterned masks, and the signals are collected through a heterodyne receiver. The system can differentiate and locate a behind-the-wall object through the frequency swept Michelson interferometry analysis, since the strong reflections from the surface of the wall and the object induce an interference effect, which is observed at the receiver. The overall design of the optical system is optimized with respect to the geometry of the modulation pattern in the image plane, resulting in successfully reconstructed images of objects using CS-based algorithms. Using larger optics and uniquely patterned masks, such techniques can provide solutions toward cost-effective, rapid analysis of structural changes in or behind visibly opaque media.

Erik Blomberg;Laurent Ferro-Famil;Maciej J. Soja;Lars M. H. Ulander;Stefano Tebaldini; "Forest Biomass Retrieval From L-Band SAR Using Tomographic Ground Backscatter Removal," vol.15(7), pp.1030-1034, July 2018. Tomographic synthetic aperture radar (TomoSAR) represents a possible route to improved retrievals of forest parameters. Simulated orbital L-band TomoSAR data corresponding to the proposed Satellites for Observation and Communications-Companion Satellite (SAOCOM-CS) mission (1.275 GHz) are evaluated for the retrieval of above-ground biomass in boreal forest. L-band data and biomass measurements, collected at the Krycklan test site in northern Sweden as part of the BioSAR 2008 campaign, are used to compare biomass retrievals from SAOCOM-CS to those based on SAOCOM SAR data. Both data sets are in turn compared with the corresponding airborne case, as represented by experimental airborne SAR through processing of the original SAR data. TomoSAR retrievals use a model involving a logarithmic transform of the volumetric backscatter intensity, Ivol, defined as the total backscatter originating between 10 and 30 m above ground. SAR retrievals are obtained with the slope-compensated intensity γ0 using the same model. It is concluded that tomography using SAOCOM-CS represents an improvement over airborne SAR imagery, resulting in biomass retrievals from a single polarization (HH) having a 26%-30% root-mean-square error with little to no impact from the look direction or the local topography.

Dongdong Guan;Deliang Xiang;Ganggang Dong;Tao Tang;Xiaoan Tang;Gangyao Kuang; "SAR Image Classification by Exploiting Adaptive Contextual Information and Composite Kernels," vol.15(7), pp.1035-1039, July 2018. For synthetic aperture radar (SAR) image land cover classification, traditional feature-based methods are not always effective because of the heavy multiplicative noise. To solve this problem, we herein propose a new classification method for SAR images considering adaptive spatial contextual information. In contrast to preceding studies, the spatial contextual information of the SAR images is exploited via composite kernels (CKs). Additionally, an image superpixel strategy is employed to design an adaptive neighborhood, which enables the extraction of more accurate spatial information than a fixed-size neighborhood. Specifically, a modified superpixel map is first generated to produce the neighborhood. With this neighborhood, a context kernel is then defined by means of the Gaussian radial basis function. The resulting context kernel is combined with the conventional feature kernel via the designed CKs scheme. The relative proportion of these two kernels is controlled by a weight parameter. The label of each pixel is predicted by feeding the final CKs into a support vector machine classifier. Experiments on two real SAR images demonstrate that the proposed method can greatly improve the classification performance, both visually and quantitatively, in comparison to other traditional feature-based methods.
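The CKs scheme described above amounts to a weighted sum of two positive-definite kernels, which is again positive definite and can be fed directly to a kernel SVM. A minimal sketch, with superpixel-averaged features standing in for the adaptive contextual information; the function names, the single shared `gamma`, and the toy data are assumptions, not the letter's exact formulation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel from pairwise squared Euclidean distances
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(X_feat, X_ctx, mu=0.5, gamma=1.0):
    """Weighted sum of a per-pixel feature kernel and a context kernel
    built from neighborhood-averaged (e.g. superpixel-mean) features;
    mu controls the relative proportion of the two terms."""
    K_feat = rbf_kernel(X_feat, X_feat, gamma)
    K_ctx = rbf_kernel(X_ctx, X_ctx, gamma)
    return mu * K_feat + (1.0 - mu) * K_ctx

rng = np.random.default_rng(1)
X_feat = rng.random((5, 4))      # per-pixel features
X_ctx = rng.random((5, 4))       # superpixel-averaged features
K = composite_kernel(X_feat, X_ctx, mu=0.4, gamma=0.5)
```

The resulting Gram matrix can be passed to an SVM that accepts precomputed kernels (e.g. scikit-learn's `SVC(kernel="precomputed")`).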

Quan Chen;Zhen Li;Ping Zhang;Haoran Tao;Jiangyuan Zeng; "A Preliminary Evaluation of the GaoFen-3 SAR Radiation Characteristics in Land Surface and Compared With Radarsat-2 and Sentinel-1A," vol.15(7), pp.1040-1044, July 2018. The first evaluation of the GaoFen-3 SAR radiation characteristics and its potential for quantitative parameter estimation over land surfaces is presented in this letter. Based on two triangle corner reflectors, five impulse response property parameters are calculated to assess the image quality of GaoFen-3 SAR; the results show that the peak sidelobe ratio is superior to the system specification, whereas the integrated sidelobe ratio could not be assessed owing to a problem in the field campaign, namely, background scattering that was not low enough. Five distributed target parameters are extracted from three categories and compared with quasi-synchronous Radarsat-2 and Sentinel-1A images acquired within two days. The results indicate a good radiometric resolution (RR) of 3 dB and a good equivalent number of looks close to 1 for the GaoFen-3 single-look complex product, which proves that the GaoFen-3 SAR image is at the same quality level as the other two C-band SAR images and that its RR meets the system design of 3.5 dB. Then, the backscatter coefficients of the three SAR images are compared over a bare soil area after GaoFen-3's incidence angle is normalized to 43° by the Oh2004 empirical model, and the results reveal that GaoFen-3 is several decibels too low in absolute radiometric calibration. Finally, the capability of surface parameter estimation is demonstrated by the statistical relation between the GaoFen-3 co-polarization backscatter coefficient and bare soil moisture content, indicating good potential for quantitative land applications once the calibration problem is solved.

Zaidao Wen;Biao Hou;Qian Wu;Licheng Jiao; "Discriminative Feature Learning for Real-Time SAR Automatic Target Recognition With the Nonlinear Analysis Cosparse Model," vol.15(7), pp.1045-1049, July 2018. This letter presents an efficient application of the nonlinear analysis cosparse model (NACM) to the task of real-time synthetic aperture radar automatic target recognition (ATR). In contrast to the conventional synthesis sparse representation model, NACM enables efficient sparse feature extraction and selection using a feed-forward mechanism. Furthermore, NACM does not require a sparsity-inducing regularizer. This model uses a task-driven learning framework, in which a naive Bayes or a discriminative classifier is adaptively learned along with the regularized features. Experimental results with the moving and stationary target acquisition and recognition benchmark demonstrate the effectiveness and efficiency of our proposed approach. Compared with traditional classification algorithms using sparse representation, our approach not only achieves higher or comparable recognition accuracy but also dramatically reduces the execution time for real-time ATR.

Yuanyuan Hu;Jianchao Fan;Jun Wang; "Classification of PolSAR Images Based on Adaptive Nonlocal Stacked Sparse Autoencoder," vol.15(7), pp.1050-1054, July 2018. Land cover classification using polarimetric synthetic aperture radar (PolSAR) images is an important tool for remote sensing analysis. Given that effective PolSAR image interpretation is commonly hindered by the absence of discriminative features and the presence of speckle noise, this letter proposes an adaptive nonlocal stacked sparse autoencoder (ANSSAE) for PolSAR image classification. It extracts adaptive nonlocal spatial information by adaptively computing weighted average values of each pixel over nonlocal regions, which reduces the influence of speckle noise and retains edge details. In the first layer of the ANSSAE, the adaptive nonlocal spatial information is introduced into the objective function to obtain a robust feature representation, whose effect propagates to the remaining layers. The ANSSAE can therefore automatically capture spatially related, robust, and distinguishable features, which suppress speckle noise and yield accurate classification results. Experimental results on two real PolSAR images demonstrate that the proposed approach can significantly improve the classification accuracy.

Zhou Xu;Bo Tang;Shuiying Cheng; "Faint Ship Wake Detection in PolSAR Images," vol.15(7), pp.1055-1059, July 2018. Focusing on faint turbulent wake detection in polarimetric synthetic aperture radar (PolSAR) images, this letter introduces a novel two-step (coarse and fine processing) detector. Based on polarization decomposition theory, a new parameter that enhances the contrast between wake and sea, called surface scattering randomness (SSR), is proposed. The coarse detection process extracts the regions of potential ship wakes by the digital axoids transform of SSR. During the fine detection process, the regions detected by the coarse detection are segmented and processed independently in the polarization feature domain. Finally, by introducing the regularity least-squares method, the line parameters of the ship wake are obtained, with the a priori knowledge obtained from the Radon transform. Experiments are performed on Radarsat-2 data, and the results demonstrate that the proposed algorithm has a stronger ability to detect faint turbulent wakes in PolSAR images than the current detectors, which are based on the Radon or Hough transform.

Justino Martínez;Verónica González-Gambau;Antonio Turiel; "Mitigation of RFI Main Lobes in SMOS Snapshots by Bandpass Filtering," vol.15(7), pp.1060-1064, July 2018. Since the beginning of the soil moisture and ocean salinity mission, the pervading presence of radio frequency interferences (RFI) has been one of the most problematic issues. The effect of an RFI is not just a hot spot but also six tails along the three main axes, and the general presence of ripples which degrade the quality of L1 brightness temperature snapshots. The standard mitigation technique is to apply an apodization (Blackman), but such a low-pass filter leaves traces of the tails and spreads the signal of the main lobes. New RFI mitigation techniques, such as nodal sampling, are very effective in reducing the impact of tails and ripples, but in some cases they lead to the spread of the RFI main lobe, with a significant loss of data on the affected area. In this letter, we propose a new technique to reduce their spread by an adaptive thresholding on a bandpass filtered version of the snapshot, with a significant recovery of data.

Muhammad Dawood;Kaixing Li; "Detection of Brillouin Precursors at Microwave Frequencies Through a Rectangular Waveguide Filled With Wet Soil," vol.15(7), pp.1065-1069, July 2018. This letter extends our recent experimental work to detect Brillouin Precursors (BPs) at microwave frequencies through a rectangular waveguide (WG) filled with wet soil. Simulated and experimental data processing methods are discussed. BPs through the waveguide at TE10 mode are shown at two center frequencies. These are then compared with TEM BP at normal incident angles through similar medium. An optimized Brillouin pulse is numerically constructed, and its performance is compared with Lambert-Beer's law limits.

Shangrong Wu;Zhongxin Chen;Jianqiang Ren;Wujun Jin; Hasituya;Wenqian Guo;Qiangyi Yu; "An Improved Subpixel Mapping Algorithm Based on a Combination of the Spatial Attraction and Pixel Swapping Models for Multispectral Remote Sensing Imagery," vol.15(7), pp.1070-1074, July 2018. To obtain spatial feature distributions from the mixed pixels of remote sensing images and increase the accuracy of land-cover classification and recognition, a double-calculated spatial attraction model (DSAM) based on the combination of the spatial attraction model (SAM) and the pixel swapping model (PSM) is presented and verified, introducing the law of universal gravitation to describe the attraction between pixels. In the DSAM, the SAM was used to improve the initialization algorithm of the PSM, and the optimization algorithm of the PSM was improved accordingly. Using a SPOT-5 remote sensing image, subpixel mapping (SPM) experiments were performed to verify the SPM effect of the DSAM and to test its accuracy. The experimental results indicated that the DSAM SPM results were superior to those of the SAM and the PSM. The DSAM was proven effective and applicable for the SPM of remotely sensed images, and the model can improve the accuracy of SPM and land-cover classification.
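The SAM half of the combination can be sketched directly: each subpixel is attracted to the neighboring coarse pixels in proportion to their class fraction and inversely to the centroid distance, echoing the law of universal gravitation. The toy single-class version below is an illustration only; the letter's DSAM additionally recomputes attractions inside the PSM optimization, which is omitted here.

```python
import numpy as np

def spatial_attraction(fractions, scale):
    """Single-class spatial attraction map.

    fractions: (H, W) coarse-pixel fractions of one class.
    Each subpixel is attracted to the 8 neighboring coarse pixels,
    weighted by their class fraction over the centroid distance.
    """
    H, W = fractions.shape
    out = np.zeros((H * scale, W * scale))
    for ci in range(H):
        for cj in range(W):
            for a in range(scale):
                for b in range(scale):
                    y = ci * scale + a + 0.5          # subpixel centre
                    x = cj * scale + b + 0.5
                    val = 0.0
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = ci + di, cj + dj
                            if (di, dj) == (0, 0) or not (0 <= ni < H and 0 <= nj < W):
                                continue
                            cy = (ni + 0.5) * scale   # neighbour centroid
                            cx = (nj + 0.5) * scale
                            val += fractions[ni, nj] / np.hypot(y - cy, x - cx)
                    out[ci * scale + a, cj * scale + b] = val
    return out

# Right-hand coarse column is pure class: subpixels nearer it score higher
att = spatial_attraction(np.array([[0.0, 1.0], [0.0, 1.0]]), 2)
```

Ranking subpixels within each coarse pixel by these attraction values gives the SAM initialization that the PSM swap step then refines.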

Hugh L. Kennedy; "Isotropic Estimators of Local Background Statistics for Target Detection in Imagery," vol.15(7), pp.1075-1079, July 2018. Square windows are usually used to estimate the parameters of local background statistics for the constant false alarm rate detection of anomalies/targets in multispectral and synthetic-aperture radar imagery. It is shown that more isotropic windows offer improved detection performance in Monte Carlo simulations. Separable and nonseparable realizations are discussed and a novel recursive realization is presented.
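The contrast with the usual square window is easy to reproduce: the sketch below estimates local mean and variance under a circular (isotropic) mask by brute-force shifting. It is a didactic stand-in, not the letter's separable or recursive realizations.

```python
import numpy as np

def circular_window_stats(img, radius):
    """Local mean and variance of `img` under a circular mask of the
    given radius, accumulated by shifting a reflect-padded copy."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys**2 + xs**2) <= radius**2
    n = mask.sum()
    acc = np.zeros_like(img, dtype=float)
    acc2 = np.zeros_like(img, dtype=float)
    padded = np.pad(img.astype(float), radius, mode="reflect")
    H, W = img.shape
    for dy, dx in zip(*np.nonzero(mask)):
        shifted = padded[dy:dy + H, dx:dx + W]
        acc += shifted
        acc2 += shifted**2
    mean = acc / n
    var = acc2 / n - mean**2
    return mean, var

# Sanity check on a constant image: mean equals the constant, variance is 0
img = np.full((8, 8), 3.0)
mean, var = circular_window_stats(img, 2)
```

Swapping `mask` for an all-ones square reproduces the conventional square-window estimator, so the two can be compared in a constant false alarm rate detector.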

Biplab Banerjee;Subhasis Chaudhuri; "Scene Recognition From Optical Remote Sensing Images Using Mid-Level Deep Feature Mining," vol.15(7), pp.1080-1084, July 2018. We solve the problem of scene recognition from very high-resolution optical satellite remote sensing (RS) images by exploring the notion of mid-level feature mining. The existing mid-level feature extraction techniques are based on applying feature encodings over a set of discriminatively selected localized feature descriptors from the images. Such techniques inherently suffer from two shortcomings: 1) the local descriptors are not discriminative enough, since they are mostly based on scale-invariant feature transform (SIFT)-like ad hoc features, and 2) the definition of a robust ranking function to select discriminative local features is nontrivial. As a remedy, we propose a pattern mining-based approach for the efficient discovery of mid-level visual elements, which uses convolutional neural network features of category-independent region proposals extracted from the images as the local descriptors. While the region proposals convey better semantic information than the SIFT-like features, the proposed pattern mining strategy can efficiently highlight the correlations between such local descriptors and the class labels. Experimental results suggest that the proposed technique outperforms a number of existing mid-level feature descriptors on standard optical RS data sets.

Qijian Zhang;Libao Zhang;Wenqi Shi;Yue Liu; "Airport Extraction via Complementary Saliency Analysis and Saliency-Oriented Active Contour Model," vol.15(7), pp.1085-1089, July 2018. Automatic airport extraction in remote sensing images (RSIs) has been widely applied in military and civil applications. An efficient airport extraction framework for RSIs is constructed in this letter. In the first step, we put forward a two-way complementary saliency analysis (CSA) scheme that combines vision-oriented saliency and knowledge-oriented saliency for the airport position estimation. In the second step, we construct a saliency-oriented active contour model (SOACM) for airport contour tracking, where a saliency orientation term is incorporated into the level-set-based energy functions. Under the guidance of saliency feature representations obtained by CSA, the SOACM can acquire well-defined and highly precise object contours. Experimental results demonstrate that the proposed extraction framework shows good adaptability in remote sensing scenes, and uniformly achieves high detection rate and low false alarm rate. Compared with three state-of-the-art algorithms, our proposal can not only estimate the location of airport targets, but also extract detailed information of the airport contours.

Fei Wen;Yongjun Zhang;Zhi Gao;Xiao Ling; "Two-Pass Robust Component Analysis for Cloud Removal in Satellite Image Sequence," vol.15(7), pp.1090-1094, July 2018. Due to the inevitable presence of clouds and their shadows in optical remote sensing images, certain ground-cover information is degraded or even appears to be missing, which limits analysis and utilization. Thus, cloud removal is of great importance for downstream applications. Motivated by sparse representation techniques, which have achieved stunning performance in a variety of applications, including target detection and anomaly detection, we propose a two-pass robust principal component analysis (RPCA) framework for cloud removal in satellite image sequences. First, a plain RPCA is applied for initial cloud region detection, followed by a straightforward morphological operation to ensure that the cloud region is completely detected. Subsequently, a discriminative RPCA algorithm is proposed that assigns aggressive penalizing weights to the detected cloud pixels to facilitate cloud removal and scene restoration. In contrast to currently available methods, ours requires neither a cloud-free reference image nor a specific cloud detection algorithm. Experiments on both simulated and real images yield visually plausible and numerically verified results, demonstrating the effectiveness of our method.
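The plain RPCA of the first pass decomposes a data matrix D (e.g. vectorized frames stacked as columns) into a low-rank part L (stable background) and a sparse part S (clouds and outliers). A compact sketch of the standard inexact-ALM/ADMM solver follows; this is generic RPCA with the usual default parameters, not the letter's discriminative weighted variant.

```python
import numpy as np

def rpca(D, n_iter=100):
    """Plain RPCA via inexact ALM/ADMM: D ≈ L (low rank) + S (sparse)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))            # standard sparsity weight
    mu = 0.25 * m * n / np.abs(D).sum()       # fixed penalty parameter
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # singular-value thresholding
        S = shrink(D - L + Y / mu, lam / mu)          # soft-threshold residual
        Y += mu * (D - L - S)                         # dual update
    return L, S

# Rank-1 background plus a few strong "cloud" spikes
rng = np.random.default_rng(0)
L0 = rng.normal(size=(30, 1)) @ rng.normal(size=(1, 30))
S0 = np.zeros((30, 30))
idx = rng.integers(0, 30, size=(10, 2))
S0[idx[:, 0], idx[:, 1]] = 10.0
L_hat, S_hat = rpca(L0 + S0)
```

The support of `S_hat` is what the first pass would hand to the morphological cleanup before the discriminative second pass.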

Bowen Cai;Zhiguo Jiang;Haopeng Zhang;Yuan Yao;Shanlan Nie; "Online Exemplar-Based Fully Convolutional Network for Aircraft Detection in Remote Sensing Images," vol.15(7), pp.1095-1099, July 2018. Convolutional neural networks achieve remarkable results on target detection owing to their prominent capability for feature extraction. However, the aircraft detection task still needs further study, since intraclass variation restricts the accuracy of aircraft detection in remote sensing images. In this letter, we exploit the regularity of the aircraft circle response to design our end-to-end fully convolutional network (FCN), and we embed online exemplar mining into the network to handle intraclass variation. The mined exemplars are employed to capture different intraclass characteristics, which effectively reduces the burden of network training. Specifically, we first select basic exemplars based on labeled information and initialize the relationships between exemplars and aircraft examples. Then, these relationships are updated according to the similarity of the examples in high-level feature space. Finally, the aircraft examples are used to train different exemplar detectors according to the updated relationships. Motivated by the geometric shape of aircraft, a circle response map is developed to construct our FCN for more efficient aircraft detection. The comparative experiments indicate the superior performance of our network in accurate and efficient aircraft detection.

Zhen Shu;Xiangyun Hu;Jing Sun; "Center-Point-Guided Proposal Generation for Detection of Small and Dense Buildings in Aerial Imagery," vol.15(7), pp.1100-1104, July 2018. For automatic building detection in aerial images, small and dense buildings make the task very challenging: small objects lack sufficient information, and dense building distributions make the localization of objects confusing. High-quality building proposals can certainly promote detection performance. The key to the problem is adopting sufficiently proper sizes and locations of bounding boxes so that the image information can be used for proposal generation. Based on machine learning with a deep convolutional neural network, this letter proposes a new pipeline for building proposal generation, which is an end-to-end process during training and testing. First, the proposed pipeline attempts to find possible object center points, called point proposals. Subsequently, a location refinement module and an object scoring module are applied to the boxes generated from the point proposals with a series of sizes and aspect ratios to obtain the final object proposals. This center-point-guided location refinement and multibox scoring method effectively alleviates the small and dense object problems. Experiments on the INRIA Aerial Image Labeling data set demonstrate the better performance of our approach compared with other state-of-the-art proposal methods. In addition, we add a normal classification branch based on our generated proposals to conduct experiments on the detection task. The detection result outperforms the latest detection framework, R-FCN equipped with ResNet-101, by 7% mean average precision at an overlap threshold of 0.7.

Hao Fang;Aihua Li;Huoxi Xu;Tao Wang; "Sparsity-Constrained Deep Nonnegative Matrix Factorization for Hyperspectral Unmixing," vol.15(7), pp.1105-1109, July 2018. Nonnegative matrix factorization (NMF) has been widely used in hyperspectral unmixing (HU). However, most NMF-based methods have single-layer structures, which may achieve poor performance for complex data. Deep learning, with its carefully designed hierarchical structure, has shown great advantages in learning data features. In this letter, we design a deep NMF structure by unfolding NMF into multiple layers and present a sparsity-constrained deep NMF method for HU. In each layer, the abundance matrix is directly decomposed into the abundance matrix and endmember matrix of the next layer. Due to the nonconvexity of the NMF model, a sparsity constraint is added to each layer using an L1 regularizer on that layer's abundance matrix. To obtain better initial parameters for the deep NMF network, a layer-wise pretraining strategy based on Nesterov's accelerated gradient algorithm is put forward. An alternative update method is also proposed to further fine-tune the network and obtain the final decomposition results. The experimental results on synthetic data and real data demonstrate that the proposed method outperforms several other state-of-the-art unmixing approaches.
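A single NMF layer with the L1 abundance penalty can be sketched with multiplicative updates, where the sparsity weight simply enters the denominator of the abundance update; the deep variant then factorizes the abundances layer by layer. This is a generic single-layer sketch under those assumptions, not the authors' Nesterov-pretrained network.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.01, n_iter=500, seed=0):
    """Single-layer NMF with an L1 penalty on the abundances:
    min ||V - W H||_F^2 + lam * ||H||_1 with W, H >= 0,
    solved by multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)  # L1 weight in denominator
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Recover an exact rank-3 nonnegative factorization
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 30))
W, H = sparse_nmf(V, r=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because numerator and denominator stay nonnegative, the updates preserve the nonnegativity of W and H without any projection step.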

Lifeng Yan;Minshan Cui;Saurabh Prasad; "Joint Euclidean and Angular Distance-Based Embeddings for Multisource Image Analysis," vol.15(7), pp.1110-1114, July 2018. With the emergence of passive and active optical sensors available for geospatial imaging, information fusion across sensors is becoming ever more important. An important aspect of single (or multiple) sensor geospatial image analysis is feature extraction—the process of finding “optimal” lower dimensional subspaces that adequately characterize class-specific information for subsequent analysis tasks, such as classification, change and anomaly detection, and so on. In recent work, we proposed and developed an angle-based discriminant analysis approach that projected data onto the subspaces with maximal “angular” separability in the input (raw) feature space and reproducing kernel Hilbert space. We also developed an angular locality preserving variant of this algorithm. Despite being a promising approach, the resulting subspace does not preserve Euclidean distance information. In this letter, we advance this work to address that limitation and make it suitable for information fusion—we propose and validate a composite kernel-based subspace learning framework that simultaneously preserves Euclidean and angular information, which can operate on an ensemble of feature sources (e.g., from different sources). We validate this method with the multisensor University of Houston hyperspectral and light detection and ranging data set, and demonstrate that a joint discriminant analysis that leverages angular and Euclidean distance information provides superior classification and sensor (information) fusion performance.

Vicente García-Santos;Enric Valor;Claudia Di Biagio;Vicente Caselles; "Predictive Power of the Emissivity Angular Variation of Soils in the Thermal Infrared (8–14 <inline-formula> <tex-math notation="LaTeX">$\mu$ </tex-math></inline-formula>m) Region by Two Mie-Based Emissivity Theoretical Models," vol.15(7), pp.1115-1119, July 2018. A confident knowledge of land surface emissivity at viewing zenith angles far from nadir is of prime interest to perform an accurate correction of the anisotropy effect in the measurements made by orbiting thermal infrared (TIR) sensors. It is also important for the correct treatment of angular measurements carried out by remote sensors such as AATSR/ENVISAT, MODIS/Terra-Aqua, or the recently launched SLSTR/Sentinel-3, which can also be used for the angular normalization of land surface temperature due to viewing geometry effects. In this letter, the anisotropy of TIR emissivity predicted by two analytical models based on Mie diffraction theory, Warren-Wiscombe-Dozier and Hapke, was compared with field-measured values under dry conditions. The results showed good agreement between models and measurements (RMSEs ranging from ±0.004 to ±0.030 depending on the sample, with an average value of ±0.016) if the compactness of the soil grains is taken into account, demonstrating the good performance of the cosine term of the zenith angle included in the expressions of both models. A significant disagreement between models and measurements is, however, obtained for some samples at high zenith angles, suggesting the necessity of a fudge factor in the compactness correction under those conditions.

Haiming Qin;Cheng Wang;Xiaohuan Xi;Sheng Nie;Guoqing Zhou; "Integration of Airborne LiDAR and Hyperspectral Data for Maize FPAR Estimation Based on a Physical Model," vol.15(7), pp.1120-1124, July 2018. The fraction of photosynthetically active radiation (FPAR) is a key parameter in controlling mass and energy exchanges between vegetation and atmosphere. LiDAR data-derived canopy vertical structural information and hyperspectral image-derived vegetation spectral information can be considered as complementary for vegetation FPAR estimation. To the best of our knowledge, few studies have estimated vegetation FPAR by both LiDAR and hyperspectral data based on physical models. This letter aims to explore the ability of combining airborne LiDAR and hyperspectral data to retrieve maize FPAR based on the energy budget balance principle. First, canopy gap probability and openness were estimated from airborne LiDAR data. Next, canopy reflectance and soil background reflectance were retrieved from hyperspectral image. Then, we estimated maize FPAR based on the energy budget balance principle. Finally, model validity was assessed by in situ data and results showed the physical FPAR estimation model estimated maize FPAR accurately. These results indicated that the physical method proposed in this letter was efficient and reliable to estimate maize FPAR, and FPAR retrieval can benefit from the complementary nature of LiDAR-captured canopy structural information and hyperspectral-detected vegetation spectral characteristics.

Minh-Tan Pham;Sébastien Lefèvre;François Merciol; "Attribute Profiles on Derived Textural Features for Highly Textured Optical Image Classification," vol.15(7), pp.1125-1129, July 2018. Morphological attribute profiles (APs) have thus far been proven effective for remote sensing image classification by several research studies. However, recent studies have shown that a direct application of APs to highly textured and structured images, especially in very high-resolution (VHR) optical imagery, may be insufficient. Some solutions have been proposed to deal with this issue, such as extracting the local histograms or the local features of AP images [histogram-based APs (HAPs) and local feature-based APs (LFAPs), respectively], or combining APs with different textural features. In this letter, we review these approaches and then propose a novel strategy that directly generates APs on derived textural features instead of combining them separately. Experimental results on both natural textures and VHR optical remotely sensed images show that the proposed approach produces better classification performance than the standard APs, HAPs, LFAPs, and the classical combination of APs with textural features.

Tao Song;Lei Kuang;Lingyan Han;Yuheng Wang;Qing Huo Liu; "Inversion of Rough Surface Parameters From SAR Images Using Simulation-Trained Convolutional Neural Networks," vol.15(7), pp.1130-1134, July 2018. This letter investigates the inversion of rough surface parameters (the root mean square height and the correlation length) from microwave images by using deep convolutional neural networks (CNNs). Training data for the deep CNN are simulated numerically using a computational electromagnetic method. As CNNs are powerful in extracting image features, the scattered field from rough surfaces is first converted to microwave images via an interpolated fast Fourier transform and then fed into the CNN. To reduce overfitting, a regularization technique and a dropout layer are used. The proposed CNN consists of five pairs of convolutional and max-pooling layers plus two additional convolutional layers for feature extraction, and two fully connected layers for parameter regression. The experimental results demonstrate the feasibility of using deep neural networks for rough surface parameter inversion from electromagnetic scattering fields, suggesting a potential application of CNNs to rough surface parameter inversion from microwave sensing data.

* "Introducing IEEE Collabratec," vol.15(7), pp.1135-1135, July 2018.* Advertisement, IEEE.

* "IEEE Geoscience and Remote Sensing Letters information for authors," vol.15(7), pp.C3-C3, July 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Geoscience and Remote Sensing Letters Institutional Listings," vol.15(7), pp.C4-C4, July 2018.* Presents the GRSS society institutional listings.

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing - new TOC (2018 July 16) [Website]

* "Frontcover," vol.11(6), pp.1-1, June 2018.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Society," vol.11(6), pp.C2-C2, June 2018.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of Contents," vol.11(6), pp.1753-1754, June 2018.* Presents the table of contents for this issue of the publication.

S. Tjuatja;D. Kunkee;J. T. Johnson;K. S. Chen; "Foreword to the Special Issue on the 2017 IEEE International Geoscience and Remote Sensing Symposium," vol.11(6), pp.1755-1757, June 2018. The papers in this special issue were presented at the 2017 International Geoscience and Remote Sensing Symposium (IGARSS 2017), held on July 23–28, 2017, at the Fort Worth Convention Center, Fort Worth, TX. The theme of the conference was "International Cooperation for Global Awareness."

Michael Kampffmeyer;Arnt-Børre Salberg;Robert Jenssen; "Urban Land Cover Classification With Missing Data Modalities Using Deep Convolutional Neural Networks," vol.11(6), pp.1758-1768, June 2018. Automatic urban land cover classification is a fundamental problem in remote sensing, e.g., for environmental monitoring. The problem is highly challenging, as classes generally have high interclass and low intraclass variances. Techniques to improve urban land cover classification performance in remote sensing include fusion of data from different sensors with different data modalities. However, such techniques require all modalities to be available to the classifier in the decision-making process, i.e., at test time, as well as in training. If a data modality is missing at test time, current state-of-the-art approaches have in general no procedure available for exploiting information from these modalities. This represents a waste of potentially useful information. We propose as a remedy a convolutional neural network (CNN) architecture for urban land cover classification which is able to embed all available training modalities in the so-called hallucination network. The network will in effect replace missing data modalities in the test phase, enabling fusion capabilities even when data modalities are missing in testing. We demonstrate the method using two datasets consisting of optical and digital surface model (DSM) images. We simulate missing modalities by assuming that DSM images are missing during testing. Our method outperforms both standard CNNs trained only on optical images as well as an ensemble of two standard CNNs. We further evaluate the potential of our method to handle situations where only some DSM images are missing during testing. Overall, we show that we can clearly exploit training time information of the missing modality during testing.

Srija Chakraborty;Ayan Banerjee;Sandeep K. S. Gupta;Philip R. Christensen;Antonia Papandreou-Suppappola; "Time-Varying Modeling of Land Cover Change Dynamics Due to Forest Fires," vol.11(6), pp.1769-1776, June 2018. Seasonal variations in land cover are commonly represented using a constant frequency cosine model with time-varying parameters. As frequency represents the constant annual vegetation growth cycle, the model is not adequate to represent dynamics such as sudden changes in land cover and subsequent regrowth. In this paper, we present a new model to capture time-varying changes in the vegetation growth cycle and detect abrupt changes in land cover due to forest fires. We also design a sequential Monte Carlo estimation approach of the time-varying frequency in the proposed nonlinear model using the particle filter (PF). We further propose a binary hypothesis land cover change detector that is based on a dissimilarity measure between windowed time-series observed during the same month of consecutive years. Experiments show that the PF estimation can detect change with lower delay than the existing approaches. Unsupervised mapping of the fire severity from the model parameter estimates is also developed.
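The particle-filter frequency tracking can be sketched on a stripped-down version of the model. The sketch below assumes a unit-amplitude, zero-phase observation y_t = cos(omega_t * t) + noise with a random-walk frequency, a simplification of the paper's time-varying seasonal cosine model; `q` and `r` are illustrative noise parameters, not fitted values.

```python
import numpy as np

def particle_filter_freq(y, n_particles=500, q=0.01, r=0.1, seed=0):
    """Sequential Monte Carlo estimate of a slowly varying frequency in
    y_t = cos(omega_t * t) + noise. q: random-walk std of omega,
    r: observation noise std. Returns the posterior-mean track of omega."""
    rng = np.random.default_rng(seed)
    omega = rng.uniform(0.0, 1.0, n_particles)          # initial particle cloud
    est = []
    for t, yt in enumerate(y, start=1):
        omega = omega + rng.normal(0.0, q, n_particles)  # propagate random walk
        pred = np.cos(omega * t)
        w = np.exp(-0.5 * ((yt - pred) / r) ** 2) + 1e-12  # Gaussian likelihood (+guard)
        w /= w.sum()
        est.append(float(w @ omega))                     # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        omega = omega[idx]
    return np.array(est)
```

An abrupt land cover change would appear as a sustained shift in the estimated frequency track, which the paper's detector tests with a dissimilarity measure across consecutive years.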

Boyu Feng;Jinfei Wang; "Evaluation of Unmixing Methods for Impervious Surface Area Extraction From Simulated EnMAP Imagery," vol.11(6), pp.1777-1798, June 2018. Distribution of impervious surface area (ISA) is an important input in a wide range of urban ecosystem studies. The future launch of the German hyperspectral satellite environmental mapping and analysis program (EnMAP) in 2019 provides new opportunities for timely and global ISA extraction. The previously proposed EnMAP applications heavily relied on existing reference endmembers, which may be impractical on a global scale. To overcome this defect, we suggest using the nonnegative matrix factorization (NMF) method to extract the endmembers directly from EnMAP imagery. Three traditional unmixing methods (N-Findr, pixel purity index, and independent component analysis) and four NMF-based methods with different constraints (e.g., sparseness, convex volume, and nonlinearity) were used to obtain series of endmember sets, ISA fractions, and classification maps. The NMF-based methods outperformed the three traditional unmixing methods, achieving 0.5–0.6 R-squared values in the linear regression models between predicted and reference ISA percentages, and over 85% overall accuracy in ISA classification maps. We found that the NMF-based spectral unmixing methods are suitable to work with the EnMAP image when reference endmember data are unavailable. In addition, we processed the widely used Hydice urban test image with the same methods and compared the resulting ISA percentage/classification maps with the EnMAP results, considering the different features of the Hydice and EnMAP sensors. The results demonstrate that the EnMAP image has great potential for ISA mapping on a global scale, with reasonable overall accuracy and economical efficiency.
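Once endmembers are available (e.g., from an NMF or N-Findr step), per-pixel ISA fractions follow from constrained least squares. Below is a minimal numpy sketch of nonnegative abundance estimation via projected gradient descent; the iteration count and step-size rule are illustrative choices, not the paper's solver.

```python
import numpy as np

def unmix_pixel(E, y, iters=500, lr=None):
    """Estimate nonnegative abundances a with y ~ E @ a, where E holds
    endmember spectra in its columns, by projected gradient descent."""
    a = np.full(E.shape[1], 1.0 / E.shape[1])     # start from uniform fractions
    if lr is None:
        lr = 1.0 / np.linalg.norm(E.T @ E, 2)      # step size from Lipschitz constant
    for _ in range(iters):
        a -= lr * (E.T @ (E @ a - y))              # gradient step on 0.5||y - Ea||^2
        a = np.clip(a, 0.0, None)                  # project onto a >= 0
    return a
```

Dividing the recovered abundances by their sum would additionally enforce the sum-to-one constraint often used in fraction mapping.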

Kazi Tanzeem Shahid;Akshay Malhotra;Ioannis D. Schizas;Saibun Tjuatja; "Unsupervised Kernel Correlations Based Hyperspectral Clustering With Missing Pixels," vol.11(6), pp.1799-1810, June 2018. This paper focuses on unsupervised clustering of hyperspectral pixels whose intensity may not be available across certain spectral bands. The presence of statistical correlations among pixels that contain data originating from the same material is exploited here to develop a novel regularized correlation analysis framework to perform clustering. Kernel learning is integrated in a regularized correlations framework to exploit nonlinear dependencies of pixels acquiring information about similar materials. An effective technique is proposed to select the kernel mapping parameters and form pertinent kernel covariance matrices by proper weighted-averaging that takes into account the information content across the available spectral bands. The novel correlations framework will return proper sparse clustering matrices whose nonzero entries point to the correlated pixels. These matrices will be obtained via a minimization formulation that will be solved via computationally efficient subgradient descent iterations. The novel correlations formulation is applied in small pixel patches of the original hyperspectral image, while progressively larger patches are built that contain information about the same materials. Extensive numerical tests on real hyperspectral images reveal that the proposed approach, in spite of being unsupervised, can outperform existing supervised and unsupervised techniques especially in the presence of missing pixels.

Nina Merkle;Stefan Auer;Rupert Müller;Peter Reinartz; "Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching," vol.11(6), pp.1811-1820, June 2018. Tasks such as the monitoring of natural disasters or the detection of change highly benefit from complementary information about an area or a specific object of interest. The required information is provided by fusing highly accurate coregistered and georeferenced datasets. Aligned high-resolution optical and synthetic aperture radar (SAR) data additionally enable an absolute geolocation accuracy improvement of the optical images by extracting accurate and reliable ground control points (GCPs) from the SAR images. In this paper, we investigate the applicability of a deep learning based matching concept for the generation of precise and accurate GCPs from SAR satellite images by matching optical and SAR images. To this end, conditional generative adversarial networks (cGANs) are trained to generate SAR-like image patches from optical images. For training and testing, optical and SAR image patches are extracted from TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe. The artificially generated patches are then used to improve the conditions for three known matching approaches based on normalized cross-correlation (NCC), scale-invariant feature transform (SIFT), and binary robust invariant scalable keypoints (BRISK), which are normally not usable for the matching of optical and SAR images. The results validate that NCC-, SIFT-, and BRISK-based matching greatly benefits, in terms of matching accuracy and precision, from the use of the artificial templates. The comparison with two state-of-the-art optical and SAR matching approaches shows the potential of the proposed method but also reveals some challenges and the necessity for further developments.
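Of the three matching approaches, NCC is the simplest to sketch. Below is a brute-force, zero-mean normalized cross-correlation matcher in numpy, shown only to make the matching criterion concrete; practical pipelines use FFT-based or library implementations.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the (row, col) offset with
    the highest zero-mean normalized cross-correlation score, plus the score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = image[i:i + th, j:j + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

In the paper's setting, `template` would be a cGAN-generated SAR-like patch and `image` the real SAR scene, yielding a GCP candidate at the best offset.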

Motoaki Mouri;Ichi Takumi;Hiroshi Yasukawa; "Development of Reliable and Stable QL1-NMF Algorithm for Analyzing Environmental ELF Magnetic Signals," vol.11(6), pp.1821-1831, June 2018. We previously developed two NMF algorithms (QL1-NMF1 and QL1-NMF2) using the quasi-L1 norm for analyzing environmental ELF magnetic field measurements. When the data included many outliers, the QL1-NMF algorithms returned better results than other BSS algorithms using the L1 norm. However, the derivative of the cost function in QL1-NMF1 was not based on a monotonically increasing function, which decreased the validity of the algorithm. QL1-NMF2, though based on a monotonically increasing derivative, had serious stability problems. The method therefore required improvements in both validity and stability. In the work described in this paper, we introduced new update functions that were based on a monotonically increasing derivative. Computer simulation results and real-data results confirmed that the new algorithm worked more stably than the previous ones. Moreover, we showed that the new algorithm was fast and accurate.

Yazhen Jiang;Xiaoguang Jiang;Ronglin Tang;Zhao-Liang Li;Yuze Zhang;Cheng Huang;Chen Ru; "Estimation of Daily Evapotranspiration Using Instantaneous Decoupling Coefficient From the MODIS and Field Data," vol.11(6), pp.1832-1838, June 2018. Daily evapotranspiration (ET) is of great significance among various practical applications including water management, drought monitoring, and climate change study. The ET estimations from remotely sensed models are usually instantaneous values. Various upscaling methods have been developed to extrapolate the instantaneous ET to the daily scale. In the applications of these methods, the accuracy of daily ET estimation relies on both the accuracy of instantaneous ET calculation and the upscaling methods. This paper used the decoupling model to estimate daily ET directly, according to the constancy of the decoupling coefficient (Ω) in the model over a diurnal cycle, without the calculation of instantaneous ET. The estimated daily ET was compared with the Eddy covariance measurements, which were corrected by the Bowen ratio method to close the energy imbalances. The result from field data alone showed that the coefficient of determination (R2) of the daily ET estimation was 0.860, with a root-mean-square error (RMSE) of 18.2 W/m2 and a bias of −4.7 W/m2. Combining MODIS data and field data, the estimated daily ET had an R2 of 0.860, an RMSE of 21.6 W/m2, and a bias of −5.0 W/m2. Therefore, it is feasible and effective to obtain daily ET using the remote-sensing-based instantaneous Ω to replace the daily value in the decoupling model.

Yu-Ze Zhang;Li Ni;Hua Wu;Xiao-Guang Jiang; "Evaluations of the Wavelet-Transformed Temperature and Emissivity Separation Method: Lessons Learned From Simulated and Field-Measured TIR Data," vol.11(6), pp.1839-1847, June 2018. Mathematically, once the measured radiance has been corrected for atmospheric effects, the only issue when determining the land surface temperature and emissivity (LST and LSE) is solving the ill-posed problem in the radiative transfer equation (RTE). Recently, based on wavelet transform theory, a so-called wavelet-transformed temperature and emissivity separation (WTTES) method has been proposed for retrieving LST and LSE from hyperspectral data. Although a previous article provided an initial suggestion after analyzing the uncertainties under several typical error conditions, it also noted that considerable work was still necessary to achieve a reliable method for driving the WTTES algorithm, particularly under different situations. To complement the previous analysis of the WTTES algorithm, this paper presents a more detailed and comprehensive evaluation in which we changed the wavelet functions, varied the wavelet levels, and biased the atmospheric profiles. The results show that the WTTES algorithm is insensitive to the choice of wavelet function and remains stable in most circumstances. A wavelet level of n = 3 is recommended when the NEΔT is approximately 0.2 K. When a higher level of noise was found, a level of n = 4 could then be used to better overcome the noise. When a lower level of noise was found, a level of n = 2 could be used to further refine the spectral features. Additionally, we also found that the WTTES algorithm could have problems when atmospheric effects were inaccurately compensated for, especially for wet-warm profiles.

D. Arun Kumar;Saroj K. Meher;K. Padma Kumari; "Adaptive Granular Neural Networks for Remote Sensing Image Classification," vol.11(6), pp.1848-1857, June 2018. Monitoring and measuring the conditions of the earth surface play an important role in the domain of global change research. In this direction, various methodologies exist for land cover classification of remote sensing images. A neural network (NN) is one such system that has the ability to classify land cover of remote sensing images, but most often its accuracy deteriorates with the level of uncertainty. A granular NN (GNN), which incorporates an information granulation operation into the NN, is one solution to this issue. However, the complexity of remote sensing datasets demands a GNN with a changeable granular structure (shape and size of granules) based on the requirement. In this study, our objective is to develop a GNN with an adaptive granular structure for the classification of remote sensing images. The architecture of this adaptive GNN (AGNN) evolves according to the information present in the incoming labeled pixels. As a result, the AGNN improves the classification performance compared to other similar models. Performance of the model has been tested with hyperspectral and multispectral remote sensing images. Superiority of the proposed model over other similar methods has been verified with performance measurement metrics, such as overall accuracy, user's accuracy, producer's accuracy, dispersion score, and kappa coefficient.

Fang Chen;Huiyu Zhou;Christos Grecos;Peng Ren; "Segmenting Oil Spills from Blurry Images Based on Alternating Direction Method of Multipliers," vol.11(6), pp.1858-1873, June 2018. We exploit the alternating direction method of multipliers (ADMM) for developing an oil spill segmentation method, which effectively detects oil spill regions in blurry synthetic aperture radar (SAR) images. We commence by constructing energy functionals for SAR image deblurring and oil spill segmentation separately. We then integrate the two energy functionals into one overall energy functional subject to a linear mapping constraint that correlates the deblurred image and the segmentation indicator. The overall energy functional along with the linear constraint follows the form of ADMM and thus enables an effective augmented Lagrangian optimization. Furthermore, the iterative updates in the ADMM maintain information exchanges between the energy minimizations for SAR image deblurring and oil spill segmentation. Most existing blurry image segmentation strategies tend to consider deblurring and segmentation as two independent procedures with no interactions, and the operation of deblurring is thus not guided for obtaining an accurate segmentation. In contrast, we integrate deblurring and segmentation into one overall energy minimization framework with information exchanges between the two procedures. Therefore, the deblurring procedure is inclined to operate in favor of the more accurate oil spill segmentation. Experimental evaluations validate that our framework outperforms the separate deblurring and segmentation strategy for detecting oil spill regions in blurry SAR images.
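The ADMM machinery the method builds on (alternating primal updates plus a dual ascent step under a linear constraint) can be illustrated on a toy problem. The sketch below solves a lasso split, not the paper's coupled deblurring/segmentation functionals; `lam` and `rho` are illustrative values.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Toy ADMM: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.
    Shows the alternating x/z updates and the dual update that couple the
    two subproblems, analogous to how deblurring and segmentation exchange
    information in the paper's overall energy functional."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))             # cached x-update solve
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                    # x-update: ridge-type solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # z-update: soft-threshold
        u += x - z                                       # dual ascent on the constraint
    return z
```

The key structural point mirrored from the paper is that each subproblem is solved with the other's current iterate in view, so neither optimization runs blind.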

Cheng Zhang;Hao Liu;Lijie Niu;Ji Wu; "CSMIR: An L-Band Clock Scan Microwave Interferometric Radiometer," vol.11(6), pp.1874-1882, June 2018. The clock scan microwave interferometric radiometer (CSMIR) is a new concept of synthetic aperture radiometer. It has the potential to be applied to future Solar Polar Orbit and Geostationary Earth Orbit missions due to the advantages of a simple and deployable array structure, easy calibration, and flexible visibility sampling. This paper discusses the development of an L-band CSMIR prototype and presents achievements in hardware design, calibration methods, and overall performance tests through outdoor experiments. The experimental results demonstrate the effectiveness of this new concept, although some systematic errors and imperfections still exist. Some lessons learned from this prototype, helpful for improving future performance, are also discussed.

Gerard Portal;Mercè Vall-llossera;María Piles;Adriano Camps;David Chaparro;Miriam Pablos;Luciana Rossato; "A Spatially Consistent Downscaling Approach for SMOS Using an Adaptive Moving Window," vol.11(6), pp.1883-1894, June 2018. The European Space Agency (ESA)'s Soil Moisture and Ocean Salinity (SMOS) is the first spaceborne mission using L-band radiometry to monitor the Earth's global surface soil moisture (SM). After more than 7 years in orbit, many studies have contributed to improve the quality and applicability of SMOS-derived SM maps. In this research, a novel downscaling algorithm for SMOS is proposed to obtain high-resolution (HR) SM maps at 1 km (L4), from the ∼40 km native resolution of the instrument. This algorithm introduces the concept of a shape adaptive moving window as an improvement of the current semi-empirical downscaling approach at SMOS Barcelona Expert Center, based on the "universal triangle". Its inputs are as follows: the SMOS SM (L3 at ∼40 km spatial resolution), the vertical and the horizontal SMOS brightness temperatures (L1C at ∼40 km), and the HR normalized difference vegetation index and land surface temperature from optical-based sensors. The proposed method has the following advantages: HR SM maps are obtained while maintaining the dynamic range from the original L3 product; energy is conserved, because differences between aggregated L4 and L3 SM maps are negligible; and it can be applied to continental areas, even when they integrate different climates. A comparison of SMOS L3 and L4 products with in situ data for networks located in Spain and Australia shows good agreement in terms of correlation and root mean square error. The proposed method is shown to capture 1-km SM spatial variability while preserving the quality of SMOS at its native resolution.
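A heavily simplified version of "universal triangle"-style downscaling can be sketched as fitting a linking model between coarse-scale SM and aggregated optical predictors, then applying it at high resolution. The linear model and block-mean aggregation below are illustrative assumptions; the actual algorithm uses a shape-adaptive moving window and a semi-empirical linking function.

```python
import numpy as np

def downscale_sm(sm_coarse, ndvi_hr, lst_hr, scale):
    """Fit SM ~ a + b*NDVI + c*LST at the coarse scale (by aggregating the
    HR predictors), then apply the fitted model pixel-wise at HR.
    `scale` is the coarse/fine resolution ratio."""
    H, W = sm_coarse.shape
    # aggregate HR predictors to the coarse grid by block averaging
    ndvi_c = ndvi_hr.reshape(H, scale, W, scale).mean(axis=(1, 3))
    lst_c = lst_hr.reshape(H, scale, W, scale).mean(axis=(1, 3))
    X = np.column_stack([np.ones(H * W), ndvi_c.ravel(), lst_c.ravel()])
    coef, *_ = np.linalg.lstsq(X, sm_coarse.ravel(), rcond=None)
    # apply the fitted linking model at high resolution
    Xh = np.column_stack([np.ones(ndvi_hr.size), ndvi_hr.ravel(), lst_hr.ravel()])
    return (Xh @ coef).reshape(ndvi_hr.shape)
```

Because the linking model is fitted against the coarse product itself, block-averaging the HR output approximately reproduces the coarse map, which is the energy-conservation property the paper emphasizes.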

Saswati Datta;W. Linwood Jones;Ruiyao Chen; "Identifying Desert Regions for Improved Intercalibration of Satellite Microwave Radiometers," vol.11(6), pp.1895-1904, June 2018. This paper describes the technical approach used to generate a homogeneous warm brightness temperature scene binary mask over the Earth for the intercalibration of microwave imagers. The objective is to identify homogeneous desert scenes as targets for the radiometric intercalibration of multisource, multisensor systems such as the Global Precipitation Measurement (GPM) constellation. In this paper, a method is developed to generate a mask over continental Australia using 10, 18, and 37 GHz vertically and horizontally polarized radiance observations from GPM Microwave Imager data. The major factors considered are diurnal variation, the second Stokes parameter, spatial homogeneity, and surface emissivity. A major advantage of this tool is its minimal, almost negligible, dependence on any external ancillary modeled or observed data.

Faisal Alquaied;Ruiyao Chen;W. Linwood Jones; "Emissive Reflector Correction in the Legacy Version 1B11 V8 (GPM05) Brightness Temperature of the TRMM Microwave Imager," vol.11(6), pp.1905-1912, June 2018. The Tropical Rainfall Measurement Mission (TRMM) observatory has provided nearly a two-decade time series of precipitation measurements, which is very beneficial for scientific understanding of the Earth's hydrological cycle. This paper concerns the passive microwave sensor named the TRMM microwave imager (TMI) and the development of the legacy version TMI 1B11 V8 (GPM05) brightness temperature data product that was released in late 2017. In particular, version 8 includes a robust main reflector emissivity (brightness temperature, Tb) correction that is based upon rigorous electromagnetic principles and on-orbit radiometric measurements. This emissivity correction process involves two steps, namely, the derivation of the main reflector emissivity coefficients for each TMI channel by analyzing deep space maneuvers and the determination of the reflector physical temperature for the entire 17-plus-year time series by analyzing the single difference between the measured Tb and the simulated Tb from a radiative transfer model.

David W. Draper; "Radio Frequency Environment for Earth-Observing Passive Microwave Imagers," vol.11(6), pp.1913-1922, June 2018. This paper examines the radio frequency interference (RFI) environment within microwave imager bands for sensors that have observed the earth over the last two decades. Since the microwave imagers have used various bands both within and outside of International Telecommunications Union (ITU) allocated frequencies for passive satellite earth exploration, this survey provides valuable insight into band selection and mitigation strategies for future missions. Several conclusions are drawn from this paper. First, significant land-based RFI exists at C-band. The two-band mitigation solution for C-band utilized by Advanced Microwave Scanning Radiometer 2 provides some reduction of high-level RFI (>10 K), and therefore, may have merit for future missions in cases where only two subbands can be feasibly implemented. At X-band, ocean reflections from direct broadcast and communication satellites (especially around Europe) provide considerable interference above the ITU-allocated 10.6–10.7 GHz band, whereas very little reflected interference is observed within the allocated band. Good out-of-band rejection is required to avoid the reflected RFI above 10.7 GHz. Both inside and outside of the X-band allocation, significant terrestrial RFI exists. For K-band, the 19.35-GHz band utilized by the Special Sensor Microwave Imager Sounder and the Tropical Rainfall Measuring Mission Microwave Imager avoids the reflected RFI from satellites that is present around the continental United States for satellites observing within the 18.6–18.8 GHz allocated band. The analysis suggests that the 19.35-GHz band may be preferable to the 18.7-GHz allocated band for ocean applications if avoiding RFI is desired.

Faisal Alquaied;Ruiyao Chen;W. Linwood Jones; "Hot Load Temperature Correction for TRMM Microwave Imager in the Legacy Brightness Temperature," vol.11(6), pp.1923-1931, June 2018. The Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), launched in late November 1997, has provided a >17-year record of tropical precipitation measurements, and prior to the launch of the global precipitation measurement (GPM) satellite in February 2014, the TMI served as the radiometric transfer standard for the TRMM constellation radiometers. Thus, the radiometric calibration of the TRMM constellation radiometers depends upon that of TMI. Therefore, the level-1 TMI brightness temperatures have been reprocessed to produce the TMI legacy brightness temperature product, namely 1B11 version 8 (also known as GPM05), which was released by NASA in late 2017. During this data reprocessing, a radiometric calibration anomaly associated with the hot load was discovered. This error was caused by an occasional transient solar intrusion, which introduced a systematic error in the brightness temperature. To remove this error, an independent hot load correction algorithm was implemented. In this paper, the process of identifying the issue, developing the solution, and evaluating the results is described.

Ian Stuart Adams;Justin Bobak; "The Feasibility of Detecting Supercooled Liquid With a Forward-Looking Radiometer," vol.11(6), pp.1932-1938, June 2018. Icing due to supercooled liquid poses a threat to manned and unmanned aircraft. Particularly with the proliferation of unmanned systems, lightweight passive sensors could allow the remote detection of supercooled liquid. A three-dimensional radiative transfer model is utilized to determine the feasibility of a forward-viewing passive sensor for remotely detecting hazardous icing conditions. An analysis of simulated passive microwave spectra for a forward-viewing passive sensor finds dependencies on the liquid water content, cloud distance, and temperature, and sensitivity is noted across the millimeter-wave spectrum. Spectral flattening in the shoulders of molecular resonances corresponds to the presence of clouds, while saturated brightness temperatures near the transition lines are linked to the ambient temperature. A principal component analysis shows significant information content at G-band near the water vapor molecular transition, suggesting compact radiometer systems to detect and range supercooled liquid are feasible. Additional information content may be available at Ka-band, suggesting such observations would be useful when size and mass are not constrained.
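The principal component step amounts to an SVD of the mean-centered simulated spectra. A generic numpy sketch, where rows are simulated scenes, columns are channels, and `k` is an illustrative component count:

```python
import numpy as np

def pca_components(spectra, k=2):
    """PCA of a scenes-by-channels matrix via SVD; returns the top-k
    principal directions and the fraction of variance each explains."""
    Xc = spectra - spectra.mean(axis=0)          # center each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()                # explained-variance fractions
    return Vt[:k], var[:k]
```

Channels with large loadings in the leading components are the ones carrying most of the information content, which is how band-by-band significance can be judged.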

Fang-Cheng Zhou;Zhao-Liang Li;Hua Wu;Si-Bo Duan;Xiaoning Song;Guangjian Yan; "A Practical Two-Stage Algorithm for Retrieving Land Surface Temperature from AMSR-E Data—A Case Study Over China," vol.11(6), pp.1939-1948, June 2018. Land surface temperature (LST) is an important parameter that directly affects the water and heat balance between the Earth surface and atmosphere. Mapping the LST distribution at continuous temporal and wide spatial scales is very helpful for researching many physical and biochemical processes. Remotely sensed instruments are the key players in these studies. Passive microwave remotely sensed data have the advantage of retrieving land parameters under nearly all weather conditions because of their power to penetrate clouds. In this study, a practical two-stage algorithm, which uses single-frequency and double-polarization passive microwave brightness temperature observations, is presented to retrieve the LST over China. The vertically polarized land surface emissivity (LSE) at 18.7 GHz is first estimated by a parameterization relationship with the polarization ratio (PR), which is defined as the ratio of the horizontal to the vertical brightness temperatures at the same frequency. Subsequently, the LST is retrieved using the estimated LSE by ignoring the atmospheric effect. Evaluation on simulated data shows an RMSE of 1.45 K, which is very encouraging. Cross validations against satellite thermal infrared products are performed from daily to monthly time scales. The daily accuracy is 3.04 K and the bimonthly accuracy is 4.43 K. A high positive bias in arid and semiarid regions and a negative bias in frequently cloudy regions are noticeable in both comparisons. These biases do not reveal the uncertainties of this LST retrieval algorithm; on the contrary, the retrieved Advanced Microwave Scanning Radiometer-EOS (AMSR-E) LST appears to be more reasonable compared to the thermal infrared LST under certain circumstances.
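The two-stage retrieval described above can be sketched in a few lines; the linear emissivity parameterization and its coefficients below are hypothetical placeholders for illustration, not the paper's fitted relationship:

```python
import numpy as np

def retrieve_lst(tb_v, tb_h, a=1.0, b=-0.35):
    """Two-stage LST sketch: (1) estimate the vertically polarized land
    surface emissivity (LSE) at 18.7 GHz from the polarization ratio
    PR = TbH / TbV via a linear parameterization (a and b are hypothetical
    placeholders); (2) invert LST = TbV / LSE, ignoring the atmospheric
    effect as the paper does."""
    pr = tb_h / tb_v               # polarization ratio (horizontal / vertical)
    lse_v = a + b * (1.0 - pr)     # hypothetical linear parameterization
    return tb_v / lse_v

lst = retrieve_lst(np.array([270.0]), np.array([255.0]))
```

With PR close to 1 (smooth, emissive surface) the estimated emissivity stays near its maximum, so the retrieved LST stays close to the vertically polarized brightness temperature.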

Bjorn Lambrigtsen;H. Van Dang;Francis Joseph Turk;Svetla M. Hristova-Veleva;Hui Su;Yixin Wen; "All-Weather Tropospheric 3-D Wind From Microwave Sounders," vol.11(6), pp.1949-1956, June 2018. In this paper, we describe a new approach to measure tropospheric wind vectors from space that has significant advantages over current methods and borders on a breakthrough capability. We report on the results of simulations of observations from a geostationary microwave sounder and, briefly, from a small cluster of low-earth-orbiting CubeSats. The geostationary simulations show a precision of better than 2 m/s for wind speed and 15° for direction. Wind speed bias is about –1 m/s. The transfer function is largely linear. Our studies show that this methodology will meet wind measurement requirements specified by the World Meteorological Organization.

Guili Chen;Jie Guang;Yong Xue;Ying Li;Yahui Che;Shaoqi Gong; "A Physically Based PM<inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math> </inline-formula> Estimation Method Using AERONET Data in Beijing Area," vol.11(6), pp.1957-1965, June 2018. Over the past few years, regional air pollution has frequently occurred in Mid-Eastern China, especially in Beijing. As the primary pollutant in urban air, atmospheric particulate matter (PM) not only leads to a decrease in atmospheric visibility, but also increases the mortality and morbidity of respiratory system diseases. By analyzing aerosol volume size distribution data downloaded from the AERONET official website, we find that the aerosol size distribution in Beijing exhibits a bimodal log-normal structure and that the fine-mode parameters in AERONET data are mainly contributed by PM<inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula>. In this paper, a physically based model is developed to estimate the concentration of PM<inline-formula> <tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula>, in which fine-mode aerosol optical depth (AOD) at 440, 550, and 675 nm, the effective radius of the fine particles, ground-based fine particulate matter (PM <inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula>) data, relative humidity, and boundary layer height data from 2015 to 2016 are used. Those from 2015 are used for calculating integrated extinction efficiencies (〈Qext〉) based on the model, and those from 2016 are used for PM <inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula> validation. Results show that the R2 of retrieved PM<inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula> against ground-based PM<inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula> reaches 0.70 and the RMSE is 33.67 μg/m3 at the Beijing site at 440 nm.
This study concludes that this method has the potential to retrieve PM<inline-formula><tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula> by using AERONET AOD in Beijing, which is independent of ground-based PM<inline-formula> <tex-math notation="LaTeX">$_{text{2.5}}$</tex-math></inline-formula> measurement.
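A minimal sketch of the physical AOD-to-PM2.5 conversion the abstract describes, assuming a well-mixed boundary layer; the extinction efficiency, particle density, and the hygroscopic growth correction below are illustrative assumptions, not the paper's calibrated 〈Qext〉 values:

```python
def estimate_pm25(aod_fine, blh_m, r_eff_um, rh_pct, q_ext=2.0, rho_g_cm3=1.5):
    """Convert fine-mode AOD to a PM2.5 mass concentration (ug/m^3) by
    spreading the column extinction over the boundary layer height.
    All constants here are illustrative placeholders."""
    rho_g_m3 = rho_g_cm3 * 1e6
    r_eff_m = r_eff_um * 1e-6
    # mass extinction efficiency of a monodisperse sphere ensemble, m^2/g
    mee = 3.0 * q_ext / (4.0 * rho_g_m3 * r_eff_m)
    # hypothetical hygroscopic growth factor removing the humidity swelling
    f_rh = (1.0 - rh_pct / 100.0) ** -0.6
    pm_g_m3 = aod_fine / (blh_m * mee * f_rh)        # dry mass concentration
    return pm_g_m3 * 1e6

pm = estimate_pm25(aod_fine=0.5, blh_m=1000.0, r_eff_um=0.2, rh_pct=60.0)
```

For typical Beijing haze inputs this yields tens of μg/m3, the right order of magnitude for the validation statistics quoted above.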

Ülo Suursaar;Tarmo Kall; "Decomposition of Relative Sea Level Variations at Tide Gauges Using Results From Four Estonian Precise Levelings and Uplift Models," vol.11(6), pp.1966-1974, June 2018. Variations in Estonian sea level records (1842–2016) are interpreted using harmonic and sliding window trend analysis together with the latest findings on crustal movements in the region of the Baltic Sea. Considering the most recent relative-to-geoid land uplift (LU) rates from the semiempirical LU model NKG2016LU_lev, it was possible, probably for the first time, to properly interpret the climate-related sea level variation components in the Estonian tide gauges. The LU relative to geoid in Estonia (0.3–3.4 mm/yr) partly compensates for the global sea level rise (GSLR). Trends in Estonian relative sea level vary by site between –1.05 and 1.14 mm/yr (in 1954–2016) or between –0.17 and 1.14 mm/yr (in 1900–2010), yielding a “local sea level rise” (approximately 2.2 to 2.8 mm/yr in 1900–2010), which is faster by ∼0.8 mm/yr than the GSLR over a similar period. This gap mainly originates from local land subsidence and variations in water exchange processes between the Baltic Sea and the World Ocean. Moreover, due to high intra- and inter-annual variability (standard deviations in detrended monthly data series from 20 to 22 cm), the sea level trend estimates are not stable enough, and the usability of the Estonian time series for contributing to GSLR estimations is relatively low.
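The decomposition above reduces to simple arithmetic: a tide gauge records sea level relative to the rising crust, so the climate-related ("local") rise is the observed relative trend plus the geoid-relative land uplift rate. A sketch with illustrative sample values in the abstract's range:

```python
def local_sea_level_rise(relative_trend_mm_yr, land_uplift_mm_yr):
    """Climate-related sea level rise = relative (tide gauge) trend plus
    geoid-relative land uplift, since uplift lowers the recorded trend."""
    return relative_trend_mm_yr + land_uplift_mm_yr

# e.g., a station with a -0.2 mm/yr relative trend on crust uplifting 2.9 mm/yr
rise = local_sea_level_rise(-0.2, 2.9)
```

With uplift rates of 0.3-3.4 mm/yr, even negative relative trends can correspond to a local sea level rise of 2-3 mm/yr, as the abstract reports.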

Scott Gleason;Valery U. Zavorotny;Dennis M. Akos;Sara Hrbek;Ivan PopStefanija;Edward J. Walsh;Dallas Masters;Michael S. Grant; "Study of Surface Wind and Mean Square Slope Correlation in Hurricane Ike With Multiple Sensors," vol.11(6), pp.1975-1988, June 2018. This paper analyzes and compares multiple techniques for estimating the mean square slope (MSS) of surface waves during Hurricane Ike in the Gulf of Mexico and studies the correlation of the MSS estimates with wind speed measurements along the same tracks. Three separate instruments collected measurements in parallel, including a GPS reflectometry (GPS-R) receiver, a stepped frequency microwave radiometer (SFMR), and a wide swath radar altimeter (WSRA). These datasets were used to study the correlation between the flight level and near-surface wind and MSS during Hurricane Ike in 2008. The GPS-R, SFMR, and WSRA instruments recorded temporally and spatially coincident data during two passes over the hurricane eye. This paper estimates the ocean surface MSS using GPS-R for two eye transects using: a least squares model fitting technique, the reflected signal waveform width, and an integration of the reflected signals in an area around the peak. Subsequently, the correlations between the GPS-R and WSRA MSS estimates and the SFMR wind speed estimates are compared to reveal regions of high and low surface MSS to wind speed correlation. Finally, a relationship between wind and MSS was derived from the GPS-R and SFMR data and is compared to existing MSS/wind models, including the results obtained by Katzberg et al. for hurricane conditions.

Wen-Xia Li;Geng-Ming Jiang;Guicai Li;Chuan Li; "Intercalibration of Advanced Himawari Imager's Infrared Channels With IASI/Metop-B 1C Data," vol.11(6), pp.1989-1996, June 2018. This work addresses the intercalibration of infrared channels 7 to 16 of the Advanced Himawari Imager (AHI) aboard Himawari-8 against the hyperspectral channels of the Infrared Atmospheric Sounding Interferometer (IASI) on board Metop-B using the hyperspectral convolution method. The matching measurements in January, March, July, and November 2016 are collected over a tropical intercalibration area (100 °E∼180 °E, 10 °S∼10 °N) under the conditions of collocation, absolute observation time difference of less than 5 min <inline-formula><tex-math notation="LaTeX">${(vert Delta text{time}vert }< {{5^{prime})}}$ </tex-math></inline-formula>, absolute viewing azimuth angle difference of less than 10° <inline-formula> <tex-math notation="LaTeX">${(vert Delta text{VAA}vert }< {{10^circ)}}$</tex-math></inline-formula>, and <inline-formula><tex-math notation="LaTeX">${{vert cos}},theta _{1}{{/ cos}},theta _{2}-{{ 1vert }}< {{0.015}}$</tex-math></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$theta _{1}$</tex-math> </inline-formula> and <inline-formula><tex-math notation="LaTeX">$theta _{2}$</tex-math></inline-formula> are the viewing zenith angles of AHI and IASI measurements, respectively. The daytime matching measurements in channel 7 (centered at about <inline-formula><tex-math notation="LaTeX">${text{3.88 }} mu {text{m}}$</tex-math> </inline-formula>) contaminated by strong and fast-changing specular reflected solar irradiance have weak correlation and consequently are excluded from the intercalibration. The results show that the AHI and IASI matching measurements have strong linear correlation, and the intercalibration coefficients (slope and intercept) are obtained through linear regression on the qualified matching measurements.
The linear fit results indicate that the in-orbit calibration of the AHI infrared channels is stable over the four months; however, some calibration biases exist against the IASI/Metop-B hyperspectral channels. The intercalibration results in this work are broadly consistent with those of the Meteorological Satellite Center of the Japan Meteorological Agency.
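The two steps of the pipeline above can be sketched as follows: convolve hyperspectral radiance with an assumed spectral response function (SRF), then regress the matched imager measurements against the simulated ones. The SRF and the synthetic matchups below are illustrative, not the published AHI response or the paper's data:

```python
import numpy as np

def convolve_to_band(radiance, srf):
    """Hyperspectral convolution: SRF-weighted average of hyperspectral
    radiance sampled on the same spectral grid (srf is an assumed array)."""
    srf = np.asarray(srf, dtype=float)
    return np.sum(np.asarray(radiance, dtype=float) * srf) / np.sum(srf)

def intercalibrate(bt_ahi, bt_iasi_sim):
    """Slope and intercept from linear regression over qualified matches."""
    slope, intercept = np.polyfit(bt_iasi_sim, bt_ahi, 1)
    return slope, intercept

# synthetic matchups: imager reads 1% warm with a -2 K offset
x = np.linspace(220.0, 300.0, 50)
slope, intercept = intercalibrate(1.01 * x - 2.0, x)
```

A slope near 1 and intercept near 0 would indicate an unbiased imager channel; deviations quantify the calibration bias the abstract refers to.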

Yingjie Li;Jing Chen;Qingmiao Ma;Hankui K. Zhang;Jane Liu; "Evaluation of Sentinel-2A Surface Reflectance Derived Using Sen2Cor in North America," vol.11(6), pp.1997-2021, June 2018. Surface reflectance, derived from top-of-atmosphere satellite measurements, provides an important dataset for reliably monitoring land change. In this study, the Sentinel-2A surface reflectance was generated using the Sentinel-2 atmospheric correction (Sen2Cor) processor. To evaluate this dataset, surface data at 40 sites of the Aerosol Robotic Network over North America from January 2016 to August 2017 were collected and processed. The surface reflectance reference was derived from the second simulation of the satellite signal in the solar spectrum–vector (6SV) code. The aerosol optical thickness (AOT), water vapor, surface reflectance, and three spectral indices generated by Sen2Cor were evaluated using metrics including the accuracy, precision, and uncertainty (A, P, and U). The results show that, due to the limitations of the Sen2Cor aerosol retrieval algorithm, the Sentinel-2A AOT was significantly overestimated, with a relative accuracy of over 160%. The Sen2Cor surface reflectance is generally overestimated, especially for bright pixels, except for the cirrus band. For the 12 Sentinel-2A bands, the mean values of relative A, P, and U are 4.15%, 13.44%, and 14.92%, respectively. Among the three spectral indices, the normalized difference vegetation index performs best, with a correlation coefficient of 0.973 against the surface data. Furthermore, the Sen2Cor surface reflectance was compared with other satellite products. The mean correlation coefficient between Sentinel-2A and Landsat 8 surface reflectance is found to be 0.761. This study suggests that a better AOT retrieval is critical for improving Sen2Cor in the future.
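The A/P/U metrics used above have standard definitions in surface reflectance validation (mean bias, the standard deviation of the bias, and the RMS difference, so that U² = A² + P²); a sketch, with the definitions assumed to match that common usage:

```python
import numpy as np

def apu(estimate, reference):
    """Accuracy = mean difference (bias), precision = standard deviation
    of the difference, uncertainty = RMS difference; U^2 = A^2 + P^2."""
    d = np.asarray(estimate, dtype=float) - np.asarray(reference, dtype=float)
    a = d.mean()                 # accuracy (systematic bias)
    p = d.std(ddof=0)            # precision (scatter around the bias)
    u = np.sqrt(np.mean(d ** 2)) # uncertainty (total RMS error)
    return a, p, u

a, p, u = apu([0.11, 0.22, 0.33], [0.10, 0.20, 0.30])
```

Dividing each metric by the mean reference value gives the relative A, P, and U percentages quoted in the abstract.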

Zhengkun Qin;Xiaolei Zou; "Direct Assimilation of ABI Infrared Radiances in NWP Models," vol.11(6), pp.2022-2033, June 2018. The GOES-R satellite was successfully launched on November 19, 2016, with an Advanced Baseline Imager (ABI) on board. ABI is a new imager and provides more advanced capabilities for weather applications through more channels and higher temporal and spatial resolutions than the earlier GOES imagers. In this study, impacts from assimilating ABI brightness temperatures in NWP models on quantitative precipitation forecasts (QPFs) are presented and compared with those from assimilating GOES-13/-15 imager data. Cloudy and precipitation-affected brightness temperatures are removed in assimilation by using an infrared (IR) cloud algorithm in quality control. This quality control can be applied to ABI data both day and night. Assimilation of the ten ABI IR channels improves the 24-h forecast accuracy of both temperature and specific humidity fields when compared with radiosonde observations. It is also shown that the assimilation of ABI IR channels produces positive impacts on QPFs. The impacts of ABI data assimilation on QPFs are slightly larger than those from assimilation of GOES-13 and -15 imager observations.

Hongli Liu;Jinshui Zhang;Yaozhong Pan;Guanyuan Shuai;Xiufang Zhu;Shuang Zhu; "An Efficient Approach Based on UAV Orthographic Imagery to Map Paddy With Support of Field-Level Canopy Height From Point Cloud Data," vol.11(6), pp.2034-2046, June 2018. Unmanned aerial vehicle technology, which is capable of acquiring centimeter-level orthographic images, provides a promising way to map the paddy rice extent. However, crop spectral mixture is still the major factor that limits the performance of paddy rice identification when using spectral information alone. In order to eliminate the influence of spectral mixture on crop mapping accuracy, an innovative method was proposed to produce field-level canopy height data by calculating the elevation difference between the vegetated and nonvegetated parcels. We applied support vector machine on four types of datasets: 1) Pixel-based Spectral Features (PSF); 2) PSF and Canopy Height Features; 3) Object-based Spectral Features (OSF); and 4) Object-based Spectral Features and Canopy Height Features (OSCHF). The results showed that the full-class classification accuracy from OSCHF was the highest, with an overall accuracy of 94.04% and a Kappa of 0.91, which was significantly higher than the result using OSF, and the accuracy of PSF was the worst. The OSF could alleviate the speckle noise problem to some extent, but grapes and trees were still to some extent misclassified as paddy rice due to their similar spectra. Fortunately, these confusions were effectively avoided by including the canopy height.

Fadi Kizel;Jón Atli Benediktsson;Lorenzo Bruzzone;Gro B. M. Pedersen;Olga K. Vilmundardóttir;Nicola Falco; "Simultaneous and Constrained Calibration of Multiple Hyperspectral Images Through a New Generalized Empirical Line Model," vol.11(6), pp.2047-2058, June 2018. The empirical line (EL) calibration method is commonly used for atmospheric correction of remotely sensed spectral images and recovery of surface reflectance. The current EL-based methods are applicable only to single images. Therefore, the use of EL calibration is impractical for imaging campaigns, where many (partially overlapped) images are acquired to cover a large area. In addition, the EL results are unconstrained, and undesired reflectance values below 0% or above 100% can be obtained. In this paper, we use the standard EL model to formulate a new generalized empirical line (GEL) model. Based on the GEL, we present a novel method for simultaneous and constrained calibration of multiple images. This new method allows for calibration through multiple image constrained empirical line (MIcEL) and three additional calibration modes. Given a set of images, we use the available ground targets and automatically extracted tie points between overlapping images to calibrate all the images in the set simultaneously. Quantitative and visual assessments of the proposed method were carried out relative to the off-the-shelf quick atmospheric correction (QUAC) method, using real hyperspectral images and field measurements. The results clearly show the superiority of MIcEL with respect to the minimization of the difference between the reflectance values of the same object in different overlapping images. An assessment of the absolute accuracy, with respect to 11 field measurement points, shows that the accuracy of MIcEL, with an average mean absolute error (MAE) of ∼11%, is comparable to that of QUAC.
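For reference, the standard (single-image, unconstrained) EL model that the GEL generalizes fits a per-band gain and offset from ground targets; the clip below mirrors the paper's 0-100% reflectance constraint. The target values are illustrative:

```python
import numpy as np

def empirical_line_fit(dn_targets, refl_targets):
    """Fit a gain/offset mapping at-sensor values to surface reflectance
    using ground calibration targets (the paper's MIcEL extends this
    jointly over multiple overlapping images using tie points)."""
    gain, offset = np.polyfit(dn_targets, refl_targets, 1)
    return gain, offset

def apply_el(dn, gain, offset):
    # constrain to the physically valid reflectance range [0, 1]
    return np.clip(gain * np.asarray(dn, dtype=float) + offset, 0.0, 1.0)

gain, offset = empirical_line_fit([100.0, 200.0], [0.1, 0.3])
refl = apply_el([300.0, 1000.0], gain, offset)
```

The second sample pixel illustrates why the constraint matters: the unconstrained line would return 190% reflectance, which the clip caps at 100%.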

Upendra N. Singh;Tamer F. Refaat;Mulugeta Petros;Syed Ismail; "Evaluation of 2-μm Pulsed Integrated Path Differential Absorption Lidar for Carbon Dioxide Measurement—Technology Developments, Measurements, and Path to Space," vol.11(6), pp.2059-2067, June 2018. The societal benefits of understanding climate change through the identification of global carbon dioxide sources and sinks led to the recommendation for NASA's Active Sensing of Carbon Dioxide Emissions over Nights, Days, and Seasons space-based mission for global carbon dioxide measurements. For more than 15 years, the NASA Langley Research Center has developed several carbon dioxide active remote sensors using the differential absorption lidar technique operating at 2-μm wavelength. Recently, an airborne double-pulsed integrated path differential absorption lidar was developed, tested, and validated for atmospheric carbon dioxide measurement. Results indicated 1.02% column carbon dioxide measurement uncertainty and 0.28% bias over the ocean. Currently, this technology is progressing toward triple-pulse operation targeting both atmospheric carbon dioxide and water vapor—the dominant interfering molecule on carbon dioxide remote sensing. Measurements from the double-pulse lidar and the advancement of the triple-pulse lidar development are presented. In addition, measurement simulations with a space-based IPDA lidar, incorporating new technologies, are also presented to assess feasibility of carbon dioxide measurements from space.

Nima Ekhtari;Craig Glennie;Juan Carlos Fernandez-Diaz; "Classification of Airborne Multispectral Lidar Point Clouds for Land Cover Mapping," vol.11(6), pp.2068-2078, June 2018. Airborne light detection and ranging (lidar) data are widely used for high-resolution land cover mapping. The lidar elevation data are typically used as complementary information to passive multispectral or hyperspectral imagery to enable higher land cover classification accuracy. In this paper, we examine the capabilities of a recently developed multispectral airborne laser scanner, manufactured by Teledyne Optech, for direct classification of multispectral point clouds into ten land cover classes including grass, trees, two classes of soil, four classes of pavement, and two classes of buildings. The scanner, Titan MW, collects point clouds at three different laser wavelengths simultaneously, opening the door to new possibilities in land cover classification using only lidar data. We show that the recorded intensities of laser returns together with spatial metrics calculated from the three-dimensional (3D) locations of laser returns are sufficient for classifying the point cloud into ten distinct land cover classes. Our classification methods achieved an overall accuracy of 94.7% with a kappa coefficient of 0.94 using the support vector machine (SVM) method to classify single-return points and an overall accuracy of 79.7% and kappa coefficient of 0.77 using a rule-based classifier on multireturn points. A land cover map is then generated from the classified point cloud. We show that our results outperform the common approach of rasterizing the point cloud prior to classification by ∼4% in overall accuracy, 0.04 in kappa coefficient, and by up to 16% in commission and omission errors. This improvement however comes at the price of increased complexity and computational burden.
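A toy sketch in the spirit of the rule-based multireturn classifier above (the paper's single-return path uses an SVM): a pseudo vegetation index from two of the three laser wavelengths plus height above ground. The class set is reduced and every threshold is invented for illustration; they are not the authors' rules:

```python
import numpy as np

def classify_points(i_1064, i_1550, height_above_ground):
    """Label lidar points using intensities at 1064/1550 nm and geometry.
    The wavelength behavior assumed (vegetation brighter at 1064 nm than
    1550 nm) and the 2 m height split are illustrative assumptions."""
    ndvi_like = (i_1064 - i_1550) / (i_1064 + i_1550 + 1e-9)
    labels = np.full(height_above_ground.shape, "pavement", dtype=object)
    high = height_above_ground > 2.0
    veg = ndvi_like > 0.1
    labels[high & veg] = "tree"
    labels[high & ~veg] = "building"
    labels[~high & veg] = "grass"
    return labels

labels = classify_points(np.array([0.8, 0.2, 0.8, 0.2]),
                         np.array([0.2, 0.8, 0.2, 0.8]),
                         np.array([5.0, 5.0, 0.0, 0.0]))
```

The point is structural: with three wavelengths, spectral indices become available per point, so classification no longer needs coregistered passive imagery.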

Weike Feng;Li Yi;Motoyuki Sato; "Near Range Radar Imaging Based on Block Sparsity and Cross-Correlation Fusion Algorithm," vol.11(6), pp.2079-2089, June 2018. In this paper, we propose a novel processing model for compressive sensing (CS)-based stepped-frequency continuous-wave (SFCW) radar near range imaging, which takes the azimuth dependence of the reflection coefficients of targets into consideration. Based on the block sparse property of the received signal in the defined dictionary, 2-D images of the targets can be obtained at each spatial sampling point. A cross-correlation method is then employed to fuse these 2-D images to obtain the final result. Random undersampling of frequencies and spatial sampling points is conducted to reduce the data acquisition time, the data size, and the computational complexity. Experimental results of an SFCW-based MIMO radar and a ground-based SAR system show that, compared with the conventional matched filtering-based methods, the proposed method can provide higher-resolution images with reduced artifacts while using fewer frequencies and spatial sampling points. We also demonstrate that, compared to the conventional CS-based methods, due to the more suitable established observation model, the proposed method can achieve better imaging results with fewer artifacts for near range targets.

Mingdong Yang;Daiyin Zhu; "Efficient Space-Variant Motion Compensation Approach for Ultra-High-Resolution SAR Based on Subswath Processing," vol.11(6), pp.2090-2103, June 2018. Motion compensation (MOCO) is essential to obtain high-quality images for airborne synthetic aperture radar (SAR). In the case of ultrahigh resolution, how to compensate the space-variant motion errors accurately and efficiently is still a great challenge. An improved space-variant MOCO scheme is presented in this paper, taking into account both precision and efficiency. By subswath processing, an approximate range envelope compensation without interpolation can be implemented efficiently. Meanwhile, to correct the range-variant component of motion errors effectively, the conventional MOCO processing flow should be modified, guaranteeing that the phase compensation of the one-step MOCO algorithm is not affected by the approximate range envelope compensation. Moreover, based on subswath processing, the range variance of residual azimuth-variant errors can also be reduced, along with its influence on the azimuthal time–frequency relationship. Thus, the focusing effect in the case of wide beam is improved significantly, and the computational burden of the precise topography- and aperture-dependent (PTA) algorithm is also reduced. The proposed approach is useful in practice, and the resolution of images corrected by this approach reaches 0.1 m. Simulations with point targets and processing of real data have validated the proposed approach.

Andreas Ley;Olivier D’Hondt;Olaf Hellwich; "Regularization and Completion of TomoSAR Point Clouds in a Projected Height Map Domain," vol.11(6), pp.2104-2114, June 2018. Tomographic SAR is a technique that extends synthetic aperture radar (SAR) imaging to the third dimension by using several images of a scene acquired from different sensor positions. Three-dimensional point clouds extracted via tomographic processing methods are often corrupted by noise and artifacts that need to be corrected. In this paper, we propose a simple convex optimization formulation that exploits the geometric constraint that the line of sight between a sensor and a surface measurement must not be blocked by another surface. We demonstrate the ability of our method to denoise point clouds and fill holes on both synthetic data and experimental E-SAR data provided by the German Aerospace Center (DLR).

Fan Zhang;Xiaojie Yao;Hanyuan Tang;Qiang Yin;Yuxin Hu;Bin Lei; "Multiple Mode SAR Raw Data Simulation and Parallel Acceleration for Gaofen-3 Mission," vol.11(6), pp.2115-2126, June 2018. Gaofen-3 is China's first meter-level multipolarization synthetic aperture radar (SAR) satellite with 12 imaging modes for scientific and commercial applications. In order to evaluate the imaging performance of these modes, multiple mode SAR raw data simulation is highly demanded. In this paper, the multiple mode SAR simulation framework is briefly introduced to show how raw data simulation supports the development of Gaofen-3 and its ground processing system. As an engineering simulation, the complex working modes and practical evaluation requirements of the Gaofen-3 mission place higher demands on simulation simplification and data input/output (I/O) efficiency. To meet these requirements, two improvements have been proposed. First, the stripmap-mode-based multiple mode decomposition method is introduced to make a solid and simplified system simulation structure. Second, cloud computing and the graphics processing unit (GPU) are integrated to simulate the practically huge volume of raw data, resulting in improved calculation and data I/O efficiency. The experimental results of sliding spotlight imaging prove the effectiveness of the Gaofen-3 mission simulation framework and the decomposition idea. The results for efficiency assessment show that the GPU cloud method achieves about a <inline-formula><tex-math notation="LaTeX">$40times$</tex-math></inline-formula> speedup over a 16-core CPU parallel method and improves data throughput with the Hadoop distributed file system. These results prove that the simulation system has the merits of coping with multiple modes and huge-volume raw data simulation and can be extended to future space-borne SAR simulation.

Manabu Watanabe;Christian N. Koyama;Masato Hayashi;Izumi Nagatani;Masanobu Shimada; "Early-Stage Deforestation Detection in the Tropics With L-band SAR," vol.11(6), pp.2127-2133, June 2018. Polarimetric characteristics of early-stage deforestation areas were examined with L-band synthetic aperture radar (SAR) to develop a forest early warning system. Time series of PALSAR-2/ScanSAR data and Landsat data were used to examine the differences in detection timing of deforestation in the most active deforestation sites in Peru and Brazil. The γ0HH value increased by 0.8 dB on average for areas undergoing early-stages of deforestation, in which felled trees were left on the ground. The detection timing was almost the same as that of using the optical sensor. On the other hand, the γ0HV value does not show significant γ0 change at this early-stage of deforestation. The γ0HV value started to decrease 1–1.5 months after the deforestation was detected by Landsat. The γ0HV value decreased by 1.5–1.6 dB, 4–5 months after the deforestation. To understand the radar backscattering mechanism at the early-stage deforestation sites, field experiments were carried out 2–16 days after the PALSAR-2/fully-polarimetric observations. The early-stage deforestation sites revealed 1.1–2.5 dB and 2.9–4.0 dB increases for σ0HH and σ0surface, respectively. This can be explained by the direct (single bounce) scattering from felled trees left on the ground. The sites in which felled trees were removed and the surfaces were flattened showed 5.2–5.3 dB and 5.3–5.5 dB decreases for σ0HV and σ0volume, respectively. This can be explained by the lower sensitivity of the HV polarization to both the branches remaining on the ground and the surface roughness, along with its increased sensitivity to the forest biomass.
We conclude that an increase in L-band γ0HH is a good indicator for detecting early-stage deforestation sites, where felled trees are left on the ground, while γ0HV may be useful for detecting later-stage deforestation sites.
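The reported signatures suggest a simple change-detection rule; the sketch below flags early-stage clearing from an HH increase and later-stage clearing from an HV decrease, with thresholds taken loosely from the averages in the abstract (the authors' exact operational thresholds are not specified here):

```python
import numpy as np

def flag_deforestation(d_gamma0_hh_db, d_gamma0_hv_db,
                       hh_up_db=0.8, hv_down_db=1.5):
    """Return (early_stage, later_stage) boolean masks from temporal
    gamma0 changes in dB. Threshold values are illustrative."""
    d_hh = np.asarray(d_gamma0_hh_db, dtype=float)
    d_hv = np.asarray(d_gamma0_hv_db, dtype=float)
    early = d_hh >= hh_up_db        # felled trees left on the ground raise HH
    later = d_hv <= -hv_down_db     # biomass removal lowers HV months later
    return early, later

early, later = flag_deforestation([1.0, 0.2], [-0.3, -1.6])
```

The two channels are complementary: HH reacts within the first observation after felling, while HV confirms the clearing once the biomass is gone.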

Ming Liu;Shichao Chen;Jie Wu;Fugang Lu;Jun Wang;Taoli Yang; "Configuration Recognition via Class-Dependent Structure Preserving Projections With Application to Targets in SAR Images," vol.11(6), pp.2134-2146, June 2018. Locality preserving projections (LPP) can preserve the local structure of datasets effectively. However, LPP is not capable of separating samples that are close to each other in the high-dimensional space but belong to different classes. Focusing on this problem, a class-dependent structure preserving projections (CDSPP) algorithm is proposed in this paper to realize synthetic aperture radar (SAR) target configuration recognition. The class information is embedded into the LPP model, and the similarity matrix and the difference matrix are constructed according to the class information. The similarity matrix is utilized to preserve the local structure of samples belonging to the same class, which makes samples of the same class more compact after feature extraction. The difference matrix is utilized to separate samples that are close to each other in the high-dimensional space but belong to different classes. The target aspect angle sensitivity of SAR images can be alleviated by using the proposed CDSPP algorithm. Experiments are conducted on the moving and stationary target acquisition and recognition database. The results verify the effectiveness of the proposed algorithm, and comparisons with other algorithms further prove its advantage.
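For context, the baseline LPP that CDSPP extends solves a generalized eigenproblem built from the similarity matrix; a compact numpy-only sketch (the class-dependent similarity/difference construction of CDSPP itself is not reproduced here, and the fully connected toy graph below is illustrative):

```python
import numpy as np

def lpp(X, W, d=1):
    """Locality preserving projections: minimize sum_ij W_ij ||y_i - y_j||^2
    with y = X a, i.e., solve X^T L X a = lambda X^T D X a and keep the
    eigenvectors of the d smallest eigenvalues."""
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                                     # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ Dg @ X + 1e-8 * np.eye(X.shape[1])   # ridge for invertibility
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)
    return vecs[:, order[:d]].real

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))
W = np.ones((8, 8)) - np.eye(8)   # toy fully connected similarity graph
P = lpp(X, W, d=2)
```

CDSPP modifies this objective so that W pulls together same-class neighbors while a second, class-crossing difference matrix pushes apart nearby samples of different classes.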

* "Proceedings of the IEEE," vol.11(6), pp.2147-2147, June 2018.* Advertisement, IEEE.

* "Become a published author in 4 to 6 weeks," vol.11(6), pp.2148-2148, June 2018.* Advertisement, IEEE.

* "IEEE Geoscience and Remote Sensing Society," vol.11(6), pp.C3-C3, June 2018.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "Institutional Listings," vol.11(6), pp.C4-C4, June 2018.* Presents a listing of institutions relevant for this issue of the publication.

IEEE Geoscience and Remote Sensing Magazine - new TOC (2018 July 16) [Website]

* "[Front cover]," vol.6(1), pp.C1-C1, March 2018.* Presents the front cover for this issue of the publication.

* "Call for nominations: for the GRSS Administrative Committee," vol.6(1), pp.C2-C2, March 2018.* The IEEE Geoscience and Remote Sensing Society (GRSS) Nominations Committee calls upon our membership to nominate members to serve on the GRSS Administrative Committee (AdCom). A nominating petition carrying a minimum of 2% of the names of eligible Society members (~70), excluding students, shall automatically place that nominee on the slate. Such nominations must be made by 30 April 2018. The Nominations Committee may choose to include a name on the slate regardless of the number of names generated by the nominating petition process. Prior to submission of a nomination petition, the petitioner shall have determined that the nominee named in the petition is willing to serve if elected; and evidence of such willingness to serve shall be submitted with the petition. Candidates must be current Members of the IEEE and the GRSS. Petition signatures can be submitted electronically through the Society website or by signing, scanning, and electronically mailing the pdf file of the paper petition. The name of each member signing the paper petition shall be clearly printed or typed. For identification purposes of signatures on paper petitions, membership numbers and addresses as listed in the official IEEE Membership records shall be included. Only signatures submitted electronically through the Society website or original signatures on paper petitions shall be accepted. Further information is provided here.

* "Table of Contents," vol.6(1), pp.1-2, March 2018.* Presents the table of contents for this issue of the publication.

* "Staff Listing," vol.6(1), pp.2-2, March 2018.* Provides a listing of current staff, committee members and society officers.

James L. Garrison; "Welcome from the New Editor-in-Chief [From the Editor]," vol.6(1), pp.3-3, March 2018. Presents the introductory editorial for this issue of the publication.

Adriano Camps; "Greetings from Barcelona! [President's Message]," vol.6(1), pp.4-6, March 2018. Presents the President’s message for this issue of the publication.

* "BGDDS 2018," vol.6(1), pp.6-6, March 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "CISS 2018," vol.6(1), pp.6-6, March 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "Call for papers," vol.6(1), pp.7-7, March 2018.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

Fabio Dell'Acqua;Giuseppe Siciliano; "Technical Education on Aerospace and Remote Sensing: A Brief Global Overview," vol.6(1), pp.8-14, March 2018. The Lombardy Aerospace Industry Cluster [1] was founded in Italy in 2014 as the final step in constructing a representative institution for the regional system of aerospace industries. The process was initiated in 2009 with the first formal contacts between the local industry federation and the regional government of Lombardy, which led to the foundation of a provisional body (Distretto Aerospaziale Lombardo) in preparation for a more formalized industry cluster to come and in light of a mandate to represent the local aerospace industry politically.

Wei Li;Fubiao Feng;Hengchao Li;Qian Du; "Discriminant Analysis-Based Dimension Reduction for Hyperspectral Image Classification: A Survey of the Most Recent Advances and an Experimental Comparison of Different Techniques," vol.6(1), pp.15-34, March 2018. Hyperspectral imagery contains hundreds of contiguous bands with a wealth of spectral signatures, making it possible to distinguish materials through subtle spectral discrepancies. Because these spectral bands are highly correlated, dimensionality reduction, as the name suggests, seeks to reduce data dimensionality without losing desirable information. This article reviews discriminant analysis-based dimensionality-reduction approaches for hyperspectral imagery, including typical linear discriminant analysis (LDA), state-of-the-art sparse graph-based discriminant analysis (SGDA), and their extensions.
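For readers unfamiliar with the baseline method this survey starts from, classical Fisher LDA projects data onto directions that maximize between-class scatter relative to within-class scatter. The following is a minimal numpy sketch of that idea (not code from the survey; the function name and interface are illustrative assumptions):

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher LDA sketch: project X (n_samples x n_features) onto the
    n_components directions maximizing between-class over within-class
    scatter. Illustrative only, not the survey authors' implementation."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter matrix
    Sb = np.zeros((d, d))  # between-class scatter matrix
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Solve the generalized eigenproblem via pinv(Sw) @ Sb and keep the
    # eigenvectors with the largest eigenvalues as projection directions.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real
    return X @ W
```

For a hyperspectral cube this would be applied to pixels flattened to a samples-by-bands matrix; note that LDA yields at most (number of classes - 1) meaningful components, one motivation for the sparse graph-based extensions the survey compares.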

Silvia Mari;Giovanni Valentini;Stefano Serva;Tiziana Scopa;Mauro Cardone;Luca Fasano;Giuseppe Francesco De Luca; "COSMO-SkyMed Second Generation System Access Portfolio," vol.6(1), pp.35-43, March 2018. The Constellation of Small Satellites for the Mediterranean Basin Observation (COSMO)-SkyMed Second Generation (CSG) ground segment (GS) is based on an interoperable and multimission design that provides CSG functionalities to external partners and access through the CSG to services belonging to other Earth observation (EO) partners. Moreover, the CSG GS design supports such cooperation by expansion through replication of user GSs (UGSs) in different ways. In this manner, the CSG GS is able to manage EO foreign missions by providing centralized and multimission access in an integrated environment, thus offering valuable technological solutions to the defense and civilian communities. This article provides an in-depth description of the CSG system access portfolio, focusing on the architectural details of the GS that allow the provisioning and exploitation of the CSG's interoperability, expandability, and multisensor/multimission (IEM) features.

Jorge L. Marquez;Carlos Marcelo Scavuzzo; "Activities of the GRSS Argentine Section Chapter [Chapters]," vol.6(1), pp.44-46, March 2018. Provides several short items that may include news, reviews or technical notes that should be of interest to practitioners and researchers.

Zhuosen Wang;Xiaofeng Li;Eugene Genong Yu;James C. Tilton; "Activities of the GRSS Washington/Northern Virginia Chapter [Chapters]," vol.6(1), pp.47-48, March 2018. Provides several short items that may include news, reviews or technical notes that should be of interest to practitioners and researchers.

Linda Hayden; "GLOBE Data Entry App Version 1.3 Now Available: Create and Edit Sites Without Active Internet Connection [Education]," vol.6(1), pp.49-51, March 2018. The Global Learning and Observations to Benefit the Environment (GLOBE) program is a worldwide hands-on primary and secondary school-based science and education program. GLOBE’s vision promotes and supports collaboration among students, teachers, and scientists on inquiry-based investigations of the environment and the Earth system, working in close partnership with NASA, the National Oceanic and Atmospheric Administration (NOAA), and the National Science Foundation (NSF) Earth System Science Projects for study and research about the dynamics of Earth’s environment.

* "PIERS 2018," vol.6(1), pp.51-51, March 2018.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

Bertrand Le Saux;Naoto Yokoya;Ronny Hansch;Saurabh Prasad; "2018 IEEE GRSS Data Fusion Contest: Multimodal Land Use Classification [Technical Committees]," vol.6(1), pp.52-54, March 2018. Presents information on the 2018 IEEE GRSS Data Fusion Contest.

* "[Calendar]," vol.6(1), pp.55-55, March 2018.* Provides a listing of upcoming events of interest to practitioners and researchers.

* "Remote Sensing Code Library (RSCL)," vol.6(1), pp.C3-C3, March 2018.* RSCL is a publication of IEEE GRSS, just like the GRSS Transactions and the GRSS Newsletter, but its currency is computer code associated with one or more aspects of geoscience remote sensing. For more information, contact the RSCL Editor.
