Relevant TOCs

IEEE Transactions on Image Processing - new TOC (2017 November 23) [Website]

Çağlar Aytekin;Jarno Nikkanen;Moncef Gabbouj; "A Data Set for Camera-Independent Color Constancy," vol.27(2), pp.530-544, Feb. 2018. In this paper, we provide a novel data set designed for camera-independent color constancy research. Camera independence corresponds to the robustness of an algorithm’s performance when it runs on images of the same scene taken by different cameras. Accordingly, the images in our database correspond to several laboratory and field scenes, each of which is captured by three different cameras with minimal registration errors. The laboratory scenes are also captured under five different illuminations. The spectral responses of the cameras and the spectral power distributions of the laboratory light sources are also provided, as they may prove beneficial for training future algorithms to achieve color constancy. For a fair evaluation of future methods, we provide guidelines for supervised methods with indicated training, validation, and testing partitions. Accordingly, we evaluate two recently proposed convolutional neural network-based color constancy algorithms as baselines for future research. As a side contribution, this data set also includes images taken by a mobile camera with color shading corrected and uncorrected results, which allows research on the effect of color shading as well.

Mengqiu Hu;Yang Yang;Fumin Shen;Ning Xie;Heng Tao Shen; "Hashing with Angular Reconstructive Embeddings," vol.27(2), pp.545-555, Feb. 2018. Large-scale search methods are increasingly critical for many content-based visual analysis applications, among which hashing-based approximate nearest neighbor search techniques have attracted broad interest due to their high efficiency in storage and retrieval. However, existing hashing works are commonly designed for measuring data similarity by Euclidean distance. In this paper, we focus on the problem of learning compact binary codes using the cosine similarity. Specifically, we propose a novel angular reconstructive embeddings (ARE) method, which aims at learning binary codes by minimizing the reconstruction error between the cosine similarities computed from the original features and those induced by the resulting binary embeddings. Furthermore, we devise two efficient algorithms for optimizing ARE in continuous and discrete manners, respectively. We extensively evaluate the proposed ARE on several large-scale image benchmarks. The results demonstrate that ARE outperforms several state-of-the-art methods.
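The abstract does not give ARE's exact formulation, so the objective it describes can only be sketched roughly, as below; the sign-of-random-projection codes and all names are illustrative assumptions, not the paper's method:

```python
import numpy as np

def cosine_similarity(X):
    """Pairwise cosine similarities between the rows of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def angular_reconstruction_error(X, B):
    """Reconstruction error between feature cosine similarities and the
    similarities induced by m-bit binary codes B (entries in {-1, +1});
    B @ B.T / m lies in [-1, 1], matching the cosine range."""
    m = B.shape[1]
    return float(np.sum((cosine_similarity(X) - (B @ B.T) / m) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 16))                 # 4 samples, 16-dim features
B = np.sign(X @ rng.standard_normal((16, 8)))    # 8-bit codes via random projection
err = angular_reconstruction_error(X, B)
```

A learning method in this family would search for the codes (or a hash function producing them) that drive this error down, in either a relaxed continuous or a direct discrete manner.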

Mrinmoy Ghorai;Sekhar Mandal;Bhabatosh Chanda; "A Group-Based Image Inpainting Using Patch Refinement in MRF Framework," vol.27(2), pp.556-567, Feb. 2018. This paper presents a Markov random field (MRF)-based image inpainting algorithm using patch selection from groups of similar patches and optimal patch assignment through joint patch refinement. In patch selection, a novel group formation strategy based on subspace clustering is introduced to search the candidate patches in relevant source region only. This improves patch searching in terms of both quality and time. We also propose an efficient patch refinement scheme using higher order singular value decomposition to capture underlying pattern among the candidate patches. This eliminates random variation and unwanted artifacts as well. Finally, a weight term is computed, based on the refined patches and is incorporated in the objective function of the MRF model to improve the optimal patch assignment. Experimental results on a large number of natural images and comparison with well-known existing methods demonstrate the efficacy and superiority of the proposed method.
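The patch-refinement idea, suppressing random variation shared across a group of similar patches, can be illustrated with a plain matrix SVD in place of the paper's higher order SVD; this is a deliberate simplification, and the function name and energy threshold are assumptions:

```python
import numpy as np

def refine_patch_group(patches, keep=0.95):
    """Jointly denoise a group of similar patches via truncated SVD.
    patches: (n_patches, patch_size) array. Small singular values,
    which mostly carry random variation, are discarded, keeping the
    underlying pattern common to the group."""
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, keep)) + 1   # smallest rank keeping `keep` energy
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(1)
base = rng.standard_normal(64)                       # shared underlying pattern
group = base + 0.1 * rng.standard_normal((10, 64))   # noisy similar patches
refined = refine_patch_group(group)
```

After refinement each patch in the group lies much closer to the common pattern, which is the property the MRF weight term then exploits.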

Runmin Cong;Jianjun Lei;Huazhu Fu;Qingming Huang;Xiaochun Cao;Chunping Hou; "Co-Saliency Detection for RGBD Images Based on Multi-Constraint Feature Matching and Cross Label Propagation," vol.27(2), pp.568-579, Feb. 2018. Co-saliency detection aims at extracting the common salient regions from an image group containing two or more relevant images. It is a newly emerging topic in the computer vision community. Unlike most existing co-saliency methods, which focus on RGB images, this paper proposes a novel co-saliency detection model for RGBD images, which utilizes the depth information to enhance the identification of co-saliency. First, the intra saliency map for each image is generated by a single-image saliency model, while the inter saliency map is calculated based on multi-constraint feature matching, which represents the constraint relationship among multiple images. Then, an optimization scheme, namely cross label propagation, is used to refine the intra and inter saliency maps in a crosswise manner. Finally, all the original and optimized saliency maps are integrated to generate the final co-saliency result. The proposed method introduces depth information and multi-constraint feature matching to improve the performance of co-saliency detection. Moreover, the proposed method can effectively exploit any existing single-image saliency model to work well in co-saliency scenarios. Experiments on two RGBD co-saliency datasets demonstrate the effectiveness of our proposed model.

Ali Mosleh;Yasser Elmi Sola;Farzad Zargari;Emmanuel Onzon;J. M. Pierre Langlois; "Explicit Ringing Removal in Image Deblurring," vol.27(2), pp.580-593, Feb. 2018. In this paper, we present a simple yet effective image deblurring method that produces ringing-free deblurred images. Our work is inspired by the observation that large-scale deblurring ringing artifacts are measurable through a multi-resolution pyramid of low-pass filtering of the blurred-deblurred image pair. We propose to model such a quantification as a convex cost function and minimize it directly in the deblurring process in order to reduce ringing regardless of its cause. An efficient primal-dual algorithm is proposed as a solution to this optimization problem. Since the regularization is biased toward ringing patterns, the details of the reconstructed image are protected from over-smoothing. An inevitable source of ringing is sensor saturation, which, contrary to most other sources of ringing, can be detected at no cost. However, dealing with the saturation effect in deblurring introduces a non-linear operator into the optimization problem. In this paper, we also introduce a linear approximation to handle saturation in the proposed deblurring method. As a result of these steps, we significantly enhance the quality of the deblurred images. Experimental results and quantitative evaluations demonstrate that the proposed method performs favorably against state-of-the-art image deblurring methods.

Yongsik Lee;Seungjoon Yang; "Parallel Block Sequential Closed-Form Matting With Fan-Shaped Partitions," vol.27(2), pp.594-605, Feb. 2018. Applying alpha matting to large images is a challenging task because of its computational complexity. This paper provides a divide and conquer strategy for performing closed-form matting. The matting problem, defined for an entire image, is broken down into systems of linear equations defined for very small blocks of the image. The sizes of the small systems are small enough for us to find solutions efficiently using a direct sparse linear equation system solver. The small systems are solved following a sequential order such that the alpha matte grows from a user scribble. With the block sequential application, matting is performed on fan-shaped partitions in parallel on multiple processing cores. Experiments on large test images as well as on standard benchmark test images show that the proposed parallel block sequential matting provides high quality alpha mattes with good scalability.

Samaneh Abbasi-Sureshjani;Marta Favali;Giovanna Citti;Alessandro Sarti;Bart M. ter Haar Romeny; "Curvature Integration in a 5D Kernel for Extracting Vessel Connections in Retinal Images," vol.27(2), pp.606-621, Feb. 2018. Tree-like structures, such as retinal images, are widely studied in computer-aided diagnosis systems for large-scale screening programs. Despite several segmentation and tracking methods proposed in the literature, there still exist several limitations, specifically when two or more curvilinear structures cross or bifurcate, or in the presence of interrupted lines or highly curved blood vessels. In this paper, we propose a novel approach based on multi-orientation scores augmented with a contextual affinity matrix, both of which are inspired by the geometry of the primary visual cortex (V1) and its contextual connections. The connectivity is described with a 5D kernel obtained as the fundamental solution of the Fokker–Planck equation modeling the cortical connectivity in the lifted space of positions, orientations, curvatures, and intensity. It is further used in a self-tuning spectral clustering step to identify the main perceptual units in the stimuli. The proposed method has been validated on several easy as well as challenging structures in a set of artificial images and actual retinal patches. Supported by quantitative and qualitative results, the method is capable of overcoming the limitations of current state-of-the-art techniques.

Shulei Wang;Ellen T. Arena;Kevin W. Eliceiri;Ming Yuan; "Automated and Robust Quantification of Colocalization in Dual-Color Fluorescence Microscopy: A Nonparametric Statistical Approach," vol.27(2), pp.622-636, Feb. 2018. Colocalization is a powerful tool to study the interactions between fluorescently labeled molecules in biological fluorescence microscopy. However, existing techniques for colocalization analysis have not undergone continued development especially in regards to robust statistical support. In this paper, we examine two of the most popular quantification techniques for colocalization and argue that they could be improved upon using ideas from nonparametric statistics and scan statistics. In particular, we propose a new colocalization metric that is robust, easily implementable, and optimal in a rigorous statistical testing framework. Application to several benchmark data sets, as well as biological examples, further demonstrates the usefulness of the proposed technique.
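The abstract does not name the two popular quantification techniques, but the standard ones in this area are Pearson's correlation coefficient and the Manders split coefficients; a minimal sketch of both follows (the thresholds are illustrative, and this is the classical baseline, not the paper's proposed metric):

```python
import numpy as np

def pearson_colocalization(red, green):
    """Pearson correlation between two fluorescence channels, one of the
    classic colocalization measures."""
    r = red.ravel().astype(float) - red.mean()
    g = green.ravel().astype(float) - green.mean()
    return float(r @ g / (np.linalg.norm(r) * np.linalg.norm(g)))

def manders_coefficients(red, green, t_red=0.0, t_green=0.0):
    """Manders split coefficients: M1 is the fraction of red intensity at
    pixels where green exceeds its threshold, and M2 the converse."""
    r, g = red.ravel().astype(float), green.ravel().astype(float)
    m1 = r[g > t_green].sum() / r.sum()
    m2 = g[r > t_red].sum() / g.sum()
    return m1, m2

rng = np.random.default_rng(2)
signal = rng.random((32, 32))                    # shared structure
red = signal + 0.05 * rng.random((32, 32))
green = signal + 0.05 * rng.random((32, 32))     # strongly colocalized channels
pcc = pearson_colocalization(red, green)
m1, m2 = manders_coefficients(red, green, t_red=0.5, t_green=0.5)
```

The paper's criticism is that such statistics come with weak statistical guarantees, which motivates its nonparametric, scan-statistic-based alternative.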

Changzhi Luo;Zhetao Li;Kaizhu Huang;Jiashi Feng;Meng Wang; "Zero-Shot Learning via Attribute Regression and Class Prototype Rectification," vol.27(2), pp.637-648, Feb. 2018. Zero-shot learning (ZSL) aims at classifying examples of unseen classes (with no training examples) given some other seen classes (with training examples). Most existing approaches exploit intermediate-level information (e.g., attributes) to transfer knowledge from seen classes to unseen classes. A common practice is to first learn projections from samples to attributes on seen classes via a regression method, and then apply such projections to unseen classes directly. However, it turns out that such a learning strategy easily causes the projection domain shift and hubness problems, which hinder the performance of the ZSL task. In this paper, we also formulate ZSL as an attribute regression problem. However, different from general regression-based solutions, the proposed approach is novel in three aspects. First, a class prototype rectification method is proposed to connect the unseen classes to the seen classes. Here, a class prototype refers to a vector representation of a class, and it is also known as a class center, class signature, or class exemplar. Second, an alternating learning scheme is proposed for jointly performing attribute regression and rectifying the class prototypes. Finally, a new objective function which takes into consideration both the attribute regression accuracy and the class prototype discrimination is proposed. By introducing such a solution, the domain shift and hubness problems can be mitigated. Experimental results on three public datasets (i.e., CUB200-2011, SUN Attribute, and aPaY) clearly demonstrate the effectiveness of our approach.
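The "common practice" baseline the paper improves upon, regressing attributes on seen classes and then matching unseen-class prototypes, can be sketched as follows; the synthetic data, ridge regularizer, and nearest-prototype rule are illustrative assumptions, not the paper's rectified model:

```python
import numpy as np

def learn_attribute_regressor(X, A, lam=1.0):
    """Ridge regression from visual features X (n, d) to per-sample
    attribute targets A (n, a): W = (X^T X + lam*I)^{-1} X^T A."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ A)

def predict_unseen(x, W, prototypes):
    """Map a test sample into attribute space and return the index of
    the nearest class prototype (rows of `prototypes`)."""
    a_hat = x @ W
    return int(np.argmin(np.linalg.norm(prototypes - a_hat, axis=1)))

rng = np.random.default_rng(3)
seen_protos = rng.random((12, 10))               # 12 seen classes, 10 attributes
labels = rng.integers(0, 12, size=300)
M = rng.standard_normal((10, 30))                # hidden attribute-to-feature map
X = seen_protos[labels] @ M + 0.01 * rng.standard_normal((300, 30))
W = learn_attribute_regressor(X, seen_protos[labels], lam=0.1)

unseen_protos = rng.random((2, 10))              # prototypes of 2 unseen classes
x_test = unseen_protos[1] @ M                    # a sample from unseen class 1
pred = predict_unseen(x_test, W, unseen_protos)
```

On real data the seen and unseen distributions differ, so W learned this way misplaces unseen samples in attribute space; that is the domain shift the paper's prototype rectification is designed to correct.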

Deepak Mishra;Santanu Chaudhury;Mukul Sarkar;Arvinder Singh Soin;Vivek Sharma; "Edge Probability and Pixel Relativity-Based Speckle Reducing Anisotropic Diffusion," vol.27(2), pp.649-664, Feb. 2018. Anisotropic diffusion filters are one of the best choices for speckle reduction in ultrasound images. These filters control the diffusion flux flow using local image statistics and provide the desired speckle suppression. However, inefficient use of edge characteristics results in either an over-smoothed image or an image containing misinterpreted spurious edges. As a result, the diagnostic quality of the images becomes a concern. To alleviate such problems, a novel anisotropic diffusion-based speckle reducing filter is proposed in this paper. A probability density function of the edges along with pixel relativity information is used to control the diffusion flux flow. The probability density function helps in removing the spurious edges and the pixel relativity reduces the over-smoothing effects. Furthermore, the filtering is performed in the superpixel domain to reduce the execution time, wherein a minimum of 15% of the total number of image pixels can be used. For performance evaluation, 31 frames of three synthetic images and 40 real ultrasound images are used. In most of the experiments, the proposed filter shows a better performance as compared to the state-of-the-art filters in terms of the speckle region’s signal-to-noise ratio and mean square error. It also shows a comparable performance for figure of merit and structural similarity index measure. Furthermore, in the subjective evaluation, performed by expert radiologists, the proposed filter’s outputs are preferred for the improved contrast and sharpness of the object boundaries. Hence, the proposed filtering framework is suitable to reduce the unwanted speckle and improve the quality of the ultrasound images.
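For context, the classical anisotropic diffusion scheme that such filters build on (Perona-Malik) can be sketched as follows; this is the generic baseline, not the paper's edge-probability/pixel-relativity filter:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classical Perona-Malik diffusion: the conduction coefficient g
    shrinks where local differences (likely edges) are large, so the
    image is smoothed within regions but not across edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)      # edge-stopping function
    for _ in range(n_iter):
        n = np.roll(u, -1, axis=0) - u           # differences to 4 neighbors
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u

rng = np.random.default_rng(4)
step = np.zeros((32, 32)); step[:, 16:] = 1.0            # ideal edge
noisy = step + 0.05 * rng.standard_normal((32, 32))
smoothed = perona_malik(noisy)
```

The paper's contribution can be read as replacing the fixed edge-stopping function g with terms driven by an edge probability density and pixel relativity, precisely because a hand-set g either over-smooths or preserves spurious edges.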

Lloyd Windrim;Rishi Ramakrishnan;Arman Melkumyan;Richard J. Murphy; "A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images," vol.27(2), pp.665-677, Feb. 2018. This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.

Guo Lu;Xiaoyun Zhang;Li Chen;Zhiyong Gao; "Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization," vol.27(2), pp.678-691, Feb. 2018. Frame rate up conversion (FRUC) can improve the visual quality by interpolating new intermediate frames. However, high-frame-rate videos produced by FRUC suffer from either higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, and the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, the frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to the motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%~42% reduction in BDBR, compared with the traditional approach of FRUC cascaded with coding.
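The core FRUC operation, motion-compensated interpolation of an intermediate frame, can be sketched as below; a single global motion vector is assumed for simplicity, whereas real FRUC (including this paper's JME) estimates per-block vectors:

```python
import numpy as np

def interpolate_frame(prev, nxt, mv):
    """Motion-compensated interpolation: sample `prev` half a motion
    vector forward and `nxt` half a vector backward, then average the
    two predictions to synthesize the middle frame."""
    dy, dx = mv
    fwd = np.roll(np.roll(prev, dy // 2, axis=0), dx // 2, axis=1)
    bwd = np.roll(np.roll(nxt, -(dy // 2), axis=0), -(dx // 2), axis=1)
    return 0.5 * (fwd + bwd)

rng = np.random.default_rng(8)
frame0 = rng.random((16, 16))
frame2 = np.roll(frame0, 4, axis=1)    # global 4-px shift between frames
mid_true = np.roll(frame0, 2, axis=1)  # the (normally unknown) middle frame
mid_est = interpolate_frame(frame0, frame2, (0, 4))
```

When the motion vectors are unreliable, this average produces exactly the artifacts the abstract mentions, which is why the paper weights the interpolation inside a rate-distortion objective using motion vector reliability.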

Ritesh Pradhan;Ramazan S. Aygun;Manil Maskey;Rahul Ramachandran;Daniel J. Cecil; "Tropical Cyclone Intensity Estimation Using a Deep Convolutional Neural Network," vol.27(2), pp.692-702, Feb. 2018. Tropical cyclone intensity estimation is a challenging task, as it requires domain knowledge for feature extraction, significant pre-processing, various sets of parameters obtained from satellites, and human intervention for analysis. The inconsistency of results, significant pre-processing of data, complexity of the problem domain, and problems of generalizability are some of the issues related to intensity estimation. In this study, we design a deep convolutional neural network architecture for categorizing hurricanes by intensity using graphics processing units. Our model achieves better accuracy and lower root-mean-square error than state-of-the-art techniques while using only satellite images. Visualizations of learned features at various layers and their deconvolutions are also presented for understanding the learning process.

Keigo Ishikura;Naoto Kurita;Damon M. Chandler;Gosuke Ohashi; "Saliency Detection Based on Multiscale Extrema of Local Perceptual Color Differences," vol.27(2), pp.703-717, Feb. 2018. Visual saliency detection is a useful technique for predicting which regions humans will tend to gaze upon in any given image. Over the last several decades, numerous algorithms for automatic saliency detection have been proposed and shown to work well on both synthetic and natural images. However, two key challenges remain largely unaddressed: 1) how to improve the relatively low predictive performance for images that contain large objects and 2) how to perform saliency detection on a wider variety of images from various categories without training. In this paper, we propose a new saliency detection algorithm that addresses these challenges. Our model first detects potentially salient regions based on multiscale extrema of local perceived color differences measured in the CIELAB color space. These extrema are highly effective for estimating the locations, sizes, and saliency levels of candidate regions. The local saliency candidates are further refined via two global extrema-based features, and then a Gaussian mixture is used to generate the final saliency map. Experimental validation on the extensive CAT2000 data set demonstrates that our proposed method either outperforms or is highly competitive with prior approaches, and can perform well across different categories and object sizes, while remaining training-free.
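The first stage, measuring local perceived color difference in CIELAB, can be roughly sketched as a center-surround Delta-E map; the box-blur surround and single scale here are simplifications of the paper's multiscale extrema, and the input is assumed to be already in Lab coordinates:

```python
import numpy as np

def box_blur(img, r):
    """Separable box blur, a cheap stand-in for the paper's multiscale
    smoothing."""
    out = img.astype(float).copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k in range(-r, r + 1):
            acc += np.roll(out, k, axis=axis)
        out = acc / (2 * r + 1)
    return out

def local_color_difference(lab, r=4):
    """Per-pixel CIE76 Delta-E (Euclidean distance in CIELAB) between a
    pixel and its blurred surround; large values mark perceptually
    distinct, potentially salient regions."""
    surround = box_blur(lab, r)
    return np.sqrt(((lab - surround) ** 2).sum(axis=-1))

lab = np.zeros((32, 32, 3))
lab[..., 0] = 50.0                  # uniform mid-lightness background
lab[12:20, 12:20, 1] = 60.0         # a* "pop-out" patch
dmap = local_color_difference(lab)
```

Repeating this at several surround radii and keeping the extrema across scales is what lets the full method localize both small and large salient objects.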

Yang Wang;Zhibin Pan;Rui Li; "Performance Re-Evaluation on “Codewords Distribution-Based Optimal Combination of Equal-Average Equal-Variance Equal-Norm Nearest Neighbor Fast Search Algorithm for Vector Quantization Encoding”," vol.27(2), pp.718-720, Feb. 2018. In the re-evaluated paper, Xie et al. proposed a new fast search algorithm for vector quantization encoding, which optimized the priority checking order of variance and norm inequality in order to speed up the encoding procedure. CPU time of different encoding algorithms is given to support their algorithm. However, first, some of the experimental data in the re-evaluated paper are unreasonable and unrepeatable. Second, as an improved version of the equal-average equal-variance equal-norm nearest neighbor fast search algorithm, the re-evaluated algorithm in fact cannot achieve better performance than the existing improved equal-average equal-variance nearest neighbor fast search algorithm. In this paper, these two problems are analyzed, re-evaluated, and discussed in detail.
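For readers unfamiliar with this family of algorithms, the basic mean-based rejection rule that equal-average nearest neighbor search (ENNS) relies on, and that the variance/norm variants under discussion extend, can be sketched as:

```python
import numpy as np

def enns_search(x, codebook):
    """Equal-average nearest-neighbor search (ENNS): for k-dim vectors,
    ||x - c||^2 >= k * (mean(x) - mean(c))^2, so codewords whose mean is
    far from mean(x) can be rejected without a full distance computation.
    Visiting codewords in order of mean difference lets the search stop
    as soon as the bound exceeds the best distance found."""
    k = x.size
    mx = x.mean()
    means = codebook.mean(axis=1)
    order = np.argsort(np.abs(means - mx))
    best, d_min, n_full = -1, np.inf, 0
    for idx in order:
        if k * (means[idx] - mx) ** 2 >= d_min:
            break                      # every remaining codeword is worse
        d = np.sum((x - codebook[idx]) ** 2)
        n_full += 1
        if d < d_min:
            d_min, best = d, int(idx)
    return best, n_full

rng = np.random.default_rng(5)
codebook = rng.random((256, 16))
x = rng.random(16)
best, n_full = enns_search(x, codebook)
brute = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
```

The algorithms compared in the comment paper add analogous bounds on variance and norm and differ in the order in which those bounds are checked.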

Zhibo Chen;Wei Zhou;Weiping Li; "Blind Stereoscopic Video Quality Assessment: From Depth Perception to Overall Experience," vol.27(2), pp.721-734, Feb. 2018. Stereoscopic video quality assessment (SVQA) is a challenging problem. It has not been well investigated how to measure depth perception quality independently under different distortion categories and degrees, or how to exploit depth perception to assist the overall quality assessment of 3D videos. In this paper, we propose a new depth perception quality metric (DPQM) and verify that it outperforms existing metrics on our published 3D video extension of High Efficiency Video Coding (3D-HEVC) video database. Furthermore, we validate its effectiveness by applying the crucial part of the DPQM to a novel blind stereoscopic video quality evaluator (BSVQE) for overall 3D video quality assessment. In the DPQM, we introduce the feature of auto-regressive prediction-based disparity entropy (ARDE) measurement and the feature of energy weighted video content measurement, which are inspired by the free-energy principle and the binocular vision mechanism. In the BSVQE, the binocular summation and difference operations are integrated together with the fusion natural scene statistic measurement and the ARDE measurement to reveal the key influence from texture and disparity. Experimental results on three stereoscopic video databases demonstrate that our method outperforms state-of-the-art SVQA algorithms for both symmetrically and asymmetrically distorted stereoscopic video pairs of various distortion types.

Fabio Bellavia;Carlo Colombo; "Dissecting and Reassembling Color Correction Algorithms for Image Stitching," vol.27(2), pp.735-748, Feb. 2018. This paper introduces a new compositional framework for classifying color correction methods according to their two main computational units. The framework was used to dissect fifteen of the best color correction algorithms; the computational units so derived, together with four new units specifically designed for this work, were then reassembled in a combinatorial way to originate about one hundred distinct color correction methods, most of which were never considered before. The above color correction methods were tested on three different existing datasets, including both real and artificial color transformations, plus a novel dataset of real image pairs categorized according to the kind of color alterations induced by specific acquisition setups. Unlike previous evaluations, special emphasis was given to effectiveness in real-world applications, such as image mosaicing and stitching, where robustness with respect to strong image misalignments and light scattering effects is required. Experimental evidence is provided for the first time in terms of the most recent perceptual image quality metrics, which are known to be the closest to human judgment. Comparative results show that combinations of the new computational units are the most effective for real stitching scenarios, regardless of the specific source of color alteration. On the other hand, in the case of accurate image alignment and artificial color alterations, the best performing methods either use one of the new computational units, or are made up of fresh combinations of existing units.
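As an illustration of the kind of "computational unit" such a framework composes, one of the simplest color-transfer primitives is per-channel mean/variance matching (Reinhard-style); whether this exact unit appears among the paper's units is an assumption made here for the sake of the example:

```python
import numpy as np

def mean_std_transfer(src, ref):
    """Per-channel mean/variance matching: remap src so each of its
    color channels has the same mean and standard deviation as the
    corresponding channel of ref."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s = src[..., c].astype(float)
        r = ref[..., c].astype(float)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-12) * r.std() + r.mean()
    return out

rng = np.random.default_rng(6)
ref = rng.random((24, 24, 3))
src = 0.5 * ref + 0.2            # same scene, globally altered colors
corrected = mean_std_transfer(src, ref)
```

A compositional framework like the paper's would treat such a global transfer as one interchangeable unit and pair it with a second unit (e.g., a local or model-fitting stage) to form a complete color correction method.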

Rituparna Sarkar;Scott T. Acton; "SDL: Saliency-Based Dictionary Learning Framework for Image Similarity," vol.27(2), pp.749-763, Feb. 2018. In image classification, obtaining adequate data to learn a robust classifier has often proven to be difficult in several scenarios. Classification of histological tissue images for health care analysis is a notable application in this context due to the necessity of surgery, biopsy or autopsy. To adequately exploit limited training data in classification, we propose a saliency guided dictionary learning method and subsequently an image similarity technique for histo-pathological image classification. Salient object detection from images aids in the identification of discriminative image features. We leverage the saliency values for the local image regions to learn a dictionary and respective sparse codes for an image, such that the more salient features are reconstructed with smaller error. The dictionary learned from an image gives a compact representation of the image itself and is capable of representing images with similar content, with comparable sparse codes. We employ this idea to design a similarity measure between a pair of images, where local image features of one image, are encoded with the dictionary learned from the other and vice versa. To effectively utilize the learned dictionary, we take into account the contribution of each dictionary atom in the sparse codes to generate a global image representation for image comparison. The efficacy of the proposed method was evaluated using three tissue data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer, and colon cancer tissue images. From the experiments, we observe that our methods outperform the state of the art with an increase of 14.2% in the average classification accuracy over all data sets.

Shao Huang;Weiqiang Wang;Shengfeng He;Rynson W. H. Lau; "Egocentric Temporal Action Proposals," vol.27(2), pp.764-777, Feb. 2018. We present an approach to localize generic actions in egocentric videos, called temporal action proposals (TAPs), for accelerating the action recognition step. An egocentric TAP refers to a sequence of frames that may contain a generic action performed by the wearer of a head-mounted camera, e.g., taking a knife, spreading jam, pouring milk, or cutting carrots. Inspired by object proposals, this paper aims at generating a small number of TAPs, thereby replacing the popular sliding window strategy, for localizing all action events in the input video. To this end, we first propose to temporally segment the input video into action atoms, which are the smallest units that may contain an action. We then apply a hierarchical clustering algorithm with several egocentric cues to generate TAPs. Finally, we propose two actionness networks to score the likelihood of each TAP containing an action. The top ranked candidates are returned as output TAPs. Experimental results show that the proposed TAP detection framework performs significantly better than relevant approaches for egocentric action detection.

Fang Zhao;Jiashi Feng;Jian Zhao;Wenhan Yang;Shuicheng Yan; "Robust LSTM-Autoencoders for Face De-Occlusion in the Wild," vol.27(2), pp.778-790, Feb. 2018. Face recognition techniques have been developed significantly in recent years. However, recognizing faces with partial occlusion is still challenging for existing face recognizers, a capability highly desired in real-world applications concerning surveillance and security. Although much research effort has been devoted to developing face de-occlusion methods, most of them can only work well under constrained conditions, such as requiring all faces to come from a pre-defined closed set of subjects. In this paper, we propose a robust LSTM-Autoencoders (RLA) model to effectively restore partially occluded faces even in the wild. The RLA model consists of two LSTM components, which aim at occlusion-robust face encoding and recurrent occlusion removal, respectively. The first one, named multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially to output a latent representation, and occlusion-robustness is achieved owing to the fact that the influence of occlusion falls only upon some of the patches. Receiving the representation learned by the encoder, the LSTM decoder with a dual channel architecture reconstructs the overall face and detects occlusion simultaneously, and by feat of LSTM, the decoder breaks down the task of face de-occlusion into restoring the occluded part step by step. Moreover, to minimize identity information loss and guarantee face recognition accuracy over recovered faces, we introduce an identity-preserving adversarial training scheme to further improve RLA. Extensive experiments on both synthetic and real data sets of faces with occlusion clearly demonstrate the effectiveness of our proposed RLA in removing different types of facial occlusion at various locations. The proposed method also provides a significantly larger performance gain than other de-occlusion methods in promoting recognition performance over partially occluded faces.

Xun Yang;Meng Wang;Dacheng Tao; "Person Re-Identification With Metric Learning Using Privileged Information," vol.27(2), pp.791-805, Feb. 2018. Despite the promising progress made in recent years, person re-identification remains a challenging task due to complex variations in human appearances from different camera views. This paper presents a logistic discriminant metric learning method for this challenging problem. Different from most existing metric learning algorithms, it exploits both original data and auxiliary data during training, which is motivated by the new machine learning paradigm of learning using privileged information. Such privileged information is a kind of auxiliary knowledge, which is only available during training. Our goal is to learn an optimal distance function by constructing a locally adaptive decision rule with the help of privileged information. We jointly learn two distance metrics by minimizing the empirical loss penalizing the difference between the distance in the original space and that in the privileged space. In our setting, the distance in the privileged space functions as a local decision threshold, which guides the decision making in the original space like a teacher. The metric learned from the original space is used to compute the distance between a probe image and a gallery image during testing. In addition, we extend the proposed approach to a multi-view setting, which is able to explore the complementarity of multiple feature representations. In the multi-view setting, multiple metrics corresponding to different original features are jointly learned, guided by the same privileged information. Besides, an effective iterative optimization scheme is introduced to simultaneously optimize the metrics and the assigned metric weights. Experimental results on several widely used data sets demonstrate that the proposed approach is superior to global decision threshold-based methods and outperforms most state-of-the-art results.
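The locally adaptive decision rule described, with the privileged-space distance acting as a per-pair threshold for the original-space distance, can be sketched as follows; random PSD matrices stand in for the learned metrics, and the rule shown is an illustrative reading of the abstract, not the paper's exact formulation:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y), with M
    positive semi-definite."""
    d = x - y
    return float(d @ M @ d)

def decide_same(x, y, x_priv, y_priv, M, P):
    """Locally adaptive rule: the distance computed in the privileged
    space serves as a per-pair threshold for the distance in the
    original space. At test time, when privileged data is unavailable,
    only the original-space metric M is used."""
    return mahalanobis_sq(x, y, M) < mahalanobis_sq(x_priv, y_priv, P)

rng = np.random.default_rng(9)
A = rng.standard_normal((6, 6)); M = A @ A.T     # random PSD stand-in metrics
B = rng.standard_normal((4, 4)); P = B @ B.T
x = rng.standard_normal(6)
y = x + 0.01 * rng.standard_normal(6)            # near-duplicate pair
xp, yp = rng.standard_normal(4), rng.standard_normal(4)
same = decide_same(x, y, xp, yp, M, P)
```

Training then amounts to fitting M and P jointly so that this teacher-guided rule separates matching from non-matching pairs.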

Homa Foroughi;Nilanjan Ray;Hong Zhang; "Object Classification With Joint Projection and Low-Rank Dictionary Learning," vol.27(2), pp.806-821, Feb. 2018. For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature would fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Besides, most of the existing methods, and especially deep learning-based methods, need large training sets to achieve a satisfactory recognition performance. Although using a network pre-trained on a generic large-scale data set and fine-tuning it on the small target data set is a widely used technique, this does not help when the contents of the base and target data sets are very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes the intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them, and also make them robust to variations and outliers. To preserve the intrinsic structure of data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification of small-sized data sets, which include a considerable amount of different kinds of variation, and may have high-dimensional feature vectors.

Feihu Zhang;Benjamin W. Wah; "Fundamental Principles on Learning New Features for Effective Dense Matching," vol.27(2), pp.822-836, Feb. 2018. In dense matching (including stereo matching and optical flow), nearly all existing approaches are based on simple features, such as gray or RGB color, gradient, or simple transformations like census, to calculate matching costs. These features do not perform well in complex scenes that may involve radiometric changes, noise, overexposure, and/or textureless regions. Various problems may appear, such as incorrect matching at the pixel or region level, flattening or breaking of edges, or even entire structural collapse. In this paper, we propose two fundamental principles based on the consistency and the distinctiveness of features. We show that almost all existing problems in dense matching are caused by features that violate one or both of these principles. To systematically learn good features for dense matching, we develop a general multi-objective optimization based on these two principles and apply convolutional neural networks to find new features that lie on the Pareto frontier. Using two-frame optical flow and stereo matching as applications, our experimental results show that the learned features can significantly improve the performance of state-of-the-art approaches. On the KITTI benchmarks, our method ranks first on the two stereo benchmarks and is the best among existing two-frame optical-flow algorithms on the flow benchmarks.

Suhad Lateef Al-khafaji;Jun Zhou;Ali Zia;Alan Wee-Chung Liew; "Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images," vol.27(2), pp.837-850, Feb. 2018. Spectral-spatial feature extraction is an important task in hyperspectral image processing. In this paper, we propose a novel method to extract distinctive invariant features from hyperspectral images for registration of hyperspectral images with different spectral conditions. Spectral condition means images are captured with different incident lights, viewing angles, or using different hyperspectral cameras. In addition, spectral condition includes images of objects with the same shape but different materials. This method, named spectral-spatial scale invariant feature transform (SS-SIFT), explores both spectral and spatial dimensions simultaneously to extract features invariant to spectral and geometric transformations. Similar to the classic SIFT algorithm, SS-SIFT consists of keypoint detection and descriptor construction steps. Keypoints are extracted from the spectral-spatial scale space and are detected as extrema after a 3-D difference of Gaussians is applied to the data cube. Two descriptors are proposed for each keypoint by exploring the distribution of spectral-spatial gradient magnitude in its local 3-D neighborhood. The effectiveness of the SS-SIFT approach is validated on images collected in different light conditions, different geometric projections, and using two hyperspectral cameras with different spectral wavelength ranges and resolutions. The experimental results show that our method generates robust invariant features for spectral-spatial image matching.
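The keypoint-detection step described here (extrema of a 3-D difference of Gaussians over the spectral-spatial cube) can be sketched roughly as below. This is a minimal illustration assuming a plain scipy pipeline, not the SS-SIFT reference implementation, and it omits scale-space subtleties such as edge-response rejection:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints_3d(cube, sigmas=(1.0, 1.6, 2.6), thresh=0.01):
    """Detect keypoint candidates as local extrema of a 3-D difference
    of Gaussians computed over a (x, y, lambda) hyperspectral cube."""
    # Difference of successive Gaussian smoothings builds a small DoG stack.
    smoothed = [gaussian_filter(cube.astype(float), s) for s in sigmas]
    dogs = [b - a for a, b in zip(smoothed, smoothed[1:])]
    keypoints = []
    for d in dogs:
        # A voxel is a candidate if it is the max or min of its 3x3x3
        # neighbourhood and exceeds a contrast threshold.
        is_max = (d == maximum_filter(d, size=3)) & (d > thresh)
        is_min = (d == minimum_filter(d, size=3)) & (d < -thresh)
        keypoints.append(np.argwhere(is_max | is_min))
    return keypoints
```

An isolated bright voxel in an otherwise empty cube is detected as a DoG extremum at its own location, which is a quick way to check the sketch behaves as intended.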

Xiaoyuan Yang;Jingkai Wang;Ridong Zhu; "Random Walks for Synthetic Aperture Radar Image Fusion in Framelet Domain," vol.27(2), pp.851-865, Feb. 2018. A new framelet-based random walks (RWs) method is presented for synthetic aperture radar (SAR) image fusion, including SAR-visible images, SAR-infrared images, and multi-band SAR images. In this method, we build a novel RWs model based on the statistical characteristics of framelet coefficients to fuse the high-frequency and low-frequency coefficients. This model converts the fusion problem into estimating the probability of each framelet coefficient being assigned to each input image. Experimental results show that the proposed approach improves contrast while simultaneously preserving edges, and outperforms many traditional and state-of-the-art fusion techniques in both qualitative and quantitative analysis.

IEEE Transactions on Medical Imaging - new TOC (2017 November 23) [Website]

* "Table of contents," vol.36(11), pp.C1-C4, Nov. 2017.* Presents the table of contents for this issue of the publication.

* "IEEE Transactions on Medical Imaging publication information," vol.36(11), pp.C2-C2, Nov. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Sumit Gupta;Kenneth J. Loh; "Noncontact Electrical Permittivity Mapping and pH-Sensitive Films for Osseointegrated Prosthesis and Infection Monitoring," vol.36(11), pp.2193-2203, Nov. 2017. The objective of this paper is to develop a noncontact, noninvasive system for detecting and monitoring subcutaneous infection occurring at the tissue and osseointegrated prosthesis interface. It is known that the local pH of tissue can change due to infection. Therefore, the sensing system integrates two parts, namely, pH-sensitive thin films that can be coated onto prosthesis surfaces prior to them being implanted and an electrical capacitance tomography (ECT) algorithm that can reconstruct the spatial permittivity distribution of a region of space in a noncontact fashion. First, a thin film pH sensor was fabricated by spray coating, and tests confirmed that the film exhibited changes in its permittivity due to pH. Second, the ECT forward and inverse problems were implemented. Third, an aluminum rod was employed as a representative phantom of an osseointegrated prosthesis and then spray coated with the pH sensor. Finally, the film-coated phantom was immersed in different pH buffers, dried, and subjected to ECT interrogation and spatial permittivity reconstruction. The results validated that ECT was able to detect and localize permittivity variations correlated to pH changes.

Christian F. Baumgartner;Konstantinos Kamnitsas;Jacqueline Matthew;Tara P. Fletcher;Sandra Smith;Lisa M. Koch;Bernhard Kainz;Daniel Rueckert; "SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound," vol.36(11), pp.2204-2215, Nov. 2017. Identifying and interpreting fetal standard scan planes during 2-D ultrasound mid-pregnancy examinations are highly complex tasks, which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks, which can automatically detect 13 fetal standard views in freehand 2-D ultrasound data as well as provide a localization of the fetal structures via a bounding box. An important contribution is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localization task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localization on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modeling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localization task.

Yuhe Li;Zhendong Qiao;Shaoqin Zhang;Zhenhuan Wu;Xueqin Mao;Jiahua Kou;Hong Qi; "A Novel Method for Low-Contrast and High-Noise Vessel Segmentation and Location in Venipuncture," vol.36(11), pp.2216-2227, Nov. 2017. Blood sampling is the most common medical technique, and vessel detection is of crucial interest for automated venipuncture systems. In this paper, we propose a new convex-regional-based gradient model that uses contextually related regional information, including vessel width size and gray distribution, to segment and locate vessels in a near-infrared image. A convex function with the interval size of vessel width is constructed and utilized for its edge-preserving superiority. Moreover, white and linear noise independences are derived. The region-based gradient decreases the number of local extrema in the cross-sectional profile of the vessel to realize its single global minimum in a low-contrast, noisy image. We demonstrate the performance of the proposed model via quantitative tests and comparisons between different methods. Results show the advantages of the model in the continuity and smoothness of the segmented vessels. The proposed model is evaluated with receiver operating characteristic curves, which have a corresponding area under the curve of 88.8%. The proposed model will be a powerful method in automated venipuncture systems and medical image analysis.

Ilkay Oksuz;Anirban Mukhopadhyay;Rohan Dharmakumar;Sotirios A. Tsaftaris; "Unsupervised Myocardial Segmentation for Cardiac BOLD," vol.36(11), pp.2228-2238, Nov. 2017. A fully automated 2-D+time myocardial segmentation framework is proposed for cardiac magnetic resonance (CMR) blood-oxygen-level-dependent (BOLD) data sets. Ischemia detection with CINE BOLD CMR relies on spatio-temporal patterns in myocardial intensity, but these patterns also trouble supervised segmentation methods, the de facto standard for myocardial segmentation in cine MRI. Segmentation errors severely undermine the accurate extraction of these patterns. In this paper, we build a joint motion and appearance method that relies on dictionary learning to find a suitable subspace. Our method is based on variational pre-processing and on spatial regularization using Markov random fields to further improve performance. The superiority of the proposed segmentation technique is demonstrated on a data set containing cardiac phase-resolved BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic condition across ten canine subjects. Our unsupervised approach outperforms even supervised state-of-the-art segmentation techniques by at least 10% when using Dice to measure accuracy on BOLD data and performs on par for standard CINE MR. Furthermore, a novel segmental analysis method attuned for BOLD time series is utilized to demonstrate the effectiveness of the proposed method in preserving key BOLD patterns.

M. Mehdi Farhangi;Hichem Frigui;Albert Seow;Amir A. Amini; "3-D Active Contour Segmentation Based on Sparse Linear Combination of Training Shapes (SCoTS)," vol.36(11), pp.2239-2249, Nov. 2017. SCoTS captures a sparse representation of shapes in an input image through a linear span of previously delineated shapes in a training repository. The model updates shape prior over level set iterations and captures variabilities in shapes by a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the shape prior influence is adjusted based on shape reconstruction, with the assigned weight determined from the degree of sparsity of the representation. For the problem of lung nodule segmentation in X-ray CT, SCoTS offers a unified framework, capable of segmenting nodules of all types. Experimental validations are demonstrated on 542 3-D lung nodule images from the LIDC-IDRI database. Despite its generality, SCoTS is competitive with domain specific state of the art methods for lung nodule segmentation.

Luís F. R. Lucas;Nuno M. M. Rodrigues;Luis A. da Silva Cruz;Sérgio M. M. de Faria; "Lossless Compression of Medical Images Using 3-D Predictors," vol.36(11), pp.2250-2260, Nov. 2017. This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), which is one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method include the use of 3-D predictors, 3-D-block octree partitioning and classification, volume-based optimization, and support for 16-bit-depth images. Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as other proposals based on the MRP algorithm.
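To give a flavour of why 3-D predictors help, the sketch below applies a toy causal predictor that averages the left, upper, and previous-slice neighbours of each voxel; the real 3-D-MRP method uses adaptive minimum-rate predictors with octree partitioning and classification, so this is an illustration of the principle only:

```python
import numpy as np

def predict_3d(vol):
    """Toy causal 3-D predictor: estimate each interior voxel as the mean
    of its left, upper, and previous-slice neighbours, and return the
    prediction residuals, which would then be entropy-coded."""
    vol = vol.astype(float)
    pred = np.zeros_like(vol)
    pred[1:, 1:, 1:] = (vol[1:, 1:, :-1]    # left neighbour (same slice)
                        + vol[1:, :-1, 1:]  # upper neighbour (same slice)
                        + vol[:-1, 1:, 1:]  # same position, previous slice
                        ) / 3.0
    return vol - pred
```

On smooth volumetric data the interior residuals are far more concentrated around zero than the raw intensities, which is what makes them cheaper to entropy-code.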

David Larsson;Jeannette H. Spühler;Sven Petersson;Tim Nordenfur;Massimiliano Colarieti-Tosti;Johan Hoffman;Reidar Winter;Matilda Larsson; "Patient-Specific Left Ventricular Flow Simulations From Transthoracic Echocardiography: Robustness Evaluation and Validation Against Ultrasound Doppler and Magnetic Resonance Imaging," vol.36(11), pp.2261-2275, Nov. 2017. The combination of medical imaging with computational fluid dynamics (CFD) has enabled the study of 3-D blood flow on a patient-specific level. However, with models based on gated high-resolution data, the study of transient flows, and any model implementation into routine cardiac care, is challenging. This paper presents a novel pathway for patient-specific CFD modelling of the left ventricle (LV), using 4-D transthoracic echocardiography (TTE) as input modality. To evaluate the clinical usability, two sub-studies were performed. First, a robustness evaluation was performed, where repeated models with alternating input variables were generated for six subjects and changes in simulated output quantified. Second, a validation study was carried out, where the pathway accuracy was evaluated against pulsed-wave Doppler (100 subjects), and 2-D through-plane phase-contrast magnetic resonance imaging measurements over seven intraventricular planes (6 subjects). The robustness evaluation indicated a model deviation of <12%, with highest regional and temporal deviations at apical segments and at peak systole, respectively. The validation study showed an error of <11% (velocities <10 cm/s) for all subjects, with no significant regional or temporal differences observed. With the patient-specific pathway shown to provide robust output with high accuracy, and with the pathway dependent only on 4-D TTE, the method has a high potential to be used within future clinical studies on 3-D intraventricular flow patterns.
To this end, future model developments, e.g., anatomically accurate LV valves, may further enhance the clinical value of the simulations.

Marie Bieth;Loic Peter;Stephan G. Nekolla;Matthias Eiber;Georg Langs;Markus Schwaiger;Bjoern Menze; "Segmentation of Skeleton and Organs in Whole-Body CT Images via Iterative Trilateration," vol.36(11), pp.2276-2286, Nov. 2017. Whole body oncological screening using CT images requires a good anatomical localisation of organs and the skeleton. While a number of algorithms for multi-organ localisation have been presented, developing algorithms for a dense anatomical annotation of the whole skeleton has not been addressed until now. Only methods for specialised applications, e.g., in spine imaging, have been previously described. In this work, we propose an approach for localising and annotating different parts of the human skeleton in CT images. We introduce novel anatomical trilateration features and employ them within iterative scale-adaptive random forests in a hierarchical fashion to annotate the whole skeleton. The anatomical trilateration features provide high-level long-range context information that complements the classical local context-based features used in most image segmentation approaches. They rely on anatomical landmarks derived from the previous element of the cascade to express positions relative to reference points. Following a hierarchical approach, large anatomical structures are segmented first, before identifying substructures. We develop this method for bone annotation but also illustrate its performance, although not specifically optimised for it, for multi-organ annotation. Our method achieves average Dice scores of 77.4 to 85.6 for bone annotation on three different data sets. It can also segment different organs with sufficient performance for oncological applications, e.g., for PET/CT analysis, and its computation time allows for its use in clinical practice.
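The trilateration features express each position relative to landmark reference points detected by the previous stage of the cascade. A minimal, hypothetical sketch of such a distance-to-landmarks descriptor (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def trilateration_features(voxels, landmarks):
    """Describe each voxel by its Euclidean distances to a set of anatomical
    landmarks, giving long-range context that complements local features.

    voxels:    (n, 3) array of voxel coordinates
    landmarks: (m, 3) array of reference-point coordinates
    returns:   (n, m) feature matrix of distances
    """
    return np.linalg.norm(voxels[:, None, :] - landmarks[None, :, :], axis=-1)
```

Each row of the output is a long-range positional signature of one voxel, usable alongside local appearance features in a random forest.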

João Pedrosa;Sandro Queirós;Olivier Bernard;Jan Engvall;Thor Edvardsen;Eike Nagel;Jan D’hooge; "Fast and Fully Automatic Left Ventricular Segmentation and Tracking in Echocardiography Using Shape-Based B-Spline Explicit Active Surfaces," vol.36(11), pp.2287-2296, Nov. 2017. Cardiac volume/function assessment remains a critical step in daily cardiology, and 3-D ultrasound plays an increasingly important role. Fully automatic left ventricular segmentation is, however, a challenging task due to the artifacts and low contrast-to-noise ratio of ultrasound imaging. In this paper, a fast and fully automatic framework for the full-cycle endocardial left ventricle segmentation is proposed. This approach couples the advantages of the B-spline explicit active surfaces framework, a purely image information approach, to those of statistical shape models to give prior information about the expected shape for an accurate segmentation. The segmentation is propagated throughout the heart cycle using a localized anatomical affine optical flow. It is shown that this approach not only outperforms other state-of-the-art methods in terms of distance metrics, with mean average distances of 1.81±0.59 and 1.98±0.66 mm at end-diastole and end-systole, respectively, but is also computationally efficient (on average 11 s per 4-D image) and fully automatic.

Ukash Nakarmi;Yanhua Wang;Jingyuan Lyu;Dong Liang;Leslie Ying; "A Kernel-Based Low-Rank (KLR) Model for Low-Dimensional Manifold Recovery in Highly Accelerated Dynamic MRI," vol.36(11), pp.2297-2307, Nov. 2017. While many low-rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in the input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework that allows nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to the kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low-rank formulation. The algorithm consists of manifold learning using a kernel, low-rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experimental results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
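The "low-rank enforcement in feature space" step can be illustrated by truncating the eigendecomposition of an RBF Gram matrix. This hedged sketch stands in for the paper's kernel low-rank model and omits the manifold-learning and preimaging steps entirely; all names and the kernel choice are assumptions:

```python
import numpy as np

def kernel_low_rank(X, rank, gamma=1.0):
    """Enforce a low-rank structure in RBF feature space by truncating the
    eigendecomposition of the Gram matrix: keep only the top-`rank`
    eigenpairs, yielding a rank-`rank` approximation of K."""
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-gamma * sq)                     # RBF Gram matrix
    w, V = np.linalg.eigh(K)                    # eigenvalues ascending
    idx = np.argsort(w)[::-1][:rank]            # top-`rank` eigenpairs
    return (V[:, idx] * w[idx]) @ V[:, idx].T   # low-rank reconstruction of K
```

In the full method this truncation operates within an iterative reconstruction, followed by a preimaging step that maps the feature-space estimate back to image space with data consistency.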

Naren Naik;Nishigandha Patil;Yamini Yadav;Jerry Eriksson;Asima Pradhan; "Fully Nonlinear ${SP}_{3}$ Approximation Based Fluorescence Optical Tomography," vol.36(11), pp.2308-2318, Nov. 2017. In fluorescence optical tomography, many works in the literature focus on the linear reconstruction problem to obtain the fluorescent yield or the linearized reconstruction problem to obtain the absorption coefficient. The nonlinear reconstruction problem, to reconstruct the fluorophore absorption coefficient, is of interest in imaging studies as it presents the possibility of better reconstructions owing to a more appropriate model. Accurate and computationally efficient forward models are also critical in the reconstruction process. The SP$_{N}$ approximation to the radiative transfer equation (RTE) is gaining importance for tomographic reconstructions owing to its computational advantages over the full RTE while being more accurate and widely applicable than the commonly used diffusion approximation. This paper presents a Gauss-Newton-based fully nonlinear reconstruction for the SP$_{3}$-approximated fluorescence optical tomography problem with respect to shape as well as conventional finite-element-method-based representations. The contribution of this paper is the Frechet derivative calculations for this problem and the demonstration of reconstructions in both representations. For the shape reconstructions, level-set-based shape representations parameterized by radial basis functions are used. We present reconstructions for tumor-mimicking test objects in scattering- and absorption-dominant settings, respectively, for moderately noisy data sets in order to demonstrate the viability of the formulation. Comparisons are presented between the nonlinear and linearized reconstruction schemes in an element-wise setting to illustrate the benefits of using the former, especially for absorption-dominant media.

Seyed Sadegh Mohseni Salehi;Deniz Erdogmus;Ali Gholipour; "Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging," vol.36(11), pp.2319-2330, Nov. 2017. Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. 
Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm on the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), which performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.

Lucas R. Borges;Igor Guerrero;Predrag R. Bakic;Alessandro Foi;Andrew D. A. Maidment;Marcelo A. C. Vieira; "Method for Simulating Dose Reduction in Digital Breast Tomosynthesis," vol.36(11), pp.2331-2342, Nov. 2017. This paper proposes a new method of simulating dose reduction in digital breast tomosynthesis, starting from a clinical image acquired with a standard radiation dose. It considers both signal-dependent quantum and signal-independent electronic noise. Furthermore, the method accounts for pixel crosstalk, which causes the noise to be frequency-dependent, thus increasing the simulation accuracy. For an objective assessment, simulated and real images were compared in terms of noise standard deviation, signal-to-noise ratio (SNR), and normalized noise power spectrum (NNPS). A two-alternative forced-choice (2-AFC) study investigated the similarity between the noise strength of low-dose simulated and real images. Six experienced medical physics specialists participated in the study, with a total of 2160 readings. Objective assessment showed no relevant trends with the simulated noise. The relative error in the standard deviation of the simulated noise was less than 2% for every projection angle. The relative error of the SNR was less than 1.5%, and the NNPS of the simulated images had errors less than 2.5%. The 2-AFC human observer experiment yielded no statistically significant difference (p=0.84) in the perceived noise strength between simulated and real images. Furthermore, the observer study also allowed the estimation of a dose difference at which the observer perceived a just-noticeable difference (JND) in noise levels. The estimated JND value indicated that a change of 17% in the current-time product was sufficient to cause a noticeable difference in noise levels. The observed high accuracy, along with the flexible calibration, makes this method an attractive tool for clinical image-based simulations of dose reduction.
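The two noise sources named here can be illustrated by a toy injection of signal-dependent Poisson and signal-independent Gaussian noise. Unlike the proposed method, this sketch assumes a noise-free input (a clinical image already contains full-dose noise, so the real method injects only the incremental noise) and ignores the frequency dependence caused by pixel crosstalk; all names and parameters are hypothetical:

```python
import numpy as np

def simulate_low_dose(image, dose_fraction, gain=1.0, sigma_e=2.0, rng=None):
    """Toy low-dose simulation: signal-dependent quantum (Poisson) noise
    scaled to the reduced dose, plus signal-independent electronic
    (Gaussian) readout noise."""
    rng = np.random.default_rng(rng)
    # Expected quanta at the reduced dose, expressed in detector counts.
    lam = np.clip(image, 0, None) * dose_fraction / gain
    # Poisson realization, rescaled back to the original signal level.
    quantum = rng.poisson(lam) * gain / dose_fraction
    electronic = rng.normal(0.0, sigma_e, size=image.shape)
    return quantum + electronic
```

Halving the dose roughly doubles the quantum-noise variance after rescaling, so a simulated half-dose image shows visibly higher noise at the same mean signal.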

Yuan Gao;Kun Wang;Shixin Jiang;Yuhao Liu;Ting Ai;Jie Tian; "Bioluminescence Tomography Based on Gaussian Weighted Laplace Prior Regularization for In Vivo Morphological Imaging of Glioma," vol.36(11), pp.2343-2354, Nov. 2017. Bioluminescence tomography (BLT) is a powerful non-invasive molecular imaging tool for in vivo studies of glioma in mice. However, because of light scattering and the resulting ill-posed inverse problem, it is challenging to develop a reconstruction method that can accurately locate the tumor and define the tumor morphology in three dimensions. In this paper, we propose a novel Gaussian weighted Laplace prior (GWLP) regularization method. It assumes that the variance of the bioluminescence energy between any two voxels inside an organ has a non-linear inverse relationship with their Gaussian distance, which counteracts the over-smoothing of tumor morphology in BLT reconstruction. We compared the GWLP with conventional Tikhonov and Laplace regularization methods through various numerical simulations and in vivo orthotopic glioma mouse model experiments. The in vivo magnetic resonance imaging and ex vivo green fluorescent protein images and hematoxylin-eosin stained images of whole head cryoslicing specimens were utilized as gold standards. The results demonstrated that GWLP achieved the highest accuracy in tumor localization and tumor morphology preservation. To the best of our knowledge, this is the first study to achieve such accurate BLT morphological reconstruction of orthotopic glioma without using any segmented tumor structure from other structural imaging modalities as a prior for reconstruction guidance. This makes BLT more suitable and practical for in vivo imaging of orthotopic glioma mouse models.

Gustavo Carneiro;Jacinto Nascimento;Andrew P. Bradley; "Automated Analysis of Unregistered Multi-View Mammograms With Deep Learning," vol.36(11), pp.2355-2365, Nov. 2017. We describe an automated methodology for the analysis of unregistered cranio-caudal (CC) and medio-lateral oblique (MLO) mammography views in order to estimate the patient's risk of developing breast cancer. The main innovation behind this methodology lies in the use of deep learning models for the problem of jointly classifying unregistered mammogram views and respective segmentation maps of breast lesions (i.e., masses and micro-calcifications). This is a holistic methodology that can classify a whole mammographic exam, containing the CC and MLO views and the segmentation maps, as opposed to the classification of individual lesions, which is the dominant approach in the field. We also demonstrate that the proposed system is capable of using the segmentation maps generated by automated mass and micro-calcification detection systems, while still producing accurate results. The semi-automated approach (using manually defined mass and micro-calcification segmentation maps) is tested on two publicly available data sets (INbreast and DDSM), and results show that the volume under ROC surface (VUS) for a 3-class problem (normal tissue, benign, and malignant) is over 0.9, the area under ROC curve (AUC) for the 2-class “benign versus malignant” problem is over 0.9, and that for the 2-class breast screening problem (malignancy versus normal/benign) is also over 0.9. For the fully automated approach, the VUS result on INbreast is over 0.7, the AUC for the 2-class “benign versus malignant” problem is over 0.78, and the AUC for the 2-class breast screening problem is 0.86.

Peter Mountney;Jonathan M. Behar;Daniel Toth;Maria Panayiotou;Sabrina Reiml;Marie-Pierre Jolly;Rashed Karim;Li Zhang;Alexander Brost;Christopher A. Rinaldi;Kawal Rhode; "A Planning and Guidance Platform for Cardiac Resynchronization Therapy," vol.36(11), pp.2366-2375, Nov. 2017. Patients with drug-refractory heart failure can greatly benefit from cardiac resynchronization therapy (CRT). A CRT device can resynchronize the contractions of the left ventricle (LV) leading to reduced mortality. Unfortunately, 30%-50% of patients do not respond to treatment when assessed by objective criteria such as cardiac remodeling. A significant contributing factor is the suboptimal placement of the LV lead. It has been shown that placing this lead away from scar and at the point of latest mechanical activation can improve response rates. This paper presents a comprehensive and highly automated system that uses scar and mechanical activation to plan and guide CRT procedures. Standard clinical preoperative magnetic resonance imaging is used to extract scar and mechanical activation information. The data are registered to a single 3-D coordinate system and visualized in novel 2-D and 3-D American Heart Association plots enabling the clinician to select target segments. During the procedure, the planning information is overlaid onto live fluoroscopic images to guide lead deployment. The proposed platform has been used during 14 CRT procedures and validated on synthetic, phantom, volunteer, and patient data.

Zhipeng Jia;Xingyi Huang;Eric I-Chao Chang;Yan Xu; "Constrained Deep Weak Supervision for Histopathology Image Segmentation," vol.36(11), pp.2376-2388, Nov. 2017. In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This paper is under a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates the state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

Okkyun Lee;Steffen Kappler;Christoph Polster;Katsuyuki Taguchi; "Estimation of Basis Line-Integrals in a Spectral Distortion-Modeled Photon Counting Detector Using Low-Rank Approximation-Based X-Ray Transmittance Modeling: K-Edge Imaging Application," vol.36(11), pp.2389-2403, Nov. 2017. Photon counting detectors (PCDs) provide multiple energy-dependent measurements for estimating basis line-integrals. However, the measured spectrum is distorted by the spectral response effect (SRE) via charge sharing, K-fluorescence emission, and so on. Thus, in order to avoid bias and artifacts in images, the SRE needs to be compensated. For this purpose, we recently developed a computationally efficient three-step algorithm for PCD-CT without contrast agents by approximating smooth X-ray transmittance using low-order polynomial bases. It compensated the SRE by incorporating the SRE model in a linearized estimation process and nearly achieved the minimum-variance unbiased (MVU) estimator. In this paper, we extend the three-step algorithm to K-edge imaging applications by designing optimal bases using a low-rank approximation to model X-ray transmittances with arbitrary shapes (i.e., smooth without the K-edge or discontinuous with the K-edge). The bases can be used to approximate the X-ray transmittance and to linearize the PCD measurement modeling, and the three-step estimator can then be derived as in the previous approach: estimating the X-ray transmittance in the first step, estimating basis line-integrals including that of the contrast agent in the second step, and correcting for a bias in the third step. We demonstrate that the proposed method is more accurate and stable than the low-order polynomial-based approaches in extensive simulation studies using gadolinium for the K-edge imaging application. We also demonstrate that the proposed method achieves a nearly MVU estimator, and is more stable than the conventional maximum likelihood estimator in high-attenuation cases with fewer photon counts.

* "ICIP 2018 IEEE International Conference on Image Processing," vol.36(11), pp.2404-2404, Nov. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE International Conference on Multimedia and Expo (ICME) 2018," vol.36(11), pp.2405-2405, Nov. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "40th International Conference of the IEEE Engineering in Medicine and Biology Society," vol.36(11), pp.2406-2406, Nov. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE International Symposium on Biomedical Imaging," vol.36(11), pp.2407-2407, Nov. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "Biomedical and Health Informatics (BHI) and the Body Sensor Networks (BSN) Conference," vol.36(11), pp.2408-2408, Nov. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "IEEE Transactions on Medical Imaging information for authors," vol.36(11), pp.C3-C3, Nov. 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

IET Image Processing - new TOC (2017 November 23) [Website]

Ajay Mittal;Rahul Hooda;Sanjeev Sofat; "Lung field segmentation in chest radiographs: a historical review, current status, and expectations from deep learning," vol.11(11), pp.937-952, 11 2017. Lung field defines a region-of-interest in which specific radiologic signs such as septal lines, pulmonary opacities, cavities, consolidations, and lung nodules are searched by a chest radiographic computer-aided diagnostic system. Thus, its precise segmentation is extremely important. To precisely segment it, numerous methods have been developed during the last four decades. However, no exclusive survey consolidating the advancements in these methods has been presented to date. This study fills that void by presenting a comprehensive survey of these methods with a focus on their underlying principle, the dataset used, reported performance, and relative merits and demerits. It refrains from a hard comparative evaluation that would bring all of them onto a common platform, since the datasets used in their development and testing are of varied quality and complexity and are not publicly available. It also provides a glimpse of deep learning, the present state of deep-learning-based lung field segmentation methods, expectations from it, and the challenges ahead.

Neeti Singh;Thirusangu Thilagavathy;Ramasubramanian T. Lakshmipriya;Oorkavalan Umamaheswari; "Some studies on detection and filtering algorithms for the removal of random valued impulse noise," vol.11(11), pp.953-963, 11 2017. Removing random valued impulse noise (RVIN) is a challenging task in corrupted images. This article aims to study some detection and filtering algorithms which remove RVIN in images. In addition to some state-of-the-art detection and filtering algorithms, a new detection technique, measures of dispersion (MOD) algorithm, for removing very high-density RVIN proposed by authors is also compared with existing methods. In the detection stage, rank order absolute difference, rank order logarithmic difference, adaptive switching median, triangle-based linear interpolation, and MOD algorithms are considered. Median filter, fuzzy switching median filter, and fuzzy switching weighted median filter are used for filtering followed by the detection algorithms. Comparative studies in terms of peak signal-to-noise ratio and structural similarity have been devised to evaluate the performance of various filtering schemes.
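As a concrete illustration of the rank-ordered absolute difference (ROAD) idea used by the detectors above, the sketch below flags a pixel as impulse-corrupted when the sum of its k smallest absolute differences to the 8 neighbours exceeds a threshold, then repairs it with a 3x3 median. The window size, k and threshold here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def road(img, k=4):
    """Rank-ordered absolute difference per pixel: the sum of the k smallest
    absolute differences to the 8 neighbours. Large values flag likely
    impulse-corrupted pixels. Border pixels are skipped for brevity."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2].astype(float)
            diffs = np.abs(win - win[1, 1]).ravel()
            diffs = np.delete(diffs, 4)          # drop the centre pixel itself
            out[i, j] = np.sort(diffs)[:k].sum()
    return out

def detect_and_filter(img, thresh=80.0, k=4):
    """Replace flagged pixels with the 3x3 median; others pass through."""
    scores = road(img, k)
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if scores[i, j] > thresh:
                out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out
```

A clean neighbourhood yields many near-zero differences, so its k smallest sum stays below the threshold and the pixel passes through unchanged.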

Xiao-Xin Li;Lin He;Pengyi Hao;Zhiyong Liu;Jingjing Li; "Adaptive Weberfaces for occlusion-robust face representation and recognition," vol.11(11), pp.964-975, 11 2017. In order to deal with facial occlusion effectively, the authors propose a powerful but simple face representation method, called adaptive Weberfaces (AdapWeber), based on human visual perception change model and the Weber ratio R implied in Weber's law. Specifically, human perception is naturally highly selective and robust to occlusions, and the Weber ratio R is very important to enhance feature redundancy. As feature redundancy and locality are two guiding principles against facial occlusion, they further develop eight variants of AdapWeber, collectively referred to as single-scale and single-orientation (SSSO) AdapWeber, by shrinking the kernel locality and varying the kernel orientation of the original AdapWeber, and integrate them to formulate a multi-scale and multi-orientation (MSMO) AdapWeber. A natural by-product of MSMO AdapWeber is MSMO Weberfaces. Experiments on four benchmark databases, including Extended Yale B, AR, UMB-DB, and LFW, showed that MSMO AdapWeber/Weberfaces, rather than any variant of SSSO AdapWeber/Weberfaces, outperformed several popular feature extraction approaches in many scenarios, especially when the occlusion level is very high or the image dimension is very low. This result demonstrates that several occlusion-weak features can be combined together to construct an occlusion-robust feature.

Wenchao Cui;Guoqiang Gong;Ke Lu;Shuifa Sun;Fangmin Dong; "Convex-relaxed active contour model based on localised kernel mapping," vol.11(11), pp.976-985, 11 2017. Intensity inhomogeneity is one of the major obstacles for intensity-based segmentation in many applications. The recently proposed kernel mapping (KM) method has exhibited excellent performance on segmenting various types of noisy images while it is not effective to handle intensity inhomogeneity. To overcome this drawback, this study presents a localised KM (LKM) method based on the fact that intensity inhomogeneity can be ignored in a local neighbourhood. The authors' method first reconstructs the KM formulation of image segmentation in a neighbourhood of each pixel, and then such formulations for all pixels can be integrated together to derive the LKM energy functional. Minimisation of the energy functional is implemented by solving an equivalent convex-relaxed problem whose optimisation can be quickly achieved via the split Bregman method. Experimental results on two-phase segmentation and multiphase segmentation demonstrate competitive performance of the LKM method in the presence of intensity inhomogeneity and severe noise.

Gamal Fahmy;Mamdouh F. Fahmy;Omar M. Fahmy; "Micro-movement magnification in video signals using complex wavelet analysis," vol.11(11), pp.986-993, 11 2017. Magnifying micro-movements from natural video has recently been investigated by several computer vision researchers, due to its impact in numerous applications. In this study, the authors analyse video signals and try to magnify micro-movements/vibrations to make them visible. These micro-movements are typically undetectable and cannot be seen by basic human vision. They utilise complex wavelets to analyse sequential frames and detect any minor change in an object's spatial position. They magnify some specific complex wavelet frequency bands by a multiplication factor and reconstruct the video signal after some manipulation and modification to make these micro-movements visible and observable. They compare their work with recent techniques in micro-motion magnification (Freeman et al.) and try to show the merits of each technique. These micro-movements can later be utilised in different applications such as medical imaging, structural engineering, mechanical engineering, physical feature analysis and industrial engineering, as will be seen in their experiments.

Lifeng Liu;Yan Ma;Xiangfen Zhang;Yuping Zhang;Shunbao Li; "High discriminative SIFT feature and feature pair selection to improve the bag of visual words model," vol.11(11), pp.994-1001, 11 2017. The bag of visual words (BOW) model has been widely applied in the field of image recognition and image classification. However, clustering all scale-invariant feature transform (SIFT) features to construct the visual words results in a substantial loss of discriminative power for the visual words, and the corresponding visual phrases further render the generated BOW histogram sparse. In this study, the authors aim to improve the classification accuracy by extracting highly discriminative SIFT features and feature pairs. First, highly discriminative SIFT features are extracted using the within- and between-class correlation coefficients. Second, highly discriminative SIFT feature pairs are selected by using a minimum spanning tree and its total cost. Next, the highly discriminative SIFT features and feature pairs are exploited to construct the visual word dictionary and visual phrase dictionary, respectively, which are concatenated into a joint histogram with different weights. Compared with state-of-the-art BOW-based methods, the experimental results on the Caltech 101 dataset show that the proposed method has higher classification accuracy.

Weiqing Wang;Junyong Ye;Tongqing Wang;Weifu Wang; "Reversible data hiding scheme based on significant-bit-difference expansion," vol.11(11), pp.1002-1014, 11 2017. This study presents a lossless robust data hiding scheme based on significant-bit-difference expansion. The original cover image can be recovered without any distortion after the hidden data have been extracted if the stego image remains intact; on the other hand, the hidden data are robust against unintentional changes to the stego image, such as image compression and the sometimes unavoidable addition of random noise, provided these stay below a certain level and do not change the content of the image. The proposed scheme decomposes the pixels of a cover image into two parts, namely the higher significant bits (HSB) and the least significant bits (LSB), and calculates the HSB difference between adjacent pixels. Bits are embedded into the HSBs by shifting the bins of the HSB difference-value histogram. The shift and shifting rule are fixed for all HSB difference values, so reversibility is achieved. Furthermore, owing to the separation of HSBs and LSBs, minor alterations applied to the stego image by non-malicious attacks such as Joint Photographic Experts Group (JPEG) compression do not change the HSB values or the HSB difference values, so robustness is achieved. Experimental results show that, compared with previous works, the performance of the proposed scheme is significantly improved.
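The HSB-difference histogram shifting described above can be sketched as follows, assuming one-sided shifting at the zero-difference peak bin and ignoring overflow handling; the 5/3 HSB-LSB split and the specific shifting rule are illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def embed(pixels, bits, lsb_count=3):
    """Embed payload bits by shifting the HSB-difference histogram.
    Differences equal to the peak bin (0) carry one bit each; positive
    differences are shifted up by one to keep extraction unambiguous."""
    hsb = pixels >> lsb_count                # robust higher significant bits
    lsb = pixels & ((1 << lsb_count) - 1)    # fragile bits, left untouched
    d = np.diff(hsb)
    it = iter(bits)
    d2 = d.copy()
    for i, v in enumerate(d):
        if v > 0:
            d2[i] = v + 1                    # shift bins right of the peak
        elif v == 0:
            d2[i] = next(it, 0)              # embed one payload bit
    new_hsb = np.concatenate(([hsb[0]], hsb[0] + np.cumsum(d2)))
    return (new_hsb << lsb_count) | lsb

def extract(stego, lsb_count=3):
    """Recover the payload and the original HSB plane without loss."""
    hsb = stego >> lsb_count
    d = np.diff(hsb)
    bits = [int(v) for v in d if v in (0, 1)]
    d0 = np.where(d > 1, d - 1, np.where(d >= 0, 0, d))  # undo shifting
    orig_hsb = np.concatenate(([hsb[0]], hsb[0] + np.cumsum(d0)))
    return bits, orig_hsb
```

Because only the HSB plane is modified, small perturbations confined to the LSBs (as from mild compression) leave the embedded payload readable.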

Leninisha Shanmugam;Krithika Gunasekaran;Aishwarya Natarajan;Vani Kaliaperumal; "Quantitative growth analysis of pulp necrotic tooth (post-op) using modified region growing active contour model," vol.11(11), pp.1015-1019, 11 2017. In the field of dentistry, prospective clinical study reports affirm the need for approximate growth analysis of an endodontically treated tooth post treatment. There is no difference in the frequency, appearance or extent of root resorption in the teeth, so it is necessary to elucidate the role of endodontic treatment in root resorption. Differences between two radiographs taken at specified intervals, in terms of the frequency of growth changes in treated teeth, need to be observed accurately. This study addresses this requirement by utilising a slightly modified region-based growing active contour model for quantitative growth analysis of the tooth. Recall radiographs of endodontic regeneration involving immature permanent teeth with pulp necrosis are considered as input in this research. Image enhancement techniques such as dilation and erosion from mathematical morphology are then performed sequentially to emphasise the outlying pixels. Finally, the criteria, which include root length, apical diameter and dentinal wall thickness, are calibrated and reported in the experimental results. The visual sample results, along with their measurements, demonstrate the efficiency of the proposed algorithm.

Mohamed Boussif;Noureddine Aloui;Adnane Cherif; "Smartphone application for medical images secured exchange based on encryption using the matrix product and the exclusive addition," vol.11(11), pp.1020-1026, 11 2017. In this study, the authors present a secured transfer method for medical images on smartphones, based on a proposed image encryption algorithm using the matrix product and the exclusive addition. The novelty of this study is to propose a low-complexity encryption algorithm running in real time on an embedded system. Experimental results demonstrate that the proposed encryption method can achieve high security with good performance.

Su Honglei;Liu Qi;Gong Hao;Wang Xiaohui;Yang Huan;Pan Zhenkuan; "Content-based bitrate model for perceived compression distortion evaluation of mobile video services," vol.11(11), pp.1027-1033, 11 2017. A novel bitrate model with low complexity is proposed for perceived compression distortion assessment of low-resolution mobile video, which is extremely useful in intermediate network nodes for quality monitoring. Without fully decoding, parameters such as bitrate, frame type, quantisation parameter, DCT coefficients and motion vectors are extracted by bitstream analysis. Bitrate is regarded as the essential parameter, while the bitrate-MOS curve is determined by video content. The spatial factor is estimated using the quantisation parameter and DCT coefficients, and the temporal factor is estimated using motion vectors. Apart from bitrate, the spatial and temporal factors, which reflect the characteristics of video content, are considered in the proposed model to obtain a more accurate evaluation. Experimental results show that the overall performance of the proposed model significantly outperforms that of five other bitrate models in terms of widely used performance criteria, including the Pearson correlation coefficient (PCC), the Spearman rank-order correlation coefficient (SROCC), the root-mean-squared error (RMSE) and the outlier ratio (OR).

Guangming Lu;Yuanrong Xu; "Fast pore matching method based on deterministic annealing algorithm," vol.11(11), pp.1034-1040, 11 2017. High-resolution fingerprint identification systems (HRFIS) have become a hot topic in academic research. Compared to traditional automatic fingerprint identification systems, an HRFIS reduces the risk of being faked by using level-3 features, such as pores, which cannot be detected in lower-resolution images. However, there is a serious problem in HRFIS: there are hundreds of sweat pores in one fingerprint image, which makes direct fingerprint matching very time-consuming. The authors propose a method to match pores in two fingerprint images based on a deterministic annealing algorithm. In this method, fingerprints are aligned using singular points. Then minutiae are matched based on the alignment result. To reduce the impact of deformation, they build a convex hull for each of these fingerprints. Pores in these convex hulls are used for matching. In the experiments, their method is compared with the random sample consensus method, a minutia- and ICP-based method, and a direct pore matching method. The results show that the proposed method is more efficient.

Fatemeh Fakhari;Mohammad.R. Mosavi;Mehdi.M. Lajvardi; "Image fusion based on multi-scale transform and sparse representation: an image energy approach," vol.11(11), pp.1041-1049, 11 2017. Image fusion is a process to enhance the human perception of different images from the same scene. Two methods are currently popular in signal/image fusion: the multi-scale transform (MST) and sparse representation (SR). This study uses an image energy approach to enhance a fusion rule based on the combination of the MST and SR methods. Each source image is first decomposed into its sub-bands using the selected MST method. Then, SR is applied to the low-pass band and a maximum absolute (max-abs) rule merges the high-pass bands. The activity level of the sparse coefficients is measured based on the energy differences of the source images. When the energy gap is high enough, the coefficient vector with the maximum L2-norm is selected; otherwise, the maximum L1-norm is considered. Finally, by applying the inverse MST to the attained bands, the fused image is reconstructed. Popular MSTs, such as the discrete wavelet transform, the dual-tree complex wavelet transform and the non-sub-sampled contourlet transform, are used. The experiments are carried out on several standard and real-life images. The measurement results confirm that the proposed method enhances the contrast, clarity and visual information of the fused results.
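The energy-gated coefficient-selection rule can be sketched as below, where `codes_a`/`codes_b` are per-patch sparse-coefficient vectors from the two source images and `gap_thresh` is an assumed knob; the paper's actual activity measure and threshold may differ.

```python
import numpy as np

def select_sparse_codes(codes_a, codes_b, energy_a, energy_b, gap_thresh=0.2):
    """Per-patch fusion of sparse-coefficient vectors from two sources.
    A large relative energy gap lets the L2 norm decide which vector wins;
    a small gap falls back to the L1 norm as the activity-level measure."""
    fused = []
    for ca, cb, ea, eb in zip(codes_a, codes_b, energy_a, energy_b):
        gap = abs(ea - eb) / (max(ea, eb) + 1e-12)   # relative energy gap
        p = 2 if gap > gap_thresh else 1             # choose the norm
        fused.append(ca if np.linalg.norm(ca, p) >= np.linalg.norm(cb, p) else cb)
    return fused
```

For example, a patch pair with very different energies is decided by L2 norm, while a near-equal-energy pair is decided by the sparser L1 measure.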

Bangjun Wang;Li Zhang;Fanzhang Li; "Supervised orthogonal discriminant projection based on double adjacency graphs for image classification," vol.11(11), pp.1050-1058, 11 2017. This study proposes a supervised orthogonal discriminant projection (SODP) based on double adjacency graphs (DAGs). SODP based on DAG (SODP-DAG) aims to minimise the local within-class scatter and simultaneously maximise both the local between-class scatter and the non-local scatter, where the local between-class scatter and the local within-class scatter are constructed by applying the DAG structure. By doing so, SODP-DAG can keep the local within-class structure for original data and find the optimal discriminant directions effectively. Moreover, four schemes are designed for constructing weight matrices in SODP-DAG. To validate the performance of SODP-DAG, the authors compared it with orthogonal discriminant projection, SODP and others on several publicly available datasets. Experimental results show the feasibility and effectiveness of SODP-DAG.

Uche Afam Nnolim; "Improved partial differential equation-based enhancement for underwater images using local–global contrast operators and fuzzy homomorphic processes," vol.11(11), pp.1059-1067, 11 2017. This study describes improved partial differential equation (PDE)-based formulations for combined global and local contrast operators for underwater image enhancement. The algorithms remove the limitations of conventional closed-form approaches and the problem of optimal stopping time of earlier PDE-based approaches. Main features include improved simultaneous combination, augmentation and control of various individual local and global processes, guided by optimisation of multiple image metrics. This ensures optimal operation of the algorithms in terms of visual and numerical results and proposed approaches ensure faster and fully automated processing of various images. Additional contributions include the incorporation of a colour space converter, fuzzy homomorphic and post-contrast enhancement processes for certain problematic images. Experimental comparisons indicate that the improved approaches yield better results than several of the proposed works from the literature for underwater image enhancement.

Xin Liu;Lu Xie;Bineng Zhong;Ji-Xiang Du;Qinmu Peng; "Automatic facial flaw detection and retouching via discriminative structure tensor," vol.11(11), pp.1068-1076, 11 2017. Facial retouching has been increasingly applied in current social media and entertainment industries. In this study, the authors propose an efficient approach to automatically detect and retouch facial flaws by using a discriminative structure tensor. First, a non-linear structure tensor associated with a saliency model is exploited to discriminatively and automatically detect the significant facial flaws. Then, a Gaussian skin model is constructed in YCbCr space and Otsu thresholding is simultaneously utilised to precisely mark the facial skin regions, from which the mouth, eyebrows and nostril parts are excluded. Subsequently, the diverse structure tensor is employed to discriminatively adjust the inpainting priority, and a structure tensor-based inpainting algorithm is proposed to retouch the detected flaws. Without manual intervention, extensive experiments have shown its effectiveness in marking the freckles, blemishes and moles in face images, and the retouching performance is visually pleasing in comparison with state-of-the-art counterparts.

Haiyong Zheng;Nan Wang;Zhibin Yu;Zhaorui Gu;Bing Zheng; "Robust and automatic cell detection and segmentation from microscopic images of non-setae phytoplankton species," vol.11(11), pp.1077-1085, 11 2017. A saliency-based marker-controlled watershed method was proposed to detect and segment phytoplankton cells from microscopic images of non-setae species. This method first improved the IG saliency detection method by combining the saturation feature with colour and luminance features to detect cells from microscopic images uniformly, and then produced effective internal and external markers by removing various specific noises in microscopic images so that watershed segmentation performs efficiently and automatically. The authors built the first benchmark dataset for cell detection and segmentation, including 240 microscopic images across multiple phytoplankton species with pixel-wise cell regions labelled by a taxonomist, to evaluate their method. They compared their cell detection method with seven popular saliency detection methods and their cell segmentation method with six commonly used segmentation methods. The quantitative comparison validates that their method performs better on cell detection in terms of robustness and uniformity and on cell segmentation in terms of accuracy and completeness. The qualitative results show that their improved saliency detection method can detect and highlight all cells, and the following marker selection scheme can remove the corner noise caused by illumination, the small noise caused by specks and debris, as well as deal with blurred edges.

Qiuchen Du;Rongke Liu;Yu Pan; "Depth extraction for a structured light system based on mismatched image pair rectification using a virtual camera," vol.11(11), pp.1086-1093, 11 2017. Structured light systems have become an effective tool for reconstructing three-dimensional models of objects due to the advent of low-price, high-speed depth cameras such as Kinect. However, this kind of active depth sensor extracts low-quality depth maps because of its inaccurate image matching process. This study proposes a depth extraction method based on image rectification for accurate image matching. Because the sizes of the projected patterns and the captured images usually differ, a virtual camera is defined, through which the rectified images are generated to match the images in the real camera at pixel level. Experiments on simulated and hardware platforms demonstrate that the proposed method achieves efficient rectification and obtains better-quality depth maps.

Lingcheng Kong;Hui Zhang;Yuhui Zheng;Yunjie Chen;Jiezhong Zhu;Qingming M. Jonathan Wu; "Image segmentation using a hierarchical student's-t mixture model," vol.11(11), pp.1094-1102, 11 2017. As a significant tool, finite mixture models (FMMs) have been widely used for image segmentation. However, there are two problems with standard FMMs: first, the conditional probability is sensitive to outliers; second, the robustness to image noise is inadequate. In this study, the authors present a novel hierarchical Student's-t mixture model (HSMM), which includes standard FMMs as a sub-problem. Additionally, to incorporate more image spatial information, they apply a mean template not only to the prior/posterior probability, but also to the sub-conditional distribution. Thus, their HSMM is more robust to outliers and image noise owing to the spatial constraints from the mean template. In the standard SMM, a t-distribution is used to calculate the conditional probability. Although the authors use the Student's-t distribution to solve the image segmentation problems of this study, their HSMM achieves excellent performance, is flexible, and can encompass any other model that is based on FMMs. Experimental results demonstrate that their proposed method is robust and effective.

Navid Rabbani;Behzad Nazari;Saeid Sadri;Reyhaneh Rikhtehgaran; "Efficient Bayesian approach to saliency detection based on Dirichlet process mixture," vol.11(11), pp.1103-1113, 11 2017. Saliency detection plays a great role in many image processing applications. This study introduces a new Bayesian framework for saliency detection. In this framework, image saliency is computed as the product of three saliencies: location-based, feature-based and centre-surround saliencies. Each of these saliencies is estimated using statistical approaches. The centre-surround saliency is estimated using a Dirichlet process mixture model. The authors evaluate their method on five different databases and show that it outperforms state-of-the-art methods. They also show that the proposed method has a low computational cost.

Yusuf Akhtar;Dipti Prasad Mukherjee; "Reconstruction of three-dimensional linear structures in the breast from craniocaudal and mediolateral oblique mammographic views," vol.11(11), pp.1114-1121, 11 2017. A three-dimensional (3D) representation of the linear structures in a breast has been constructed by utilising information of the ridges in the craniocaudal (CC) and the mediolateral oblique (MLO) mammographic views of the breast. Blood vessels and ducts appear as ridges in the mammogram. The position and shape of the ridges in the breast are an indicator of malignancy. In the first stage of the proposed 3D reconstruction problem, an algorithm has been developed to find out which linear structure in the CC view and a linear structure in the MLO view correspond to the same 3D structure in the breast. The 3D view of the linear structures is constructed from the abovementioned correspondences. The positional error (per unit length of the reconstructed linear structure) between the manual reconstruction and the algorithmic reconstruction of a linear structure turns out to be better by 20% than a competing approach.

IEEE Transactions on Signal Processing - new TOC (2017 November 23) [Website]

Zhijin Qin;Yue Gao;Mark D. Plumbley; "Malicious User Detection Based on Low-Rank Matrix Completion in Wideband Spectrum Sensing," vol.66(1), pp.5-17, Jan.1, 1 2018. In cognitive radio networks, cooperative spectrum sensing (CSS) has been a promising approach to improve sensing performance by utilizing spatial diversity of participating secondary users (SUs). In current CSS networks, all cooperative SUs are assumed to be honest and genuine. However, the presence of malicious users sending out dishonest data can severely degrade the performance of CSS networks. In this paper, a framework with high detection accuracy and low costs of data acquisition at SUs is developed, with the purpose of mitigating the influences of malicious users. More specifically, a low-rank matrix completion based malicious user detection framework is proposed. In the proposed framework, in order to avoid requiring any prior information about the CSS network, a rank estimation algorithm and an estimation strategy for the number of corrupted channels are proposed. Numerical results show that the proposed malicious user detection framework achieves high detection accuracy with lower data acquisition costs in comparison with the conventional approach. After being validated by simulations, the proposed malicious user detection framework is tested on the real-world signals over TV white space spectrum.

Adarsh Patel;Hukma Ram;Aditya K. Jagannatham;Pramod K. Varshney; "Robust Cooperative Spectrum Sensing for MIMO Cognitive Radio Networks Under CSI Uncertainty," vol.66(1), pp.18-33, Jan.1, 1 2018. This paper considers the problem of cooperative spectrum sensing in multiuser multiple-input multiple-output cognitive radio networks considering the presence of uncertainty in the channel state information (CSI) of the secondary user channels available at the fusion center. Several schemes are proposed that employ cooperative decision rules based on local sensor decisions transmitted to the fusion center by the cooperating nodes over an orthogonal multiple access channel. First, fusion rules are derived under perfect CSI at the fusion center for both antipodal and nonantipodal signaling. Then, a robust detector, termed the uncertainty statistics-based likelihood ratio test, which optimally combines the decisions of different secondary users, is obtained for scenarios with CSI uncertainty. A generalized likelihood ratio test based robust detector is also derived for this scenario. Closed-form expressions are obtained to characterize the probabilities of false alarm $(P_{\text{FA}})$ and detection $(P_D)$ at the fusion center. Simulation results are presented to compare the performance of the proposed schemes with that of the conventional uncertainty agnostic detectors and also to corroborate the analytical expressions developed.

Luiz F. O. Chamon;Alejandro Ribeiro; "Greedy Sampling of Graph Signals," vol.66(1), pp.34-47, Jan.1, 1 2018. Sampling is a fundamental topic in graph signal processing, having found applications in estimation, clustering, and video compression. In contrast to traditional signal processing, the irregularity of the signal domain makes selecting a sampling set nontrivial and hard to analyze. Indeed, though conditions for graph signal interpolation from noiseless samples exist, they do not lead to a unique sampling set. The presence of noise makes choosing among these sampling sets a hard combinatorial problem. Although greedy sampling schemes are commonly used in practice, they have no performance guarantee. This work takes a twofold approach to address this issue. First, universal performance bounds are derived for the Bayesian estimation of graph signals from noisy samples. In contrast to currently available bounds, they are not restricted to specific sampling schemes and hold for any sampling sets. Second, this paper provides near-optimal guarantees for greedy sampling by introducing the concept of approximate submodularity and updating the classical greedy bound. It then provides explicit bounds on the approximate supermodularity of the interpolation mean-square error showing that it can be optimized with worst case guarantees using greedy search even though it is not supermodular. Simulations illustrate the derived bound for different graph models and show an application of graph signal sampling to reduce the complexity of kernel principal component analysis.
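A minimal sketch of greedy sampling for Bayesian graph-signal estimation, assuming a zero-mean Gaussian prior with covariance `Sigma` and i.i.d. observation noise; this direct (non-incremental) version recomputes the interpolation MSE at every step and is meant only to illustrate the greedy scheme the paper analyses, not its bounds or its efficient implementation.

```python
import numpy as np

def interpolation_mse(Sigma, S, noise_var):
    """Posterior MSE of estimating the whole signal from noisy samples at S:
    trace(Sigma - Sigma[:,S] (Sigma[S,S] + noise_var*I)^-1 Sigma[S,:])."""
    if not S:
        return np.trace(Sigma)
    K = Sigma[np.ix_(S, S)] + noise_var * np.eye(len(S))
    return np.trace(Sigma - Sigma[:, S] @ np.linalg.solve(K, Sigma[S, :]))

def greedy_sample(Sigma, k, noise_var=0.1):
    """Greedily grow the sampling set, each step adding the node that most
    reduces the interpolation mean-square error."""
    n = Sigma.shape[0]
    S = []
    for _ in range(k):
        best = min((i for i in range(n) if i not in S),
                   key=lambda i: interpolation_mse(Sigma, S + [i], noise_var))
        S.append(best)
    return S
```

With a diagonal covariance the rule simply samples the highest-variance nodes first, which matches the intuition that the most uncertain coordinates are the most informative to observe.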

Isabel Schlangen;Emmanuel D. Delande;Jérémie Houssineau;Daniel E. Clark; "A Second-Order PHD Filter With Mean and Variance in Target Number," vol.66(1), pp.48-63, Jan.1, 1 2018. The Probability Hypothesis Density (PHD) and Cardinalized PHD (CPHD) filters are popular solutions to the multitarget tracking problem due to their low complexity and ability to estimate the number and states of targets in cluttered environments. The PHD filter propagates the first-order moment (i.e., mean) of the number of targets while the CPHD propagates the cardinality distribution in the number of targets, albeit at a greater computational cost. Introducing the Panjer point process, this paper proposes a Second-Order PHD (SO-PHD) filter, propagating the second-order moment (i.e., variance) of the number of targets alongside its mean. The resulting algorithm is more versatile in the modeling choices than the PHD filter, and its computational cost is significantly lower compared to the CPHD filter. This paper compares the three filters in statistical simulations which demonstrate that the proposed filter reacts more quickly to changes in the number of targets, i.e., target births and target deaths, than the CPHD filter. In addition, a new statistic for multiobject filters is introduced in order to study the correlation between the estimated number of targets in different regions of the state space, and a quantitative analysis of the spooky effect is proposed for the three filters.

An Liu;Vincent K. N. Lau; "Two-Timescale User-Centric RRH Clustering and Precoding Optimization for Cloud RAN via Local Stochastic Cutting Plane," vol.66(1), pp.64-76, Jan. 1, 2018. In a cloud radio access network (C-RAN), many distributed remote radio heads (RRHs) are connected to a centralized baseband unit pool via high-speed fronthaul links. Such an architecture improves the spectral efficiency but suffers from huge implementation costs. We propose a mixed timescale radio interference processing framework to optimize the tradeoff between the average weighted sum rate and the implementation cost in the C-RAN downlink. The radio interference processing is decomposed into short-term precoding and long-term user-centric RRH clustering (UCRC) subproblems. The short-term precoding subproblem can be solved using a modified weighted minimum mean squared error approach. To solve the challenging UCRC subproblem, we first propose a novel approximate stochastic cutting plane algorithm. Then, we bound the optimality gap of the proposed overall solution, and establish its asymptotic optimality in the weak interference and high SNR regimes. Simulations show that the proposed two-timescale solution achieves a better tradeoff performance than the baselines.

Weizhi Lu;Tao Dai;Shu-Tao Xia; "Binary Matrices for Compressed Sensing," vol.66(1), pp.77-85, Jan. 1, 2018. For an <inline-formula><tex-math notation="LaTeX">$m\times n$</tex-math></inline-formula> binary matrix with <inline-formula><tex-math notation="LaTeX">$d$</tex-math></inline-formula> nonzero elements per column, it is interesting to identify the minimal column degree <inline-formula><tex-math notation="LaTeX">$d$</tex-math></inline-formula> that corresponds to the best recovery performance. Since this problem is hard to address with currently known performance parameters, we propose a new performance parameter, the average of nonzero correlations between normalized columns. The parameter is proved to perform better than the known coherence parameter, namely the maximum correlation between normalized columns, when used to estimate the performance of binary matrices with high compression ratios <inline-formula><tex-math notation="LaTeX">$n/m$</tex-math></inline-formula> and low column degrees <inline-formula><tex-math notation="LaTeX">$d$</tex-math></inline-formula>. By optimizing the proposed parameter, we derive an ideal column degree <inline-formula><tex-math notation="LaTeX">$d=\lceil \sqrt{m}\rceil$</tex-math></inline-formula>, around which the best recovery performance is expected to be obtained. This is verified by simulations. Given the ideal number <inline-formula><tex-math notation="LaTeX">$d$</tex-math></inline-formula> of nonzero elements in each column, we further determine their specific distribution by minimizing the coherence with a greedy method. The resulting binary matrices achieve comparable or even better recovery performance than random binary matrices.
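
Both parameters in the abstract are cheap to compute. A small sketch that builds a random binary matrix with d = ceil(sqrt(m)) ones per column and evaluates the classical coherence (maximum correlation) alongside the proposed average of nonzero correlations; the dimensions are illustrative, not the paper's experiments.

```python
import numpy as np
from math import ceil, sqrt

rng = np.random.default_rng(1)

m, n = 64, 256
d = ceil(sqrt(m))                 # ideal column degree suggested by the paper

# Random binary matrix: d ones placed uniformly at random in each column.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0
A /= np.sqrt(d)                   # normalize columns to unit norm

G = A.T @ A                       # Gram matrix of the normalized columns
off = G[~np.eye(n, dtype=bool)]   # off-diagonal (cross-column) correlations
coherence = off.max()             # classical coherence parameter
avg_corr = off[off > 0].mean()    # proposed average nonzero correlation
```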

Engin Masazade;Abdulkadir Kose; "A Proportional Time Allocation Algorithm to Transmit Binary Sensor Decisions for Target Tracking in a Wireless Sensor Network," vol.66(1), pp.86-100, Jan. 1, 2018. In this paper, we study the target tracking problem in a wireless sensor network. A sensor receives a measurement from an energy emitting target and applies binary quantization to the received measurement to generate its decision. A sinusoidal waveform with a certain duration is then used to transmit the sensor decision to the fusion center (FC). All sensor decisions are transmitted to the FC over erroneous wireless channels based on a time division multiple access scheme. We introduce the proportional time allocation (PTA) algorithm, where at each time step of tracking, PTA jointly determines the sensors' binary quantization thresholds and the time allocations devoted to the transmission of the binary sensor decisions. Simulation results show that PTA optimally and dynamically distributes the available transmission time among the sensors near the target, so that the decisions of such sensors become less subject to channel errors, and turns off the non-informative sensors located far away from the target. Hence, PTA both reduces the number of sensors transmitting to the FC and provides better estimation performance compared to ad hoc equal time allocation approaches.

Bashir Sadeghi;Runyi Yu;Ruili Wang; "Shifting Interpolation Kernel Toward Orthogonal Projection," vol.66(1), pp.101-112, Jan. 1, 2018. Orthogonal projection offers the optimal solution for many sampling-reconstruction problems in terms of the least square error. In the standard interpolation setting where the sampling is assumed to be ideal, however, the projection is impossible unless the interpolation kernel is related to the sinc function and the input is bandlimited. In this paper, we propose a notion of shifting kernel toward the orthogonal projection. For a given interpolation kernel, we formulate optimization problems whose solutions lead to shifted interpolations that, while still being interpolatory, are closest to the orthogonal projection in the sense of the minimax regret. The quality of interpolation is evaluated in terms of the average approximation error over input shift. For the standard linear interpolation, we obtain several values of optimal shift, dependent on a priori information on input signals. For evaluation, we apply the new shifted linear interpolations to a Gaussian signal, an ECG signal, a speech signal, a two-dimensional signal, and three natural images. Significant improvements are observed over the standard and the 0.21-shifted linear interpolation proposed earlier.
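
For reference, the earlier 0.21-shifted linear interpolation mentioned above can be sketched compactly: a causal prefilter keeps the scheme interpolatory while the hat kernel is shifted by tau. The boundary initialization of the prefilter is an assumption of this sketch.

```python
import numpy as np

def shifted_linear_interp(x, t, tau=0.21):
    """Interpolate samples x (taken at the integers) at times t with the
    tau-shifted linear (hat) kernel. The causal prefilter enforces the
    interpolation condition x[n] = (1-tau)*c[n] + tau*c[n-1]."""
    c = np.empty_like(x, dtype=float)
    c[0] = x[0] / (1.0 - tau)            # boundary initialization (assumption)
    for n in range(1, len(x)):
        c[n] = (x[n] - tau * c[n - 1]) / (1.0 - tau)
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for k, ck in enumerate(c):
        u = np.abs(t - k - tau)
        out += ck * np.maximum(1.0 - u, 0.0)   # shifted hat kernel
    return out
```

By construction the scheme remains interpolatory: evaluating at the sample instants returns the samples exactly.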

Arthur Mensch;Julien Mairal;Bertrand Thirion;Gaël Varoquaux; "Stochastic Subsampling for Factorizing Huge Matrices," vol.66(1), pp.113-128, Jan. 1, 2018. We present a matrix-factorization algorithm that scales to input matrices with huge numbers of both rows and columns. Learned factors may be sparse or dense and/or nonnegative, which makes our algorithm suitable for dictionary learning, sparse component analysis, and nonnegative matrix factorization. Our algorithm streams matrix columns while subsampling them to iteratively learn the matrix factors. At each iteration, the row dimension of a new sample is reduced by subsampling, resulting in lower time complexity compared to a simple streaming algorithm. Our method comes with convergence guarantees to reach a stationary point of the matrix-factorization problem. We demonstrate its efficiency on massive functional magnetic resonance imaging data (2 TB), and on patches extracted from hyperspectral images (103 GB). For both problems, which involve different penalties on rows and columns, we obtain significant speed-ups compared to state-of-the-art algorithms.

Renbo Zhao;Vincent Y. F. Tan; "A Unified Convergence Analysis of the Multiplicative Update Algorithm for Regularized Nonnegative Matrix Factorization," vol.66(1), pp.129-138, Jan. 1, 2018. The multiplicative update (MU) algorithm has been extensively used to estimate the basis and coefficient matrices in nonnegative matrix factorization (NMF) problems under a wide range of divergences and regularizers. However, theoretical convergence guarantees have only been derived for a few special divergences without regularization. In this work, we provide a conceptually simple, self-contained, and unified proof for the convergence of the MU algorithm applied to NMF with a wide range of divergences and regularizers. Our main result shows that the sequence of iterates (i.e., pairs of basis and coefficient matrices) produced by the MU algorithm converges to the set of stationary points of the nonconvex NMF optimization problem. Our proof strategy has the potential to open up new avenues for analyzing similar problems in machine learning and signal processing.
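
For context, the unregularized Frobenius-norm special case of the MU algorithm (the classical Lee-Seung rules) fits in a few lines; the random data and rank below are illustrative, and the paper's analysis covers far more general divergences and regularizers.

```python
import numpy as np

rng = np.random.default_rng(2)

def nmf_mu(V, r, n_iter=200, eps=1e-9):
    """Multiplicative updates for min ||V - W H||_F^2 with W, H >= 0.
    Positive initialization plus ratio updates keep both factors nonnegative."""
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # coefficient update
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # basis update
    return W, H

V = rng.random((20, 15))
W, H = nmf_mu(V, r=5)
```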

Le Zheng;Marco Lops;Xiaodong Wang;Emanuele Grossi; "Joint Design of Overlaid Communication Systems and Pulsed Radars," vol.66(1), pp.139-154, Jan. 1, 2018. The focus of this paper is on coexistence between a communication system and a pulsed radar sharing the same bandwidth. Based on the fact that the interference generated by the radar onto the communication receiver is intermittent and depends on the density of scattering objects (such as, e.g., targets), we first show that the communication system is equivalent to a set of independent parallel channels, whereby precoding on each channel can be introduced as a new degree of freedom. We introduce a new figure of merit, named the compound rate, which is a convex combination of rates with and without interference, to be optimized under constraints concerning the signal-to-interference-plus-noise ratio (including signal-dependent interference due to clutter) experienced by the radar and obviously the powers emitted by the two systems: the degrees of freedom are the radar waveform and the aforementioned encoding matrix for the communication symbols. We provide closed-form solutions for the optimum transmit policies for both systems under two basic models for the scattering produced by the radar onto the communication receiver, and account for possible correlation of the signal-independent fraction of the interference impinging on the radar. We also discuss the region of the achievable communication rates with and without interference. A thorough performance assessment shows the potentials and the limitations of the proposed co-existing architecture.

Konstantinos Benidis;Yiyong Feng;Daniel P. Palomar; "Sparse Portfolios for High-Dimensional Financial Index Tracking," vol.66(1), pp.155-170, Jan. 1, 2018. Index tracking is a popular passive portfolio management strategy that aims at constructing a portfolio that replicates or tracks the performance of a financial index. The tracking error can be minimized by purchasing all the assets of the index in appropriate amounts. However, to avoid small and illiquid positions and large transaction costs, it is desired that the tracking portfolio consists of a small number of assets, i.e., a sparse portfolio. The optimal asset selection and capital allocation can be formulated as a combinatorial problem. A commonly used approach is to use mixed-integer programming (MIP) to solve small-sized problems. Nevertheless, MIP solvers can fail for high-dimensional problems, while the running time can be prohibitive for practical use. In this paper, we propose efficient and fast index tracking algorithms that automatically perform asset selection and capital allocation under a set of general convex constraints. A special consideration is given to the case of the nonconvex holding constraints and to the downside risk tracking measure. Furthermore, we derive specialized algorithms with closed-form updates for particular sets of constraints. Numerical simulations show that the proposed algorithms match or outperform existing methods in terms of performance, while their running time is lower by many orders of magnitude.
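
A toy sketch of the tracking-error objective: projected-gradient minimization of ||r_idx - R w||^2 over the long-only, fully invested simplex. This is a generic baseline with synthetic returns, not the paper's specialized sparse-tracking algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)

def proj_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum w = 1} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def track_index(R, r_idx, n_iter=500):
    """Projected gradient for min_w ||R w - r_idx||^2 over the simplex."""
    n = R.shape[1]
    w = np.full(n, 1.0 / n)
    step = 1.0 / np.linalg.norm(R, 2) ** 2   # 1/L for the half-gradient
    for _ in range(n_iter):
        w = proj_simplex(w - step * (R.T @ (R @ w - r_idx)))
    return w

T, n = 200, 30
R = 0.01 * rng.standard_normal((T, n))   # synthetic asset returns
b = proj_simplex(rng.random(n))          # hidden "index" weights
r_idx = R @ b                            # index returns to be tracked
w = track_index(R, r_idx)
```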

Oskari Tervo;Harri Pennanen;Dimitrios Christopoulos;Symeon Chatzinotas;Björn Ottersten; "Distributed Optimization for Coordinated Beamforming in Multicell Multigroup Multicast Systems: Power Minimization and SINR Balancing," vol.66(1), pp.171-185, Jan.1, 1 2018. This paper considers coordinated multicast beamforming in a multicell multigroup multiple-input single-output system. Each base station (BS) serves multiple groups of users by forming a single beam with common information per group. We propose centralized and distributed beamforming algorithms for two different optimization targets. The first objective is to minimize the total transmission power of all the BSs while guaranteeing the user-specific minimum quality-of-service targets. The semidefinite relaxation (SDR) method is used to approximate the nonconvex multicast problem as a semidefinite program (SDP), which is solvable via centralized processing. Subsequently, two alternative distributed methods are proposed. The first approach turns the SDP into a two-level optimization via primal decomposition. At the higher level, intercell interference powers are optimized for fixed beamformers, whereas the lower level locally optimizes the beamformers by minimizing BS-specific transmit powers for the given intercell interference constraints. The second distributed solution is enabled via an alternating direction method of multipliers, where the intercell interference optimization is divided into a local and a global optimization by forcing the equality via consistency constraints. We further propose a centralized and a simple distributed beamforming design for the signal-to-interference-plus-noise ratio (SINR) balancing problem in which the minimum SINR among the users is maximized with given per-BS power constraints. This problem is solved via the bisection method as a series of SDP feasibility problems. 
The simulation results show the superiority of the proposed coordinated beamforming algorithms over traditional noncoordinated transmission schemes, and illustrate the fast convergence of the distributed methods.

Yang Liu;John R. Buck; "Gaussian Source Detection and Spatial Spectral Estimation Using a Coprime Sensor Array With the Min Processor," vol.66(1), pp.186-199, Jan. 1, 2018. A coprime sensor array (CSA) interleaves two undersampled uniform linear arrays with coprime undersampling factors and has recently found broad applications in signal detection and estimation. CSAs commonly use the product processor by multiplying the scanned responses of two colinear subarrays to estimate the spatial power spectral density (PSD) of the received signal. This paper proposes a new CSA processor, the CSAmin processor, which chooses the minimum between the two CSA subarray periodograms at each bearing to estimate the spatial PSD. The proposed CSAmin processor resolves CSA subarray spatial aliasing as well as the product processor does. For an extended aperture CSA, the CSAmin reduces the peak sidelobe height and total sidelobe area over the product processor for the same CSA geometry. Moreover, unlike the PSD estimate from the product processor, the PSD estimate from the CSAmin is guaranteed to be positive semidefinite. This paper derives the probability density function, the complementary cumulative distribution function (CCDF, or tail distribution), and the first two moments of the CSAmin PSD estimator in closed form for Gaussian sources in white Gaussian noise. Numerical simulations verify the derived CSAmin statistics and demonstrate that the CSAmin improves the performance over the product processor in detecting narrowband Gaussian sources in the presence of loud interferers and noise. The CSAmin spatial PSD estimate achieves lower variance than the product processor estimate, and keeps the PSD estimate unbiased for white Gaussian processes and asymptotically unbiased for nonwhite Gaussian processes.
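
The CSAmin idea itself is one line: take the pointwise minimum of the two subarray periodograms, so a grating lobe of one subarray is vetoed by the other. A noiseless single-source toy sketch with coprime factors 3 and 4 (geometry, bearing, and grid are illustrative):

```python
import numpy as np

def periodogram(positions, x, grid):
    """Conventional-beamforming spatial periodogram of snapshot x received
    on sensors at the given positions (in units of half a wavelength)."""
    A = np.exp(1j * np.pi * np.outer(positions, np.sin(grid)))
    return np.abs(A.conj().T @ x) ** 2 / len(positions)

M, N = 3, 4                       # coprime undersampling factors
p1 = N * np.arange(M + 1)         # subarray 1: spacing N (half-wavelengths)
p2 = M * np.arange(N + 1)         # subarray 2: spacing M
theta0 = 0.3                      # source bearing in radians
a = lambda p: np.exp(1j * np.pi * p * np.sin(theta0))
x1, x2 = a(p1), a(p2)             # noiseless single-source snapshots

grid = np.linspace(-np.pi / 2, np.pi / 2, 721)
P1 = periodogram(p1, x1, grid)
P2 = periodogram(p2, x2, grid)
P_min = np.minimum(P1, P2)        # CSAmin estimate
P_prod = P1 * P2                  # conventional product processor
```

Because the subarray spacings are coprime, their grating lobes never coincide in the visible region, so both processors peak only at the true bearing.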

Ronghua Gui;Wen-Qin Wang;Can Cui;Hing Cheung So; "Coherent Pulsed-FDA Radar Receiver Design With Time-Variance Consideration: SINR and CRB Analysis," vol.66(1), pp.200-214, Jan. 1, 2018. Unlike a conventional phased array, which provides only an angle-dependent beampattern, a frequency diverse array (FDA) produces an angle-range-dependent and time-variant transmit beampattern. Existing investigations show that FDA offers improved performance in interference suppression and target localization, but the time-variant beampattern will bring interferences to subsequent matched filtering. More seriously, the range-dependent signal phase may be canceled out in the filtering process. In fact, a traditional single-channel receiver does not fully exploit the multicarrier feature in FDA signals. In this paper, we propose a multichannel matched filtering structure that accounts for the time-variance property when receiving pulsed-FDA signals. A coherent pulsed-FDA radar signal model dealing with the angle-range-dependence and time-variance problem is devised under additive colored Gaussian noise scenarios, followed by the corresponding waveform design principle. Moreover, closed-form expressions of the output signal-to-interference-plus-noise ratio and Cramér–Rao bounds for angle and range are derived. The proposed receiver design approach and corresponding theoretical performance derivations are verified by extensive numerical results.

Nikolai Dokuchaev; "A Closed Equation in Time Domain for Band-Limited Extensions of One-Sided Sequences," vol.66(1), pp.215-223, Jan. 1, 2018. This paper suggests a method of optimal extension of one-sided semi-infinite sequences of a general type by traces of band-limited sequences in a deterministic setting, i.e., without probabilistic assumptions. The method requires solving a closed linear equation in the time domain connecting the past observations of the underlying process with the future values of the band-limited process. Robustness of the solution with respect to input errors and data truncation is established in the framework of Tikhonov regularization.

Samith Abeywickrama;Tharaka Samarasinghe;Chin Keong Ho;Chau Yuen; "Wireless Energy Beamforming Using Received Signal Strength Indicator Feedback," vol.66(1), pp.224-235, Jan. 1, 2018. Multiple antenna techniques that allow energy beamforming have been looked upon as a possible candidate for increasing the transfer efficiency between the energy transmitter (ET) and the energy receiver in wireless power transfer. This paper introduces a novel scheme that facilitates energy beamforming by utilizing received signal strength indicator (RSSI) values to estimate the channel. First, in the training stage, the ET transmits using each beamforming vector in a codebook, which is predefined using a Cramer–Rao lower bound analysis. The RSSI value corresponding to each beamforming vector is fed back to the ET, and these values are used to estimate the channel through a maximum likelihood analysis. The obtained results are remarkably simple, require minimal processing, and can be easily implemented. The paper also validates the analytical results numerically, as well as experimentally, and it is shown that the proposed method achieves impressive results.

Greg Ongie;Sampurna Biswas;Mathews Jacob; "Convex Recovery of Continuous Domain Piecewise Constant Images From Nonuniform Fourier Samples," vol.66(1), pp.236-250, Jan. 1, 2018. We consider the recovery of a continuous domain piecewise constant image from its nonuniform Fourier samples using a convex matrix completion algorithm. We assume the discontinuities/edges of the image are localized to the zero level set of a bandlimited function. This assumption induces linear dependencies between the Fourier coefficients of the image, which results in a two-fold block Toeplitz matrix constructed from the Fourier coefficients being low rank. The proposed algorithm reformulates the recovery of the unknown Fourier coefficients as a structured low-rank matrix completion problem, where the nuclear norm of the matrix is minimized subject to structure and data constraints. We show that the exact recovery is possible with high probability when the edge set of the image satisfies an incoherency property. We also show that the incoherency property is dependent on the geometry of the edge set curve, implying higher sampling burden for smaller curves. This paper generalizes recent work on the super-resolution recovery of isolated Diracs or signals with finite rate of innovation to the recovery of piecewise constant images.

Jinane Harmouche;Dominique Fourer;François Auger;Pierre Borgnat;Patrick Flandrin; "The Sliding Singular Spectrum Analysis: A Data-Driven Nonstationary Signal Decomposition Tool," vol.66(1), pp.251-263, Jan. 1, 2018. Singular spectrum analysis (SSA) is a signal decomposition technique that aims at expanding signals into interpretable and physically meaningful components (e.g., sinusoids, noise, etc.). This paper presents new theoretical and practical results about the separability of the SSA and introduces a new method called sliding SSA. First, the SSA is combined with an unsupervised classification algorithm to provide a fully automatic data-driven component extraction method, whose limitations on component separation we investigate in a theoretical study. Second, this automatic SSA method is used to design an approach based on a sliding analysis window, which provides better results than the classical SSA method when analyzing nonstationary signals with a time-varying number of components. Finally, the proposed sliding SSA method is compared to the empirical mode decomposition and to the synchrosqueezed short-time Fourier transform, applied on both synthetic and real-world signals.
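
Basic (non-sliding) SSA, on which the proposed method builds, is an embed-decompose-reconstruct pipeline: form the trajectory (Hankel) matrix, take its SVD, group singular components, and diagonal-average each group back into a series. Window length and grouping below are illustrative, and the paper's automatic clustering step is omitted.

```python
import numpy as np

def ssa(x, L, groups):
    """Basic SSA: returns one reconstructed series per group of components."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    out = []
    for g in groups:
        Xg = (U[:, g] * s[g]) @ Vt[g]                     # grouped component
        # Diagonal (Hankel) averaging back to a length-N series.
        y = np.zeros(N)
        cnt = np.zeros(N)
        for j in range(K):
            y[j:j + L] += Xg[:, j]
            cnt[j:j + L] += 1
        out.append(y / cnt)
    return out

t = np.arange(200)
x = 0.01 * t + np.sin(2 * np.pi * t / 20)     # trend + seasonal component
comp = ssa(x, L=40, groups=[[0], [1, 2]])
```

Grouping all components reproduces the signal exactly, since the rank-1 terms sum back to the trajectory matrix.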

Kaushik Mahata;Md. Mashud Hyder; "Fast Frequency Estimation With Prior Information," vol.66(1), pp.264-273, Jan. 1, 2018. A fast gridless method for sparse frequency estimation in the presence of prior information is presented. It allows arbitrarily sampled data and has a significantly smaller complexity than the existing gridless sparse recovery methods. The proposed method achieves good frequency estimation accuracy and resolution performance in numerical simulations. It is also useful where no prior knowledge is available.

IEEE Signal Processing Letters - new TOC (2017 November 23) [Website]

Junli Liang;Petre Stoica;Yang Jing;Jian Li; "Phase Retrieval via the Alternating Direction Method of Multipliers," vol.25(1), pp.5-9, Jan. 2018. We derive a phase retrieval algorithm using the alternating direction method of multipliers. For the cost function obtained from the maximum likelihood criterion, we introduce auxiliary amplitude and phase variables to avoid the absolute value operator and decouple the determination of the auxiliary phase variables from that of the auxiliary amplitude variables. As a result, a phase retrieval algorithm consisting only of two least-squares steps is derived. The performance of the proposed algorithm is investigated via numerical examples, as well as an application to flat-spectrum periodic unimodular sequence design.
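
A simplified alternating least-squares sketch in the spirit of the letter's splitting: fix the auxiliary phases from the current estimate, then solve for the signal in closed form. This is not the paper's exact ADMM iteration, and the dimensions and data are synthetic; the sketch only illustrates that each sweep is a cheap least-squares step with a monotonically non-increasing fit error.

```python
import numpy as np

rng = np.random.default_rng(5)

n, m = 16, 96
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
z_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ z_true)                        # phaseless magnitude measurements

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
obj = []
for _ in range(200):
    # Auxiliary step: best point with the measured magnitudes keeps the
    # current phases of A @ z (closest point on the magnitude constraint).
    u = b * np.exp(1j * np.angle(A @ z))
    # Signal step: least-squares fit to the auxiliary variable.
    z = np.linalg.lstsq(A, u, rcond=None)[0]
    obj.append(np.linalg.norm(u - A @ z))
```

Each half-step minimizes the same splitting objective over one block, so the recorded residuals never increase.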

Du Liu;Markus Flierl; "Temporal Signal Basis for Hierarchical Block Motion in Image Sequences," vol.25(1), pp.10-14, Jan. 2018. In classic data compression, the optimal transform for energy compaction is the Karhunen–Loève transform with the eigenvectors of the covariance matrix. In coding applications, neither the covariance matrix nor the eigenvectors can be easily transmitted to the decoder. In this letter, we introduce a covariance matrix model based on graphs determined by hierarchical block motion in image sequences and use its eigenvector matrix for compression. The covariance matrix model is defined using the graph distance matrix, where the graph is determined by block motion. As the proposed covariance matrix is closely related to the graph, the relation between the covariance matrix and the Laplacian matrix is studied and their eigenvector matrices are discussed. From our assumptions, we show that our covariance model can be viewed as a Gaussian graphical model where the signal is described by the second order statistics and the zeros in the precision matrix indicate missing edges in the graph. To assess the compression performance, we relate the coding gain due to the eigenbasis of the covariance model to that of the Laplacian eigenbasis. The experimental results show that the eigenbasis of our covariance model is advantageous for tree-structured block motion in image sequences.

Lin Zhu;Yuanhong Hao;Yuejin Song; "${L_{1/2}}$ Norm and Spatial Continuity Regularized Low-Rank Approximation for Moving Object Detection in Dynamic Background," vol.25(1), pp.15-19, Jan. 2018. Low-rank modeling-based moving object detection approaches proposed so far use a fixed <inline-formula><tex-math notation="LaTeX">${{\boldsymbol{l}}_1}$</tex-math></inline-formula>-norm penalty to capture the sparse nature of the foreground in video, and thus hardly adapt to the statistical variability of the underlying foreground pixels in a dynamic background. Additionally, they ignore the spatial continuity prior among neighboring foreground pixels. Consequently, they cannot offer satisfactory performance in practical dynamic backgrounds. In this letter, we present a unified regularization framework, namely <inline-formula><tex-math notation="LaTeX">${{\boldsymbol{l}}_{1/2}}$</tex-math></inline-formula>-norm and spatial continuity regularized low-rank approximation (SCLR-<inline-formula><tex-math notation="LaTeX">${{\boldsymbol{l}}_{1/2}}$</tex-math></inline-formula>), to solve this problem. First, in order to promote accuracy, we introduce an <inline-formula><tex-math notation="LaTeX">${{\boldsymbol{l}}_{1/2}}$</tex-math></inline-formula> constraint into the framework. Second, to guarantee the continuity among neighboring foreground pixels, we introduce a spatial continuity regularization term, motivated by total variation. Finally, we generalize our framework to the <inline-formula><tex-math notation="LaTeX">${{\boldsymbol{l}}_{\boldsymbol{q}}}$</tex-math></inline-formula>-norm penalized case (SCLR-<inline-formula><tex-math notation="LaTeX">${{\boldsymbol{l}}_{\boldsymbol{q}}}$</tex-math></inline-formula>). By adjusting the shrinkage parameter <inline-formula><tex-math notation="LaTeX">${\boldsymbol{q}}$</tex-math></inline-formula>, the framework gains the flexibility to choose a reasonable sparse domain.
To deal with the present constrained minimization problem, the augmented Lagrange multiplier method is employed and extended with the help of the alternating direction minimizing strategy. Experimental results show that the proposed method outperforms some state-of-the-art algorithms, especially for the cases with dynamic backgrounds.

El Hadji S. Diop;Radjesvarane Alexandre;Abdel.-O. Boudraa; "Two-Dimensional Curvature-Based Analysis of Intrinsic Mode Functions," vol.25(1), pp.20-24, Jan. 2018. A novel approach in modeling the empirical mode decomposition (EMD) is proposed here, allowing a perfect image recovery and, for instance, a straightforward extension for multidimensional <inline-formula><tex-math notation="LaTeX">$\mathbb{R}^{n}$</tex-math></inline-formula> signals. In fact, thanks to a new sifting process modeling, where the two-dimensional (2-D) local mean envelope is formulated with the morphological median operator, we prove its consistency with a mean curvature motion-like partial differential equation. We provide both theoretical contributions and noticeable improvements in terms of quality of decomposition modes and computation times; our approach is illustrated on synthetic and real images, and compared to state-of-the-art 2-D EMD algorithms.

Gangsheng Li;Liping Zeng;Ling Zhang;Q. M. Jonathan Wu; "State Identification of Duffing Oscillator Based on Extreme Learning Machine," vol.25(1), pp.25-29, Jan. 2018. As an important weak target detection method, the Duffing oscillator is very effective in detecting signals with very low signal-to-noise ratio. However, accurate discrimination between chaotic and periodic states is a crucial problem and a prerequisite for using the Duffing oscillator. Conventionally, the Lyapunov exponent is used as an index to identify different states, but this indicator suffers from heavy computational cost and a slow convergence rate, and requires a large amount of data, so its application is seriously limited. To solve this problem, a novel method for state identification of the Duffing oscillator based on the extreme learning machine (ELM) is proposed. The feature data, used as the input of the ELM, are extracted from the phase diagram and the time series of the Duffing oscillator. Three effective features are extracted in this letter, i.e., the ratio of points in and out of the closed region, the average distance, and the power spectrum. Computer simulations are presented to validate the proposed method and demonstrate that its state classification performance is superior to other related methods, with higher computation efficiency, a faster convergence rate, and better accuracy.
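
An ELM classifier is just a random hidden layer followed by a single least-squares solve, which is where its speed advantage over Lyapunov-exponent computation comes from. A toy sketch on synthetic two-class feature blobs (stand-ins for the phase-diagram features, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

def elm_train(X, y, hidden=50):
    """Extreme learning machine: fixed random hidden layer, output weights
    from one least-squares solve (binary labels in {0, 1})."""
    Win = rng.standard_normal((X.shape[1], hidden))
    bias = rng.standard_normal(hidden)
    H = np.tanh(X @ Win + bias)
    beta = np.linalg.lstsq(H, y, rcond=None)[0]
    return Win, bias, beta

def elm_predict(X, Win, bias, beta):
    return np.tanh(X @ Win + bias) @ beta

# Synthetic, well-separated feature blobs standing in for the two states.
X0 = rng.standard_normal((100, 3)) + 3.0   # "periodic"-state features
X1 = rng.standard_normal((100, 3)) - 3.0   # "chaotic"-state features
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]
model = elm_train(X, y)
acc = ((elm_predict(X, *model) > 0.5) == y).mean()
```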

Yingbin Zheng;Hao Ye;Li Wang;Jian Pu; "Learning Multiviewpoint Context-Aware Representation for RGB-D Scene Classification," vol.25(1), pp.30-34, Jan. 2018. Effective visual representation plays an important role in scene classification systems. While many existing methods focus on generic descriptors extracted from the RGB color channels, we argue the importance of depth context, since scenes are composed with spatial variability and depth is an essential component in understanding the geometry. In this letter, we present a novel depth representation for RGB-D scene classification based on a specifically designed convolutional neural network (CNN). In contrast to previous deep models transferred from pretrained RGB CNN models, we train the model using multiviewpoint depth image augmentation to overcome the data scarcity problem. The proposed CNN framework contains dilated convolutions to expand the receptive field and a subsequent spatial pooling to aggregate multiscale contextual information. The combination of contextual design and multiviewpoint depth images is important for a more compact representation, compared to directly using original depth images or off-the-shelf networks. Through extensive experiments on the SUN RGB-D dataset, we demonstrate that the representation outperforms recent state-of-the-art methods, and combining it with standard CNN-based RGB features can lead to further improvements.

Yuanjie Shao;Nong Sang;Changxin Gao; "Representation Space-Based Discriminative Graph Construction for Semisupervised Hyperspectral Image Classification," vol.25(1), pp.35-39, Jan. 2018. Graph-based semisupervised learning methods have been successfully applied in hyperspectral image (HSI) classification with limited labeled samples. The critical step of graph-based methods is to learn a similarity graph, and numerous graph construction methods have been developed in recent years. However, existing approaches usually return a similarity matrix from the raw data space. In this letter, we propose a representation space-based discriminative graph for semisupervised HSI classification, which can learn the representations of samples and the similarity matrix of representations simultaneously. Moreover, we explicitly incorporate the probabilistic class relationship between sample and class, which can be estimated by the partial label information, into the above model to further boost the discriminability of the graph. The experimental results on Hyperion and AVIRIS hyperspectral data demonstrate the effectiveness of the proposed approach.

Minglei Yang;Lei Sun;Xin Yuan;Baixiao Chen; "A New Nested MIMO Array With Increased Degrees of Freedom and Hole-Free Difference Coarray," vol.25(1), pp.40-44, Jan. 2018. We propose a new antenna array design approach for a multiple-input and multiple-output (MIMO) radar, which has closed-form expressions for the sensor locations and the number of achievable degrees of freedom (DOFs). This new approach utilizes the nested array as the transmitting and receiving arrays. We employ the difference coarray of the sum coarray (DCSC) of the MIMO radar to obtain more DOFs for direction-of-arrival (DOA) estimation. By properly designing the interelement spacings of the transmitting and receiving arrays, we can obtain a hole-free DCSC. The characteristics of the array geometries are analyzed, and the optimal numbers of sensors in the transmitting/receiving antenna arrays are derived given the total number of physical sensors. Simulations are conducted to demonstrate the advantages of the proposed array in terms of the number of DOFs, the number of resolvable sources, and the DOA estimation performance over the coprime MIMO array.
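
The coarray computations involved are simple set arithmetic. A sketch that forms the sum coarray of toy transmit/receive geometries, takes its difference coarray, and checks hole-freeness; the spacings below are illustrative (matched so the sum coarray fills a ULA), not the letter's optimized nested design.

```python
import numpy as np

def sum_coarray(tx, rx):
    """All pairwise transmit+receive virtual positions."""
    return np.unique(np.add.outer(tx, rx))

def diff_coarray(p):
    """All pairwise differences of the positions in p."""
    return np.unique(np.subtract.outer(p, p))

def hole_free(p):
    """True if the (integer, sorted) coarray is a contiguous ULA segment."""
    return np.array_equal(p, np.arange(p.min(), p.max() + 1))

tx = np.array([0, 1, 2])          # dense transmit ULA
rx = np.array([0, 3, 6, 9])       # sparse receive ULA, spacing = len(tx)
sc = sum_coarray(tx, rx)          # filled virtual ULA 0..11
dcsc = diff_coarray(sc)           # difference coarray of the sum coarray
```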

Xiaoyi Gu;Shenyinying Tu;Hao-Jun Michael Shi;Mindy Case;Deanna Needell;Yaniv Plan; "Optimizing Quantization for Lasso Recovery," vol.25(1), pp.45-49, Jan. 2018. This letter is focused on quantized compressed sensing, assuming that Lasso is used for signal estimation. Leveraging recent work, we propose a constrained Lloyd–Max-like framework to optimize the quantization function in this setting, and show that when the number of observations is high, this method of quantization gives a significantly better recovery rate than standard Lloyd–Max quantization. We support our theoretical analysis with numerical simulations.
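
For reference, the standard (unconstrained) Lloyd-Max scalar quantizer that the letter's Lasso-aware variant builds on alternates nearest-level assignment with centroid updates; a sketch on empirical Gaussian data (sample size and level count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def lloyd_max(samples, k, n_iter=100):
    """Classical Lloyd-Max quantizer design on empirical data: alternate
    midpoint decision boundaries and conditional-mean level updates."""
    levels = np.quantile(samples, (np.arange(k) + 0.5) / k)   # ordered init
    for _ in range(n_iter):
        edges = (levels[:-1] + levels[1:]) / 2                # boundaries
        idx = np.searchsorted(edges, samples)                 # assignment
        levels = np.array([samples[idx == j].mean() if np.any(idx == j)
                           else levels[j] for j in range(k)])
    return levels

x = rng.standard_normal(20000)
lv4 = lloyd_max(x, 4)
edges = (lv4[:-1] + lv4[1:]) / 2
q = lv4[np.searchsorted(edges, x)]
mse = np.mean((x - q) ** 2)       # distortion of the 4-level quantizer
```

For a unit Gaussian the optimal 4-level distortion is roughly 0.1175, which the empirical design should approach.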

Talha Cihad Gulcu;Haldun M. Ozaktas; "Universal Lower Bound for Finite-Sample Reconstruction Error and Its Relation to Prolate Spheroidal Functions," vol.25(1), pp.50-54, Jan. 2018. We consider the problem of representing a finite-energy signal with a finite number of samples. When the signal is interpolated via sinc function from the samples, there will be a certain reconstruction error since only a finite number of samples are used. Without making any additional assumptions, we derive a lower bound for this error. This error bound depends on the number of samples but nothing else, and is thus represented as a universal curve of error versus number of samples. Furthermore, the existence of a function that achieves the bound shows that this is the tightest such bound possible.
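The setting can be reproduced numerically: interpolate a bandlimited, finite-energy signal from finitely many Nyquist-rate samples via sinc interpolation and measure the residual truncation error. This sketches the setup only; the universal bound itself is the paper's analytical result.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Shannon interpolation from finitely many samples taken at n*T."""
    n = np.arange(len(samples))
    # sum_n x[n] * sinc((t - nT)/T); np.sinc(x) is sin(pi x)/(pi x)
    return np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

T = 1.0                                  # Nyquist period for the band [-0.5, 0.5]
t = np.linspace(0, 19, 500)
signal = np.sinc(0.8 * (t - 9.5))        # finite-energy, bandlimited to |f| <= 0.4
samples = np.sinc(0.8 * (np.arange(20) * T - 9.5))
rec = sinc_reconstruct(samples, T, t)
err = np.sqrt(np.mean((rec - signal) ** 2))
print(f"RMS truncation error with 20 samples: {err:.4f}")
```

The error here comes entirely from using 20 samples instead of infinitely many; the paper lower-bounds this quantity as a universal function of the sample count.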

Dong Yang;Jian Sun; "BM3D-Net: A Convolutional Neural Network for Transform-Domain Collaborative Filtering," vol.25(1), pp.55-59, Jan. 2018. Denoising is a fundamental task in image processing with wide applications for enhancing image qualities. BM3D is considered an effective baseline for image denoising. Although learning-based methods have been dominant in this area recently, the traditional methods are still valuable to inspire new ideas by combining with learning-based approaches. In this letter, we propose a new convolutional neural network inspired by the classical BM3D algorithm, dubbed BM3D-Net. We unroll the computational pipeline of the BM3D algorithm into a convolutional neural network structure, with “extraction” and “aggregation” layers to model the block-matching stage of BM3D. We apply our network to three denoising tasks: gray-scale image denoising, color image denoising, and depth map denoising. Experiments show that BM3D-Net significantly outperforms the basic BM3D method, and achieves competitive results compared with the state of the art on these tasks.
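The block-matching stage that the “extraction” layer emulates can be sketched as a nearest-patch search; this is a toy classical version, which BM3D-Net realizes inside a network:

```python
import numpy as np

# Toy block matching: for a reference patch, find the K most similar
# patches in a local search window by squared Euclidean distance.

def block_match(image, ref_yx, patch=4, window=8, k=4):
    y0, x0 = ref_yx
    ref = image[y0:y0 + patch, x0:x0 + patch]
    candidates = []
    for y in range(max(0, y0 - window), min(image.shape[0] - patch, y0 + window) + 1):
        for x in range(max(0, x0 - window), min(image.shape[1] - patch, x0 + window) + 1):
            d = np.sum((image[y:y + patch, x:x + patch] - ref) ** 2)
            candidates.append((d, (y, x)))
    candidates.sort(key=lambda c: c[0])
    return [pos for _, pos in candidates[:k]]   # K nearest patches (incl. ref)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
matches = block_match(img, (10, 10))
print(matches[0])   # the reference position itself is the closest match
```

In BM3D the matched patches are then stacked, transformed, collaboratively filtered, and aggregated back; BM3D-Net replaces the fixed transforms with learned convolutional layers.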

Liang Zhao;Zhikui Chen;Z. Jane Wang; "Unsupervised Multiview Nonnegative Correlated Feature Learning for Data Clustering," vol.25(1), pp.60-64, Jan. 2018. Multiview data, which provide complementary information for consensus grouping, are very common in real-world applications. However, synthesizing multiple heterogeneous features to learn a comprehensive description of the data samples is challenging. To tackle this problem, many methods explore the correlations among various features across different views by the assumption that all views share the common semantic information. Following this line, in this letter, we propose a new unsupervised multiview nonnegative correlated feature learning (UMCFL) method for data clustering. Different from the existing methods that only focus on projecting features from different views to a shared semantic subspace, our method learns view-specific features and captures inter-view feature correlations in the latent common subspace simultaneously. By separating the view-specific features from the shared feature representation, the effect of the individual information of each view can be removed. Thus, UMCFL can capture flexible feature correlations hidden in multiview data. A new objective function is designed and efficient optimization processes are derived to solve the proposed UMCFL. Extensive experiments on real-world multiview datasets demonstrate that the proposed UMCFL method is superior to the state-of-the-art multiview clustering methods.

Sundeep Prabhakar Chepuri; "Factor Analysis From Quadratic Sampling," vol.25(1), pp.65-69, Jan. 2018. Factor analysis decomposition, i.e., decomposition of a covariance matrix as a sum of a low-rank positive semidefinite matrix and a diagonal matrix, is an important problem in a variety of areas, such as signal processing, machine learning, system identification, and statistical inference. In this letter, the focus is on computing the factor analysis decomposition from a set of quadratic (or symmetric rank-one) measurements of a covariance matrix. The commonly used minimum-trace factor analysis heuristic can be adapted to solve this problem when all the measurements are available. However, the resulting convex program is not suitable for processing large-scale or streaming data. Therefore, this letter presents a low-complexity iterative algorithm, which recovers the unknowns through a series of rank-one updates. The iterative algorithm performs better than the convex program when only a finite number of data snapshots are available.
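The measurement model is easy to set down (a sketch of the model only; the rank-one-update recovery algorithm is the letter's contribution and is not reproduced here):

```python
import numpy as np

# Quadratic (symmetric rank-one) sampling of a covariance that is
# low-rank plus diagonal: y_i = a_i^T Sigma a_i. Dimensions are made up.

rng = np.random.default_rng(4)
n, r, m = 8, 2, 40
F = rng.standard_normal((n, r))
Sigma = F @ F.T + np.diag(rng.random(n))    # low-rank PSD + diagonal part
A = rng.standard_normal((m, n))             # sampling vectors a_i as rows
y = np.einsum("mi,ij,mj->m", A, Sigma, A)   # y_i = a_i^T Sigma a_i
print(y.shape)
```

Factor analysis from these samples then means recovering both F F^T and the diagonal from y without ever forming Sigma explicitly.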

IEEE Journal of Selected Topics in Signal Processing - new TOC (2017 November 23) [Website]

* "Frontcover," vol.11(8), pp.C1-C1, Dec. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Journal of Selected Topics in Signal Processing publication information," vol.11(8), pp.C2-C2, Dec. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of Contents," vol.11(8), pp.1235-1235, Dec. 2017.* Presents the table of contents for this issue of the publication.

* "Blank Page," vol.11(8), pp.B1233-B1233, Dec. 2017.* This page or pages intentionally left blank.

* "Blank Page," vol.11(8), pp.B1234-B1234, Dec. 2017.* This page or pages intentionally left blank.

Bhuvana Ramabhadran;Nancy F. Chen;Mary P. Harper;Brian Kingsbury;Kate Knill; "Introduction to the Special Issue on End-to-End Speech and Language Processing," vol.11(8), pp.1237-1239, Dec. 2017. The eleven papers in this special section focus on end-to-end speech and language processing (SLP), which is a series of sequence-to-sequence learning problems. Conventional SLP systems map input to output sequences through module-based architectures where each module is independently trained. These have a number of limitations, including local optima, assumptions about intermediate models and features, and complex, expert-knowledge-driven steps, which can make it difficult for non-experts to use such systems or develop new applications. Integrated end-to-end (E2E) systems aim to simplify the solution to these problems through a single network architecture that maps an input sequence directly to the desired output sequence without the need for intermediate module representations. E2E models rely on flexible and powerful machine learning models such as recurrent neural networks. The emergence of models for end-to-end speech processing has lowered the barriers to entry into serious speech research. This special issue showcases the power of novel machine learning methods in end-to-end speech and language processing.

Shinji Watanabe;Takaaki Hori;Suyoun Kim;John R. Hershey;Tomoki Hayashi; "Hybrid CTC/Attention Architecture for End-to-End Speech Recognition," vol.11(8), pp.1240-1253, Dec. 2017. Conventional automatic speech recognition (ASR) based on a hidden Markov model (HMM)/deep neural network (DNN) is a very complicated system consisting of various modules such as acoustic, lexicon, and language models. It also requires linguistic resources, such as a pronunciation dictionary, tokenization, and phonetic context-dependency trees. On the other hand, end-to-end ASR has become a popular alternative to greatly simplify the model-building process of conventional ASR systems by representing complicated modules with a single deep network architecture, and by replacing the use of linguistic resources with a data-driven learning method. There are two major types of end-to-end architectures for ASR; attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes hybrid CTC/attention end-to-end ASR, which effectively utilizes the advantages of both architectures in training and decoding. During training, we employ the multiobjective learning framework to improve robustness and achieve fast convergence. During decoding, we perform joint decoding by combining both attention-based and CTC scores in a one-pass beam search algorithm to further eliminate irregular alignments. Experiments with English (WSJ and CHiME-4) tasks demonstrate the effectiveness of the proposed multiobjective learning over both the CTC and attention-based encoder–decoder baselines. 
Moreover, the proposed method is applied to two large-scale ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and exhibits performance that is comparable to conventional DNN/HMM ASR systems based on the advantages of both multiobjective learning and joint decoding without linguistic resources.
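The joint decoding rule above reduces to interpolating the two log scores inside the one-pass beam search; a minimal sketch with made-up probabilities and interpolation weight:

```python
import math

# Joint CTC/attention scoring: a hypothesis score is a weighted sum of
# the CTC and attention log probabilities; lam is a tunable weight.

def joint_score(logp_ctc, logp_att, lam=0.3):
    return lam * logp_ctc + (1.0 - lam) * logp_att

# Attention alone slightly prefers hypothesis B, but CTC strongly
# penalizes its irregular alignment, so A wins under joint scoring.
hyp_a = joint_score(logp_ctc=math.log(0.40), logp_att=math.log(0.30))
hyp_b = joint_score(logp_ctc=math.log(0.05), logp_att=math.log(0.35))
print(hyp_a > hyp_b)   # True: CTC vetoes the irregular alignment
```

This is exactly the mechanism the paper uses to eliminate irregular alignments that a pure attention decoder can produce.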

Hao Tang;Liang Lu;Lingpeng Kong;Kevin Gimpel;Karen Livescu;Chris Dyer;Noah A. Smith;Steve Renals; "End-to-End Neural Segmental Models for Speech Recognition," vol.11(8), pp.1254-1264, Dec. 2017. Segmental models are an alternative to frame-based models for sequence prediction, where hypothesized path weights are based on entire segment scores rather than a single frame at a time. Neural segmental models are segmental models that use neural network-based weight functions. Neural segmental models have achieved competitive results for speech recognition, and their end-to-end training has been explored in several studies. In this work, we review neural segmental models, which can be viewed as consisting of a neural network-based acoustic encoder and a finite-state transducer decoder. We study end-to-end segmental models with different weight functions, including ones based on frame-level neural classifiers and on segmental recurrent neural networks. We study how reducing the search space size impacts performance under different weight functions. We also compare several loss functions for end-to-end training. Finally, we explore training approaches, including multistage versus end-to-end training and multitask training that combines segmental and frame-level losses.

Patrick Doetsch;Mirko Hannemann;Ralf Schlüter;Hermann Ney; "Inverted Alignments for End-to-End Automatic Speech Recognition," vol.11(8), pp.1265-1273, Dec. 2017. In this paper, we propose an inverted alignment approach for sequence classification systems like automatic speech recognition (ASR) that naturally incorporates discriminative, artificial-neural-network-based label distributions. Instead of aligning each input frame to a state label as in the standard hidden Markov model (HMM) derivation, we propose to inversely align each element of an HMM state label sequence to a segment-wise encoding of several consecutive input frames. This enables an integrated discriminative model that can be trained end-to-end from scratch or starting from an existing alignment path. The approach does not assume the usual decomposition into a separate (generative) acoustic model and a language model, and allows for a variety of model assumptions, including statistical variants of attention. Following our initial paper with proof-of-concept experiments on handwriting recognition, the focus of this paper was the investigation of integrated training and an inverted decoding approach, whereas the acoustic modeling still remains largely similar to standard hybrid modeling. We provide experiments on the CHiME-4 noisy ASR task. Our results show that we can reach competitive results with inverted alignment and decoding strategies.

Tsubasa Ochiai;Shinji Watanabe;Takaaki Hori;John R. Hershey;Xiong Xiao; "Unified Architecture for Multichannel End-to-End Speech Recognition With Neural Beamforming," vol.11(8), pp.1274-1288, Dec. 2017. This paper proposes a unified architecture for end-to-end automatic speech recognition (ASR) to encompass microphone-array signal processing such as a state-of-the-art neural beamformer within the end-to-end framework. Recently, the end-to-end ASR paradigm has attracted great research interest as an alternative to conventional hybrid paradigms with deep neural networks and hidden Markov models. Using this novel paradigm, we simplify ASR architecture by integrating such ASR components as acoustic, phonetic, and language models with a single neural network and optimize the overall components for the end-to-end ASR objective: generating a correct label sequence. Although most existing end-to-end frameworks have mainly focused on ASR in clean environments, our aim is to build more realistic end-to-end systems in noisy environments. To handle such challenging noisy ASR tasks, we study multichannel end-to-end ASR architecture, which directly converts multichannel speech signal to text through speech enhancement. This architecture allows speech enhancement and ASR components to be jointly optimized to improve the end-to-end ASR objective and leads to an end-to-end framework that works well in the presence of strong background noise. We elaborate the effectiveness of our proposed method on the multichannel ASR benchmarks in noisy environments (CHiME-4 and AMI). The experimental results show that our proposed multichannel end-to-end system obtained performance gains over the conventional end-to-end baseline with enhanced inputs from a delay-and-sum beamformer (i.e., BeamformIT) in terms of character error rate. 
In addition, further analysis shows that our neural beamformer, which is optimized only with the end-to-end ASR objective, successfully learned a noise suppression function.

Bo Wu;Kehuang Li;Fengpei Ge;Zhen Huang;Minglei Yang;Sabato Marco Siniscalchi;Chin-Hui Lee; "An End-to-End Deep Learning Approach to Simultaneous Speech Dereverberation and Acoustic Modeling for Robust Speech Recognition," vol.11(8), pp.1289-1300, Dec. 2017. We propose an integrated end-to-end automatic speech recognition (ASR) paradigm by joint learning of the front-end speech signal processing and back-end acoustic modeling. We believe that “only good signal processing can lead to top ASR performance” in challenging acoustic environments. This notion leads to a unified deep neural network (DNN) framework for distant speech processing that can achieve both high-quality enhanced speech and high-accuracy ASR simultaneously. Our goal is accomplished by two techniques, namely: (i) a reverberation-time-aware DNN based speech dereverberation architecture that can handle a wide range of reverberation times to enhance speech quality of reverberant and noisy speech, followed by (ii) DNN-based multicondition training that takes both clean-condition and multicondition speech into consideration, leveraging upon an exploitation of the data acquired and processed with multichannel microphone arrays, to improve ASR performance. The final end-to-end system is established by a joint optimization of the speech enhancement and recognition DNNs. The recent REverberant Voice Enhancement and Recognition Benchmark (REVERB) Challenge task is used as a test bed for evaluating our proposed framework. We first report on superior objective measures in enhanced speech to those listed in the 2014 REVERB Challenge Workshop on the simulated data test set. Moreover, we obtain the best single-system word error rate (WER) of 13.28% on the 1-channel REVERB simulated data with the proposed DNN-based pre-processing algorithm and clean-condition training. 
Leveraging upon joint training with more discriminative ASR features and improved neural network based language models, a low single-system WER of 4.46% is attained. Next, a new multi-channel-condition joint learning and testing scheme delivers a state-of-the-art WER of 3.76% on the 8-channel simulated data with a single ASR system. Finally, we also report on a preliminary yet promising experimentation with the REVERB real test data.

Panagiotis Tzirakis;George Trigeorgis;Mihalis A. Nicolaou;Björn W. Schuller;Stefanos Zafeiriou; "End-to-End Multimodal Emotion Recognition Using Deep Neural Networks," vol.11(8), pp.1301-1309, Dec. 2017. Automatic affect recognition is a challenging task due to the various modalities emotions can be expressed with. Applications can be found in many domains including multimedia retrieval and human–computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content for various styles of speaking, robust features need to be extracted. To this purpose, we utilize a convolutional neural network (CNN) to extract features from the speech, while for the visual modality a deep residual network of 50 layers is used. In addition to the importance of feature extraction, a machine learning algorithm needs also to be insensitive to outliers while being able to model the context. To tackle this problem, long short-term memory networks are utilized. The system is then trained in an end-to-end fashion where—by also taking advantage of the correlations of each of the streams—we manage to significantly outperform, in terms of concordance correlation coefficient, traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.

Tzeviya Fuchs;Joseph Keshet; "Spoken Term Detection Automatically Adjusted for a Given Threshold," vol.11(8), pp.1310-1317, Dec. 2017. Spoken term detection (STD) is the task of determining whether and where a given word or phrase appears in a given segment of speech. Algorithms for STD are often aimed at maximizing the gap between the scores of positive and negative examples. As such they are focused on ensuring that utterances where the term appears are ranked higher than utterances where the term does not appear. However, they do not determine a detection threshold between the two. In this paper, we propose a new approach for setting an absolute detection threshold for all terms by introducing a new calibrated loss function. The advantage of minimizing this loss function during training is that it aims at maximizing not only the relative ranking scores, but also adjusts the system to use a fixed threshold and thus maximizes the detection accuracy rates. We use the new loss function in the structured prediction setting and extend the discriminative keyword spotting algorithm for learning the spoken term detector with a single threshold for all terms. We further demonstrate the effectiveness of the new loss function by training a deep neural Siamese network in a weakly supervised setting for template-based STD, again with a single fixed threshold. Experiments with the TIMIT, Wall Street Journal (WSJ), and Switchboard corpora showed that our approach not only improved the accuracy rates when a fixed threshold was used but also obtained higher area under curve (AUC).

Batuhan Gündoğdu;Bolaji Yusuf;Murat Saraçlar; "Joint Learning of Distance Metric and Query Model for Posteriorgram-Based Keyword Search," vol.11(8), pp.1318-1328, Dec. 2017. In this paper, we propose a novel approach to keyword search (KWS) in low-resource languages, which provides an alternative method for retrieving the terms of interest, especially for the out of vocabulary (OOV) ones. Our system incorporates the techniques of query-by-example retrieval tasks into KWS and conducts the search by means of the subsequence dynamic time warping (sDTW) algorithm. For this, text queries are modeled as sequences of feature vectors and used as templates in the search. A Siamese neural network-based model is trained to learn a frame-level distance metric to be used in sDTW and the proper query model frame representations for this learned distance. Experiments conducted on Intelligence Advanced Research Projects Activity Babel Program's Turkish, Pashto, and Zulu datasets demonstrate the effectiveness of our approach. In each of the languages, the proposed system outperforms the large vocabulary continuous speech recognition (LVCSR) based baseline for OOV terms. Furthermore, the fusion of the proposed system with the baseline system provides an average relative actual term weighted value (ATWV) improvement of 13.9% on all terms and, more significantly, the fusion yields an average relative ATWV improvement of 154.5% on OOV terms. We show that this new method can be used as an alternative to conventional LVCSR-based KWS systems, or in combination with them, to achieve the goal of closing the gap between OOV and in-vocabulary retrieval performances.
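The sDTW search itself is compact; below is a plain-Euclidean sketch, whereas the paper's contribution is replacing this frame-level distance and the query representation with jointly learned ones:

```python
import numpy as np

# Subsequence DTW: match a short query template against any subsequence
# of a long search sequence, with free start and end points.

def sdtw(query, search, dist=lambda a, b: np.linalg.norm(a - b)):
    n, m = len(query), len(search)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                          # free starting point in `search`
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(query[i - 1], search[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    j_best = int(np.argmin(D[n, 1:])) + 1  # free end point (column in D)
    return float(D[n, j_best]), j_best - 1 # cost, 0-based end frame

rng = np.random.default_rng(2)
db = rng.random((50, 8))                   # stand-in "posteriorgram" frames
q = db[20:25] + 0.01 * rng.random((5, 8))  # noisy copy of frames 20..24
cost, end = sdtw(q, db)
print(cost, end)   # best match should end near frame 24 at low cost
```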

Hongjie Chen;Cheung-Chi Leung;Lei Xie;Bin Ma;Haizhou Li; "Multitask Feature Learning for Low-Resource Query-by-Example Spoken Term Detection," vol.11(8), pp.1329-1339, Dec. 2017. We propose a novel technique that learns a low-dimensional feature representation from unlabeled data of a target language, and labeled data from a nontarget language. The technique is studied as a solution to query-by-example spoken term detection (QbE-STD) for a low-resource language. We extract low-dimensional features from a bottle-neck layer of a multitask deep neural network, which is jointly trained with speech data from the low-resource target language and resource-rich nontarget language. The proposed feature learning technique aims to extract acoustic features that offer phonetic discriminability. It explores a new way of leveraging cross-lingual speech data to overcome the resource limitation in the target language. We conduct QbE-STD experiments using the dynamic time warping distance of the multitask bottle-neck features between the query and the search database. The QbE-STD process does not rely on an automatic speech recognition pipeline of the target language. We validate the effectiveness of multitask feature learning through a series of comparative experiments.

Cristina España-Bonet;Ádám Csaba Varga;Alberto Barrón-Cedeño;Josef van Genabith; "An Empirical Analysis of NMT-Derived Interlingual Embeddings and Their Use in Parallel Sentence Identification," vol.11(8), pp.1340-1350, Dec. 2017. End-to-end neural machine translation has overtaken statistical machine translation in terms of translation quality for some language pairs, especially those with large amounts of parallel data. Besides this palpable improvement, neural networks provide several new properties. A single system can be trained to translate between many languages at almost no additional cost other than training time. Furthermore, internal representations learned by the network serve as a new semantic representation of words—or sentences—which, unlike standard word embeddings, are learned in an essentially bilingual or even multilingual context. In view of these properties, the contribution of the present paper is twofold. First, we systematically study the neural machine translation (NMT) context vectors, i.e., output of the encoder, and their power as an interlingua representation of a sentence. We assess their quality and effectiveness by measuring similarities across translations, as well as semantically related and semantically unrelated sentence pairs. Second, as an extrinsic evaluation of the first point, we identify parallel sentences in comparable corpora, obtaining an F1 = 98.2% on data from a shared task when using only NMT context vectors. Using context vectors jointly with similarity measures, F1 reaches 98.9%.
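The extrinsic test reduces to pooling encoder states into sentence vectors and comparing their cosine similarity; the vectors below are synthetic stand-ins for real NMT context vectors:

```python
import numpy as np

# Sketch of the parallel-sentence test: mean-pool encoder states per
# sentence and threshold the cosine similarity. All data is synthetic.

def sentence_vector(context_vectors):
    return context_vectors.mean(axis=0)     # mean-pool over time steps

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(3)
src = rng.standard_normal((12, 64))                # "encoder states", source
tgt = src + 0.1 * rng.standard_normal((12, 64))    # its (near-)translation
unrelated = rng.standard_normal((9, 64))           # an unrelated sentence

sim_pair = cosine(sentence_vector(src), sentence_vector(tgt))
sim_rand = cosine(sentence_vector(src), sentence_vector(unrelated))
print(round(sim_pair, 3), round(sim_rand, 3))
```

A translation pair produced by a shared encoder scores near 1, while an unrelated pair hovers near 0; the paper's finding is that real NMT context vectors behave this way across languages.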

Kartik Audhkhasi;Andrew Rosenberg;Abhinav Sethy;Bhuvana Ramabhadran;Brian Kingsbury; "End-to-End ASR-Free Keyword Search From Speech," vol.11(8), pp.1351-1359, Dec. 2017. Conventional keyword search (KWS) systems for speech databases match the input text query to the set of word hypotheses generated by an automatic speech recognition (ASR) system from utterances in the database. Hence, such KWS systems attempt to solve the complex problem of ASR as a precursor. Training an ASR system itself is a time-consuming process requiring transcribed speech data. Our prior work presented an ASR-free end-to-end system that needed minimal supervision and trained significantly faster than an ASR-based KWS system. The ASR-free KWS system consisted of three subsystems. The first subsystem was a recurrent neural network based acoustic encoder that extracted a finite-dimensional embedding of the speech utterance. The second subsystem was a query encoder that produced an embedding of the input text query. The acoustic and query embeddings were input to a feedforward neural network that predicted whether the query occurred in the acoustic utterance or not. This paper extends our prior work in several ways. First, we significantly improve upon our previous ASR-free KWS results by nearly 20% relative through improvements to the acoustic encoder. Next, we show that it is possible to train the acoustic encoder on languages other than the language of interest with only a small drop in KWS performance. Finally, we attempt to predict the location of the detected keywords by training a location-sensitive KWS network.

* "List of Reviewers," vol.11(8), pp.1360-1364, Dec. 2017.*

* "IEEE Journal of Selected Topics in Signal Processing information for authors," vol.11(8), pp.1365-1366, Dec. 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "2017 Index IEEE Journal of Selected Topics in Signal Processing Vol. 11," vol.11(8), pp.1367-1382, Dec. 2017.* Presents the 2017 subject/author index for this publication.

* "IEEE Signal Processing Society Information," vol.11(8), pp.C3-C3, Dec. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Blank Page," vol.11(8), pp.C4-C4, Dec. 2017.* This page or pages intentionally left blank.

IEEE Signal Processing Magazine - new TOC (2017 November 23) [Website]

* "Front Cover," vol.34(6), pp.C1-C1, Nov. 2017.* Presents the front cover for this issue of the publication.

* "ICIP 2018," vol.34(6), pp.C2-C2, Nov. 2017.* Describes the above-named upcoming conference event. May include topics to be covered or calls for papers.

* "Table of Contents," vol.34(6), pp.1-2, Nov. 2017.* Presents the table of contents for this issue of the publication.

* "Staff Listing," vol.34(6), pp.2-2, Nov. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Min Wu; "Signals and Signal Processing: The Invisibles and the Everlastings [From the Editor]," vol.34(6), pp.4-7, Nov. 2017. Presents the introductory editorial for this issue of the publication.

Rabab Ward; "Signal Processing Is More than Its Beloved Name [President's Message]," vol.34(6), pp.5-7, Nov. 2017. Presents the President’s message for this issue of the publication.

Yonina C. Eldar;Alfred O. Hero III;Li Deng;Jeff Fessler;Jelena Kovacevic;H. Vincent Poor;Steve Young; "Challenges and Open Problems in Signal Processing: Panel Discussion Summary from ICASSP 2017 [Panel and Forum]," vol.34(6), pp.8-23, Nov. 2017. This column summarizes the panel on open problems in signal processing, which took place on 5 March 2017 at the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) in New Orleans, Louisiana. The goal of the panel was to draw attention to some of the challenges and open problems in various areas of signal processing and generate discussion on future research areas that can be of major significance and impact in signal processing. Five leading experts representing diverse areas within signal processing made up the panel.

Andres Kwasinski;Min Wu; "What Is the Future of Signal Processing?: Views Across Our Community [Community Voices]," vol.34(6), pp.14-25, Nov. 2017. Discusses issues that deal with the future of signal processing. The goal of articles in this column is to encourage and share reflections from diverse members of the signal processing community on questions that are of interest to those in the SP industry.

John Edwards; "Medical optical imaging: Signal processing leads to new methods of detecting life-threatening situations [Special Reports]," vol.34(6), pp.17-20, Nov. 2017. Optical imaging, a medical technique that is used to obtain detailed images of organs, tissues, cells, and molecules in the presence of visible light, is an emerging technology with the potential to enhance patient treatment, diagnosis, and disease prevention. Offering numerous advantages over radiological imaging techniques, optical imaging uses nonionizing radiation to reduce a patient's radiation exposure, thereby allowing for more frequent studies over time. Signal processing is now helping to make optical imaging even more useful and versatile, allowing more detailed images to be captured and expanding the technology’s use into new patient treatment and medical research areas. The biggest signal processing-related challenges facing researchers are enabling effective noise filtering, fast image display, and saving image data to a hard disk, if necessary.

Chungshui Zhang; "Top Downloads in IEEE Xplore [Reader's Choice]," vol.34(6), pp.21-23, Nov. 2017. Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles.

Fatih Porikli;Shiguang Shan;Cees Snoek;Rahul Sukthankar;Xiaogang Wang; "Deep Learning for Visual Understanding [From the Guest Editors]," vol.34(6), pp.24-25, Nov. 2017. The articles in this special section survey the latest advances in deep learning for visual understanding. Their objective is to encourage a diverse audience of researchers and enthusiasts toward effective participation in the solution of analogous problems in other signal processing fields by disseminating similar ideas.

Kai Arulkumaran;Marc Peter Deisenroth;Miles Brundage;Anil Anthony Bharath; "Deep Reinforcement Learning: A Brief Survey," vol.34(6), pp.26-38, Nov. 2017. Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.
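The value-based strand the survey covers rests on the Q-learning update, which DQN scales up with a deep function approximator; a tabular sketch on a made-up five-state chain:

```python
import random

# Tabular Q-learning on a toy 5-state chain: the agent starts at state 0,
# the rightmost state is terminal and pays reward 1. Everything here is
# illustrative; DQN replaces the table with a neural network.

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning target: r + gamma * max_a' Q(s', a')
            target = r + gamma * (0.0 if s2 == n_states - 1 else max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
print([max((0, 1), key=lambda act: Q[s][act]) for s in range(4)])  # greedy policy
```

The learned greedy policy walks right toward the reward, and the Q-values decay geometrically with distance to the goal, which is the behavior DQN reproduces from raw pixels at scale.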

Seunghoon Hong;Suha Kwak;Bohyung Han; "Weakly Supervised Learning with Deep Convolutional Neural Networks for Semantic Segmentation: Understanding Semantic Layout of Images with Minimum Human Supervision," vol.34(6), pp.39-49, Nov. 2017. Semantic segmentation is a popular visual recognition task whose goal is to estimate pixel-level object class labels in images. This problem has been recently handled by deep convolutional neural networks (DCNNs), and the state-of-the-art techniques achieve impressive records on public benchmark data sets. However, learning DCNNs demands a large amount of annotated training data, while segmentation annotations in existing data sets are significantly limited in terms of both quantity and diversity due to the heavy annotation cost. Weakly supervised approaches tackle this issue by leveraging weak annotations such as image-level labels and bounding boxes, which are either readily available in existing large-scale data sets for image classification and object detection or easily obtained thanks to their low annotation costs. The main challenge in weakly supervised semantic segmentation is then that such incomplete annotations lack the accurate object boundary information required to learn segmentation. This article provides a comprehensive overview of weakly supervised approaches for semantic segmentation. Specifically, we describe how the approaches overcome the limitations and discuss research directions worthy of investigation to improve performance.

Alhussein Fawzi;Seyed-Mohsen Moosavi-Dezfooli;Pascal Frossard; "The Robustness of Deep Networks: A Geometrical Perspective," vol.34(6), pp.50-62, Nov. 2017. Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not significantly degrade the performance of the predictor. The goal of this article is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. This article further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier's decision surface, which help in developing a better understanding of deep networks. Finally, we present recent solutions that attempt to increase the robustness of deep networks. We hope this review article will contribute to shedding light on the open research challenges in the robustness of deep networks and stir interest in the analysis of their fundamental properties.

Damien Teney;Qi Wu;Anton van den Hengel; "Visual Question Answering: A Tutorial," vol.34(6), pp.63-75, Nov. 2017. The task of visual question answering (VQA) is receiving increasing interest from researchers in both the computer vision and natural language processing fields. Tremendous advances have been seen in the field of computer vision due to the success of deep learning, in particular on low- and midlevel tasks, such as image segmentation or object recognition. These advances have fueled researchers' confidence for tackling more complex tasks that combine vision with language and high-level reasoning. VQA is a prime example of this trend. This article presents the ongoing work in the field and the current approaches to VQA based on deep learning. VQA constitutes a test for deep visual understanding and a benchmark for general artificial intelligence (AI). While the field of VQA has seen recent successes, it remains a largely unsolved task.

Jiwen Lu;Junlin Hu;Jie Zhou; "Deep Metric Learning for Visual Understanding: An Overview of Recent Advances," vol.34(6), pp.76-84, Nov. 2017. Metric learning aims to learn a distance function to measure the similarity of samples, which plays an important role in many visual understanding applications. Generally, the optimal similarity functions for different visual understanding tasks are task specific because the distributions of the data used in different tasks are usually different. It is generally believed that a metric learned from training data achieves better performance than handcrafted metrics [1]-[3], e.g., the Euclidean and cosine distances. A variety of metric learning methods have been proposed in the literature [2]-[5], and many of them have been successfully employed in visual understanding tasks such as face recognition [6], [7], image classification [2], [3], visual search [8], [9], visual tracking [10], [11], person reidentification [12], cross-modal matching [13], image set classification [14], and image-based geolocalization [15]-[17].
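Many of the cited methods learn metrics in the Mahalanobis family, d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L fit from labelled data. A minimal sketch with a hand-picked L (the numbers are hypothetical, since the overview fixes no single parameterisation):

```python
import numpy as np

# Mahalanobis-family metric d_M(x, y) = sqrt((x - y)^T M (x - y)), with
# M = L^T L guaranteed positive semi-definite. Here L is hand-picked for
# illustration; metric-learning methods fit L (or M) from labelled pairs.
def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

L = np.array([[2.0, 0.0],    # stretch the first feature: differences along
              [0.0, 0.5]])   # it count 4x more than along the second axis
M = L.T @ L

x, y = np.array([1.0, 0.0]), np.array([0.0, 0.0])
print(mahalanobis(x, y, M))          # 2.0: first axis is weighted up
print(mahalanobis(x, y, np.eye(2)))  # 1.0: M = I recovers Euclidean distance
```

Setting M = I recovers the handcrafted Euclidean distance the abstract contrasts against.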

Michael T. McCann;Kyong Hwan Jin;Michael Unser; "Convolutional Neural Networks for Inverse Problems in Imaging: A Review," vol.34(6), pp.85-95, Nov. 2017. In this article, we review recent uses of convolutional neural networks (CNNs) to solve inverse problems in imaging. It has recently become feasible to train deep CNNs on large databases of images, and they have shown outstanding performance on object classification and segmentation tasks. Motivated by these successes, researchers have begun to apply CNNs to the resolution of inverse problems such as denoising, deconvolution, superresolution, and medical image reconstruction, and they have started to report improvements over state-of-the-art methods, including sparsity-based techniques such as compressed sensing. Here, we review the recent experimental work in these areas, with a focus on the critical design decisions.

Dhanesh Ramachandram;Graham W. Taylor; "Deep Multimodal Learning: A Survey on Recent Advances and Trends," vol.34(6), pp.96-108, Nov. 2017. The success of deep learning has been a catalyst to solving increasingly complex machine-learning problems, which often involve multiple data modalities. We review recent advances in deep multimodal learning and highlight the state of the art, as well as gaps and challenges in this active research field. We first classify deep multimodal learning architectures and then discuss methods to fuse learned multimodal representations in deep-learning architectures. We highlight two areas of research, regularization strategies and methods that learn or optimize multimodal fusion structures, as exciting directions for future work.

Xiaodong He;Li Deng; "Deep Learning for Image-to-Text Generation: A Technical Overview," vol.34(6), pp.109-116, Nov. 2017. Generating a natural language description from an image is an emerging interdisciplinary problem at the intersection of computer vision, natural language processing, and artificial intelligence (AI). This task, often referred to as image or visual captioning, forms the technical foundation of many important applications, such as semantic visual search, visual intelligence in chatting robots, photo and video sharing in social media, and aid for visually impaired people to perceive surrounding visual content. Thanks to the recent advances in deep learning, the AI research community has witnessed tremendous progress in visual captioning in recent years. In this article, we will first summarize this exciting emerging visual captioning area. We will then analyze the key development and the major progress the community has made, their impact in both research and industry deployment, and what lies ahead in future breakthroughs.

Hemanth Venkateswara;Shayok Chakraborty;Sethuraman Panchanathan; "Deep-Learning Systems for Domain Adaptation in Computer Vision: Learning Transferable Feature Representations," vol.34(6), pp.117-129, Nov. 2017. Domain adaptation algorithms address the issue of transferring learning across computational models to adapt them to data from different distributions. In recent years, research in domain adaptation has been making great progress owing to the advancements in deep learning. Deep neural networks have demonstrated unrivaled success across multiple computer vision applications, including transfer learning and domain adaptation. This article outlines the latest research in domain adaptation using deep neural networks. It begins with an introduction to the concept of knowledge transfer in machine learning and the different paradigms of transfer learning. It provides a brief survey of non-deep-learning techniques and organizes the rapidly growing research in domain adaptation based on deep learning. It also highlights some drawbacks with the current state of research in this area and offers directions for future research.

Jongyoo Kim;Hui Zeng;Deepti Ghadiyaram;Sanghoon Lee;Lei Zhang;Alan C. Bovik; "Deep Convolutional Neural Models for Picture-Quality Prediction: Challenges and Solutions to Data-Driven Image Quality Assessment," vol.34(6), pp.130-141, Nov. 2017. Convolutional neural networks (CNNs) have been shown to deliver standout performance on a wide variety of visual information processing applications. However, this rapidly developing technology has only recently been applied with systematic energy to the problem of picture-quality prediction, primarily because of limitations imposed by a lack of adequate ground-truth human subjective data. This situation has begun to change with the development of promising data-gathering methods that are driving new approaches to deep-learning-based perceptual picture-quality prediction. Here, we assay progress in this rapidly evolving field, focusing, in particular, on new ways to collect large quantities of ground-truth data and on recent CNN-based picture-quality prediction models that deliver excellent results in a large, real-world, picture-quality database.

Stefano Fortunati;Fulvio Gini;Maria S. Greco;Christ D. Richmond; "Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental Findings and Applications," vol.34(6), pp.142-157, Nov. 2017. Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. The common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances in recent decades, including wireless communications, radar and sonar, biomedicine, image processing, and seismology.

Soo-Chang Pei;Kuo-Wei Chang; "The Mystery Curve: A Signal Processing Point of View [Lecture Notes]," vol.34(6), pp.158-163, Nov. 2017. In the first chapter of a recently published book on artful mathematics, a family of linear combinations of harmonic signals called mystery curves was introduced. Although their Fourier-based analysis brings interesting results, this lecture note provides a different and important perspective, especially useful for our signal processing community. Based on polar coordinate systems and low-pass filtering approaches, the patterns of the curves can be designed by local curve tracing instead of trial and error, including not only two-dimensional (2-D) but also three-dimensional (3-D) modeling. Concrete examples and online MATLAB codes are provided so that applications from art and logo design to amplitude modulation (AM)-frequency modulation (FM) signal analysis are realizable.
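For readers without the book at hand, the mystery curve of its first chapter is usually quoted with the coefficients below (an assumption here; the lecture note's own MATLAB examples may use other parameters). Its frequencies 1, 6, and -14 are all congruent to 1 mod 5, which forces five-fold rotational symmetry:

```python
import numpy as np

# The "mystery curve" as a short Fourier series. These coefficients are the
# ones commonly quoted for the book's example; the article may vary them.
# Because 1, 6, and -14 are all 1 mod 5, rotating the parameter by 2*pi/5
# rotates the whole curve by 2*pi/5, giving 5-fold symmetry.
def mystery(t):
    return (np.exp(1j * t)
            + 0.5 * np.exp(6j * t)
            + (1j / 3) * np.exp(-14j * t))

t = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
z = mystery(t)

rot = np.exp(2j * np.pi / 5)
assert np.allclose(mystery(t + 2 * np.pi / 5), rot * z)
print("5-fold rotational symmetry holds over all", t.size, "samples")
```

Plotting `z.real` against `z.imag` reproduces the familiar five-lobed figure; changing the frequency set while preserving the common residue mod n yields other n-fold designs, which is the design freedom the lecture note exploits.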

Vicente Torres;Javier Valls;Richard Lyons; "Fast- and Low-Complexity atan2(a,b) Approximation [Tips and Tricks]," vol.34(6), pp.164-169, Nov. 2017. This article presents a new entry to the class of published algorithms for the fast computation of the arctangent of a complex number. Our method uses a look-up table (LUT) to reduce computational errors. We also show how to convert a large-sized LUT addressed by two variables to an equivalent-performance smaller-sized LUT addressed by only one variable. In addition, we demonstrate how and why the use of follow-on LUTs applied to other simple arctan algorithms produces unexpected and interesting results.
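The article's LUT-corrected method is not reproduced here, but the family it belongs to can be illustrated with a common polynomial atan2 approximation plus octant folding (the coefficient 0.273 is from the widely used fit atan(x) ≈ (π/4)x + 0.273·x·(1 - |x|), an assumption on our part, not the authors' table):

```python
import math

# A representative fast atan2 approximation: a cheap polynomial valid on
# |x| <= 1, plus octant folding. This is NOT the authors' LUT-corrected
# method, just a member of the low-complexity family they improve upon.
def atan2_approx(b, a):
    """Approximate atan2(b, a); worst-case error on the order of 0.004 rad."""
    if a == 0.0 and b == 0.0:
        return 0.0
    abs_a, abs_b = abs(a), abs(b)
    if abs_a >= abs_b:
        x = b / a                     # fold so |x| <= 1 keeps the fit accurate
        r = (math.pi / 4) * x + 0.273 * x * (1.0 - abs(x))
        return r if a > 0 else (r + math.pi if b >= 0 else r - math.pi)
    x = a / b
    r = (math.pi / 4) * x + 0.273 * x * (1.0 - abs(x))
    return (math.pi / 2 if b > 0 else -math.pi / 2) - r

for a, b in [(1, 1), (-3, 2), (0.5, -4), (-1, -1), (0.1, 5)]:
    print(atan2_approx(b, a), math.atan2(b, a))
```

A follow-on LUT, as in the article, would store the residual error of `r` versus the true arctangent and subtract it, shrinking the worst-case error further.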

* "Calendar [Dates Ahead[Name:_blank]]," vol.34(6), pp.170-170, Nov. 2017.* Presents the SPS society upcoming calendar of events.

* "IEEE SIGNAL PROCESSING CUP 2018," vol.34(6), pp.175-175, Nov. 2017.* Presents information on the IEEE Signap processing Cup 2018.

Xiao-Ping Steven Zhang; "To the Victor Go the Spoils: AI in Financial Markets [Perspectives]," vol.34(6), pp.176-173, Nov. 2017. The author argues that artificial intelligence (AI) may provide many new tools for information, but if you are "one of us" - the majority of people who do not have extensive experience of the financial world - you should not expect AI to bring a quick buck. If, however, you are a seasoned financial professional who works closely with the market, you have a tough job, but there is a chance to make a fortune.

* "2017 Index IEEE Signal Processing Magazine Vol. 34," vol.34(6), pp.1-1, Nov. 2017.*

IET Signal Processing - new TOC (2017 November 23) [Website]

Huijun Hou;Xingpeng Mao;Yongtan Liu; "Oblique projection for direction-of-arrival estimation of hybrid completely polarised and partially polarised signals with arbitrary polarimetric array configuration," vol.11(8), pp.893-900, 10 2017. This study deals with the direction-of-arrival (DOA) estimation problem for hybrid completely polarised (CP) and partially polarised (PP) source signals using arbitrary polarimetric antenna arrays. An oblique projection-based polarisation insensitive direction estimation (OPPIDE) algorithm is proposed by exploiting the spatial-sparsity property of the sources. The OP technique is utilised to provide spatial filters, which are insensitive to the state of polarisation of signals, so that the potential source signals in the spatial domain can be separated later. The DOA estimation is finally implemented by identifying the sources' spatially sparse structure with the separated signals. Theoretical analysis indicates that the OPPIDE is applicable to any hybrid CP and PP signals, and is independent of special polarimetric array configurations. The effectiveness and superiority of the proposed OPPIDE are substantiated through performance comparisons with existing counterpart algorithms.

Mehrdad Abolbashari;Sun Myong Kim;Gelareh Babaie;Jonathan Babaie;Faramarz Farahi; "Fractional bispectrum transform: definition and properties," vol.11(8), pp.901-908, 10 2017. A signal with discrete frequency components has a zero bispectrum if no addition or subtraction of any of the frequencies equals one of the frequency components. The authors introduce the fractional bispectrum (FBS) transform, under which signals with a zero bispectrum can have a non-zero FBS. It is shown that the FBS has the same property as the bispectrum for signals with a Gaussian probability density function (PDF): the FBS of a zero-mean signal with a Gaussian PDF is zero. Therefore, it can be used to significantly reduce Gaussian noise.
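The frequency-sum condition in the first sentence is easy to check with a single-record FFT-based bispectrum estimate, B(k1, k2) = X[k1]·X[k2]·conj(X[k1 + k2]); the fractional transform itself is not reproduced here:

```python
import numpy as np

# Single-record bispectrum estimate B(k1, k2) = X[k1] * X[k2] * conj(X[k1+k2]).
# It is non-zero only when bin k1 + k2 actually carries energy, which is the
# frequency-sum condition the abstract states. (The fractional transform is
# not reproduced; this only illustrates the property it generalises.)
def bispectrum_bin(x, k1, k2):
    X = np.fft.fft(x)
    return X[k1] * X[k2] * np.conj(X[k1 + k2])

N = 256
t = np.arange(N)
coupled   = sum(np.cos(2 * np.pi * f * t / N) for f in (3, 5, 8))  # 8 = 3 + 5
uncoupled = sum(np.cos(2 * np.pi * f * t / N) for f in (3, 5, 9))  # no sum bin

print(abs(bispectrum_bin(coupled, 3, 5)))    # large: the sum bin 8 is present
print(abs(bispectrum_bin(uncoupled, 3, 5)))  # ~0: bin 8 carries no energy
```

The second signal is exactly the zero-bispectrum case the authors target: no sum or difference of its frequencies {3, 5, 9} equals another component, so every bispectrum bin vanishes even though the signal is far from Gaussian.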

Yu Wang;Yunhe Cao;Zhigang Peng;Hongtao Su; "Clutter suppression and GMTI for hypersonic vehicle borne SAR system with MIMO antenna," vol.11(8), pp.909-915, 10 2017. This study proposes a clutter suppression approach and the corresponding ground moving target imaging algorithm for hypersonic vehicle (HSV) borne synthetic aperture radar (SAR) system with multiple-input-multiple-output (MIMO) antenna. HSV-borne radar platforms fly at a high speed, which can lead to severe Doppler ambiguity, and the radar system usually cannot provide enough channel degrees of freedom for clutter suppression. In this study, an SAR ground moving target indication (GMTI) approach with MIMO antenna is presented for HSV-borne radar. Compared with the traditional multichannel SAR GMTI methods, the proposed approach can provide more spatial degrees of freedom and obtain a wider imaging swath without decreasing pulse repetition frequency. Besides, the improved deramp space-time adaptive processing method reduces the number of ground clutter ambiguities and focuses the moving target. The simulation results validate the effectiveness of the proposed method.

Hamzeh Ghasemzadeh;Meisam Khalil Arjmandi; "Universal audio steganalysis based on calibration and reversed frequency resolution of human auditory system," vol.11(8), pp.916-922, 10 2017. Calibration and higher-order statistics are standard components of image steganalysis. However, these techniques have not yet found adequate attention in audio steganalysis. Specifically, most current studies are either non-calibrated or only based on noise removal. The goal of this study is to fill these gaps and to show that calibrated features based on the re-embedding technique improve the performance of audio steganalysis. Furthermore, the authors show that the least significant bit is the most sensitive bit plane to data hiding algorithms, and therefore it can be employed as a universal embedding method. The proposed features also benefit from an efficient model that is tailored to the needs of audio steganalysis and represents the maximum deviation from the human auditory system. Performance of the proposed method is evaluated on a wide range of data hiding algorithms in both targeted and universal paradigms. The results show the effectiveness of the proposed method in detecting the finest traces of data hiding algorithms at very low embedding rates. The system detects Steghide at a capacity of 0.06 bit per symbol with sensitivity of 98.6% (music) and 78.5% (speech). These figures are, respectively, 7.1% and 27.5% higher than the state-of-the-art results based on R-Mel-frequency cepstral coefficient features.

Alex Miyamoto Mussi;Taufik Abrão; "Message passing detection for large-scale MIMO systems: damping factor analysis," vol.11(8), pp.923-935, 10 2017. A message passing detector based on belief propagation (BP) algorithm for Markov random fields (MRF-BP) and factor graph (FG-BP) graphical models is analysed under different large-scale (LS) multiple-input multiple-output (MIMO) scenarios, including system parameters, such as damping factor (DF), number of users and number of antennas, from N=20 to 200 antennas. Specifically, the DF variation under different number of antennas configuration and signal-to-noise ratio (SNR) regions is extensively evaluated; bit error rate (BER) performance and computational complexity are assessed over different scenarios. Numerical results show a great performance gain with the damped MRF-BP approach, overcoming the FG-BP scheme in specific scenarios, with no extra computational complexity. Also, the message damping (MD) method resulted in faster convergence of the MRF-BP algorithm in LS scenarios, evidencing that, besides the performance gain, the MD technique can lead to a computational complexity reduction. In scenarios with a low number of transmit antennas in particular, the DF value needs to be carefully chosen. Furthermore, based on the proposed analysis, the optimal DF value is determined considering a wide range of LS antenna scenarios and SNR regions.
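Message damping in its generic form blends the previous message with the newly computed one, m_new = (1 - λ)·m_old + λ·m_comp. A toy scalar fixed-point iteration (our own, not the paper's MIMO detector) shows why this can rescue a divergent update:

```python
# Generic message damping: m_new = (1 - lam) * m_old + lam * m_computed.
# Toy scalar fixed point, not the paper's MIMO detector: the raw update
# x <- 2 - 1.5 x has slope magnitude 1.5 > 1 and diverges, while a damping
# factor lam in (0, 0.8) contracts it toward the fixed point 2/2.5 = 0.8.
def iterate(lam, x0=0.0, iters=200):
    x = x0
    for _ in range(iters):
        x = (1.0 - lam) * x + lam * (2.0 - 1.5 * x)   # damped update
    return x

print(iterate(lam=0.5))   # converges to the fixed point 0.8
print(iterate(lam=1.0))   # undamped: the error grows by 1.5x per step
```

With λ = 0.5 the effective slope is 1 - 2.5λ = -0.25, so the iteration contracts; this is the same mechanism by which a well-chosen DF stabilises MRF-BP messages in the paper's low-antenna regimes.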

Abolfazl Saghafi;Chris P. Tsokos;Hamidreza Farhidzadeh; "Common spatial pattern method for real-time eye state identification by using electroencephalogram signals," vol.11(8), pp.936-941, 10 2017. Cross-channel maximum and minimum are used to monitor real-time electroencephalogram signals in 14 channels. On detection of a possible change, multivariate empirical mode decomposition decomposes the last 2 s of the signal into narrow-band intrinsic mode functions. Common spatial pattern is then utilised to create discriminating features for classification purpose. Logistic regression, artificial neural network, and support vector machine classifiers all could detect the eye state change with 83.4% accuracy in <2 s. This algorithm provides a valuable improvement in comparison with a recent procedure that took about 20 min to classify new instances with 97.3% accuracy. Application of the introduced algorithm in the real-time eye state classification is promising. Increasing the training examples could even improve the accuracy of the classification analytics.
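Common spatial pattern in its standard two-class form solves a generalised eigenproblem on the class covariance matrices; a minimal numpy sketch on synthetic four-channel data (not the paper's 14-channel EEG pipeline):

```python
import numpy as np

# Standard two-class CSP: find spatial filters w maximising the variance
# ratio w^T C1 w / w^T (C1 + C2) w. Synthetic 4-channel trials stand in for
# the paper's 14-channel EEG; class 1 has extra power on channel 0 and
# class 2 on channel 3.
rng = np.random.default_rng(1)

def covariance(trials):
    # trials: (n_trials, n_channels, n_samples) -> mean spatial covariance
    return np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)

x1 = rng.standard_normal((30, 4, 500)); x1[:, 0, :] *= 3.0
x2 = rng.standard_normal((30, 4, 500)); x2[:, 3, :] *= 3.0
C1, C2 = covariance(x1), covariance(x2)

# Generalised eigenproblem C1 w = lambda (C1 + C2) w, solved by whitening.
d, U = np.linalg.eigh(C1 + C2)
P = (U * d ** -0.5) @ U.T            # whitening matrix (C1 + C2)^(-1/2)
lam, V = np.linalg.eigh(P @ C1 @ P)  # eigenvalues ascend within (0, 1)
W = P @ V                            # CSP filter bank
w1 = W[:, -1]                        # most discriminative filter for class 1

ratio = (w1 @ C1 @ w1) / (w1 @ C2 @ w1)
print(ratio)   # much greater than 1: passes class-1 energy, rejects class-2
```

The log-variances of a few such filtered signals are the discriminating features that the paper's logistic regression, neural network, and SVM classifiers consume.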

Roozbeh Dehghannasiri;Xiaoning Qian;Edward R. Dougherty; "Optimal experimental design in the context of canonical expansions," vol.11(8), pp.942-951, 10 2017. In a wide variety of engineering applications, the mathematical model cannot be fully identified. Therefore, one would like to construct robust operators (filters, classifiers, controllers etc.) that perform optimally relative to incomplete knowledge. Improving model identification through determining unknown parameters can enhance the performance of robust operators. One would like to perform the experiment that provides the most information relative to the engineering objective. The authors present an experimental design framework for parameter estimation in signal processing when the random process model is in the form of canonical expansions. The proposed experimental design is based on the concept of the mean objective cost of uncertainty, which quantifies model uncertainty by taking into account the performance degradation of the designed operator owing to the presence of uncertainty. They provide the general framework for experimental design in the context of canonical expansions and solve it for two major signal processing problems: optimal linear filtering and signal detection.

Rui Hu;Yuli Fu;Youjun Xiang;Rong Rong; "Performance guarantees of signal recovery via block-OMP with thresholding," vol.11(8), pp.952-960, 10 2017. Block-sparsity is an extension of the ordinary sparsity in the realm of the sparse signal representation. Exploiting the block structure of the sparsity pattern, recovery may be possible under more general conditions. In this study, a block version of the orthogonal matching pursuit with thresholding (block-OMPT) algorithm is proposed. Compared with the block version of the orthogonal matching pursuit (block-OMP), block-OMPT works in a less greedy fashion in order to improve the efficiency of the support estimation in iterations. Using the block restricted isometry property (block-RIP), some performance guarantees of block-OMPT are discussed for the bounded noise case and Gaussian noise case. A relationship between block-RIP and block-coherence is obtained. Numerical experiments are provided to illustrate the validity of the authors' main results.
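The block-OMP baseline that block-OMPT relaxes picks one block per iteration by the correlation norm of its columns with the residual; a minimal sketch (sizes and signals are illustrative, and the thresholding variant is not reproduced):

```python
import numpy as np

# Plain block-OMP (the baseline block-OMPT builds on): at each iteration
# pick the block whose columns correlate most with the residual, then
# least-squares refit over all selected blocks. The thresholding variant,
# which can admit several blocks per iteration, is not reproduced here.
def block_omp(A, y, block_size, n_picks):
    n_blocks = A.shape[1] // block_size
    support, r = [], y.copy()
    for _ in range(n_picks):
        scores = [np.linalg.norm(A[:, b*block_size:(b+1)*block_size].T @ r)
                  for b in range(n_blocks)]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(b*block_size, (b+1)*block_size)
                               for b in sorted(support)])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ coef
    x = np.zeros(A.shape[1])
    x[cols] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 60))              # 20 blocks of size 3
x_true = np.zeros(60); x_true[6:9] = 1.0; x_true[30:33] = -2.0
x_hat = block_omp(A, A @ x_true, block_size=3, n_picks=2)
print(np.max(np.abs(x_hat - x_true)))   # small: noiseless recovery succeeds
```

Already-selected blocks are never re-picked, because after the least-squares refit the residual is orthogonal to their columns and their score drops to (numerically) zero.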

Mahboubeh Zarei-Jalalabadi;Seyed Mohammad-Bagher Malaek; "Practical method to predict an upper bound for minimum variance track-to-track fusion," vol.11(8), pp.961-968, 10 2017. This study deals with the problem of track-to-track fusion in a sensor network when the correlation terms between the estimates of the agents are unknown. The proposed method offers an upper bound for the optimal minimum variance fusion rule through construction of the correlation terms according to an optimisation scheme. In general, the upper bound filter provides an estimate that is more conservative than the optimal estimate generated by the minimum variance fusion rule, while at the same time is less conservative than one obtained by the widely used covariance intersection method. From the geometrical viewpoint, the upper bound filter results in the inscribed largest volume ellipsoid within the intersection region defined by the ellipsoids corresponding to the fused estimates while the covariance intersection leads to the external minimum volume ellipsoid over the intersection region. The authors demonstrate the superiority of the proposed method through analysing estimation error and consistency of the fusion filter over Monte-Carlo simulations for a multi-dimensional system.

Xuesong Lu;Xiaomeng Li;Mao-sheng Fu;Haixian Wang; "Robust maximum signal fraction analysis for blind source separation," vol.11(8), pp.969-974, 10 2017. Blind source separation (BSS) is an active research topic in the fields of biomedical signal processing and brain-computer interface. As a representative technique, maximum signal fraction analysis (MSFA) has been recently developed for the problem of BSS. However, MSFA is formulated based on the L2-norm, and thus is prone to be negatively affected by outliers. In this study, the authors propose a robust alternative to MSFA based on the L1-norm, termed as MSFA-L1. Specifically, they re-define the objective function of MSFA, in which the energy quantities of both the signal and the noise are defined with the L1-norm rather than the L2-norm. By adopting the L1-norm, MSFA-L1 alleviates the negative influence of large deviations that are usually associated with outliers. Computationally, they design an iterative algorithm to optimise the objective function of MSFA-L1. The iterative procedure is shown to converge under the framework of bound optimisation. Experimental results on both synthetic data and real biomedical data demonstrate the effectiveness of the proposed MSFA-L1 approach.

Naveed Ishtiaq Chaudhary;Muhammad Saeed Aslam;Muhammad Asif Zahoor Raja; "Modified Volterra LMS algorithm to fractional order for identification of Hammerstein non-linear system," vol.11(8), pp.975-985, 10 2017. In this study, a new non-linear recursive mechanism for Volterra least mean square (VLMS) algorithm is proposed in the domain of non-linear adaptive signal processing and control. The proposed adaptive scheme is developed by applying concepts and theories of fractional calculus in the weight adaptation structure of the standard VLMS approach. The design scheme based on the fractional VLMS (F-VLMS) algorithm is applied to the parameter estimation problem of a non-linear Hammerstein Box-Jenkins system for different noise and step size variations. The adaptive variables of F-VLMS are compared with the actual parameters of the system as well as with the results of conventional VLMS for each case to verify its correctness. Comprehensive statistical analyses are conducted based on a sufficiently large number of independent runs, and performance indices in terms of mean square error, variance account for and Nash-Sutcliffe efficiency establish the worth and effectiveness of the scheme.
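The conventional, integer-order VLMS baseline that the fractional scheme modifies is a stochastic-gradient update on a linear-plus-quadratic (second-order Volterra) kernel; a minimal identification sketch with made-up coefficients:

```python
import numpy as np

# Standard (integer-order) Volterra LMS: expand the input into linear and
# quadratic terms, then apply the usual LMS stochastic-gradient update.
# The paper replaces this update with a fractional-order one; only the
# conventional baseline, with hypothetical coefficients, is sketched here.
rng = np.random.default_rng(4)

def features(x):
    """Memory-2 second-order Volterra expansion of an input pair."""
    return np.array([x[0], x[1], x[0] * x[0], x[0] * x[1], x[1] * x[1]])

w_true = np.array([0.8, -0.3, 0.5, 0.2, -0.1])   # unknown kernel to identify
w, mu = np.zeros(5), 0.05
for _ in range(5000):
    x = rng.standard_normal(2)
    phi = features(x)
    d = phi @ w_true + 0.01 * rng.standard_normal()  # noisy system output
    w += mu * (d - phi @ w) * phi                    # VLMS update
print(np.abs(w - w_true).max())   # small: the kernel has been identified
```

The F-VLMS of the paper keeps this expansion but augments the weight update with a fractional-derivative term, which is the part deliberately omitted above.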

Atiyeh Keshavarz-Mohammadiyan;Hamid Khaloozadeh; "Adaptive consensus-based distributed state estimator for non-linear systems in the presence of multiplicative noise," vol.11(8), pp.986-997, 10 2017. The problem of consensus-based distributed state estimation of a non-linear dynamical system in the presence of multiplicative observation noise is investigated in this study. Generalised extended information filter (GEIF) is developed for non-linear state estimation in the information-space framework. To fuse the information contribution of local estimators, an average consensus algorithm is employed. To achieve faster convergence towards consensus, a novel technique is proposed to adaptively modify the consensus weights. Computational complexity of the proposed estimator is also analysed theoretically to demonstrate the computational advantage of the adaptive consensus-based distributed GEIF over the centralised counterpart. Moreover, stability of local estimators in terms of mean-square boundedness of state estimation error is guaranteed in the presence of multiplicative noise. Simulation results are provided to evaluate performance of the proposed adaptive distributed estimator for a target-tracking problem in a wireless sensor network.
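The average consensus step used to fuse local information contributions can be sketched in a few lines (fixed 4-node ring with uniform Laplacian weights; the paper's adaptive weight modification is not reproduced):

```python
import numpy as np

# Average consensus, the fusion primitive the estimator builds on: each node
# repeatedly moves toward its neighbours' values, and all nodes converge to
# the network-wide average. Toy 4-node ring with fixed weights; the paper
# adapts these weights online to speed convergence.
x = np.array([4.0, 0.0, 2.0, 6.0])       # local information values
Lap = np.array([[ 2, -1,  0, -1],        # graph Laplacian of the 4-ring
                [-1,  2, -1,  0],
                [ 0, -1,  2, -1],
                [-1,  0, -1,  2]], dtype=float)
eps = 0.2                                 # step size below 1 / max degree
for _ in range(100):
    x = x - eps * (Lap @ x)               # x_i += eps * sum_j (x_j - x_i)
print(x)   # every entry approaches the initial average 3.0
```

Faster mixing (the paper's goal) amounts to shrinking the second-smallest Laplacian eigenvalue's contraction factor, here |1 - 0.2·2| = 0.6 per iteration.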

Mostafa Ghorbandoost;Valiallah Saba; "Non-parallel training for voice conversion using background-based alignment of GMMs and INCA algorithm," vol.11(8), pp.998-1005, 10 2017. Most voice conversion (VC) research has used parallel training corpora to train the conversion function. However, in practice it is not always possible to gather parallel corpora, so the need for non-parallel training methods arises. As a successful non-parallel method, the iterative combination of a nearest neighbour search step and a conversion step alignment (INCA) algorithm has attracted a lot of attention in recent years. In this study, the authors propose a new method of non-parallel VC which is based on the INCA algorithm. The authors' method effectively solves the initialisation problem of the INCA algorithm. Their proposed initialisation for INCA is done with alignment of Gaussian mixture models (GMM) using a universal background model. Results of objective and subjective experiments showed that the authors' proposed method improves the INCA algorithm. It is observed that this superiority holds for different sizes of training material from 10 to 50 training sentences. In terms of mean opinion score, the authors' method scores 0.25 higher in the case of quality and 0.2 higher in the case of similarity to the target speaker compared with traditional INCA. It seems that the authors' proposed method is a suitable frame alignment method for non-parallel corpora in the VC task.

Mojtaba Hajiabadi;Hossein Khoshbin;Ghosheh Abed Hodtani; "Cooperative spectrum estimation over large-scale cognitive radio networks," vol.11(8), pp.1006-1014, 10 2017. Spectrum sensing is a significant issue in cognitive radio networks which enables estimation of the frequency spectrum and hence provides frequency reuse. In large-scale cognitive radio networks, secondary users cannot share a common spectrum since the coverage area of primary users is limited. In this study, the authors suggest a diffusion adaptive learning algorithm based on a correntropy cooperation policy, which first categorises the received data of secondary users into several groups, and then learns a common spectrum inside each group. The mean-square performance of the proposed algorithm is analysed and supported by simulations. Experimental results show that, in a multitask cognitive network, the proposed algorithm can achieve a better mean-square deviation learning performance in both transient and steady-state regimes in comparison with other conventional algorithms.
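Diffusion adaptation in its adapt-then-combine form underlies this kind of cooperative spectrum estimation; a toy single-task sketch (the paper's correntropy-based grouping and multitask setting are not reproduced):

```python
import numpy as np

# Adapt-then-combine (ATC) diffusion LMS: each node adapts with its own data,
# then averages with its neighbours. Toy 4-node ring estimating one common
# 3-tap vector; the paper's correntropy cooperation policy and per-group
# multitask learning are deliberately omitted.
rng = np.random.default_rng(3)
w_true = np.array([1.0, -0.5, 0.25])     # hypothetical common spectrum model
n_nodes, mu = 4, 0.05
# Ring combination matrix: self plus two neighbours, uniform 1/3 weights.
Acomb = np.array([[1/3 if abs(i - j) in (0, 1, n_nodes - 1) else 0.0
                   for j in range(n_nodes)] for i in range(n_nodes)])
W = np.zeros((n_nodes, 3))
for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(n_nodes):
        u = rng.standard_normal(3)                   # node k's regressor
        d = u @ w_true + 0.01 * rng.standard_normal()
        psi[k] = W[k] + mu * (d - u @ W[k]) * u      # adapt (local LMS)
    W = Acomb @ psi                                  # combine (neighbour avg)
print(np.abs(W - w_true).max())   # small: all nodes agree near w_true
```

The combine step is what lets a node with poor local data still converge; the paper's contribution is to restrict this averaging to nodes estimating the same spectrum.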

Ruirui Chen;Hailin Zhang;Yanguo Zhou; "Bidirectional wireless information and power transfer for decode-and-forward relay systems," vol.11(8), pp.1015-1020, 10 2017. In this study, the authors investigate the bidirectional wireless information and power transfer (BWIPT) in decode-and-forward (DF) relay systems, where the bidirectional relay can decode and forward information from the user to the access point (AP), and assist the wireless power transfer from the AP to the user. The relay employs the power splitting (PS) protocol to coordinate the received signal energy for information transmission and energy harvesting. By converting the multi-relay system information rate maximisation problem into a convex optimisation problem, the distributed power allocation scheme is obtained to maximise the information rate. Particularly, for single-relay systems, the authors derive the closed-form expression of the optimal PS factor, which can maximise the information rate. Simulation results show that the BWIPT for DF relay systems outperforms the BWIPT for amplify-and-forward relay systems.

IEEE Transactions on Geoscience and Remote Sensing - new TOC (2017 November 23) [Website]

* "Front Cover," vol.55(12), pp.C1-C1, Dec. 2017.*

* "IEEE Transactions on Geoscience and Remote Sensing publication information," vol.55(12), pp.C2-C2, Dec. 2017.*

* "Table of contents," vol.55(12), pp.6681-7200, Dec. 2017.*

Yanqiao Chen;Licheng Jiao;Yangyang Li;Jin Zhao; "Multilayer Projective Dictionary Pair Learning and Sparse Autoencoder for PolSAR Image Classification," vol.55(12), pp.6683-6694, Dec. 2017. Polarimetric synthetic aperture radar (PolSAR) image classification is a vital application in remote sensing image processing. In general, PolSAR image classification is a high-dimensional nonlinear mapping problem. Methods based on sparse representation and deep learning have shown great potential for PolSAR image classification. Therefore, a novel PolSAR image classification method based on multilayer projective dictionary pair learning (MDPL) and sparse autoencoder (SAE) is proposed in this paper. First, MDPL is used to extract features, which are highly abstract. Second, SAE is used to adaptively capture the nonlinear relationships among the elements of the feature vectors. Three PolSAR images are used to test the effectiveness of our method. Compared with several state-of-the-art methods, our method achieves very competitive results in PolSAR image classification.

Christopher R. Mannix;David P. Belcher;Paul S. Cannon; "Measurement of Ionospheric Scintillation Parameters From SAR Images Using Corner Reflectors," vol.55(12), pp.6695-6702, Dec. 2017. Space-based low-frequency (L-band and below) synthetic aperture radar (SAR) is affected by the ionosphere. In particular, the phase scintillation causes the sidelobes to rise in a manner that can be predicted by an analytical theory of the point spread function (PSF). In this paper, the results of an experiment, in which a 5 m corner reflector on Ascension Island was repeatedly imaged by PALSAR-2 in the spotlight mode, are described. Many examples of the effect of scintillation on the SAR PSF were obtained, and all fit the theoretical model. This theoretical model of the PSF has then been used to determine two ionospheric turbulence parameters, p and C_kL, from the SAR PSF. The values obtained have been compared with those obtained from simultaneous GPS measurements. Although the comparison shows that the two measures are strongly correlated, the differing spatial and temporal scales of SAR and GPS make exact comparison difficult.

Prabu Dheenathayalan;Miguel Caro Cuenca;Peter Hoogeboom;Ramon F. Hanssen; "Small Reflectors for Ground Motion Monitoring With InSAR," vol.55(12), pp.6703-6712, Dec. 2017. In recent years, synthetic aperture radar interferometry has become a recognized geodetic tool for observing ground motion. For monitoring areas with a low density of coherent targets, artificial corner reflectors (CRs) are usually introduced. The required size of a reflector depends on radar wavelength and resolution and on the required deformation accuracy. CRs have traditionally been used to provide a high signal-to-clutter ratio (SCR). However, large dimensions can make the reflector bulky and difficult to install and maintain. Furthermore, if a large number of reflectors are needed for long infrastructure, such as vegetation-covered dikes, the total price of the reflectors can become unaffordable. On the other hand, small reflectors have the advantage of easy installation and low cost. In this paper, we design and study the use of small reflectors with low SCR for ground motion monitoring. In addition, we propose a new closed-form expression to estimate the interferometric phase precision of resolution cells containing a (strong or weak) point target and clutter. Through experiments, we demonstrate that the small reflectors can also deliver displacement estimates with an accuracy of a few millimeters. To achieve this, we apply a filtering method for reducing clutter noise.

Jun Su Kim;Konstantinos P. Papathanassiou;Hiroatsu Sato;Shaun Quegan; "Detection and Estimation of Equatorial Spread F Scintillations Using Synthetic Aperture Radar," vol.55(12), pp.6713-6725, Dec. 2017. A significant amount of the data acquired by sun-synchronous space-borne low-frequency synthetic aperture radars (SARs) through the postsunset equatorial sector is distorted by ionospheric scintillations due to the presence of plasma irregularities and their zonal and vertical drift. In the focused SAR images, the distortions due to the postsunset equatorial ionospheric scintillations appear in the form of amplitude and/or phase “stripe” patterns of high spatial frequency aligned to the projection of the geomagnetic field onto the SAR image plane. In this paper, a methodology to estimate the height and the drift velocity of the scintillations from the “stripe” patterns detected in the SAR images is proposed. The analysis is based on the fact that the zonal and vertical drift of the plasma irregularities are, at the equatorial zone, perpendicular to the geomagnetic field, which is aligned almost parallel to the orbit. The methodology takes advantage of the time lapse and change of imaging geometry across azimuth subapertures. The obtained height estimates agree well with the reference measurements and independent estimates reported in the literature, while the drift velocities appear slightly overestimated. This can be attributed to a suboptimum geometry configuration but also to a decoupling of the ambient ionosphere and the plasma irregularities.

Blake M. Rankin;Joseph Meola;Michael T. Eismann; "Spectral Radiance Modeling and Bayesian Model Averaging for Longwave Infrared Hyperspectral Imagery and Subpixel Target Identification," vol.55(12), pp.6726-6735, Dec. 2017. Hyperspectral imagery (HSI) exploitation typically requires spectral signatures for target detection and identification algorithms. As the longwave infrared (LWIR) region of the electromagnetic spectrum is dominated by thermal emission, spectral radiance measurements are influenced by object temperature, and thus, estimates of target temperature may be necessary for emissivity retrieval to support these algorithms. Therefore, lack of accurate temperature information poses a significant challenge for HSI target detection and identification. Previous studies have demonstrated LWIR hyperspectral unmixing in both radiance and emissivity domains using in-scene target signatures. Here, a radiance-domain LWIR material identification algorithm for subpixel target identification of solid materials is developed by combining spectral radiance and linear mixing models with Bayesian model averaging. Application to experimental LWIR HSI illustrates that the algorithm effectively distinguishes between solid materials with a high degree of spectral similarity and reduces the probability of false alarms by at least one order of magnitude over a standard adaptive coherence estimator detector. Limits of identification are inferred from the imagery and found to depend on material type, target size, and target geometry. For the sensor and materials in this paper, the results imply that targets of nominally 5 m2 in size with strong spectral features can be identified for ground sampling distances (GSDs) on the order of 5–10 m (with abundances as low as ~10%) whereas blackbody-like materials are difficult to distinguish for GSDs larger than approximately 3 m.

Rayn Sakaguchi;Kenneth D. Morton;Leslie M. Collins;Peter A. Torrione; "A Comparison of Feature Representations for Explosive Threat Detection in Ground Penetrating Radar Data," vol.55(12), pp.6736-6745, Dec. 2017. The automatic detection of buried threats in ground penetrating radar (GPR) data is an active area of research due to GPR’s ability to detect both metal and nonmetal subsurface objects. Recent work on algorithms designed to distinguish between threats and nonthreats in GPR data has utilized computer vision methods to advance the state-of-the-art detection and discrimination performance. Feature extractors, or descriptors, from the computer vision literature have exhibited excellent performance in representing 2-D GPR image patches and allow for robust classification of threats from nonthreats. This paper aims to perform a broad study of feature extraction methods in order to identify characteristics that lead to improved classification performance under controlled conditions. The results presented in this paper show that gradient-based features, such as the edge histogram descriptor and the scale invariant feature transform, provide the most robust performance across a large and varied data set. These results indicate that various techniques from the computer vision literature can be successfully applied to target detection in GPR data and that more advanced techniques from the computer vision literature may provide further performance improvements.
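To make the gradient-based features that this study found most robust more concrete, the sketch below computes the simplest member of that family, a magnitude-weighted orientation histogram in the spirit of HOG and the edge histogram descriptor. It is a generic NumPy illustration on a synthetic patch, not the paper's actual feature extractors or data.

```python
import numpy as np

def gradient_histogram(patch, nbins=8):
    """Magnitude-weighted histogram of unsigned gradient orientations for a
    2-D patch: a minimal HOG/EHD-style gradient descriptor (illustrative
    sketch only, not the paper's exact implementations)."""
    gy, gx = np.gradient(patch.astype(float))  # finite-difference gradients
    mag = np.hypot(gx, gy)                     # gradient magnitude per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # fold orientation to [0, pi)
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)         # L1-normalized descriptor

# A pure horizontal intensity ramp concentrates all gradient energy in the
# first orientation bin.
patch = np.tile(np.arange(16.0), (16, 1))
h = gradient_histogram(patch)
```

In a detection pipeline such descriptors would be computed over local cells of each GPR image patch and concatenated before classification.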

Alexis A. Mouche;Bertrand Chapron;Biao Zhang;Romain Husson; "Combined Co- and Cross-Polarized SAR Measurements Under Extreme Wind Conditions," vol.55(12), pp.6746-6755, Dec. 2017. During summer 2016, the European Space Agency (ESA) set up the Satellite Hurricane Observations Campaign, a campaign dedicated to hurricane observations with Sentinel-1 synthetic aperture radar (SAR) in both vertical-vertical (VV) and vertical-horizontal (VH) polarizations acquired in wide swath modes. Among the 70 Sentinel-1 passes scheduled by the ESA mission planning team, more than 20 observations over hurricane eyes were acquired, and tropical cyclones were captured at different development stages. This enables us to detail the difference in sensitivity of the VH and VV normalized radar cross section (NRCS) to intense ocean surface winds. The sensitivity of the VH NRCS computed at 3-km resolution is found to be more than 3.5 times larger than that in VV. Taking advantage of the high resolution of SAR, we also show that decreasing the resolution (up to 25 km) does not dramatically change the sensitivity difference between the VV and VH polarizations. For wind speeds larger than 25 m/s, a new geophysical model function (MS1A) is proposed to interpret the cross-polarized signal. Both channels are then combined to retrieve ocean surface wind vectors. The SAR winds are further compared at 40-km resolution against L-band soil moisture active passive mission (SMAP) radiometer winds with co-locations less than 30 min apart. Overall, excellent consistency is found between SMAP and these new SAR winds. This paper opens perspectives for MetOp-SG SCA, the next-generation C-band scatterometer with co- and cross-polarization capability.

Massimo Donelli;Federico Viani; "Remote Inspection of the Structural Integrity of Engineering Structures and Materials With Passive MST Probes," vol.55(12), pp.6756-6766, Dec. 2017. This paper presents a method for the remote inspection of the structural integrity of engineering structures and materials, based on passive modulated scattering probes. In particular, a set of passive modulated scattering technique (MST) probes with diagnostic capabilities is embedded in structures, such as reinforced concrete slabs, with the objective of detecting water infiltration, chemical deterioration such as carbonation, and damage to the material structure. An external reader is used to provide the interrogating electromagnetic wave and to receive the signal generated by the MST probes. The material/structure integrity can be retrieved from the signal retransmitted by the probes. A set of preliminary experiments has been carried out to assess the potential of the method and to demonstrate how this system can be implemented for practical applications. The obtained results are quite promising.

Dingsheng Hu;Stian Normann Anfinsen;Xiaolan Qiu;Anthony Paul Doulgeris;Bin Lei; "Unsupervised Mixture-Eliminating Estimation of Equivalent Number of Looks for PolSAR Data," vol.55(12), pp.6767-6779, Dec. 2017. This paper addresses the impact of mixtures between classes on equivalent number of looks (ENL) estimation. We propose an unsupervised ENL estimator for polarimetric synthetic aperture radar (PolSAR) data, which is based on small-sample estimates but incorporates a mixture-eliminating (ME) procedure to automatically assess the uniformity of the estimation windows. A statistical feature derived from a combination of linear and logarithmic moments is investigated and adopted in the procedure, as it has different mean values for samples from uniform and nonuniform windows. We introduce an approach to extract the approximate sampling distribution of this test statistic for uniform windows. Then, the detection is conducted by a hypothesis test with adaptive thresholds determined by a nonuniformity ratio. Finally, experiments are performed on both simulated and real SAR data. The capability of the unsupervised ME procedure is verified with simulated data. In the real data experiments, the ENL estimates of Flevoland and San Francisco PolSAR images are analyzed, demonstrating the robustness of the proposed ENL estimator for SAR scenes of different complexities.
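For orientation, the classical building block that ME-style methods refine is the moment-based ENL estimator, which over a uniform area reduces to mean squared over variance of the intensity. The sketch below verifies it on simulated multilook speckle; it is a textbook single-channel estimator, not the paper's unsupervised PolSAR procedure.

```python
import numpy as np

def enl_moment(intensity):
    """Classical moment-based ENL estimate for single-channel intensity over
    a uniform area: ENL = mean^2 / variance (exact for gamma-distributed
    multilook speckle)."""
    m = intensity.mean()
    return m * m / intensity.var()

# Simulate 4-look speckle: each sample is the average of 4 independent
# unit-mean exponential (single-look) intensities.
rng = np.random.default_rng(0)
looks = rng.exponential(scale=1.0, size=(100_000, 4)).mean(axis=1)
enl = enl_moment(looks)  # should recover a value near 4
```

Mixtures of classes inside the estimation window inflate the variance and bias this estimator low, which is precisely the failure mode the paper's ME procedure is designed to detect and exclude.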

Alexander Gruber;Wouter Arnoud Dorigo;Wade Crow;Wolfgang Wagner; "Triple Collocation-Based Merging of Satellite Soil Moisture Retrievals," vol.55(12), pp.6780-6792, Dec. 2017. We propose a method for merging soil moisture retrievals from spaceborne active and passive microwave instruments based on weighted averaging that takes into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from triple collocation analysis (TCA). In regions where TCA is deemed unreliable, we use correlation significance levels (<inline-formula> <tex-math notation="LaTeX">$p$ </tex-math></inline-formula>-values) as an indicator of retrieval quality to decide whether to use active data only, passive data only, or an unweighted average. We apply the proposed merging scheme to active retrievals from the advanced scatterometer and passive retrievals from the Advanced Microwave Scanning Radiometer—Earth Observing System, using Global Land Data Assimilation System-Noah to complement the triplet required for TCA. The merged time series is evaluated against soil moisture estimates from ERA-Interim/Land and in situ measurements from the International Soil Moisture Network, using the European Space Agency’s (ESA’s) current Climate Change Initiative—Soil Moisture (ESA CCI SM) product version v02.3 as the benchmark merging scheme. Results show that the <inline-formula> <tex-math notation="LaTeX">$p$ </tex-math></inline-formula>-value classification provides a robust basis for deciding whether to use active data alone, passive data alone, or an unweighted average in cases where relative weights cannot be estimated reliably, and that the weights estimated from TCA outperform, in almost all cases, the ternary decision upon which ESA CCI SM v02.3 is based. The proposed method forms the basis for the new ESA CCI SM product version v03.x and higher.
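The core of such a scheme can be sketched with the standard covariance notation of triple collocation, which recovers the error variance of each of three collocated data sets under the usual assumptions (zero-mean errors, mutually uncorrelated and uncorrelated with the signal), followed by inverse-error-variance merging weights. The data below are synthetic stand-ins; the paper's actual processing chain is not reproduced.

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Covariance-notation triple collocation analysis (TCA): error variances
    of three collocated estimates of the same signal, assuming independent
    zero-mean errors uncorrelated with the signal."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic triplet standing in for active, passive, and model soil moisture,
# with true error variances 0.09, 0.25, and 0.49.
rng = np.random.default_rng(1)
truth = rng.normal(size=50_000)
x = truth + rng.normal(scale=0.3, size=truth.size)
y = truth + rng.normal(scale=0.5, size=truth.size)
z = truth + rng.normal(scale=0.7, size=truth.size)
ex, ey, ez = tc_error_variances(x, y, z)

# Least-squares merging of the first two data sets: weights inversely
# proportional to the estimated error variances.
wx = (1 / ex) / (1 / ex + 1 / ey)
wy = 1.0 - wx
merged = wx * x + wy * y
```

Where the TCA estimates are unreliable, the paper falls back on a p-value-based choice between the inputs instead of these weights.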

Seunghyun Son;Menghua Wang; "Ice Detection for Satellite Ocean Color Data Processing in the Great Lakes," vol.55(12), pp.6793-6804, Dec. 2017. Satellite remote-sensing data are essential for monitoring and quantifying water properties in the Great Lakes, providing useful monitoring and management tools for understanding water optical, biological, and ecological processes and phenomena. However, during the winter season, large parts of the Great Lakes are often covered by ice, which can cause significant uncertainties in satellite-measured water quality products. Although existing radiance-based ice-detection algorithms for satellite ocean color data processing can eliminate most ice pixels in a region, significant errors remain due to misidentification of ice-contaminated pixels, particularly in thin ice-covered regions. Therefore, it is necessary to improve the ice-detection methods for satellite ocean color data processing in the Great Lakes. In this paper, impacts of ice contamination on satellite-derived ocean color products in the Great Lakes are investigated, and a refined regional ice-detection algorithm, based on the radiance spectra and the normalized water-leaving radiance at the wavelength of 551 nm, <inline-formula> <tex-math notation="LaTeX">${nL}_{w}$ </tex-math></inline-formula>(551), is developed and assessed for satellite ocean color data processing in the Great Lakes. Results show that this proposed ice-detection method can reasonably identify ice-contaminated pixels, including those in very thin ice-covered regions, and provide accurate satellite ocean color products for the winter season in the Great Lakes.

Yiting Tao;Miaozhong Xu;Fan Zhang;Bo Du;Liangpei Zhang; "Unsupervised-Restricted Deconvolutional Neural Network for Very High Resolution Remote-Sensing Image Classification," vol.55(12), pp.6805-6823, Dec. 2017. As the acquisition of very high resolution (VHR) satellite images becomes easier owing to technological advancements, ever more stringent requirements are being imposed on automatic image interpretation. Moreover, per-pixel classification has become the focus of research interests in this regard. However, the efficient and effective processing and interpretation of VHR satellite images remain a critical task. Convolutional neural networks (CNNs) have recently been applied to VHR satellite images with considerable success. However, the prevalent CNN models accept input data of fixed sizes and train the classifier using features extracted directly from the convolutional stages or the fully connected layers, which cannot yield pixel-to-pixel classifications. Moreover, training a CNN model requires large amounts of labeled reference data. These are challenging to obtain because per-pixel labeled VHR satellite images are not openly accessible. In this paper, we propose a framework called the unsupervised-restricted deconvolutional neural network (URDNN). It can solve these problems by learning an end-to-end, pixel-to-pixel classification and handling VHR classification using a fully convolutional network and a small number of labeled pixels. In URDNN, supervised learning is always under the restriction of unsupervised learning, which serves to constrain and aid supervised training in learning more generalized and abstract features. To some degree, this reduces the problems of overfitting and undertraining, which arise from the scarcity of labeled training data, and yields better classification results using fewer training samples. It improves the generality of the classification model.
We tested the proposed URDNN on images from the Geoeye and Quickbird sensors and obtained satisfactory results, with the highest overall accuracies (OAs) of 0.977 and 0.989, respectively. Experiments showed that the combined effects of additional kernels and stages may have produced better results, and the two-stage URDNN consistently produced more stable results. We compared URDNN with four other methods and found that, with a small ratio of labeled data items, it yielded the highest and most stable results, whereas the accuracy values of the other methods decreased quickly. For categories with fewer training pixels, the accuracy of the other methods was considerably worse than that of URDNN, with the largest difference reaching almost 10%. Hence, the proposed URDNN can successfully handle VHR image classification using a small number of labeled pixels. Furthermore, it is more effective than state-of-the-art methods.

Paul Pincus;Mark Preiss;Alvin S. Goh;Douglas Gray; "Polarimetric Calibration of Circularly Polarized Synthetic Aperture Radar Data," vol.55(12), pp.6824-6839, Dec. 2017. Two novel aspects of polarimetric calibration for fully polarimetric imaging radar systems are addressed. First, the radar system model is formulated in the context of two generic transmitter designs, either a single amplifier followed by a high-power switch or a low-power switch followed by two amplifiers. In the latter case, it is shown that a particular factorization of the polarimetric distortion matrix leads to a significant simplification of the cross-talk representation, from the standard four parameters to two reciprocal parameters, one for each of the antennas. Various system models from the literature are thus placed in a unified framework. Second, calibration techniques for circularly polarized antennas are derived, using either corner reflectors or clutter. However, where standard linear-basis algorithms estimate the cross-talk by its first-order distortion of reflection-symmetric clutter, no equivalent algorithm has been found for the circular basis; indeed, it is shown that the distortion caused, to first-order, by circular-basis cross-talk does not permit the individual cross-talk parameters to be identified. The calibration techniques are applied to fully polarimetric data acquired by the Ingara L-band radar using left- and right-polarized helical antennas.

Yansong Duan;Xiao Ling;Yongjun Zhang;Zuxun Zhang;Xinyi Liu;Kun Hu; "A Simple and Efficient Method for Radial Distortion Estimation by Relative Orientation," vol.55(12), pp.6840-6848, Dec. 2017. In order to solve the accuracy problem caused by the lens distortions of nonmetric digital cameras mounted on unmanned aerial vehicles, the estimation of initial values for the lens distortion must be studied. Because radial distortion is the most significant component of lens distortion, a simple and efficient method for radial distortion estimation is proposed in this paper. Starting from the coplanarity equation, the geometric characteristics of the relative orientation equations are explored. This paper further proves that the radial lens distortion can be linearly estimated in a continuous relative orientation model. The proposed procedure only requires a sufficient number of point correspondences between two or more images obtained by the same camera; thus, it is suitable for natural scenes where the lack of straight lines and calibration objects precludes most previous techniques. Both computer simulations and real data have been used to test the proposed method; the experimental results show that the proposed method is easy to use and flexible.
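For readers unfamiliar with the underlying model, the one-parameter radial distortion that such methods estimate can be sketched as below. This is the generic polynomial model and its fixed-point inversion; the paper's actual contribution, estimating the coefficient linearly inside relative orientation, is not reproduced here, and the coordinates and coefficient are illustrative.

```python
import numpy as np

def distort(xy, k1):
    """Apply one-parameter radial distortion about the principal point:
    x_d = x * (1 + k1 * r^2), coordinates in normalized image units."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2)

def undistort(xy_d, k1, iters=8):
    """Invert the model by fixed-point iteration (converges quickly for
    mild distortion)."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy**2, axis=1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2)
    return xy

# Round trip: distorting then undistorting recovers the original points.
pts = np.array([[0.3, 0.4], [-0.5, 0.2], [0.0, 0.0]])
round_trip = undistort(distort(pts, k1=-0.1), k1=-0.1)
```

A negative `k1` here corresponds to barrel distortion, the common case for wide-angle UAV lenses.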

Manuel F. Rios Gaona;Aart Overeem;A. M. Brasjen;Jan Fokke Meirink;Hidde Leijnse;Remko Uijlenhoet; "Evaluation of Rainfall Products Derived From Satellites and Microwave Links for The Netherlands," vol.55(12), pp.6849-6859, Dec. 2017. High-resolution rainfall inputs are important in the hydrological sciences, especially for urban hydrology. This is mainly because heavy rainfall-induced events such as flash floods can have a tremendous impact on society given their destructive nature and the short time scales in which they develop. With the development of technologies such as radars, satellites, and (commercial) microwave links (CMLs), the spatiotemporal resolutions at which rainfall can be retrieved are becoming ever higher. For the land surface of The Netherlands, we evaluate here four rainfall products, i.e., link-derived rainfall maps, Integrated Multisatellite Retrievals for Global Precipitation Measurement (IMERG) Final Run (IMERG—Global Precipitation Measurement mission), Meteosat Second Generation Cloud Physical Properties (CPP), and Nighttime Infrared Precipitation Estimation (NIPE). All rainfall products are compared against gauge-adjusted radar data, which are considered the ground truth given their high quality, resolution, and availability. The evaluation is done for seven months at 30-min and 24-h resolutions. Overall, we found that link-derived rainfall maps outperform the satellite products and that IMERG outperforms CPP and NIPE. We also explore the potential of a CML network to validate satellite rainfall products. Usually, satellite-derived products are validated against radar or rain gauge networks. If data from CMLs were available, they would be highly relevant for ground validation in areas with scarce rainfall observations, since link-derived rainfall is truly independent of satellite-derived rainfall.
The large worldwide coverage of CMLs potentially offers a more extensive platform for the ground validation of satellite estimates over the land surface of the Earth.

Wei Wei;Lei Zhang;Chunna Tian;Antonio Plaza;Yanning Zhang; "Structured Sparse Coding-Based Hyperspectral Imagery Denoising With Intracluster Filtering," vol.55(12), pp.6860-6876, Dec. 2017. Sparse coding can exploit the intrinsic sparsity of hyperspectral images (HSIs) by representing an image as a group of sparse codes. This strategy has been shown to be effective for HSI denoising. However, how to effectively exploit the structural information within the sparse codes (structured sparsity) has not been widely studied. In this paper, we propose a new method for HSI denoising, which uses structured sparse coding and intracluster filtering. First, due to the high spectral correlation, the HSI is represented as a group of sparse codes by projecting each spectral signature onto a given dictionary. Then, we cast the structured sparse coding into a covariance matrix estimation problem. A latent variable-based Bayesian framework is adopted to learn the covariance matrix, the sparse codes, and the noise level simultaneously from noisy observations. Although the considered strategy is able to perform denoising through accurately reconstructing spectral signatures, an inconsistent recovery of sparse codes may corrupt the spectral similarity in each spatially homogeneous cluster within the scene. To address this issue, an intracluster filtering scheme is further employed to restore the spectral similarity in each spatial cluster, which results in better denoising results. Our experimental results, conducted using both simulated and real HSIs, demonstrate that the proposed method outperforms several state-of-the-art denoising methods.
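As a minimal illustration of the sparse-coding step (projecting a signature onto a dictionary to obtain a sparse code), the sketch below uses plain orthogonal matching pursuit on synthetic data. This is a generic stand-in: the paper itself uses a Bayesian structured model, not OMP, and the dictionary and signal here are invented.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy sparse code of y over a dictionary
    D with unit-norm columns, using at most k nonzero coefficients."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # refit all selected atoms jointly and update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Synthetic dictionary and a 2-sparse "spectral signature" to encode.
rng = np.random.default_rng(4)
D = rng.normal(size=(30, 50))
D /= np.linalg.norm(D, axis=0)
truth = np.zeros(50)
truth[[3, 17]] = [1.0, -0.8]
code = omp(D, D @ truth, k=2)
```

Structured sparse coding, as in the paper, goes further by modeling correlations between the code coefficients (via a learned covariance) rather than treating them independently as OMP does.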

R. Renju;C. Suresh Raju;M. K. Mishra;N. Mathew;K. Rajeev;K. Krishna Moorthy; "Atmospheric Boundary Layer Characterization Using Multiyear Ground-Based Microwave Radiometric Observations Over a Tropical Coastal Station," vol.55(12), pp.6877-6882, Dec. 2017. Continuous ground-based microwave radiometer profiler (MRP) observations of lower atmospheric temperature and humidity profiles are used to investigate the diurnal evolution of the atmospheric boundary layer height (BLH) over a tropical coastal station. The BLH estimated from the MRP observations is compared with concurrent and collocated measurements of mixing layer height from a micropulse lidar and with the BLH derived from radiosonde ascents. The monthly mean BLH derived from the multiyear (2010–2013) MRP observations exhibits a strong diurnal variation, with the highest values around local afternoon (~12:00–15:00 IST) and the lowest (~100–200 m) during nighttime. The daytime convective BLH is at a maximum during the premonsoon season (March–May), with the peak value (~1300 m) occurring in April, and at a minimum (~600 m) in July. This paper demonstrates the potential of MRP observations for investigating the continuous diurnal evolution of the BLH over a tropical coastal region, manifested by a thermal internal boundary layer (TIBL), at a much better time resolution, which is essential for understanding the rapid growth of the boundary layer and the TIBL during the forenoon period.

Hongkun Li;Joel T. Johnson; "On the Amplitude Distributions of Bistatic Scattered Fields From Rough Surfaces," vol.55(12), pp.6883-6892, Dec. 2017. Non-Rayleigh distributed radar clutter is widely reported in studies of radar scattering from sea and land surfaces. Existing models of scattered field amplitude distributions have been developed primarily through empirical fits to the statistics of radar backscatter measurements. In contrast, this paper investigates a physics-based approach to determine the amplitude distributions of fields scattered from rough surfaces using Monte Carlo simulations and analytical methods, for both backscattering and bistatic configurations. The rough surface is represented using a “two-scale” model. An individual surface facet contains “small-scale” roughness, for which scattered fields are evaluated using the second-order small slope approximation. Individual surface facets are tilted by the slopes of the “large-scale” roughness in a given observation. The results show that non-Rayleigh amplitude distributions are obtained when tilting is performed, and that the departure from the Rayleigh distribution becomes more significant as the variance of the tilting slope increases. Further analysis shows that this departure results from variations in the mean scattering amplitude from a facet (the texture) as tilting occurs. The distribution of the texture is studied and compared with existing models. Finally, the distribution of the scattered field amplitude is modeled through the compound Gaussian model, first using the distribution of the texture, and then in terms of the probability density function of the tilting slopes (which avoids requiring knowledge of the texture distribution). The results from these two methods are in good agreement, and both agree well with the Monte Carlo simulations.
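The compound Gaussian idea invoked here, complex Gaussian speckle whose power is modulated by a random "texture", is easy to demonstrate numerically. The sketch below uses a unit-mean gamma texture (giving a K-distributed amplitude) as a generic stand-in for the facet-tilt modulation; the shape parameter and thresholds are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Rayleigh baseline: amplitude of unit-power complex Gaussian speckle.
speckle = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

# Compound Gaussian: modulate the speckle power by a unit-mean gamma
# "texture", standing in for facet-to-facet tilt modulation; the resulting
# amplitude is K-distributed.
nu = 2.0                                    # texture shape (illustrative)
texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n)
amplitude = np.sqrt(texture) * np.abs(speckle)

# The texture-modulated amplitude has a heavier tail than Rayleigh:
# exceedances of a high threshold are more frequent.
rayleigh_tail = float(np.mean(np.abs(speckle) > 2.0))
k_tail = float(np.mean(amplitude > 2.0))
```

Decreasing `nu` (spikier texture) strengthens the departure from Rayleigh, mirroring the paper's observation that larger tilting-slope variance produces a larger departure.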

Leandro Pralon;Gabriel Vasile;Mauro Dalla Mura;Jocelyn Chanussot; "Evaluation of the New Information in the ${H}/\alpha$ Feature Space Provided by ICA in PolSAR Data Analysis," vol.55(12), pp.6893-6909, Dec. 2017. The Cloude and Pottier <inline-formula> <tex-math notation="LaTeX">$H/\alpha $ </tex-math></inline-formula> feature space is one of the most widely employed methods for unsupervised polarimetric synthetic aperture radar (PolSAR) data classification based on incoherent target decomposition (ICTD). The method can be split into two stages: the retrieval of the canonical scattering mechanisms present in an image cell and their parameterization. The association of the coherence matrix eigenvectors with the most dominant scattering mechanisms in the analyzed pixel introduces unfeasible regions in the <inline-formula> <tex-math notation="LaTeX">$H/\alpha $ </tex-math></inline-formula> plane. This constraint can compromise the performance of detection, classification, and geophysical parameter inversion algorithms that are based on the investigation of this feature space. The independent component analysis (ICA), recently proposed as an alternative to eigenvector decomposition, provides promising new information to better interpret non-Gaussian heterogeneous clutter (inherent to high-resolution SAR systems) in the frame of polarimetric ICTDs. Not constrained to any orthogonality between the estimated scattering mechanisms that compose the clutter under analysis, ICA does not introduce any unfeasible region in the <inline-formula> <tex-math notation="LaTeX">$H/\alpha $ </tex-math></inline-formula> plane, increasing the range of possible natural phenomena depicted in the aforementioned feature space. This paper addresses the potential of the new information provided by the ICA as an ICTD method with respect to the Cloude and Pottier <inline-formula> <tex-math notation="LaTeX">$H/\alpha $ </tex-math></inline-formula> feature space.
A PolSAR data set acquired in October 2006 by the E-SAR system over the upper part of the Tacul glacier at the Chamonix Mont Blanc test site, France, and a RAMSES X-band image acquired over Brétigny, France, are taken into consideration to investigate the characteristics of pixels that may fall outside the feasible regions in the <inline-formula> <tex-math notation="LaTeX">$H/\alpha $ </tex-math></inline-formula> plane that arise when the eigenvector approach is employed.
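The eigenvector stage that defines the Cloude-Pottier feature space can be sketched with NumPy: decompose the coherency matrix, form pseudo-probabilities from the eigenvalues, and compute the entropy (log base 3) and the probability-weighted mean alpha angle. The matrix below is an invented, surface-like example; real pixels would use the locally estimated 3x3 coherency matrix.

```python
import numpy as np

def h_alpha(T):
    """Entropy H (log base 3) and mean alpha angle in degrees from a 3x3
    Hermitian coherency matrix, following the Cloude-Pottier
    eigen-decomposition (the parameterization stage of the ICTD)."""
    w, v = np.linalg.eigh(T)                 # real eigenvalues, ascending
    w = np.clip(w.real, 1e-12, None)
    p = w / w.sum()                          # pseudo-probabilities
    H = -np.sum(p * np.log(p)) / np.log(3)   # polarimetric entropy in [0, 1]
    # alpha_i from the first component of each eigenvector
    alpha_i = np.degrees(np.arccos(np.clip(np.abs(v[0, :]), 0.0, 1.0)))
    return H, float(np.sum(p * alpha_i))

# Surface-like example: one dominant low-alpha mechanism.
T = np.diag([0.9, 0.05, 0.05]).astype(complex)
H, alpha_mean = h_alpha(T)
```

The unfeasible regions discussed in the paper arise because, under this eigen-decomposition, not every (H, alpha) pair can be produced by a valid set of orthogonal mechanisms; ICA-based decompositions drop the orthogonality constraint and therefore do not inherit those regions.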

Wei Yang;Jie Chen;Wei Liu;Pengbo Wang;Chunsheng Li; "A Modified Three-Step Algorithm for TOPS and Sliding Spotlight SAR Data Processing," vol.55(12), pp.6910-6921, Dec. 2017. There are two challenges for the efficient processing of both sliding spotlight and terrain observation by progressive scans (TOPS) data using full-aperture algorithms. First, to overcome Doppler spectrum aliasing, zero-padding is required for azimuth upsampling, increasing the computational burden; second, the azimuth deramp operation for avoiding synthetic aperture radar (SAR) image folding leads to an azimuth time shift along the range dimension, and in turn to the appearance of ghost targets and reduced azimuth resolution at the scene edge, especially in the wide-swath case. In this paper, a novel three-step algorithm is proposed for processing sliding spotlight and TOPS data. In the first step, a modified derotation is derived in detail based on the chirp z-transform (CZT), avoiding zero-padding; then, the chirp scaling algorithm kernel is adopted for precise focusing in the second step; and in the third step, instead of the traditional range-independent deramp, a range-dependent deramp is applied to compensate for the time shift. Moreover, the SAR image geometry distortion caused by the range-dependent deramp is corrected by employing a range-dependent CZT. Experimental results based on both simulated and real data are provided to validate the proposed algorithm.
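A toy NumPy illustration of why a derotation/deramp step removes the need for azimuth zero-padding: multiplying a linear-FM azimuth history by the conjugate reference chirp collapses its broad Doppler spectrum to a single tone, so the azimuth FFT no longer aliases. The sampling rate and Doppler rate below are invented for the sketch and unrelated to any real SAR mode.

```python
import numpy as np

fs = 1024.0                        # azimuth sampling rate, Hz (illustrative)
t = np.arange(-0.5, 0.5, 1 / fs)   # slow time, s
ka = 400.0                         # Doppler rate, Hz/s (illustrative)

# Linear-FM azimuth history and its deramped (derotated) version.
signal = np.exp(1j * np.pi * ka * t**2)
deramped = signal * np.exp(-1j * np.pi * ka * t**2)

def occupied_bins(x, frac=0.5):
    """Number of FFT bins whose power exceeds `frac` of the peak power."""
    p = np.abs(np.fft.fft(x))**2
    return int(np.sum(p > frac * p.max()))
```

The chirp occupies roughly `ka * T` Hz of Doppler bandwidth, while the deramped signal concentrates into a single bin; the paper's modified derotation achieves the equivalent spectrum compaction via a CZT, and its range-dependent deramp additionally removes the time shift that a single reference chirp leaves across range.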

Yan Wang;Jing-Wen Li;Jian Yang; "Wide Nonlinear Chirp Scaling Algorithm for Spaceborne Stripmap Range Sweep SAR Imaging," vol.55(12), pp.6922-6936, Dec. 2017. The spaceborne stripmap range sweep synthetic aperture radar (SS-RSSAR) is a new-concept spaceborne SAR system that images the region of interest (ROI) with ROI-oriented strips, which, unlike those of the traditional spaceborne SAR, need not be parallel to the satellite orbit. SS-RSSAR imaging is a challenging problem because the echoes of a wide region have strong spatial variations, especially in high-squint geometries, and are hard to focus in a single swath. Traditional imaging algorithms could solve this problem by inefficiently dividing an ROI into many subswaths for separate processing. In this paper, a new wide nonlinear chirp scaling (W-NLCS) algorithm is proposed to efficiently image the SS-RSSAR data in a single swath. Compared with the traditional nonlinear chirp scaling algorithm, the W-NLCS algorithm is superior in three major aspects: the nonlinear bulk range migration compensation (RMC), the interpolation-based residual RMC, and the modified azimuth frequency perturbation. Specifically, the interpolation for the residual RMC, the most significant step in achieving wide-swath imaging performance, is innovatively performed in the time domain. The derivation of the W-NLCS algorithm, as well as analyses of its azimuth resolution, accuracy, and complexity, is provided. The presented approach is evaluated by point-target simulations.

Liang Fei;Li Yan;Changhai Chen;Zhiyun Ye;Jiantong Zhou; "OSSIM: An Object-Based Multiview Stereo Algorithm Using SSIM Index Matching Cost," vol.55(12), pp.6937-6949, Dec. 2017. Multiview stereo (MVS) is a crucial process in image-based automatic 3-D reconstruction and mapping applications. In a dense matching process, the matching cost is generally computed between image pairs, making it inefficient due to the large number of stereo pairs. This paper presents a novel object-based MVS algorithm using a structural similarity (SSIM) index matching cost in a coarse-to-fine workflow. As far as we know, this is the first time the SSIM index has been introduced to calculate the matching cost in MVS applications. In contrast to classical stereo methods, the proposed object-based structural similarity (OSSIM) method computes only one depth map for each image. Thus, the efficiency can be greatly improved when the overlap between images is large. To obtain an optimized depth map, winner-take-all and semi-global matching strategies are implemented. Moreover, an object-based multiview consistency checking strategy is also proposed to eliminate wrong matches and perform pixelwise view selection. The proposed method was successfully applied to the close-range Fountain-P11 data set provided by EPFL and the aerial Vaihingen and Zürich data sets provided by the ISPRS. Experimental results demonstrate that the proposed method delivers matches of high completeness and accuracy. For the Vaihingen data set, the correctness and completeness rates were 71.12% and 95.99% with an RMSE of 2.8 GSD. For the Fountain-P11 data set, the proposed method outperformed the other existing methods in terms of the ratio of pixels with errors below 2 cm. Extensive comparison using the Zürich data set shows that it derives results comparable to state-of-the-art software (PhotoScan, Pix4D, and Smart3D) in urban building areas.
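The SSIM matching cost at the heart of OSSIM can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the stabilizing constants follow the common SSIM convention for 8-bit data, and the toy patches are invented.

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """SSIM index between two equally sized grayscale patches."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

patch = np.arange(49, dtype=float).reshape(7, 7)
cost_same = 1.0 - ssim(patch, patch)        # identical patches: cost 0
cost_diff = 1.0 - ssim(patch, patch[::-1])  # dissimilar patches: higher cost
```

Taking 1 - SSIM turns the similarity index into a cost that a matcher can minimize.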

Tianzhu Liu;Yanfeng Gu;Jocelyn Chanussot;Mauro Dalla Mura; "Multimorphological Superpixel Model for Hyperspectral Image Classification," vol.55(12), pp.6950-6963, Dec. 2017. With the development of hyperspectral sensors, we can nowadays easily acquire large amounts of hyperspectral images (HSIs) with very high spatial resolution, which has led to better identification of relatively small structures. Owing to the high spatial resolution, there are far fewer mixed pixels in the HSIs, and the boundaries between categories are much clearer. However, the high spatial resolution also leads to complex and fine geometrical structures and high inner-class variability, which make the classification results very “noisy.” In this paper, we propose a multimorphological superpixel (MMSP) method to extract the spectral and spatial features and address the aforementioned problems. To reduce the difference within the same class and obtain multilevel spatial information, morphological features (extended morphological profiles with multiple structuring elements or extended multi-attribute profiles with multiple attribute filters) are first obtained from the original HSI. After that, the simple linear iterative clustering segmentation method is performed on each morphological feature to acquire the MMSPs. Then, a uniformity constraint is used to merge the MMSPs belonging to the same class, which avoids introducing information from different classes and acquires spatial structures at the object level. Subsequently, mean filtering is utilized to extract the spatial features within and among MMSPs. Finally, base kernels are obtained from the spatial features and the original HSI, and several multiple kernel learning methods are used to obtain the optimal kernel to incorporate into the support vector machine. Experiments conducted on three widely used real HSIs, with comparisons against several well-known methods, demonstrate the effectiveness of the proposed model.

Yiwen Zhou;Roger H. Lang;Emmanuel P. Dinnat;David M. Le Vine; "L-Band Model Function of the Dielectric Constant of Seawater," vol.55(12), pp.6964-6974, Dec. 2017. This paper describes a new model of the seawater dielectric constant as a function of salinity and temperature at L-band. The model function is developed by fitting accurate measurement data taken at 1.413 GHz to a third-order polynomial. The purpose of this study is to provide an accurate model for earth-observing satellites to retrieve seawater salinity from remote sensing data. The development of the model function is presented along with an analysis of the goodness of fit. The model function is then compared with the model functions of Klein–Swift and Meissner–Wentz. Finally, a comparison is made between the salinity retrieved from satellite data and in situ measurements from Argo floats.
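The fitting machinery behind such a model function is ordinary least squares over a third-order polynomial basis in salinity and temperature. The sketch below is illustrative only: the synthetic "truth" stands in for the 1.413-GHz measurements and is not the paper's model or coefficients.

```python
import numpy as np

# Synthetic (salinity psu, temperature degC) grid standing in for the
# 1.413-GHz measurements; the "truth" below is invented for the demo.
S, T = np.meshgrid(np.arange(30.0, 40.0, 2.0), np.arange(0.0, 31.0, 5.0))
S, T = S.ravel(), T.ravel()
eps = 70.0 - 0.3 * S + 0.5 * T - 0.01 * T ** 2

def design(S, T):
    """All monomials in S and T up to third order."""
    return np.column_stack([np.ones_like(S), S, T, S * T, S ** 2, T ** 2,
                            S ** 3, T ** 3, S ** 2 * T, S * T ** 2])

# Least-squares fit of the third-order model and its goodness of fit
coef, *_ = np.linalg.lstsq(design(S, T), eps, rcond=None)
rmse = np.sqrt(np.mean((design(S, T) @ coef - eps) ** 2))
```

With real laboratory data, the residual RMSE quantifies the goodness of fit the abstract mentions.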

Marco Chini;Renaud Hostache;Laura Giustarini;Patrick Matgen; "A Hierarchical Split-Based Approach for Parametric Thresholding of SAR Images: Flood Inundation as a Test Case," vol.55(12), pp.6975-6988, Dec. 2017. Parametric thresholding algorithms applied to synthetic aperture radar (SAR) imagery typically require the estimation of two distribution functions, one representing the target class and one its background. These are then used to select the threshold that optimally binarizes the image. In this context, one of the main difficulties in parameterizing these functions originates from the fact that the target class often represents only a small fraction of the image. Under such circumstances, the histogram of the image values is often not obviously bimodal and it becomes difficult, if not impossible, to accurately parameterize the distribution functions. Here we introduce a hierarchical split-based approach that searches for tiles of variable size allowing the parameterization of the distributions of the two classes. The method is integrated into a flood-mapping algorithm in order to evaluate its capacity for parameterizing distribution functions attributed to floodwater and changes caused by floods. We analyzed a data set acquired during a flood event along the Severn River (U.K.) in 2007, comprising moderate-resolution (ENVISAT-WS) and high-resolution (TerraSAR-X) SAR images. The obtained classification accuracies, as well as the similarity of performance levels to a benchmark obtained with an established method based on the manual selection of tiles, indicate the validity of the new method.
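To illustrate the thresholding step on a tile whose histogram is clearly bimodal, the sketch below binarizes toy backscatter values. Note the swap: the paper fits parametric distribution functions to the two classes, whereas this stand-in uses the nonparametric Otsu criterion; the backscatter statistics are invented.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximizing between-class variance (a nonparametric
    stand-in for the two-distribution fit used in the paper)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                # class-0 probability mass
    w1 = 1.0 - w0
    m = np.cumsum(p * centers)       # cumulative first moment
    mu_total = m[-1]
    with np.errstate(invalid="ignore", divide="ignore"):
        between = (mu_total * w0 - m) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

rng = np.random.default_rng(0)
water = rng.normal(-15.0, 1.5, 500)   # toy floodwater backscatter (dB)
land = rng.normal(-7.0, 1.5, 4500)    # toy background backscatter (dB)
t = otsu_threshold(np.concatenate([water, land]))
```

The hierarchical splitting in the paper exists precisely to find tiles like this one, where the two modes are both well represented.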

Xudong Guan;Gaohuan Liu;Chong Huang;Qingsheng Liu;Chunsheng Wu;Yan Jin;Yafei Li; "An Object-Based Linear Weight Assignment Fusion Scheme to Improve Classification Accuracy Using Landsat and MODIS Data at the Decision Level," vol.55(12), pp.6989-7002, Dec. 2017. Landsat satellite images are extensively used in land-use studies due to their relatively high spatial resolution. However, the number of usable data sets is limited by the relatively long revisit interval and phenology effects can significantly reduce classification accuracy. Moderate Resolution Imaging Spectroradiometer (MODIS) images have higher temporal frequency and can provide extra time-series information. However, they are limited in their capability to classify heterogeneous landscapes due to their coarse spatial resolution. Fusion of different data sources is a potential solution for improving land-cover classification. This paper proposes a fusion scheme to combine Landsat and MODIS remote sensing data at the decision level. First, multiresolution segmentations on the two kinds of remote sensing data are performed to identify the landscape objects and are used as fusion units in subsequent steps. Then, fuzzy classifications are applied to each of the two different resolution data sets and the classification accuracies are evaluated. According to the performance of the two data sets in classification evaluation, a simple weight assignment technique based on the weighted sum of the membership of imaged objects is implemented in the final classification decision. The weighting factors are calculated based on a confusion matrix and the heterogeneity of detected land cover. The algorithm is capable of integrating the time-series spectral information of MODIS data with spatial contexts extracted from Landsat data, thus improving the land-cover classification accuracy. The overall classification accuracy using the fusion technique increased by 7.43% and 10.46% compared with the results from the individual Landsat and MODIS data, respectively.
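The weighted-sum decision rule can be illustrated on a single image object. The memberships and per-class weights below are invented; the paper derives its weights from a confusion matrix and landscape heterogeneity.

```python
import numpy as np

# Toy fuzzy memberships of one image object over 3 classes, from two sensors
m_landsat = np.array([0.6, 0.3, 0.1])
m_modis = np.array([0.2, 0.7, 0.1])

# Hypothetical per-class reliability weights for each classifier
# (in the paper these come from confusion-matrix accuracies and heterogeneity)
w_landsat = np.array([0.9, 0.5, 0.8])
w_modis = np.array([0.6, 0.8, 0.7])

# Decision-level fusion: weighted sum of memberships, then argmax
fused = w_landsat * m_landsat + w_modis * m_modis
label = int(np.argmax(fused))
```

Here Landsat alone would pick class 0, but the reliably classified MODIS time series tips the fused decision to class 1.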

Chenhong Sui;Yan Tian;Yiping Xu;Yong Xie; "Weighted Spectral-Spatial Classification of Hyperspectral Images via Class-Specific Band Contribution," vol.55(12), pp.7003-7017, Dec. 2017. Hyperspectral images (HSIs) have evident advantages in image understanding due to their numerous spectral bands and rich spatial information. Hundreds of spectral bands, however, play different roles in contributing to the class-specific classification, so treating each band equally may lead to their underuse or overuse. To address this issue, this paper introduces class-specific band contributions (BCs) into the spectral space and proposes a weighted spectral-spatial classification method for HSIs. In this method, by incorporating the BC, characterized by the F-measure, into the distance-based posterior probability, a weighted spectral posterior probability (WSP) model is established. Furthermore, to exploit the spatial information, the WSP is combined with a spatial consistency constraint via an adaptive tradeoff parameter. Additionally, to obtain the class-dependent F-measures of each band, a semisupervised F-measure prediction method is also developed. Experiments on four hyperspectral data sets are conducted. Experimental results show the superiority of the proposed method over several state-of-the-art methods in terms of three widely used indices.
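The idea of weighting a distance-based posterior by class-specific band contributions can be sketched as follows; the F-measures, class means, and test spectrum are all invented for illustration.

```python
import numpy as np

# Hypothetical per-band, per-class F-measures acting as band weights
F = np.array([[0.9, 0.2, 0.7],    # class 0's F-measure on bands 0..2
              [0.3, 0.8, 0.6]])   # class 1's F-measure on bands 0..2
means = np.array([[1.0, 5.0, 2.0],   # class 0 mean spectrum
                  [4.0, 1.0, 2.5]])  # class 1 mean spectrum
x = np.array([1.2, 4.6, 2.1])        # test pixel spectrum

d2 = (x - means) ** 2             # per-band squared distances to class means
wd = (F * d2).sum(axis=1)         # F-weighted distance per class
post = np.exp(-wd) / np.exp(-wd).sum()  # weighted spectral posterior
label = int(np.argmax(post))
```

Bands with low class-specific F-measure contribute little to the distance, so a noisy or uninformative band cannot dominate the posterior.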

Cunren Liang;Eric J. Fielding;Mong-Han Huang; "Estimating Azimuth Offset With Double-Difference Interferometric Phase: The Effect of Azimuth FM Rate Error in Focusing," vol.55(12), pp.7018-7031, Dec. 2017. Estimating azimuth offset with the double-difference interferometric (DDI) phase, known as multiple-aperture interferometric synthetic aperture radar (InSAR) or spectral diversity, has been increasingly used in recent years to measure azimuth deformation or to accurately coregister a pair of InSAR images. We analyze the effect of frequency modulation (FM) rate error in focusing on the DDI phase, with an emphasis on the azimuth direction. We first comprehensively analyze the errors in various focusing results caused by the FM rate error. We then derive the DDI phase error for different acquisition modes, including stripmap, ScanSAR, and TOPS modes. For stripmap mode, the typical DDI phase error is a range ramp, while for the burst modes (ScanSAR and TOPS) it is an azimuth ramp within a burst. Correction methods for the DDI phase error are suggested for the different acquisition modes.

Dong Chen;Ruisheng Wang;Jiju Peethambaran; "Topologically Aware Building Rooftop Reconstruction From Airborne Laser Scanning Point Clouds," vol.55(12), pp.7032-7052, Dec. 2017. This paper presents a novel topologically aware 2.5-D building modeling methodology from airborne laser scanning point clouds. The building reconstruction process consists of three main steps: primitive clustering, boundary representation, and geometric modeling. In primitive clustering, we propose an enhanced probability density clustering algorithm to cluster the rooftop primitives by taking into account the topological consistency among primitives. In the second step, we employ a novel Voronoi subgraph-based algorithm to seamlessly trace the primitive boundaries. This algorithm guarantees the production of geometric models without crack defects among adjacent primitives. The primitive boundaries are further divided into multiple linear segments, from which the key points are generated. These key points help to form a hybrid representation of the boundary by combining the projected points with part of the original boundary points. The model representation by the hybrid key points is flexible and captures the rooftop details well, generating lightweight and highly regular building models. Finally, we assemble the primitive boundaries to form the topologically correct entities, which are regarded as the basic units for primitive triangulation. The reconstructed models not only have accurate geometry and correct topology but, more importantly, have abundant semantics, by which five levels of building models can be generated in real time. The proposed reconstruction method has been comprehensively evaluated on the Toronto data set in terms of model compactness, multilevel model representation, and geometric accuracy.

Song Zhou;Lei Yang;Lifan Zhao;Guoan Bi; "Quasi-Polar-Based FFBP Algorithm for Miniature UAV SAR Imaging Without Navigational Data," vol.55(12), pp.7053-7065, Dec. 2017. Because of their flexible geometric configuration and trajectory design, time-domain algorithms have become popular for unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) applications. In this paper, a new quasi-polar-coordinate-based fast factorized back-projection (FFBP) algorithm combined with data-driven motion compensation is proposed for miniature UAV-SAR imaging. By utilizing wavenumber decomposition, the analytical spectrum of a quasi-polar grid image is obtained, where the phase errors arising from the trajectory deviations can be conveniently investigated and phase autofocusing can be compatibly incorporated. Different from the conventional FFBP based on a polar coordinate system, the proposed algorithm operates in a quasi-polar coordinate system, where the phase errors become spatially invariant and can be accurately estimated and easily compensated. Moreover, the relationship between phase errors and nonsystematic range cell migration (NsRCM) is revealed according to the analytical image spectrum, based on which the NsRCM correction is developed to further improve the image focusing quality for high-resolution SAR applications. Promising experimental results from raw data experiments of a miniature UAV-SAR test bed are presented and analyzed to validate the advantages of the proposed algorithm.

Wei Zhao;Zhirui Wang;Maoguo Gong;Jia Liu; "Discriminative Feature Learning for Unsupervised Change Detection in Heterogeneous Images Based on a Coupled Neural Network," vol.55(12), pp.7066-7080, Dec. 2017. Driven by application requirements, change detection based on heterogeneous remote sensing images has attracted increasing attention. However, detecting changes between two heterogeneous images is challenging as they cannot be compared in a low-dimensional space. In this paper, we construct an approximately symmetric deep neural network with two sides containing the same number of coupled layers to transform the two images into the same feature space. The two images are connected to the two sides and transformed into the same feature space, in which their features are more discriminative and the difference image can be generated by comparing paired features pixel by pixel. The network is first built with stacked restricted Boltzmann machines, and the parameters are then updated in a special way based on clustering. This special way, motivated by the observation that two heterogeneous images share the same reality in unchanged areas and retain their respective properties in changed areas, shrinks the distance between paired features transformed from unchanged positions and enlarges the distance between paired features extracted from changed positions. It is achieved by introducing two types of labels and updating the parameters with an adaptively changing learning rate. This differs from existing deep-learning-based methods, which operate only on positions predicted to be unchanged and extract only one type of label. The whole process is completely unsupervised, without any prior knowledge. Moreover, the method can also be applied to homogeneous images. We test our method on both heterogeneous and homogeneous images, and the proposed method achieves high accuracy.

Yi Liao;Qing Huo Liu; "Modified Chirp Scaling Algorithm for Circular Trace Scanning Synthetic Aperture Radar," vol.55(12), pp.7081-7091, Dec. 2017. For circular trace scanning synthetic aperture radar (CTSSAR), the conventional hyperbolic equation is inadequate to express the range history of a point target accurately, and for wide swath observation and imaging, the range variance makes it even harder to focus targets at the edge of the scene. Thus, an expression with high-order terms is needed to approximate the range history, and the range variance should also be considered in the imaging algorithm. In this paper, based on the method of series reversion, a fourth-order approximated range model is established for CTSSAR processing, and the 2-D spectrum is derived for the echo signal of the circular trajectory. At the same time, to deal with the range-variant range cell migration problem in large-area CTSSAR imaging, a modified chirp scaling algorithm is proposed to realize precise wide swath CTSSAR focusing. Experiments and analyses validate the effectiveness of the proposed algorithm.

Emmanuel Maggiori;Yuliya Tarabalka;Guillaume Charpiat;Pierre Alliez; "High-Resolution Aerial Image Labeling With Convolutional Neural Networks," vol.55(12), pp.7092-7103, Dec. 2017. The problem of dense semantic labeling consists in assigning semantic labels to every pixel in an image. In the context of aerial image analysis, it is particularly important to yield high-resolution outputs. In order to use convolutional neural networks (CNNs) for this task, it is required to design new specific architectures to provide fine-grained classification maps. Many dense semantic labeling CNNs have been recently proposed. Our first contribution is an in-depth analysis of these architectures. We establish the desired properties of an ideal semantic labeling CNN, and assess how those methods stand with regard to these properties. We observe that even though they provide competitive results, these CNNs often underexploit properties of semantic labeling that could lead to more effective and efficient architectures. Out of these observations, we then derive a CNN framework specifically adapted to the semantic labeling problem. In addition to learning features at different resolutions, it learns how to combine these features. By integrating local and global information in an efficient and flexible manner, it outperforms previous techniques. We evaluate the proposed framework and compare it with state-of-the-art architectures on public benchmarks of high-resolution aerial image labeling.

Arne Schröder;Axel Murk;Richard Wylde;Dennis Schobert;Mike Winser; "Brightness Temperature Computation of Microwave Calibration Targets," vol.55(12), pp.7104-7112, Dec. 2017. A rigorous numerical technique to compute the brightness temperature of arbitrarily shaped microwave calibration targets is presented. The proposed method allows the brightness temperature of calibration targets to be investigated depending on frequency, absorber material, geometry, antenna pattern, field incidence, and temperature environment. We have validated the accuracy and studied the numerical complexity of the approach by means of analytical reference solutions. Fundamental brightness temperature investigations of pyramid absorbers are shown for various thermal environments in different frequency bands between 20 and 450 GHz. Based on these analyses, a novel pyramid geometry was designed, which features a superior electromagnetic and thermal performance compared with conventional pyramid designs. Using the theoretical findings, we have developed reduced-order models of pyramid targets for rapid brightness temperature studies.

Jianfei Liu;William J. Emery;Xiongbin Wu;Miao Li;Chuan Li;Lan Zhang; "Computing Ocean Surface Currents From GOCI Ocean Color Satellite Imagery," vol.55(12), pp.7113-7125, Dec. 2017. One of the significant challenges in physical oceanography is obtaining an adequate space/time description of ocean surface currents. One possible solution is the maximum cross-correlation (MCC) method, which we apply to hourly ocean color images from the Geostationary Ocean Color Imager (GOCI) over five years. Since GOCI provided a large number of image pairs, we introduce a new MCC search strategy that improves the computational efficiency of the MCC method, saving 95% of the processing time. We also use an MCC current merging method to increase the total spatial coverage of the currents, providing a 25% increase. Five-year mean and seasonal time-average flows are computed to capture the major currents in the area of interest. The mean flows trace the Kuroshio path, support the triple-branch pattern of the Tsushima Warm Current (TC), and reveal the origin of the TC. The evolution of a warm core ring shed by the Kuroshio near the northeast coast of Honshu, Japan, is clearly depicted by a sequence of three monthly MCC composites. We capture the evolution of the Kuroshio meander over seasonal, monthly, and weekly time scales. Three successive weekly MCC composite maps demonstrate how a large anticyclonic eddy, to the south of the Kuroshio meander, influences its formation and evolution in time and space. The unique ability to view short space/time scale changes in these strong current systems is a major benefit of applying the MCC method to the high spatial resolution and rapid refresh GOCI data.
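A minimal version of the MCC displacement search looks like this. The tracer field is invented, and a real implementation would also reject low correlation peaks and mask clouds; this sketch only shows the core search.

```python
import numpy as np

def mcc_shift(patch, window, max_shift):
    """Displacement (dx, dy) maximizing normalized cross-correlation of
    `patch` inside `window` (the patch footprint padded by max_shift)."""
    h, w = patch.shape
    p = (patch - patch.mean()) / patch.std()
    best_r, best = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sub = window[max_shift + dy:max_shift + dy + h,
                         max_shift + dx:max_shift + dx + w]
            s = (sub - sub.mean()) / sub.std()
            r = float(np.mean(p * s))
            if r > best_r:
                best_r, best = r, (dx, dy)
    return best, best_r

rng = np.random.default_rng(1)
img0 = rng.random((40, 40))                          # toy ocean-color field
img1 = np.roll(np.roll(img0, 1, axis=0), 3, axis=1)  # advected 3 px x, 1 px y
m = 4
patch = img0[10:20, 10:20]
window = img1[10 - m:20 + m, 10 - m:20 + m]
(dx, dy), r = mcc_shift(patch, window, m)
# dividing (dx, dy) by the time between the hourly images gives a velocity
```

The new search strategy in the paper speeds up exactly this exhaustive inner loop.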

Jingbo Wei;Lizhe Wang;Peng Liu;Xiaodao Chen;Wei Li;Albert Y. Zomaya; "Spatiotemporal Fusion of MODIS and Landsat-7 Reflectance Images via Compressed Sensing," vol.55(12), pp.7126-7139, Dec. 2017. The fusion of remote sensing images with different spatial and temporal resolutions is needed for diverse Earth observation applications. A small number of spatiotemporal fusion methods that use sparse representation appear to be more promising than weighted- and unmixing-based methods in reflecting abruptly changing terrestrial content. However, none of the existing dictionary-based fusion methods consider the downsampling process explicitly, i.e., the degradation and sparse observation from high-resolution images to the corresponding low-resolution images. In this paper, the downsampling process is described explicitly under the framework of compressed sensing for reconstruction. With a coupled dictionary to constrain the similarity of sparse coefficients, a new dictionary-based spatiotemporal fusion method, named compressed sensing for spatiotemporal fusion, is built for the spatiotemporal fusion of remote sensing images. To deal with images with a large resolution difference, typically Landsat-7 and Moderate Resolution Imaging Spectroradiometer (MODIS), the proposed model is applied twice to bridge the gap between the small block size and the large resolution ratio. In the experiments, the near-infrared, red, and green bands of Landsat-7 and MODIS are fused, with root mean square errors used to check the prediction accuracy. The experiments show that the proposed method produces higher quality results than five state-of-the-art methods, which proves the feasibility of incorporating the downsampling process in the spatiotemporal model under the framework of compressed sensing.

Xudong Kang;Xuanlin Xiang;Shutao Li;Jón Atli Benediktsson; "PCA-Based Edge-Preserving Features for Hyperspectral Image Classification," vol.55(12), pp.7140-7151, Dec. 2017. Edge-preserving features (EPFs) obtained by the application of edge-preserving filters to hyperspectral images (HSIs) have been found very effective in characterizing significant spectral and spatial structures of objects in a scene. However, a direct use of the EPFs can be insufficient to provide a complete characterization of spatial information when objects of different scales are present in the considered images. Furthermore, the edge-preserving smoothing operation unavoidably decreases the spectral differences among objects of different classes, which may affect the following classification. To overcome these problems, in this paper, a novel principal component analysis (PCA)-based EPFs (PCA-EPFs) method for HSI classification is proposed, which consists of the following steps. First, the standard EPFs are constructed by applying edge-preserving filters with different parameter settings to the considered image, and the resulting EPFs are stacked together. Next, the spectral dimension of the stacked EPFs is reduced with the PCA, which not only represents the EPFs in the mean-square sense but also highlights the separability of pixels in the EPFs. Finally, the resulting PCA-EPFs are classified by a support vector machine (SVM) classifier. Experiments performed on several real hyperspectral data sets show the effectiveness of the proposed PCA-EPFs, which sharply improves the accuracy of the SVM classifier with respect to the standard edge-preserving filtering-based feature extraction method, and other widely used spectral-spatial classifiers.
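The PCA step over stacked EPFs reduces to an eigendecomposition of the feature covariance. In the sketch below, a random low-rank matrix stands in for the stacked edge-preserving features, which the paper obtains by filtering the HSI with several parameter settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_features = 1000, 60   # stacked EPFs: pixels x (bands * filters)
X = rng.normal(size=(n_pixels, 5)) @ rng.normal(size=(5, n_features))
X += 0.01 * rng.normal(size=X.shape)   # small noise on a low-rank signal

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n_pixels - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]        # eigh returns ascending order
k = 5
pca_epfs = Xc @ evecs[:, order[:k]]    # reduced features fed to the SVM
explained = evals[order[:k]].sum() / evals.sum()
```

Keeping only the leading components both compacts the stacked features and, as the abstract argues, highlights pixel separability before the SVM.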

Jiangtao Peng;Qian Du; "Robust Joint Sparse Representation Based on Maximum Correntropy Criterion for Hyperspectral Image Classification," vol.55(12), pp.7152-7164, Dec. 2017. Joint sparse representation (JSR) has been a popular technique for hyperspectral image classification, where a testing pixel and its spatial neighbors are simultaneously approximated by a sparse linear combination of all training samples, and the testing pixel is classified based on the joint reconstruction residual of each class. Due to the least-squares representation of the approximation error, the JSR model is usually sensitive to outliers, such as background, noisy pixels, and outlying bands. In order to eliminate such effects, we propose three correntropy-based robust JSR (RJSR) models, i.e., RJSR for handling pixel noise, RJSR for handling band noise, and RJSR for handling both pixel and band noise. The proposed RJSR models replace the traditional square of the Euclidean distance with the correntropy-based metric in measuring the joint approximation error. To solve the correntropy-based joint sparsity model, a half-quadratic optimization technique is developed to convert the original nonconvex and nonlinear optimization problem into an iteratively reweighted JSR problem. As a result, the optimization of our models can handle the noise in neighboring pixels and the noise in spectral bands. It can adaptively assign small weights to noisy pixels or bands and put more emphasis on noise-free pixels or bands. The experimental results using real and simulated data demonstrate the effectiveness of our models in comparison with the related state-of-the-art JSR models.
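The half-quadratic optimization described above amounts to iteratively reweighted least squares with Gaussian-kernel (correntropy) weights. A minimal regression version, with invented data and gross outliers, shows the effect; the paper applies the same weighting idea within the joint sparsity model.

```python
import numpy as np

def correntropy_irls(A, y, sigma=1.0, iters=20):
    """Iteratively reweighted LS for maximum-correntropy regression
    (half-quadratic optimization with a Gaussian kernel)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        r = y - A @ x
        w = np.exp(-r ** 2 / (2 * sigma ** 2))   # small weight on outliers
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal equations
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(200, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = A @ x_true + 0.05 * rng.normal(size=200)
y[:10] += 20.0                                   # gross outliers

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]      # least squares: biased
x_rob = correntropy_irls(A, y, sigma=1.0)        # correntropy: robust
```

The reweighting adaptively assigns near-zero weights to the corrupted samples, which is the mechanism the RJSR models use against noisy pixels and bands.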

B. P. Salmon;D. S. Holloway;W. Kleynhans;J. C. Olivier;K. J. Wessels; "Applying Model Parameters as a Driving Force to a Deterministic Nonlinear System to Detect Land Cover Change," vol.55(12), pp.7165-7176, Dec. 2017. In this paper, we propose a new method for extracting features from time-series satellite data to detect land cover change. We propose to make use of the behavior of a deterministic nonlinear system driven by a time-dependent force. The driving force comprises a set of concatenated model parameters regressed from fitting a model to a Moderate Resolution Imaging Spectroradiometer time series. The goal is to create behavior in the nonlinear deterministic system that appears predictable for time series undergoing no change but erratic for time series undergoing land cover change. The differential equation used for the deterministic nonlinear system is that of a large-amplitude pendulum, where the displacement angle is observed over time. If there has been no change in the land cover, the mean driving force will approximate zero, and hence the pendulum will behave as if in free motion under the influence of gravity only. If, however, there has been a change in the land cover, this will for a brief initial period introduce a nonzero mean driving force, which does work on the pendulum, changing its energy and future evolution, which we demonstrate is observable. This change is sufficient to alter the state of the pendulum observably, thus enabling change detection. We extend this method to a higher dimensional differential equation to improve the false alarm rate in our experiments. Numerical results show a change detection accuracy of nearly 96% when detecting new human settlements, with a corresponding false alarm rate of 0.2% (omission error rate of 4%). This compares very favorably with other published methods, which achieved less than 90% detection but with false alarm rates all above 9% (omission error rate of 66%).
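The driven-pendulum idea can be reproduced numerically: a zero-mean driving force leaves the pendulum's energy essentially unchanged, while a brief nonzero-mean pulse does work on it and shifts the energy. The g/l value, pulse amplitude, and pulse duration below are invented for the demonstration.

```python
import numpy as np

G_OVER_L = 9.81  # assumed pendulum parameter for the demo

def simulate(force, theta0=0.5, dt=0.01):
    """Semi-implicit Euler for theta'' = -(g/l) sin(theta) + f(t)."""
    theta, omega = theta0, 0.0
    for f in force:
        omega += dt * (-G_OVER_L * np.sin(theta) + f)
        theta += dt * omega
    return theta, omega

def energy(theta, omega):
    """Total (kinetic + potential) energy of the pendulum state."""
    return 0.5 * omega ** 2 + G_OVER_L * (1.0 - np.cos(theta))

n = 5000                        # 50 s of motion
no_change = np.zeros(n)         # zero-mean force: energy is preserved
change = np.zeros(n)
change[:100] = 3.0              # brief nonzero-mean pulse: does work

E0 = energy(*simulate(no_change))
E1 = energy(*simulate(change))
```

The detectable signature is exactly this energy gap between the undisturbed and the briefly forced pendulum.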

Zhimian Zhang;Haipeng Wang;Feng Xu;Ya-Qiu Jin; "Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification," vol.55(12), pp.7177-7188, Dec. 2017. Following the great success of deep convolutional neural networks (CNNs) in computer vision, this paper proposes a complex-valued CNN (CV-CNN) specifically for synthetic aperture radar (SAR) image interpretation. It utilizes both amplitude and phase information of complex SAR imagery. All elements of the CNN, including the input and output layers, convolution layers, activation functions, and pooling layers, are extended to the complex domain. Moreover, a complex backpropagation algorithm based on stochastic gradient descent is derived for CV-CNN training. The proposed CV-CNN is then tested on the typical polarimetric SAR image classification task, which classifies each pixel into known terrain types via supervised training. Experiments with the benchmark data sets of Flevoland and Oberpfaffenhofen show that the classification error can be further reduced by employing the CV-CNN instead of a conventional real-valued CNN with the same degrees of freedom. The performance of the CV-CNN is comparable to that of existing state-of-the-art methods in terms of overall classification accuracy.
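A complex convolution decomposes into four real ones via (a+ib)(c+id) = (ac-bd) + i(ad+bc), which is one way a CV-CNN layer can reuse real-valued machinery. Below is a minimal single-channel sketch (loop-based for clarity, not speed; the paper's full layer also includes complex activations and pooling).

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain real-valued 2-D 'valid' correlation."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def complex_conv2d(x, k):
    """Complex correlation built from four real ones: (a+ib)(c+id)."""
    a, b = x.real, x.imag
    c, d = k.real, k.imag
    return (conv2d_valid(a, c) - conv2d_valid(b, d)) \
        + 1j * (conv2d_valid(a, d) + conv2d_valid(b, c))

rng = np.random.default_rng(4)
x = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))  # toy SLC patch
k = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # complex kernel
y = complex_conv2d(x, k)
```

Keeping both real and imaginary parts is what lets the network exploit the phase of the SAR data rather than only its amplitude.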

Xiao-Kun Wei;Xingqi Zhang;Nectaria Diamanti;Wei Shao;Costas D. Sarris; "Subgridded FDTD Modeling of Ground Penetrating Radar Scenarios Beyond the Courant Stability Limit," vol.55(12), pp.7189-7198, Dec. 2017. This paper presents an efficient 3-D finite-difference time-domain (FDTD) subgridding scheme that is free of the Courant–Friedrichs–Lewy stability condition, for the modeling of ground-penetrating radar (GPR) scenarios in lossy dispersive media. Spatial filtering of FDTD fields within the subgrid is employed to render the time step independent of the cell size in the fine-cell subgrids. This process is applied with minimal modification of the original FDTD code, no implicit operations, and very modest computational overhead. Moreover, multiterm dispersion is included to model practical GPR scenarios involving the detection of realistic scatterers within dispersive soil. Several numerical examples are provided to demonstrate the potential of the proposed method as a powerful GPR modeling tool.

* "Introducing IEEE Collabratec," vol.55(12), pp.7199-7199, Dec. 2017.*

* "IEEE Transactions on Geoscience and Remote Sensing information for authors," vol.55(12), pp.C3-C3, Dec. 2017.*

* "IEEE Transactions on Geoscience and Remote Sensing institutional listings," vol.55(12), pp.C4-C4, Dec. 2017.*

IEEE Geoscience and Remote Sensing Letters - new TOC (2017 November 23) [Website]

* "Front Cover," vol.14(11), pp.C1-C1, Nov. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Letters publication information," vol.14(11), pp.C2-C2, Nov. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

* "Table of contents," vol.14(11), pp.1881-2172, Nov. 2017.* Presents the table of contents for this issue of the publication.

Xiao Wang;Feng Xu;Ya-Qiu Jin; "The Iterative Reweighted Alternating Direction Method of Multipliers for Separating Structural Layovers in SAR Tomography," vol.14(11), pp.1883-1887, Nov. 2017. Layover scatterers of tall building structures can be separated by synthetic aperture radar tomography (SAR-tomo). An iterative reweighted L1 minimization (IRL1) scheme has been applied to enhance sparsity in tomographic inversion, where the basis pursuit (BP) technique was adopted to search for the solution. However, IRL1 with BP is highly time-consuming, which may prevent its practical application to large-scale data sets. In this letter, we propose the iterative reweighted alternating direction method of multipliers (IR-ADMM) for fast SAR-tomo imaging. We demonstrate and validate the enhanced sparsity and fast convergence of our IR-ADMM algorithm with experiments using both simulated data and TerraSAR-X Stripmap images of tall urban buildings. The experimental results show that, compared with the conventional IR-BP, the IR-ADMM greatly reduces the computation time without substantial performance degradation.
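The general IR-ADMM idea, wrapping an ADMM solver for a weighted lasso inside a reweighting loop, can be sketched generically (a hedged NumPy toy, not the authors' SAR-tomo code; the sensing matrix, penalty, and iteration counts are illustrative assumptions):

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding (the proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ir_admm(A, b, lam=0.05, rho=1.0, outer=5, inner=100, eps=1e-3):
    """Iterative reweighted l1 minimisation: the outer loop updates weights w
    from the current solution; the inner ADMM loop solves the weighted lasso
    min_x ||Ax - b||^2 / 2 + lam * ||w * x||_1."""
    n = A.shape[1]
    w = np.ones(n)
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    Minv = np.linalg.inv(AtA + rho * np.eye(n))   # cached x-update solve
    for _ in range(outer):
        for _ in range(inner):
            x = Minv @ (Atb + rho * (z - u))      # quadratic x-update
            z = soft(x + u, lam * w / rho)        # weighted shrinkage
            u = u + x - z                         # dual update
        w = 1.0 / (np.abs(z) + eps)               # reweight toward sparsity
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true                                    # noiseless measurements
x_hat = ir_admm(A, b)
```

The reweighting step drives small coefficients to zero while nearly unbiasing large ones, which is the sparsity-enhancing effect the letter exploits.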

Drew B. Gonsalves;Lawrence H. Winner;Joseph N. Wilson; "Improvement of Handheld Radar Operators’ Hazard Detection Performance Using 3-D Visualization," vol.14(11), pp.1888-1892, Nov. 2017. Handheld ground-penetrating radar systems are employed in both military and humanitarian demining operations. Radar system operators are given the difficult job of determining the nature of subsurface objects from signal reflections in real time. Current systems require operators to multitask both collection and classification. This letter tested a 3-D data visualization method against a 2-D method. The 3-D method attempts to separate tasks by not forcing operators to classify objects in real time. Data showed that users classifying objects with the 3-D system had better performance and also reported that this system was more user-friendly. In addition, users were able to classify underground objects more quickly with the 3-D system. The results of this letter demonstrate the benefit of 3-D visualization in ground scanning systems in increasing performance and decreasing cognitive load.

Yuhan Du;Yicheng Jiang;Wei Zhou; "An Accurate Two-Step ISAR Cross-Range Scaling Method for Earth-Orbit Target," vol.14(11), pp.1893-1897, Nov. 2017. Inverse synthetic aperture radar (ISAR) cross-range scaling is used to obtain the actual cross-range size of the target, which is essential for space surveillance and automatic target recognition. In this letter, a novel two-step ISAR cross-range scaling method for earth-orbit targets is proposed, which improves the computational efficiency through the use of prior information and achieves high estimation accuracy. An initial rotation velocity (RV) is calculated first using the open two-line element data of the satellite orbit to coarsely achieve a cross-range scaling of the ISAR image with high efficiency. Then, the refined cross-range scaling result is obtained with an accurate RV, which is accomplished by the isolated scatterer extraction and the chirp-rate estimation, wherein the blob detection and the integrated cubic phase function are employed, respectively. The initial RV is used to narrow the search width of the chirp-rate estimation, and the corresponding computational burden is expected to decrease accordingly. Finally, simulations and real-data experiments are performed to verify the effectiveness and the accuracy of the proposed method.

M. A. E. Bhuiyan;E. N. Anagnostou;P.-E. Kirstetter; "A Nonparametric Statistical Technique for Modeling Overland TMI (2A12) Rainfall Retrieval Error," vol.14(11), pp.1898-1902, Nov. 2017. In this letter, we evaluate a nonparametric error model for the Tropical Rainfall Measurement Mission (TRMM) passive microwave (PMW) rainfall (2A12) product over the southern continental United States, and assess the impact of surface soil moisture information on the model's performance. Reference precipitation was based on high-resolution (5 min/1 km) rainfall fields derived from the NOAA/National Severe Storms Laboratory multiradar multisensor system. The error model was evaluated through a K-fold validation experiment using systematic and random error statistics of the model-adjusted TRMM Microwave Imager rainfall point estimates, and ensemble verification statistics of the corresponding prediction intervals. Results show better performance, particularly in the accuracy of the prediction intervals, when near-surface soil moisture was used as an input parameter. The error model can be extended using the TRMM and Global Precipitation Measurement satellite missions' precipitation radar rainfall and satellite soil moisture data sets to characterize globally the uncertainty of PMW products.

Jun Chen;Jun-Gang Yang;Wei An;Zhi-Jie Chen; "An Attitude Jitter Correction Method for Multispectral Parallax Imagery Based on Compressive Sensing," vol.14(11), pp.1903-1907, Nov. 2017. Attitude jitter is a common problem for high-resolution earth-observation satellites and can diminish the geo-positioning and mapping performance of observed images. It is especially necessary to address this problem when high-performance attitude measurements are unavailable. Therefore, an attitude jitter correction method for multispectral parallax imagery that utilizes the compressive-sensing technology is proposed in this letter. In the proposed method, the attitude jitter is estimated from the parallax disparities of different band images, and then the image displacement caused by attitude jitter can be corrected. Using the normalized cross correlation method and compressive-sensing technology, the proposed method can deal with the condition of texture-feature deficiency in the partial image. The multispectral images of the Terra and ZY-3 satellites are used as experimental data to evaluate the proposed method. The registration errors of different bands are greatly reduced in both the cross- and along-track directions, and the experiment results indicate that the proposed method is effective for correcting the attitude jitter of both satellites.

Saleem Sahawneh;W. Linwood Jones;Sayak K. Biswas;Daniel Cecil; "HIRAD Brightness Temperature Image Geolocation Validation," vol.14(11), pp.1908-1912, Nov. 2017. The Hurricane Imaging Radiometer (HIRAD) is an airborne microwave radiometer developed to provide wide-swath hurricane surface wind speed and rain rate imagery for scientific research. This letter presents a geometric evaluation of the brightness temperature (Tb) images produced by HIRAD for high-contrast land/water targets. Methodologies used to validate geolocation accuracy and spatial resolution are discussed, and results are presented to provide quantitative pixel geolocation accuracy and the effective image spatial resolution of the Tb image.

R. Guinvarc’h;L. Thirion-Lefevre; "Cross-Polarization Amplitudes of Obliquely Orientated Buildings With Application to Urban Areas," vol.14(11), pp.1913-1917, Nov. 2017. Buildings that are rotated with respect to the sensor trajectory could be erroneously classified as vegetated areas in the Pauli basis, and subsequently in many decomposition theorems despite the considerable amount of work done to solve that issue. This misjudgement is linked to the high level of their cross-polarized contribution. Using electromagnetic simulation tools and image analysis, we study the value of these cross-polarization components. We show that forested areas and cities exhibit significantly different cross-polarization levels; indeed, the origin of these components is actually distinct. Based on that, to discriminate between the two environments, we introduce an extension to the Pauli basis where the cross polarization is split into two classes, one for rotated dihedrals and the other for random scatterers. This approach is then tested on two synthetic aperture radar images: the first acquired at C-band using RADARSAT-2 over Downtown San Francisco and the second using RAMSES at X-band over an industrial area near Paris.

A. Hamadi;L. Villard;P. Borderies;C. Albinet;T. Koleck;T. Le Toan; "Comparative Analysis of Temporal Decorrelation at P-Band and Low L-Band Frequencies Using a Tower-Based Scatterometer Over a Tropical Forest," vol.14(11), pp.1918-1922, Nov. 2017. Temporal decorrelation is a critical parameter for repeat-pass coherent radar processing, including many advanced techniques such as polarimetric SAR interferometry (PolInSAR) and SAR tomography (TomoSAR). Given the multifactorial and unpredictable causes of temporal decorrelation, statistical analysis of long time series of measurements from tower-based scatterometers is the most appropriate method for characterizing how rapidly a specific scene decorrelates. Based on the TropiScat experiment that occurred in a tropical dense forest in French Guiana, this letter proposes a comparative analysis between temporal decorrelation at P-band and at higher frequencies in the range of 800-1000 MHz (the low end of the L-band). This letter aims to support the design of future repeat-pass spaceborne missions and to offer a better understanding of the physics behind temporal decorrelation. Beyond the expected lower values that are found and quantified at the low L-band compared with the P-band, similar decorrelation patterns related to rainy and dry periods are emphasized in addition to the critical impacts of acquisition time during the day.

Yiming Zhang;Bo Du;Yuxiang Zhang;Liangpei Zhang; "Spatially Adaptive Sparse Representation for Target Detection in Hyperspectral Images," vol.14(11), pp.1923-1927, Nov. 2017. Sparse representation has gradually achieved better and better results in the analysis of hyperspectral imagery, and sparsity-based algorithms are becoming increasingly popular, especially in target detection. However, these methods mostly assume an absolutely equal contribution by all neighboring pixels when detecting the central pixel. This approach is undoubtedly unsuitable for pixels located in heterogeneous areas. In this letter, to address this problem, spatially adaptive sparse representation for target detection in hyperspectral images (HSIs) is proposed. Neighboring spatial information is utilized by considering the different contributions of the distinct neighborhood pixels. The different weights are determined according to the similarity between the neighboring pixels and the central test pixel. The proposed algorithm was tested on two HSIs and demonstrated outstanding detection performance when compared with other commonly used detectors.

Xiangrong Zhang;Yanjie Liang;Chen Li;Ning Huyan;Licheng Jiao;Huiyu Zhou; "Recursive Autoencoders-Based Unsupervised Feature Learning for Hyperspectral Image Classification," vol.14(11), pp.1928-1932, Nov. 2017. For hyperspectral image (HSI) classification, it is very important to learn effective features for discrimination purposes. Meanwhile, the ability to combine spectral and spatial information at a deep level is also important for feature learning. In this letter, we propose an unsupervised feature learning method for HSI classification, which is based on a recursive autoencoders (RAE) network. RAE utilizes the spatial and spectral information and produces high-level features from the original data. It learns features from the neighborhood of the investigated pixel to represent the whole local homogeneous area of the image. In addition, to obtain a more accurate representation of the investigated pixel, a weighting scheme is adopted based on the neighboring pixels, where the weights are determined by the spectral similarity between the neighboring pixels and the investigated pixel. The effectiveness of our method is evaluated by experiments on two hyperspectral data sets, and the results show that our proposed method performs better.

Yuxiang Zhang;Wu Ke;Bo Du;Xiangyun Hu; "Independent Encoding Joint Sparse Representation and Multitask Learning for Hyperspectral Target Detection," vol.14(11), pp.1933-1937, Nov. 2017. Target detection is playing an important role in hyperspectral image (HSI) processing. Many traditional detection methods utilize the discriminative information within all the single-band images to distinguish the target and the background. The critical challenge with these methods is simultaneously reducing spectral redundancy and preserving the discriminative information. The multitask learning (MTL) technique has the potential to solve the aforementioned challenge, since it can further explore the inherent spectral similarity between the adjacent single-band images. This letter proposes an independent encoding joint sparse representation and an MTL method. This approach has the following capabilities: 1) explores the inherent spectral similarity to construct multiple sub-HSIs in order to reduce spectral redundancy for each sub-HSI; 2) takes full advantage of the prior class label information to construct reasonable joint sparse representation and MTL models for the target and the background; 3) explores the great difference between the target dictionary and background dictionary with different regularization strategies in order to better encode the task relatedness for two joint sparse representation and MTL models; and 4) makes the detection decision by comparing the reconstruction residuals under different prior class labels. Experiments on two HSIs illustrated the effectiveness of the proposed method.

Xiaotao Wang;Fang Liu; "Weighted Low-Rank Representation-Based Dimension Reduction for Hyperspectral Image Classification," vol.14(11), pp.1938-1942, Nov. 2017. A predimension-reduction algorithm that couples weighted low-rank representation (WLRR) with a skinny intrinsic mode functions (IMFs) dictionary is proposed for hyperspectral image (HSI) classification. It seeks a low-rank subspace to solve the performance degradation issue encountered by linear discriminant analysis in a small-sample-size situation. It can also improve the scatter matrix estimation when using a large training set. Unlike those commonly used methods, e.g., the principal component analysis-based ones, WLRR focuses on preserving more structure information. Based on the traditional LRR model, WLRR introduces a local weighted regularization to characterize the correlation between samples such that HSI-specific local structure can be better preserved as well as its global structure. Indeed, more structure information gives additional discriminant ability. Furthermore, a new discriminant IMFs dictionary is designed to enhance interclass difference via empirical mode decomposition. The proposed method is investigated on several HSI data sets. All experimental results show it to be a competitive and promising predimension-reduction method when compared with other traditional techniques.

Jiayuan Li;Qingwu Hu;Mingyao Ai; "Optimal Illumination and Color Consistency for Optical Remote-Sensing Image Mosaicking," vol.14(11), pp.1943-1947, Nov. 2017. Illumination and color consistency are very important for optical remote-sensing image mosaicking. In this letter, we propose a simple but effective technique that simultaneously performs image illumination and color correction for multiview images. In this framework, we first present an uneven illumination removal algorithm based on a bright channel prior, which guarantees the illumination consistency inside a single image. We then adapt a pairwise color-correction method to coarsely align the color tone between source and reference images. In this stage, we give a new single-image quality metric which combines brightness deviation, color cast, and entropy together for automatic reference-image selection. Finally, we perform a least-squares adjustment (LSA) procedure to obtain optimal illumination and color consistency among multiview images. In detail, we first perform pairwise image matching using the SIFT algorithm; once sparse local patch correspondences are obtained, the illumination and color relationship between images can be established based on a global gamma correction model; the illumination and color errors can then be minimized by LSA. Extensive experiments on both challenging synthetic and real optical remote-sensing image data sets show that our method significantly outperforms the compared state-of-the-art approaches. All the source code and data sets used in this letter are made public.

Xiaoyi Shen;Jie Zhang;Xi Zhang;Junmin Meng;Changqing Ke; "Sea Ice Classification Using Cryosat-2 Altimeter Data by Optimal Classifier–Feature Assembly," vol.14(11), pp.1948-1952, Nov. 2017. Sea ice type is one of the most sensitive variables in Arctic ice monitoring, and detailed information about it is essential for ice situation evaluation, vessel navigation, and climate prediction. Many machine-learning methods including deep learning can be employed for ice-type detection, and most classifiers tend to prefer different feature combinations. In order to find the optimal classifier-feature assembly (OCF) for sea ice classification, it is necessary to assess their performance differences. The objective of this letter is to make a recommendation for the OCF for sea ice classification using Cryosat-2 (CS-2) data. Six classifiers including convolutional neural network (CNN), Bayesian, K nearest-neighbor (KNN), support vector machine (SVM), random forest (RF), and back propagation neural network (BPNN) were studied. CS-2 altimeter data of November 2015 and May 2016 in the whole Arctic were used. The overall accuracy was estimated using multivalidation to evaluate the performances of individual classifiers with different feature combinations. Overall, RF achieved a mean accuracy of 89.15%, followed by Bayesian, SVM, and BPNN (~86%), outperforming the worst (CNN and KNN) by 7%. Trailing-edge width (TeW) and leading-edge width (LeW) were the most important features, and the feature combination of TeW, LeW, Sigma0, maximum of the returned power waveform (MAX), and pulse peakiness (PP) was the best choice. RF with the feature combination of TeW, LeW, Sigma0, MAX, and PP was finally selected as the OCF for sea ice classification; the results demonstrated that this method achieved a mean accuracy of 91.45%, outperforming the other state-of-the-art methods by 9%.
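The search for an optimal classifier-feature assembly can be sketched as an exhaustive cross-validated grid over classifier/feature-subset pairs (a toy NumPy illustration with two stand-in classifiers and synthetic data; it mirrors the selection procedure, not the CS-2 processing chain):

```python
import numpy as np
from itertools import combinations

def kfold_accuracy(clf, X, y, k=5):
    """Mean accuracy of a fit-and-predict function over k folds."""
    idx = np.arange(len(y))
    accs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        accs.append(np.mean(clf(X[tr], y[tr], X[fold]) == y[fold]))
    return float(np.mean(accs))

def nearest_centroid(Xtr, ytr, Xte):
    cents = np.stack([Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)])
    return np.argmin(((Xte[:, None] - cents) ** 2).sum(-1), axis=1)

def one_nn(Xtr, ytr, Xte):
    d = ((Xte[:, None] - Xtr[None]) ** 2).sum(-1)
    return ytr[np.argmin(d, axis=1)]

def best_assembly(X, y, classifiers, n_feats):
    """Score every classifier / feature-subset pair; keep the best."""
    best = None
    for name, clf in classifiers.items():
        for feats in combinations(range(X.shape[1]), n_feats):
            acc = kfold_accuracy(clf, X[:, feats], y)
            if best is None or acc > best[0]:
                best = (acc, name, feats)
    return best

rng = np.random.default_rng(3)
X0 = rng.normal(size=(40, 3)); X0[:, :2] += 4.0   # features 0,1 informative
X1 = rng.normal(size=(40, 3))                     # feature 2 is pure noise
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)
perm = rng.permutation(80)
X, y = X[perm], y[perm]
classifiers = {"centroid": nearest_centroid, "1nn": one_nn}
acc, name, feats = best_assembly(X, y, classifiers, n_feats=2)
```

With many classifiers and features the grid grows combinatorially, which is why the letter restricts attention to a handful of waveform features.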

Jing Li;Zhaofa Zeng;Cai Liu;Nan Huai;Kun Wang; "A Study on Lunar Regolith Quantitative Random Model and Lunar Penetrating Radar Parameter Inversion," vol.14(11), pp.1953-1957, Nov. 2017. Lunar penetrating radar (LPR) is an important way to evaluate the geological structure of the subsurface of the moon. The Chang'E-3 has utilized LPR, which is equipped on the lunar rover named Yutu, to obtain the shallow lunar regolith structure in Mare Imbrium. The previous result provides a unique opportunity to map the subsurface structure and vertical distribution of the lunar regolith with high resolution. In order to evaluate the LPR data, the study of lunar regolith media is of great significance for understanding the material composition of the lunar regolith structure. In this letter, we focus on the lunar regolith quantitative random model and parameter inversion with LPR synthetic data. First, based on the Apollo drilling core data, we build the lunar regolith quantitative random model with clipped Gaussian random field theory. It can be used to model the discrete-valued random field with a given correlation structure. Then, we combine radar wave impedance and stochastic inversion methods to carry out LPR data inversion and parameter estimation. The results mostly provide reliable information on the lunar regolith layer structure and local details with high resolution. This letter presents a further research strategy for lunar probe and deep-space detection with LPR.
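A clipped Gaussian random field of the kind used for the quantitative regolith model can be sketched as follows (a generic illustration: white Gaussian noise is low-pass filtered in the Fourier domain to impose a correlation length, then clipped at quantiles into discrete levels; the level values are hypothetical permittivities, not the letter's calibrated parameters):

```python
import numpy as np

def clipped_gaussian_field(shape, corr_len, levels, seed=0):
    """Discrete-valued random medium: filter white Gaussian noise with a
    Gaussian low-pass to impose a correlation length, then clip the field
    at equal-probability quantiles into the given discrete values."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=shape)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
    f = np.real(np.fft.ifft2(np.fft.fft2(g) * H))   # correlated Gaussian field
    edges = np.quantile(f, np.linspace(0.0, 1.0, len(levels) + 1)[1:-1])
    return np.asarray(levels)[np.digitize(f, edges)]

# hypothetical relative permittivities for three regolith constituents
eps = clipped_gaussian_field((64, 64), corr_len=4.0, levels=[1.0, 3.0, 6.0])
```

Clipping at quantiles fixes the volume fraction of each material, so the discrete field keeps both the prescribed composition and the imposed correlation structure.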

Graham V. Weinberg; "Minimum-Based Sliding Window Detectors in Correlated Pareto Distributed Clutter," vol.14(11), pp.1958-1962, Nov. 2017. Recent investigations have resulted in the derivation of a multivariate Pareto model, which is consistent with the compound Gaussian model framework, allowing one to describe statistically a correlated Pareto distributed sequence. This has permitted the development of noncoherent sliding window detection processes, for operation in an X-band maritime surveillance radar context, which account for correlated clutter returns. Based upon this multivariate Pareto model, the structure of the sample minimum is investigated, which can then be used to produce decision rules robust to interference. Two such detectors will be examined, and their performance in real high-resolution X-band maritime radar clutter will be investigated. It will be shown that a number of avenues of future work are available.

Weiying Xie;Yunsong Li; "Hyperspectral Imagery Denoising by Deep Learning With Trainable Nonlinearity Function," vol.14(11), pp.1963-1967, Nov. 2017. Hyperspectral images (HSIs) can describe subtle differences in the spectral signatures of objects, and thus they are effective in a wide array of applications. However, an HSI is inevitably contaminated with unwanted components such as noise, resulting in spectral distortion, which significantly decreases the performance of postprocessing. In this letter, a deep stage convolutional neural network (CNN) with trainable nonlinearity functions is applied for the first time to remove noise in HSIs. In addition to the weight and bias matrices, which are learned from cubic clean-noisy HSI training patches, the nonlinearity functions in each stage are also trainable, which differs from a conventional CNN with a fixed nonlinearity function. Compared with the state-of-the-art HSI denoising methods, the experimental results on both synthetic and real HSIs confirm that the proposed method can obtain a more effective and efficient performance.

Le Gan;Junshi Xia;Peijun Du;Zhigang Xu; "Dissimilarity-Weighted Sparse Representation for Hyperspectral Image Classification," vol.14(11), pp.1968-1972, Nov. 2017. To improve the capability of a traditional sparse representation-based classifier (SRC), we propose a novel dissimilarity-weighted SRC (DWSRC) for hyperspectral image (HSI) classification. In particular, DWSRC computes the weights for each atom according to the distance or dissimilarity information between the test pixel and the atoms. First, a locality constraint dictionary set is constructed by the Gaussian kernel distance with a suitable distance metric (e.g., Euclidean distance). Second, the test pixel is sparsely coded over the new weighted dictionary set based on the l1-norm minimization problem. Finally, the test pixel is classified by using the obtained sparse coefficients with the minimal residual rule. Experimental results on two widely used public HSIs demonstrate that the proposed DWSRC is more efficient and accurate than other state-of-the-art SRCs.
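The distance-weighted coding and minimal-residual rule can be sketched as follows (a hedged toy example: for brevity the l1 coding step is replaced by a ridge-regression surrogate, so this shows the general flavor of DWSRC rather than the authors' algorithm; signatures and noise levels are illustrative):

```python
import numpy as np

def dw_src_predict(D, labels, y, sigma=1.0, lam=0.01):
    """Distance-weighted representation classifier (sketch): atoms far from
    the test pixel y get small Gaussian-kernel weights; y is coded over the
    weighted dictionary (ridge surrogate for the l1 step) and assigned to
    the class with the minimal reconstruction residual."""
    d = np.linalg.norm(D - y[:, None], axis=0)     # atom-to-pixel distances
    Dw = D * np.exp(-(d / sigma) ** 2)             # weighted dictionary
    G = Dw.T @ Dw + lam * np.eye(D.shape[1])
    alpha = np.linalg.solve(G, Dw.T @ y)           # coding coefficients
    classes = np.unique(labels)
    res = [np.linalg.norm(y - Dw[:, labels == c] @ alpha[labels == c])
           for c in classes]
    return classes[int(np.argmin(res))]            # minimal residual rule

rng = np.random.default_rng(2)
m0, m1 = np.ones(8), -np.ones(8)                   # two class signatures
D = np.column_stack([m0 + 0.05 * rng.normal(size=8) for _ in range(5)] +
                    [m1 + 0.05 * rng.normal(size=8) for _ in range(5)])
labels = np.array([0] * 5 + [1] * 5)
pred = dw_src_predict(D, labels, m1 + 0.05 * rng.normal(size=8))
```

The Gaussian weighting effectively suppresses distant atoms, so the residual comparison is dominated by the locally relevant class.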

Yuanguo Zhou;Mingwei Zhuang;Linlin Shi;Guoxiong Cai;Na Liu;Qing Huo Liu; "Spectral-Element Method With Divergence-Free Constraint for 2.5-D Marine CSEM Hydrocarbon Exploration," vol.14(11), pp.1973-1977, Nov. 2017. Rapid simulations of large-scale low-frequency subsurface electromagnetic measurements are still a challenge because of the low-frequency breakdown phenomenon that makes the system matrix extremely poor-conditioned. Hence, significant attention has been paid to accelerate the numerical algorithms for Maxwell's equations in both integral and partial differential forms. In this letter, we develop a novel 2.5-D method to overcome the low-frequency breakdown problem by using the mixed spectral element method with the divergence-free constraint and apply it to solve the marine-controlled-source electromagnetic systems. By imposing the divergence-free constraint, the proposed method considers the law of conservation of charges, unlike the conventional governing equation for these problems. Therefore, at low frequencies, the Gauss law guarantees the stability of the solution, and we can obtain a well-conditioned system matrix even as the frequency approaches zero. Several numerical experiments show that the proposed method is well suited for solving low-frequency electromagnetic problems.

Pengjiang Hu;Shiyou Xu;Wenzhen Wu;Biao Tian;Zengping Chen; "IAA-Based High-Resolution ISAR Imaging With Small Rotational Angle," vol.14(11), pp.1978-1982, Nov. 2017. The Fourier transform-based range Doppler method is commonly used in an inverse synthetic aperture radar. Although it has achieved good success in most scenarios, its performance is determined by the rotational angle, and the cross-range resolution is extremely low in the case of a small rotational angle. In this letter, to improve the cross-range resolution, a novel cross-range compression scheme based on the iterative adaptive approach (IAA) is proposed. In addition to the standard IAA to achieve high resolution, the efficient IAA is introduced to suppress the sidelobes due to noise. Both the simulation and experimental results demonstrate that the proposed method has the advantages of being parameter-free, highly accurate, and highly efficient.

Zezong Chen;Chao He;Chen Zhao;Fei Xie; "Enhanced Target Detection for HFSWR by 2-D MUSIC Based on Sparse Recovery," vol.14(11), pp.1983-1987, Nov. 2017. This letter proposes using the 2-D multiple-signal classification (MUSIC) based on sparse recovery (SR) to improve the target-detection capability of high-frequency surface wave radar (HFSWR). Usually, for wide-beam HFSWRs, target detection is first conducted in the range-Doppler spectrum, and bearings are then estimated by superresolution methods such as MUSIC. Unfortunately, the conventional cascaded method can easily result in unfavorable deterioration of multitarget detection when different target signals tend to become mixed in the Doppler spectrum. Moreover, sea clutter is an unwanted signal that frequently masks target signals. To enhance the detection of multiple targets and targets embedded in sea clutter, spatial-temporal joint estimation has been proposed. However, because of the lack of spatial-temporal snapshots caused by the nonstationarity of target signals, the efficiency of the estimator cannot be guaranteed. To overcome this shortcoming, multiple-measurement-vector-based SR, which has been used to solve many under-sampling problems in the past ten years, is adopted. Our approach can effectively detect a target embedded in sea clutter as well as multiple adjacent targets and distinguish them from each other. Results obtained using real data with opportunistic targets validate our approach. Therefore, the proposed 2-D SR-MUSIC approach improves target detection and outperforms conventional cascaded methods.
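The cascaded baseline that the letter improves on rests on classic MUSIC; a minimal 1-D sketch is given below (a toy NumPy example with an assumed half-wavelength uniform linear array and a single source, not the proposed 2-D SR-MUSIC):

```python
import numpy as np

def music_spectrum(R, n_src, angles, d=0.5):
    """MUSIC pseudospectrum over candidate angles (radians) for an
    N-element uniform linear array with element spacing d wavelengths."""
    N = R.shape[0]
    _, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, :N - n_src]                    # noise-subspace eigenvectors
    P = np.empty(len(angles))
    for idx, th in enumerate(angles):
        a = np.exp(2j * np.pi * d * np.arange(N) * np.sin(th))  # steering vector
        P[idx] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P

rng = np.random.default_rng(5)
N, T, theta = 8, 200, np.deg2rad(20.0)
a_true = np.exp(2j * np.pi * 0.5 * np.arange(N) * np.sin(theta))
s = rng.normal(size=T) + 1j * rng.normal(size=T)          # source snapshots
noise = 0.1 * (rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T)))
X = np.outer(a_true, s) + noise
R = X @ X.conj().T / T                                    # sample covariance
grid = np.deg2rad(np.linspace(-90.0, 90.0, 361))
est = np.rad2deg(grid[np.argmax(music_spectrum(R, 1, grid))])
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace; the letter's contribution is to stabilize such estimates when spatial-temporal snapshots are scarce.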

Amir Behnamian;Koreen Millard;Sarah N. Banks;Lori White;Murray Richardson;Jon Pasher; "A Systematic Approach for Variable Selection With Random Forests: Achieving Stable Variable Importance Values," vol.14(11), pp.1988-1992, Nov. 2017. Random Forests variable importance measures are often used to rank variables by their relevance to a classification problem and subsequently reduce the number of model inputs in high-dimensional data sets, thus increasing computational efficiency. However, as a result of the way that training data and predictor variables are randomly selected for use in constructing each tree and splitting each node, it is also well known that if too few trees are generated, variable importance rankings tend to differ between model runs. In this letter, we characterize the effect of the number of trees (ntree) and class separability on the stability of variable importance rankings and develop a systematic approach to define the number of model runs and/or trees required to achieve stability in variable importance measures. Results demonstrate that both a large ntree for a single model run, or averaged values across multiple model runs with fewer trees, are sufficient for achieving stable mean importance values. While the latter is far more computationally efficient, both methods tend to lead to the same ranking of variables. Moreover, the optimal number of model runs differs depending on the separability of classes. Recommendations are made to users regarding how to determine the number of model runs and/or trees that are required to achieve stable variable importance rankings.
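The letter's central observation, that averaging importance values over more runs stabilizes rankings, can be mimicked with a toy simulation (noisy draws around assumed ground-truth importances stand in for Random Forests importance estimates; no actual forests are trained here):

```python
import numpy as np

rng = np.random.default_rng(4)
true_imp = np.linspace(1.0, 0.1, 10)   # assumed ground-truth importances

def averaged_importance(n_runs):
    """Stand-in for variable importance averaged over n_runs model runs:
    each run returns the true importances plus independent noise."""
    return (true_imp + rng.normal(scale=0.3, size=(n_runs, 10))).mean(axis=0)

def top_rank_agreement(n_runs, n_pairs=200):
    """Fraction of independent run-pairs that agree on the top variable."""
    hits = 0
    for _ in range(n_pairs):
        hits += np.argmax(averaged_importance(n_runs)) == \
                np.argmax(averaged_importance(n_runs))
    return hits / n_pairs

few, many = top_rank_agreement(1), top_rank_agreement(25)
```

Averaging shrinks the noise on each importance value by the square root of the number of runs, so agreement on the ranking rises sharply with more runs, which is the qualitative effect the letter quantifies for real forests.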

Li Deng;Sanyi Yuan;Shangxu Wang; "Sparse Bayesian Learning-Based Seismic Denoise by Using Physical Wavelet as Basis Functions," vol.14(11), pp.1993-1997, Nov. 2017. Attenuating random noise is a fundamental yet necessary step for subsequent seismic image processing and interpretation. We introduce a sparse Bayesian learning (SBL)-based seismic denoise method by using the physical wavelet as the basis function. The physical wavelet estimated from seismic and well logging data can appropriately describe the characteristics of the seismic data. Thus, it is an appropriate choice of basis function. Moreover, the tradeoff regularization parameter for determining denoise quality can be adaptively estimated according to the updated data misfit and sparseness degree during the iterative process of the SBL algorithm. The motivation behind the denoise method using sparse representations is that seismic signals can be sparsely represented by using several physical wavelets, whereas noise cannot. Both synthetic and real seismic data examples are adopted to demonstrate the effectiveness of the method.

Li Sun;Yuqi Tang;Liangpei Zhang; "Rural Building Detection in High-Resolution Imagery Based on a Two-Stage CNN Model," vol.14(11), pp.1998-2002, Nov. 2017. High-level feature extraction and hierarchical feature representation of image objects with a convolutional neural network (CNN) can overcome the limitations of the traditional building detection models using middle/low-level features extracted from a complex background. To address the drawbacks of the existing rural building detection models, namely manual village location, high cost, and limited building detection accuracy, a two-stage CNN model is proposed in this letter to detect rural buildings in high-resolution imagery. Simulating the hierarchical processing mechanism of human vision, the proposed model is constructed with two CNNs, whose architectures can automatically locate villages and efficiently detect buildings, respectively. This two-stage CNN model effectively reduces the complexity of the background and improves the efficiency of rural building detection. The experiments showed that the proposed model could automatically locate all the villages in the two study areas, achieving a building detection accuracy of 88%. Compared with the existing models, the proposed model proved effective in detecting buildings in rural areas with a complex background.

Rong Wang;Feiping Nie;Weizhong Yu; "Fast Spectral Clustering With Anchor Graph for Large Hyperspectral Images," vol.14(11), pp.2003-2007, Nov. 2017. The large-scale hyperspectral image (HSI) clustering problem has attracted significant attention in the field of remote sensing. Most traditional graph-based clustering methods still face challenges in the successful application of the large-scale HSI clustering problem, mainly due to their high computational complexity. In this letter, we propose a novel approach, called fast spectral clustering with anchor graph (FSCAG), to efficiently deal with the large-scale HSI clustering problem. Specifically, we consider the spectral and spatial properties of HSI in the anchor graph construction. The proposed FSCAG algorithm first constructs the anchor graph and then performs spectral analysis on the graph. With this, the computational complexity can be reduced to O(ndm), a significant improvement over conventional graph-based clustering methods that need at least O(n²d), where n, d, and m are the numbers of samples, features, and anchors, respectively. Several experiments are conducted to demonstrate the efficiency and effectiveness of the proposed FSCAG algorithm.
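The anchor-graph construction and the small-matrix spectral step can be sketched as follows (a generic toy example; anchor counts, kernel width, and blob data are illustrative assumptions, not the FSCAG algorithm's exact formulation):

```python
import numpy as np

def anchor_graph(X, anchors, k=2, sigma=1.0):
    """n x m anchor-graph affinity Z: each sample connects to its k nearest
    anchors with Gaussian weights; rows are normalised to sum to one.
    Building Z costs O(ndm) instead of the O(n^2 d) of a full graph."""
    d2 = ((X[:, None] - anchors[None]) ** 2).sum(-1)   # n x m squared distances
    Z = np.zeros_like(d2)
    for i, cols in enumerate(np.argsort(d2, axis=1)[:, :k]):
        w = np.exp(-d2[i, cols] / (2.0 * sigma ** 2))
        Z[i, cols] = w / w.sum()
    return Z

def anchor_spectral_embedding(Z, n_clusters):
    """Spectral embedding via an SVD of the column-normalised n x m matrix,
    avoiding any n x n affinity matrix."""
    Zn = Z / np.sqrt(Z.sum(axis=0))        # normalise by anchor degrees
    U, _, _ = np.linalg.svd(Zn, full_matrices=False)
    return U[:, :n_clusters]

rng = np.random.default_rng(6)
blob_a = rng.normal(size=(20, 2))
blob_b = rng.normal(size=(20, 2)) + 10.0
X = np.vstack([blob_a, blob_b])
anchors = np.vstack([blob_a[:3], blob_b[:3]])   # three anchors per blob
Z = anchor_graph(X, anchors)
emb = anchor_spectral_embedding(Z, 2)
```

Because the SVD acts on an n x m matrix with m anchors rather than an n x n graph, the spectral step inherits the same favorable scaling as the graph construction.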

Christian Geiß;Patrick Aravena Pelizari;Henrik Schrade;Alexander Brenning;Hannes Taubenböck; "On the Effect of Spatially Non-Disjoint Training and Test Samples on Estimated Model Generalization Capabilities in Supervised Classification With Spatial Features," vol.14(11), pp.2008-2012, Nov. 2017. In this letter, we establish two sampling schemes to select training and test sets for supervised classification. We do this in order to investigate whether estimated generalization capabilities of learned models can be positively biased from the use of spatial features. Numerous spatial features impose homogeneity constraints on the image data, whereby a spatially connected set of image elements is attributed identical feature values. In addition to a frequent occurrence of intrinsic spatial autocorrelation, this leads to extrinsic spatial autocorrelation with respect to the image data. The first sampling scheme follows a spatially random partitioning into training and test sets. In contrast to that, the second strategy implements a spatially disjoint partitioning, which considers in particular topological constraints that arise from the deployment of spatial features. Experimental results are obtained from multi- and hyperspectral acquisitions over urban environments. They underline that a large share of the differences between estimated generalization capabilities obtained with the spatially disjoint and non-disjoint sampling strategies can be attributed to the use of spatial features, whereby differences increase with an increasing size of the spatial neighborhood considered for computing a spatial feature. This stresses the necessity of a proper spatial sampling scheme for model evaluation to avoid overoptimistic model assessments.

Hongbo Sun;Masanobu Shimada;Feng Xu; "Recent Advances in Synthetic Aperture Radar Remote Sensing—Systems, Data Processing, and Applications," vol.14(11), pp.2013-2016, Nov. 2017. This letter closes a special stream consisting of selected papers from the fifth Asia-Pacific Conference on Synthetic Aperture Radar in 2015 (APSAR 2015). The latest research results and outcomes from APSAR 2015, particularly on the synthetic aperture radar (SAR) systems/subsystems design, data processing techniques, and various SAR applications in remote sensing, are summarized and presented. All these results represent the recent advances in SAR remote sensing. Hopefully, this letter can provide some references for SAR researchers/engineers and stimulate the future development of SAR technology for remote sensing.

Bo Lu;Biyang Wen;Yingwei Tian;Ruokun Wang; "A Vessel Detection Method Using Compact-Array HF Radar," vol.14(11), pp.2017-2021, Nov. 2017. A compact-array high-frequency surface wave radar equipped with two crossed-loop/monopole receiving antennas has been established for vessel detection. Using two compact antennas of the same design, this system can obtain two extremely similar sets of radar range-Doppler spectra over the same period. To detect vessel targets efficiently, the spectra of the two antennas are enhanced by performing a principal component analysis. A wavelet-based approach is then applied to suppress clutter and reduce noise. The signal-to-noise ratios and signal-to-clutter ratios of the echoes are thus improved. Finally, an adaptive threshold is used to extract targets. The real radar data detection results are compared with Automatic Identification System data as well as those from the conventional ordered-statistic constant false alarm rate method. The feasibility and validity of the method proposed here are thus demonstrated.

Weidong Sun;Pingxiang Li;Jie Yang;Lingli Zhao;Minyi Li; "Polarimetric SAR Image Classification Using a Wishart Test Statistic and a Wishart Dissimilarity Measure," vol.14(11), pp.2022-2026, Nov. 2017. Land-cover classification in polarimetric synthetic aperture radar images is a vital technique that has been developed for years. The Wishart distribution, which the polarimetric coherence matrix obeys, has been researched to design the well-known Wishart classifier. This model is appropriate for homogeneous scenes, but it usually fails in reality when a category consists of several subcategories or clusters. Therefore, a simple but powerful sample-merging strategy is proposed to generate representative subcenters, based on a dissimilarity measure. In addition, a weighted likelihood-ratio criterion is also proposed to further improve the performance of the Wishart distribution-based classification, based on the Wishart test statistic. Two experiments on EMISAR and UAVSAR data sets confirm that combining the proposed strategies can achieve better results than can the Wishart classifier and the other existing methods.

Michael Streßer;Ruben Carrasco;Jochen Horstmann; "Video-Based Estimation of Surface Currents Using a Low-Cost Quadcopter," vol.14(11), pp.2027-2031, Nov. 2017. Video imagery of surface waves recorded from a small off-the-shelf quadcopter with a self-stabilizing camera gimbal is analyzed to estimate the surface current field. The nadir looking camera acquires a short image sequence, which is geocoded to Universal Transverse Mercator coordinates. The resulting image sequence is used to quantify characteristic parameters (wavelength, period, and direction) of short (0.1-1 m) surface waves in space and time. This opens the opportunity to fit the linear dispersion relation to the data and thus monitor the frequency shift induced by an ambient current. The fitting is performed by applying a spectral energy-based maximization technique in the wavenumber-frequency domain. The current field is compared with measurements acquired by an acoustic Doppler current profiler mounted on a small boat, showing an overall good agreement. The root-mean-square error in current velocity is 0.09 m/s with no bias.
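The frequency shift that the fitting exploits follows from the linear dispersion relation with a Doppler term; in deep water, omega = sqrt(g|k|) + k·U. A hedged least-squares sketch of recovering the current vector from wavenumber-frequency samples (a simplified stand-in for the letter's spectral energy-based maximization in the wavenumber-frequency domain):

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def fit_current(kx, ky, omega):
    """Least-squares estimate of an ambient current (u, v) from observed
    wave components, assuming the deep-water linear dispersion relation
    with Doppler shift:
        omega = sqrt(G * |k|) + kx * u + ky * v
    kx, ky: wavenumber components [rad/m]; omega: frequencies [rad/s].
    """
    k = np.hypot(kx, ky)
    doppler = omega - np.sqrt(G * k)      # current-induced frequency shift
    A = np.column_stack([kx, ky])         # linear model in (u, v)
    (u, v), *_ = np.linalg.lstsq(A, doppler, rcond=None)
    return u, v
```

For 0.1-1 m waves as in the letter, |k| spans roughly 6-63 rad/m, so even a modest 0.1 m/s current shifts omega by an easily measurable amount.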

Lin Wang;Hisao-Chi Li;Bai Xue;Chein-I Chang; "Constrained Band Subset Selection for Hyperspectral Imagery," vol.14(11), pp.2032-2036, Nov. 2017. This letter extends the constrained band selection (CBS) technique to constrained band subset selection (CBSS) in a similar manner that constrained energy minimization has been extended to linearly constrained minimum variance. CBSS constrains multiple bands as a band subset as opposed to CBS constraining a single band as a singleton set. To achieve this goal, CBSS requires a strategy to search for an optimal band subset, while CBS does not. In this letter, two new sequential algorithms, referred to as sequential CBSS and successive CBSS, which do not exist in CBS are derived for CBSS to find desired band subsets and to avoid exhaustive search.
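As background for the CEM-to-LCMV analogy the letter extends, the linearly constrained minimum variance filter has a closed form; with a single constraint vector d and c = 1 it reduces to the CEM filter w = R^(-1) d / (d^T R^(-1) d). The sketch below shows only this underlying algebra, not the CBSS band-subset search itself:

```python
import numpy as np

def lcmv_weights(R, D, c):
    """Linearly constrained minimum variance filter.

    Minimizes w^T R w subject to D^T w = c, giving
        w = R^{-1} D (D^T R^{-1} D)^{-1} c.
    R: (L, L) correlation matrix; D: (L, p) constraint signatures;
    c: (p,) constraint values.  With p = 1 and c = [1] this is CEM,
    the single-band case that CBSS generalizes to band subsets.
    """
    Rinv_D = np.linalg.solve(R, D)                     # R^{-1} D
    return Rinv_D @ np.linalg.solve(D.T @ Rinv_D, c)   # closed-form weights
```

Stacking several band signatures into D is the algebraic move that lets a whole subset be constrained at once; the letter's contribution is the sequential/successive search for which subset to constrain.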

Xiaobin Li;Shengjin Wang; "Object Detection Using Convolutional Neural Networks in a Coarse-to-Fine Manner," vol.14(11), pp.2037-2041, Nov. 2017. Object detection in remote sensing images has long been studied, but it remains challenging due to the diversity of objects and the complexity of backgrounds. In this letter, we propose an object detection method using convolutional neural networks (CNNs) in a coarse-to-fine manner. In the coarse step, coarse candidate regions that may contain objects are proposed. In the fine step, fine candidate regions are cropped from coarse candidate regions, and are classified as objects or backgrounds. We design a concise and efficient framework that can propose fewer candidate regions and extract more discriminative features. The framework consists of two eight-layer CNNs that are well designed and powerful. To use CNNs to detect inshore ships, image samples are required, each of which should contain only one ship. However, the traditional image cropping method cannot generate such samples. To solve this problem, we present an orientation-free image cropping method that can generate trapezium rather than rectangle samples, making inshore ship detection by CNN feasible. Experimental results on Google Earth images demonstrate that the proposed method outperforms existing state-of-the-art methods.

Lei Qiao;Gang Chen;Jin Wang; "Design and Application of the Distributed Ionospheric Coherent Scatter Radar," vol.14(11), pp.2042-2046, Nov. 2017. In this letter, a newly designed distributed coherent scatter radar for localization of ionospheric irregularities is presented. It is composed of a detection network that can detect ionospheric irregularities with the help of a time synchronization module. To achieve a fairly narrow beam with high directive gain, an antenna array is used in the transmitter module. The frequency band spans high frequency (HF) to very HF to detect irregularities at different scales. In addition, the radar uses a universal serial bus to reduce its size, which allows it to be easily moved to different areas. An iterative ray tracing method is also applied to localize the ionospheric irregularities. The results indicate that the radar can effectively track ionospheric irregularities in 3-D space.

Igor E. Kozlov;Evgenia V. Zubkova;Vladimir N. Kudryavtsev; "Internal Solitary Waves in the Laptev Sea: First Results of Spaceborne SAR Observations," vol.14(11), pp.2047-2051, Nov. 2017. The first results of internal solitary wave (ISW) observations over the ice-free Laptev Sea derived from 354 ENVISAT Advanced Synthetic Aperture Radar (ASAR) images acquired in May-October 2011 are reported. Analysis of the data reveals the key regions of ISW distribution that are primarily found over the outer shelf/slope regions poleward of the M2 critical latitude. Most of the ISWs are observed in regions where enhanced tide-induced vertical mixing and heat fluxes have been previously reported. This suggests that spaceborne SAR observations may serve as a tool to infer local mixing hot spots over the ice-free Arctic Ocean.

Francisco Pérez;Balu Santhanam;Ralf Dunkel;Majeed M. Hayat; "Clutter Suppression via Hankel Rank Reduction for DFrFT-Based Vibrometry Applied to SAR," vol.14(11), pp.2052-2056, Nov. 2017. Hankel rank reduction (HRR) is a method that, by prearranging the data in a Hankel matrix and performing rank reduction via singular value decomposition, suppresses the noise of a time-history vector comprised of the superposition of a finite number of sinusoids. In this letter, the HRR method is studied for performing clutter suppression in synthetic aperture radar (SAR)-based vibrometry. Specifically, three different applications of the HRR method are presented. First, resembling the SAR slow-time signal model, the HRR method is utilized for separating a chirp signal immersed in a sinusoidal clutter. Second, using simulated airborne SAR data with 10 dB of signal-to-clutter ratio, the HRR method is applied to perform target isolation and to improve the results of an SAR-based vibration estimation algorithm. Finally, the vibrometry approach combined with the HRR method is validated using actual airborne SAR data.
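The core HRR step described above (embed the signal in a Hankel matrix, truncate its SVD, and average anti-diagonals back into a time series) can be sketched as follows; this is the generic rank-reduction step, not the full SAR vibrometry pipeline:

```python
import numpy as np

def hankel_rank_reduction(x, rank, L=None):
    """Denoise a sum-of-sinusoids signal by Hankel rank reduction.

    Embeds x in an L x K Hankel matrix (K = n - L + 1), keeps the top
    `rank` singular components, and averages anti-diagonals back into
    a 1-D signal.  Each real sinusoid contributes rank 2, so use
    rank = 2 * (number of sinusoids).
    """
    n = len(x)
    L = L or n // 2
    K = n - L + 1
    H = np.array([x[i:i + K] for i in range(L)])       # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # rank-reduced matrix
    # Hankelization: average each anti-diagonal back into a sample.
    y = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(L):
        y[i:i + K] += Hr[i]
        cnt[i:i + K] += 1
    return y / cnt
```

In the SAR setting of the letter, the same truncation separates the sinusoidal clutter subspace from the chirp-like target return, which is why the method doubles as a clutter suppressor.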

Javier Redolfi;Jorge Sánchez;Ana Georgina Flesia; "Fisher Vectors for PolSAR Image Classification," vol.14(11), pp.2057-2061, Nov. 2017. In this letter, we study the application of the Fisher vector (FV) to the problem of pixelwise supervised classification of polarimetric synthetic aperture radar images. This is a challenging problem since information in those images is encoded as complex-valued covariance matrices. We observe that the real parts of these matrices preserve the positive semidefiniteness property of their complex counterpart. Based on this observation, we derive an FV from a mixture of real Wishart densities and integrate it with a Potts-like energy model in order to capture spatial dependencies between neighboring regions. Experimental results on two challenging data sets show the effectiveness of the approach.

Jingyu Wang;Ke Zhang;Pei Wang;Kurosh Madani;Christophe Sabourin; "Unsupervised Band Selection Using Block-Diagonal Sparsity for Hyperspectral Image Classification," vol.14(11), pp.2062-2066, Nov. 2017. In order to alleviate the negative effect of the curse of dimensionality, band selection is a crucial step for hyperspectral image (HSI) processing. In this letter, we propose a novel unsupervised band selection approach to reduce the dimensionality of hyperspectral imagery. In order to obtain the most representative bands, the correlation matrix computed from the original HSI is used to describe the correlation characteristics among bands, while the block-diagonal structure is measured to segment all bands into a series of subspaces. After applying the spectral clustering algorithm, the optimal combination of bands is finally selected. To verify the effectiveness and superiority of the proposed band selection method, experiments have been conducted on three widely used real-world hyperspectral data sets. The results show that the proposed method outperforms other methods in the HSI classification application.

Alessandro Panico;Maria Daniela Graziano;Alfredo Renga; "SAR-Based Vessel Velocity Estimation From Partially Imaged Kelvin Pattern," vol.14(11), pp.2067-2071, Nov. 2017. Spaceborne synthetic aperture radar (SAR) can be considered an operational asset for maritime monitoring applications. Well-assessed approaches exist for ship detection, validated in several maritime surveillance systems. However, measuring vessel velocity from detected single-channel SAR images of ships is in general difficult. This letter contributes to this problem by investigating the possibility of retrieving vessel velocity by wake analysis. An original method for velocity estimation is developed for calm sea (Beaufort scale 1-2) and applied over seven X-band SAR images, gathered by the COSMO-SkyMed mission over the Gulf of Naples, Italy. The algorithm exploits the well-known relation between the wavelength of the waves composing the Kelvin pattern and the ship velocity, but extends the applicability of existing wake-based techniques by allowing evaluation of the wavelength along a generic direction within the Kelvin angle. Promising results have been achieved, which are in good agreement with those of better established techniques for ship velocity estimation in SAR images.

Dexin Li;Marc Rodriguez-Cassola;Pau Prats-Iraola;Manqing Wu;Alberto Moreira; "Reverse Backprojection Algorithm for the Accurate Generation of SAR Raw Data of Natural Scenes," vol.14(11), pp.2072-2076, Nov. 2017. Future synthetic aperture radar (SAR) mission concepts often rely on locally nonlinear (e.g., high orbits and bistatic) surveys or acquisition schemes. The simulation of the raw data of natural scenes as acquired by future systems appears as one powerful tool in order to understand the particularities of these systems and assess the impact of system and propagation errors on their performance. We put forward, in this letter, a new formulation of the reverse backprojection algorithm for the accurate simulation of raw data of natural surfaces. In particular, the algorithm is perfectly suited to accommodate any kind (1-D/2-D) of temporal and spatial variation, e.g., in observation geometry, acquisition strategy, or atmospheric propagation. The algorithm is analyzed with respect to its SAR image formation sibling, and tested under different simulation scenarios. We expect the reverse backprojection algorithm to play a relevant role in the simulation of future geosynchronous and multistatic SAR missions.

Qi Wang;Zhaotie Meng;Xuelong Li; "Locality Adaptive Discriminant Analysis for Spectral–Spatial Classification of Hyperspectral Images," vol.14(11), pp.2077-2081, Nov. 2017. Linear discriminant analysis (LDA) is a popular technique for supervised dimensionality reduction, but it pays little attention to the local structure of the data. This makes LDA inapplicable to many real-world situations, such as hyperspectral image (HSI) classification. In this letter, we propose a novel dimensionality reduction algorithm, locality adaptive discriminant analysis (LADA), for HSI classification. The proposed algorithm aims to learn a representative subspace of the data, and focuses on the data points with close relationships in the spectral and spatial domains. An intuitive motivation is that data points of the same class have similar spectral features and data points within a spatial neighborhood are usually associated with the same class. Compared with traditional LDA and its variants, LADA is able to adaptively exploit the local manifold structure of the data. Experiments carried out on several real hyperspectral data sets demonstrate the effectiveness of the proposed method.

Feifei Peng;Jing Luo;Gaoqiang Wang;Kunlun Qi; "Stereo Image Retrieval Using Height and Planar Visual Word Pairs," vol.14(11), pp.2082-2086, Nov. 2017. The wide availability of high-resolution satellite stereo images has created a surging demand for effective stereo image retrieval methods. To date, few retrieval methods have been designed specifically for stereo images, which have unique characteristics (e.g., viewing number and viewing angles), and those that exist often have insufficient retrieval accuracy. A new content-based stereo image retrieval method is achieved with height and planar visual word pairs, which are generated from the stereo-extracted digital surface models and orthoimages. Experimental results on the International Society for Photogrammetry and Remote Sensing stereo benchmark test data set show that our method outperforms the state-of-the-art methods in terms of accuracy and stability. Our method achieves a high retrieval precision of 0.9 and has high efficiency. Our method is stable for stereo pairs covering the same scene from different sensors, which usually show only a small ranking difference in the returned ranking list. Our method helps to quickly and accurately locate desired stereo images among large quantities of multisensor stereo images.

Faeze Soleimani vosta kolaei;Mehdi Akhoondzadeh;Hamid Ghanbari; "Improved ISAC Algorithm to Retrieve Atmospheric Parameters From HyTES Hyperspectral Images Using Gaussian Mixture Model and Error Ellipse," vol.14(11), pp.2087-2091, Nov. 2017. In-scene atmospheric correction (ISAC) is a procedure that accounts for atmospheric effects by direct use of the hyperspectral radiance data without recourse to ancillary meteorological data. This letter aims to improve the accuracy of the ISAC algorithm. In the ISAC method, after calculating the brightness temperature, the computed radiance and the measured radiance at the sensor are plotted on a graph in each band. Then, to estimate atmospheric parameters, a straight line is fitted to the upper boundary of the plot. One of the issues in ISAC is finding an optimal upper boundary of the data. The main innovation of this letter is the use of a Gaussian mixture model (GMM) and an error ellipse to find the optimal upper boundary of the data and fit the line to it. In the line fitting process, first, a GMM with the optimum class number derived by the Akaike information criterion (AIC) is implemented on the thermal hyperspectral data; then, the optimal upper boundary is selected for each class and a straight line is fitted to it. Finally, the desired parameters are obtained by a weighted linear combination of the results from all classes. For quality assessment, the results were compared with atmospheric products of the Hyperspectral Thermal Emission Spectrometer sensor and atmospheric parameters obtained from the traditional ISAC. Root-mean-square errors for atmospheric transmittance obtained from the GMM for bands 9.8 and 11.5 μm are 0.0008 and 0.0106, and those for upwelling atmospheric radiance are 0.675 and 0.0265, respectively.

Daoyu Lin;Kun Fu;Yang Wang;Guangluan Xu;Xian Sun; "MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification," vol.14(11), pp.2092-2096, Nov. 2017. With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional networks. However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we proposed an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model G and a discriminative model D. We treat D as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge the mid-level and global features. G can produce numerous images that are similar to the training data; therefore, D can learn better representations of remotely sensed images using the training data provided by G. The classification results on two widely used remote sensing image databases show that the proposed method significantly improves the classification performance compared with other state-of-the-art methods.

Yinghua Wang;Hongwei Liu; "SAR Target Discrimination Based on BOW Model With Sample-Reweighted Category-Specific and Shared Dictionary Learning," vol.14(11), pp.2097-2101, Nov. 2017. To improve the synthetic aperture radar (SAR) target discrimination performance under complex scenes, this letter presents a new SAR target discrimination method based on the bag-of-words model. The method contains three main stages. In the local feature extraction stage, the SAR-SIFT feature is extracted. In the feature coding stage, we improve the existing category-specific and shared dictionary learning (CSDL) and propose the sample-reweighted CSDL (SR-CSDL). The local features are sparsely coded using the codebook learned from SR-CSDL. In the feature pooling stage, spatial pyramid matching with max pooling is used to aggregate the local coding coefficients to generate the global feature for each chip image. Experimental results using the miniSAR data verify the effectiveness of the proposed method.

Xiayuan Huang;Bo Zhang;Hong Qiao;Xiangli Nie; "Local Discriminant Canonical Correlation Analysis for Supervised PolSAR Image Classification," vol.14(11), pp.2102-2106, Nov. 2017. This letter proposes a novel multiview feature extraction method for supervised polarimetric synthetic aperture radar (PolSAR) image classification. PolSAR images can be characterized by multiview feature sets, such as polarimetric features and textural features. Canonical correlation analysis (CCA) is a well-known dimensionality reduction (DR) method to extract valuable information from multiview feature sets. However, it cannot exploit the discriminative information, which influences its performance of classification. Local discriminant embedding (LDE) is a supervised DR method, which can preserve the discriminative information and the local structure of the data well. However, it is a single-view learning method, which does not consider the relation between multiple view feature sets. Therefore, we propose local discriminant CCA by incorporating the idea of LDE into CCA. Specific to PolSAR images, a symmetric version of revised Wishart distance is used to construct the between-class and within-class neighboring graphs. Then, by maximizing the correlation of neighboring samples from the same class and minimizing the correlation of neighboring samples from different classes, we find two projection matrices to achieve feature extraction. Experimental results on the real PolSAR data sets demonstrate the effectiveness of the proposed method.

Andreas Colliander;Joshua B. Fisher;Gregory Halverson;Olivier Merlin;Sidharth Misra;Rajat Bindlish;Thomas J. Jackson;Simon Yueh; "Spatial Downscaling of SMAP Soil Moisture Using MODIS Land Surface Temperature and NDVI During SMAPVEX15," vol.14(11), pp.2107-2111, Nov. 2017. The Soil Moisture Active Passive (SMAP) mission provides a global surface soil moisture (SM) product at 36-km resolution from its L-band radiometer. While the coarse resolution is satisfactory for many applications, many others would benefit from a higher resolution SM product. The SMAP radiometer-based SM product was downscaled to 1 km using Moderate Resolution Imaging Spectroradiometer (MODIS) data and validated against airborne data from the Passive Active L-band System instrument. The downscaling approach uses MODIS land surface temperature and normalized difference vegetation index to construct soil evaporative efficiency, which is used to downscale the SMAP SM. The algorithm was applied to one SMAP pixel during the SMAP Validation Experiment 2015 (SMAPVEX15) in a semiarid study area for validation of the approach. SMAPVEX15 offers a unique data set for testing SM downscaling algorithms. The results indicated reasonable skill (root-mean-square difference of 0.053 m3/m3 at 1-km resolution and 0.037 m3/m3 at 3-km resolution) in resolving high-resolution SM features within the coarse-scale pixel. The success benefits from the fact that the surface temperature in this region is controlled by soil evaporation, the topographical variation within the chosen pixel area is relatively moderate, and the vegetation density is relatively low over most parts of the pixel. The analysis showed that the combination of the SMAP and MODIS data under these conditions can result in a high-resolution SM product with an accuracy suitable for many applications.
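The disaggregation idea, redistributing one coarse SM value according to fine-scale soil evaporative efficiency, can be illustrated with a deliberately simplified linear sketch (the letter's method also uses NDVI to separate soil and vegetation temperatures; the sensitivity parameter below is hypothetical):

```python
import numpy as np

def downscale_sm(sm_coarse, lst_fine, dsm_dsee=0.2):
    """Simplified LST-based disaggregation of one coarse soil-moisture value.

    sm_coarse: coarse-pixel SM [m^3/m^3]; lst_fine: array of fine-scale
    land surface temperatures [K] inside that pixel; dsm_dsee: assumed
    linear sensitivity of SM to soil evaporative efficiency (SEE).
    """
    lst = np.asarray(lst_fine, float)
    # Soil evaporative efficiency: cooler pixels evaporate more (wetter).
    see = (lst.max() - lst) / (lst.max() - lst.min())
    # First-order correction around the coarse-pixel mean, so the
    # average of the fine-scale SM reproduces the coarse value.
    return sm_coarse + dsm_dsee * (see - see.mean())
```

The construction conserves the coarse-pixel mean by design, which is the basic sanity check any such downscaling scheme must pass.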

Xia Xu;Zhenwei Shi;Bin Pan; "A New Unsupervised Hyperspectral Band Selection Method Based on Multiobjective Optimization," vol.14(11), pp.2112-2116, Nov. 2017. Unsupervised band selection methods usually assume specific optimization objectives, which may include band or spatial relationships. However, since one objective can represent only part of the hyperspectral characteristics, it is difficult to determine which objective is the most appropriate. In this letter, we propose a new multiobjective optimization-based band selection method, which is able to simultaneously optimize several objectives. The hyperspectral band selection is transformed into a combinatorial optimization problem, where each band is represented by a binary code. More importantly, to overcome the problem of unique solution selection in traditional multiobjective methods, we develop a new incorporated rank-based solution set concentration approach in the process of Tchebycheff decomposition. The performance of our method is evaluated on the application of hyperspectral imagery classification. Three recently proposed band selection methods are compared.
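Tchebycheff decomposition turns the multiobjective problem into scalar subproblems, one per weight vector; a minimal sketch of the scalarizing function follows (the letter's rank-based solution-set concentration is its own contribution and is not reproduced here):

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization used in decomposition-based
    multiobjective optimization:

        g(x | w, z*) = max_i  w_i * |f_i(x) - z*_i|

    f: objective vector of a candidate solution (here, a binary band
    code); weights: subproblem weight vector; z_star: ideal point.
    Minimizing g over candidates solves one subproblem.
    """
    f = np.asarray(f, float)
    z = np.asarray(z_star, float)
    return float(np.max(np.asarray(weights, float) * np.abs(f - z)))
```

Sweeping a spread of weight vectors and keeping each subproblem's minimizer approximates the whole Pareto front, from which a single band subset must then be picked, which is exactly the selection problem the letter addresses.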

Lei Pan;Heng-Chao Li;Hua Meng;Wei Li;Qian Du;William J. Emery; "Hyperspectral Image Classification via Low-Rank and Sparse Representation With Spectral Consistency Constraint," vol.14(11), pp.2117-2121, Nov. 2017. In this letter, a low-rank and sparse representation classifier with a spectral consistency constraint (LRSRC-SCC) is proposed. Different from the SRC that represents samples individually, LRSRC-SCC reconstructs samples jointly and is able to capture the local and global structures simultaneously. In this proposed classifier, an adaptive spectral constraint is imposed on both the low-rank and sparse terms so as to better reveal the data structure and enhance its discriminative power. In addition, the alternating direction method is introduced to solve the underlying minimization problem, in which, more importantly, the subobjective function associated with the low-rank term is optimized based on the rank equivalence between a matrix and its Gram matrix, resulting in a closed-form solution. Finally, LRSRC-SCC is extended to LRSRC-SCCE for fully exploiting the spatial information. Experimental results on two hyperspectral data sets demonstrate that the proposed LRSRC-SCC and LRSRC-SCCE methods outperform some state-of-the-art methods.

Howard A. Zebker; "User-Friendly InSAR Data Products: Fast and Simple Timeseries Processing," vol.14(11), pp.2122-2126, Nov. 2017. Interferometric synthetic aperture radar (InSAR) methods provide high-resolution maps of surface deformation applicable to many scientific, engineering, and management studies. Despite its utility, the specialized skills and computer resources required for InSAR analysis remain as barriers for truly widespread use of the technique. Reduction of radar scenes to maps of temporal deformation evolution requires not only detailed metadata describing the exact radar and surface acquisition geometries, but also a software package that can combine these for the specific scenes of interest. Furthermore, the range-Doppler reference frame and radar coordinate system itself are confusing, so that many users find it hard to incorporate even useful products in their customary analyses. Finally, the sheer data volume needed for interferogram time series makes InSAR analysis challenging for many analysis systems. We show here that it is possible to deliver radar data products to users that address all of these difficulties, so that the data acquired by large, modern satellite systems are ready to use in more natural coordinates, without requiring further processing, and in as small volume as possible.

Shu Tian;Ye Zhang;Junping Zhang;Nan Su; "A Novel Deep Embedding Network for Building Shape Recognition," vol.14(11), pp.2127-2131, Nov. 2017. Building shape, as a key structured element, plays a significant role in various urban remote sensing applications. However, because of high complexity and intraclass variations between building structures, the capability of building shape description and recognition becomes limited or even impoverished. In this letter, a novel deep embedding network is proposed for building shape recognition, which combines the strength of the unsupervised feature learning of convolutional neural networks (CNNs) and a novel triplet loss. Specifically, we take advantage of the strong discriminative power of CNNs to learn an efficient building shape representation for shape recognition. With this deep embedding network, the high-dimensional image space can be mapped into a low-dimensional feature space, and the deep features can effectively reduce the intraclass variations while increasing the interclass variation between different building shape images. Afterward, the derived deep features are exploited for the process of building shape recognition. This method consists of two stages. In the first stage, for standard building shape image queries stored in the shape primitives library and the building shape data set, two sets of deep features are extracted with the deep embedding network. In the second stage, we formulate the shape recognition task into a feature matching problem and the final building shape recognition results can be achieved by set-to-set feature matching method. Experiments on the VHR-10 and UCML data sets demonstrate the effectiveness and precision of the proposed method.

R. Jin;X. Li;S. M. Liu; "Understanding the Heterogeneity of Soil Moisture and Evapotranspiration Using Multiscale Observations From Satellites, Airborne Sensors, and a Ground-Based Observation Matrix," vol.14(11), pp.2132-2136, Nov. 2017. This letter summarizes a special stream of the IEEE Geoscience and Remote Sensing Letters devoted to understanding the heterogeneity in soil moisture, evapotranspiration, and other related ecohydrological variables based on multiscale observations from satellite-based and airborne remote sensors, a flux observation matrix, and an ecohydrological wireless sensor network in the Heihe Watershed Allied Telemetry Experimental Research project. Scaling and uncertainty are the key issues in the remote-sensing research community, especially regarding the heterogeneous land surface. However, a lack of understanding and an inadequate theoretical basis impede the development and innovation of forward radiative transfer models, as well as the quantitative retrieval and validation of remote-sensing products. We summarize the prior considerations regarding surface heterogeneity research and report the main outcomes and contributions of this special stream. The highlights of this stream are related to spatial sampling, upscaling, uncertainty analysis, the validation of remote-sensing products, and accounting for heterogeneity in remote-sensing models.

Xiaowei Chen;Baoming Zhang;Minyi Cen;Haitao Guo;Tonggang Zhang;Chuan Zhao; "SRTM DEM-Aided Mapping Satellite-1 Image Geopositioning Without Ground Control Points," vol.14(11), pp.2137-2141, Nov. 2017. A Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM)-aided geopositioning method is proposed to solve the problem of geopositioning without ground control points for Mapping Satellite-1 imagery. The method comprises coarse and accurate correction stages, and it compensates errors gradually. DEM extraction and DEM matching are important steps in both the stages, the objectives of which are to compensate the relative and absolute errors in an image, respectively. The SRTM DEM is integrated into all the processes to take full advantage of its consistent and high accuracy. Experimental results showed that this method could greatly improve geometry accuracy and obtain stable and highly accurate geopositioning for Mapping Satellite-1 images, regardless of the land area proportion (LAP) or the production mode. The planimetric and vertical accuracies were better than 8.1 and 5.2 m, respectively, which could satisfy the accuracy requirements of mapping at 1:50 000 scale. The computational efficiency depends on the LAP and target DEM resolution.

Haoyang Yu;Lianru Gao;Wenzhi Liao;Bing Zhang;Aleksandra Pižurica;Wilfried Philips; "Multiscale Superpixel-Level Subspace-Based Support Vector Machines for Hyperspectral Image Classification," vol.14(11), pp.2142-2146, Nov. 2017. This letter introduces a new spectral-spatial classification method for hyperspectral images. A multiscale superpixel segmentation is first used to model the distribution of classes based on spatial information. In this context, the original hyperspectral image is integrated with segmentation maps via a feature fusion process in different scales such that the pixel-level data can be represented by multiscale superpixel-level (MSP) data sets. Then, a subspace-based support vector machine (SVMsub) is adopted to obtain the classification maps with multiscale inputs. Finally, the classification result is achieved via a decision fusion process. The resulting method, called MSP-SVMsub, makes use of the spatial and spectral coherences, and contributes to better feature characterization. Experimental results based on two real hyperspectral data sets indicate that the MSP-SVMsub exhibits good performance compared with other related methods.
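The final decision-fusion step can be illustrated with a per-pixel majority vote over the per-scale classification maps, one common fusion rule; the paper's exact fusion scheme may differ, so treat this as a hedged sketch:

```python
from collections import Counter

def majority_vote(maps):
    # Decision fusion: for each pixel, take the most frequent label
    # across the classification maps produced at each superpixel scale.
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*maps)]
```

With three scale maps over three pixels, `majority_vote([[1, 2, 2], [1, 1, 2], [3, 2, 2]])` returns `[1, 2, 2]`.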

Xianghai Cao;Cuicui Wei;Jungong Han;Licheng Jiao; "Hyperspectral Band Selection Using Improved Classification Map," vol.14(11), pp.2147-2151, Nov. 2017. Although it is a powerful feature selection algorithm, the wrapper method is rarely used for hyperspectral band selection. Its accuracy is restricted by the number of labeled training samples, and collecting such label information for hyperspectral images is time consuming and expensive. Benefiting from the local smoothness of hyperspectral images, a simple yet effective semisupervised wrapper method is proposed, in which edge-preserving filtering is exploited to improve the pixel-wise classification map, which in turn can be used to assess the quality of a band set. A key property of the proposed method is that it uses the information of abundant unlabeled samples and valuable labeled samples simultaneously. The effectiveness of the proposed method is illustrated with five real hyperspectral data sets. Compared with other wrapper methods, the proposed method shows consistently better performance.

Jiahui Qu;Yunsong Li;Wenqian Dong; "Hyperspectral Pansharpening With Guided Filter," vol.14(11), pp.2152-2156, Nov. 2017. A new hyperspectral (HS) pansharpening method based on guided filter is proposed in this letter. The proposed method, which obtains the spatial detail difference of each band successively, is different from the traditional component substitution method. The detail information of each band is extracted at first. Then, the panchromatic (PAN) image is sharpened to enhance the details. The spatial information difference between the enhanced PAN image and the detail information of each band is obtained using the guided filter, without causing spectral and spatial distortion. In order to reduce spectral distortion and add enough spatial information, the injection gains matrix is generated. The fused HS image is finally achieved by injecting the corresponding spatial difference into each band of the interpolated HS image. Experiments demonstrate that the proposed method can obtain superior performance in terms of subjective and objective evaluations.
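As a rough illustration of the guided-filtering idea used to transfer spatial detail, here is a single-window (global) guided filter on 1-D signals; the actual method applies local windows on 2-D images together with the paper's injection-gains matrix, neither of which is reproduced here:

```python
def guided_filter_global(I, p, eps=1e-3):
    # Single-window guided filter: q = a*I + b, where
    # a = cov(I, p) / (var(I) + eps) and b = mean(p) - a * mean(I).
    # The output follows the structure (edges) of the guide I while
    # approximating the input p.
    n = len(I)
    mI = sum(I) / n
    mp = sum(p) / n
    cov = sum((i - mI) * (x - mp) for i, x in zip(I, p)) / n
    var = sum((i - mI) ** 2 for i in I) / n
    a = cov / (var + eps)
    b = mp - a * mI
    return [a * i + b for i in I]
```

When the guide and the input coincide, the output is (up to the regularizer `eps`) the input itself, which is the edge-preserving property the pansharpening step relies on.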

Xikai Fu;Maosheng Xiang;Bingnan Wang;Shuai Jiang;Xiaofan Sun; "A Robust Yaw and Pitch Estimation Method for Mini-InSAR System," vol.14(11), pp.2157-2161, Nov. 2017. For a mini-interferometric synthetic aperture radar system mounted on a small aircraft or unmanned aerial vehicle, yaw and pitch angle deviations can be considerably high because of the platform's small size and atmospheric turbulence. Moreover, a large, heavy, and costly inertial navigation system cannot be installed because of limits on the aircraft's carrying capacity and system cost. To address this problem, this letter proposes a robust yaw and pitch angle estimation method based on the relationship between the range-variant Doppler centroid and the attitude angles. At each azimuth moment, the range-variant Doppler centroid is estimated for each range gate, and the range-variant Doppler centroid model is solved using a total least squares method to obtain robust yaw and pitch angle estimates. A comparison of the estimated yaw and pitch angles with those recorded by a high-accuracy position and orientation system validated the effectiveness and reliability of the proposed estimation method.
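The per-azimuth fitting step can be sketched by regressing the Doppler centroid against range with a linear model; ordinary least squares stands in here for the paper's total least squares, and the linear form of the model is an illustrative assumption:

```python
def fit_centroid_model(ranges, fdc):
    # Fit the range-variant Doppler centroid model f_dc(r) = c0 + c1*r
    # by ordinary least squares (the paper solves it with total least
    # squares, which also accounts for errors in the range variable).
    n = len(ranges)
    sx = sum(ranges)
    sy = sum(fdc)
    sxx = sum(r * r for r in ranges)
    sxy = sum(r * f for r, f in zip(ranges, fdc))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return c0, c1
```

The fitted coefficients `c0` and `c1` would then be mapped to yaw and pitch through the geometric relationship the letter derives.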

Yongpeng Zhu;Yinsheng Wei;Lei Yu; "Ionospheric Decontamination for HF Hybrid Sky-Surface Wave Radar on a Shipborne Platform," vol.14(11), pp.2162-2166, Nov. 2017. This letter describes a method of correcting ionospheric frequency modulation for a high-frequency hybrid sky-surface wave radar mounted on a shipborne platform. In the proposed method, azimuth-dependent sea clutter signals are first decomposed into monocomponent signals based on distinguishable differences in their directions of incidence. Afterward, based on the decomposed monocomponent signals, the statistical mean of the time derivatives of the signal phases, weighted by the signal amplitudes, is used to estimate the ionospheric frequency modulation. Finally, the estimated result is applied to the received data to compensate for the ionospheric contamination. Numerical results on simulated data demonstrate the effectiveness of the proposed algorithm.
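The estimation step above — an amplitude-weighted mean of the time derivatives of the signal phase — can be sketched for one decomposed monocomponent signal; the finite-difference phase derivative and sample data are illustrative assumptions:

```python
import cmath

def estimate_freq_modulation(signal, dt):
    # Amplitude-weighted mean of d(phase)/dt for a monocomponent
    # complex signal; the phase increment between adjacent samples is
    # read off the product signal[k] * conj(signal[k-1]).
    num = 0.0
    den = 0.0
    for k in range(1, len(signal)):
        dphi = cmath.phase(signal[k] * signal[k - 1].conjugate()) / dt
        w = abs(signal[k])
        num += w * dphi
        den += w
    return num / den  # estimated instantaneous frequency (rad/s)
```

For a clean complex exponential at 2 rad/s the estimator recovers the frequency exactly; the estimate would then be used to demodulate the ionospheric contamination from the received data.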

Wei Feng;Wenxing Bao; "Weight-Based Rotation Forest for Hyperspectral Image Classification," vol.14(11), pp.2167-2171, Nov. 2017. In this letter, we propose a new weight-based rotation forest (WRoF) induction algorithm for the classification of hyperspectral images. The main idea of the new method is to guide the growth of trees adaptively by exploring the potential of important instances. The importance of a training instance is reflected by a dynamic weight function: the higher the weight of an instance, the more the next tree will have to focus on it. Experimental results on two real hyperspectral data sets show that the WRoF algorithm yields significant classification improvements over random forests and rotation forest.

* "IEEE Geoscience and Remote Sensing Letters information for authors," vol.14(11), pp.C3-C3, Nov. 2017.* These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.

* "IEEE Geoscience and Remote Sensing Letters Institutional Listings," vol.14(11), pp.C4-C4, Nov. 2017.* Presents the GRSS society institutional listings.

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing - new TOC (2017 November 23) [Website]

* "Front cover," vol.10(11), pp.C1-C1, Nov. 2017.* Presents the front cover for this issue of the publication.

* "IEEE Geoscience and Remote Sensing Society," vol.10(11), pp.C2-C2, Nov. 2017.* Provides a listing of current staff, committee members and society officers.

* "Table of contents," vol.10(11), pp.4661-4662, Nov. 2017.* Presents the table of contents for this issue of this publication.

J. P. Kerekes;J. Shi;L. Tsang;J.-P. Gastellu-Etchegorry; "Foreword to the Special Issue on Modeling and Simulation of Remote Sensing Data," vol.10(11), pp.4663-4665, Nov. 2017. The papers in this special issue focus on the deployment of modeling and simulation in remote sensing applications, where they play important roles in the development and application of remote sensing technology. While modeling occurs in many aspects of remote sensing, including the representation of natural phenomena and the development and use of data analysis algorithms, a particular role of models is in the creation of the signatures and signals that lead to the remotely sensed measurements from which we can extract information about the scene.

Eric Anterrieu;François Cabot;Ali Khazaal;Yann H. Kerr; "On the Simulation of Complex Visibilities in Imaging Radiometry by Aperture Synthesis," vol.10(11), pp.4666-4676, Nov. 2017. The basic observables of an imaging interferometer by aperture synthesis are the complex visibilities. Under some conditions, they can be simulated with the aid of the van Cittert–Zernike theorem. However, owing to underlying assumptions, some important effects that may alter them cannot be taken into account. This paper is devoted to the numerical simulation of complex visibilities without any reference to the van Cittert–Zernike theorem, in such a way that these effects can be taken into account. The emission from an extended source is modeled using a linear superposition of random waves emitted by a collection of point sources, which are all assumed to behave like black bodies at thermal equilibrium. These random waves are numerically generated with the aid of white noises filtered in such a way that their power spectral densities follow the shape of Planck distributions at the temperature of the point sources over a wide range of frequencies. The radio signal is then transported to the antennas, where the voltage patterns are taken into account as well as the filter responses of the bandpass receivers. The signal is then sent to the correlator unit to be cross-correlated. From emission to correlation, perturbing effects can be introduced at any time. To illustrate this modeling method, numerical simulations are carried out in the L-band around 1413.5 MHz in reference to the SMOS-next project led by the French Space Agency. The results are discussed and compared with the estimates provided by the van Cittert–Zernike theorem. Owing to the amount of calculations to be performed, massively parallel architectures such as those found in GPUs were required.

Jeonghwan Park;Joel T. Johnson; "A Study of Wind Direction Effects on Sea Surface Specular Scattering for GNSS-R Applications," vol.10(11), pp.4677-4685, Nov. 2017. A modeling study investigating the influence of wind direction on spaceborne global navigation satellite system reflectometry (GNSS-R) near-specular observations of the sea surface is reported. The study first focuses on a purely specular geometry under plane wave incidence, for which it is shown using the theorem of reciprocity and reflection symmetry that up-down wind variations are identically zero. It is also shown that “single-scattering” approximations of rough surface scattering predict no variations with wind direction of any kind for a purely specular geometry, while higher order approximations (such as the second-order small-slope approximation) can predict up/cross wind differences. Examples of these variations are reported and found to be small. Because the delay doppler maps (DDMs) measured in GNSS-R include some nonspecular contributions even for “specular” portions of the DDM, the second part of the study performs an examination of near-specular DDM variations with wind direction using the widely used geometrical optics approximation of surface scattering for a surface described with the non-Gaussian Cox–Munk slope probability density function. Variations with wind direction of the normalized radar cross section (NRCS) mapped onto the surface are examined, and again, it is shown that these variations are small for surface portions contributing to the near-specular portion of the DDM. In addition, it is shown that the dependencies of the bistatic NRCS on wind direction are such that differing portions of the surface “glistening zone” have differing phase shifts in their dependence on wind direction, causing the wind dependencies of the final near-specular DDM to be negligible. 
The final results of the study suggest that any wind direction dependence in spaceborne GNSS-R should be sought only in portions of the DDM away from the specular region. These results provide information to guide analyses of the wind direction information available in current GNSS-R missions such as TDS-1 and the Cyclone Global Navigation Satellite System.

Shurun Tan;Jiyue Zhu;Leung Tsang;Son V. Nghiem; "Microwave Signatures of Snow Cover Using Numerical Maxwell Equations Based on Discrete Dipole Approximation in Bicontinuous Media and Half-Space Dyadic Green's Function," vol.10(11), pp.4686-4702, Nov. 2017. A three-dimensional snowpack scattering and emission model is developed by numerically solving Maxwell's equations (NMM3D) over the entire snowpack on an underlying half-space. The solutions are crucial to microwave remote sensing that requires the preservation of coherent phase information. The heterogeneous snowpack is represented as a bicontinuous medium. Effects of the underlying half-space are included through a half-space Green's function in a volume integral equation formulation. The volume integral equation is then solved using the discrete dipole approximation. The fast Fourier transform is effectuated for all three dimensions with half-space Green's function rather than the conventional free space Green's function. To overcome the snow volume truncation in the finite numerical calculations, periodic boundary conditions are applied in the lateral extent. Thus, in NMM3D, the physical microwave scattering and emission problem is solved without using any radiative transfer equations. In this formulation, the scattering matrix of the snowpack accounts for both the magnitude and phase. The NMM3D simulations are demonstrated at Ku-band frequency for a snow cover up to 25-cm thick. The results are applicable to remote sensing of snow over sea ice, and thin layers of terrestrial snow. Quantitative values of backscattering and bistatic scattering coefficients are derived for active microwave remote sensing, and brightness temperatures for passive microwave remote sensing. The full wave simulation results are compared with those of the partially coherent approach of the dense media radiative transfer (DMRT). 
The NMM3D results capture effects of backscattering enhancement and coherent layering that are missed in DMRT. The full wave solution to Maxwell equations is important to advance radar polarimetry, interferometry, and tomography that require the preservation of the full phase information and all interface interactions for applications to radar remote sensing of snow cover on land and on sea ice.

Leung Tsang;Tien-Hao Liao;Shurun Tan;Huanting Huang;Tai Qiao;Kung-Hau Ding; "Rough Surface and Volume Scattering of Soil Surfaces, Ocean Surfaces, Snow, and Vegetation Based on Numerical Maxwell Model of 3-D Simulations," vol.10(11), pp.4703-4720, Nov. 2017. In this paper, we give an overview and an update on the recent progress of our research group in numerical model of Maxwell equations in three dimensions (NMM3D) on random rough surfaces and discrete random media and their applications in active and passive microwave remote sensing. The random rough surface models were applied to soil surfaces and ocean surfaces. The discrete random media models were applied to snow and vegetation. For rough surface scattering, we use the surface integral equations of Poggio–Miller–Chang–Harrington–Wu–Tsai that are solved by the method of moments using the Rao–Wilton–Glisson basis functions. The sparse matrix canonical grid method is used to accelerate the matrix column multiplications. In modeling the rough surfaces, we use the exponential correlation functions for soil surfaces and the Durden–Vesecky ocean spectrum for ocean surfaces. In scattering by terrestrial snow and snow on sea ice, we use the volume integral equations formulated with the dyadic half-space Green's function. The microstructure of snow is modeled by the bicontinuous media. In scattering by vegetation, we use the discrete scatterers of cylinder. The NMM3D formulation is based on the Foldy–Lax multiple scattering equations in conjunction with the body of revolution for a single scatterer. For rough surface scattering, simulations results are compared with advanced integral equation model, small slope approximation, small perturbation method, and two scale model. For volume scattering by snow, results are compared with the bicontinuous dense media radiative transfer. For scattering by vegetation, results are compared with distorted Born approximation and radiative transfer equation. 
Comparisons are also made with experiments.

Yu Liu;Kun-Shan Chen;Peng Xu;Zhao-Liang Li; "Bistatic Coherent Polarimetric Scattering of Randomly Corrugated Layered Snow Surfaces," vol.10(11), pp.4721-4739, Nov. 2017. We analyzed the bistatic coherent scattering mechanism of a layered, randomly corrugated snow surface, a typical rough surface, with radar polarimetry theory, with the scattering matrix obtained from a physics-based full wave numerical simulation by solving Maxwell's equations. The effects of top–bottom structure, layer thickness, frequency response, and angular dependence are illustrated by observing the Stokes vector, coherence matrix, and Kennaugh matrix. The results show that top–bottom structure and snow thickness change the state of polarization depending on frequency and bistatic configuration. Analyzing the bistatic polarimetric scattering mechanism based on numerical simulation and polarimetry theory can effectively guide the configuration of bistatic observations for detecting and classifying radar targets. For example, observation at a specular angle of 55° contains comparatively more information on surface structure, and wave entropy is preferable to degree of polarization as a snow surface structure estimator. Moreover, parameters from Kennaugh decomposition can indicate top–bottom structure better than layer thickness. Last but not least, we also found that the symmetry assumption commonly used in the classical theory of polarization is generally not valid for bistatic observation, and that the combination of some Huynen parameters can be a reasonably good indicator of snow surface structural symmetry. We expect this paper to offer a deeper understanding of the coherent imaging of snow surfaces and to help design a novel bistatic imaging system for layered snow surfaces.

Ying Yang;Kun-Shan Chen;Leung Tsang;Liu Yu; "Depolarized Backscattering of Rough Surface by AIEM Model," vol.10(11), pp.4740-4752, Nov. 2017. This paper presents a new expression for multiple scattering by including the upward- and downward-propagating waves in medium 1 and medium 2. Unlike single scattering, multiple scattering accounts for the interactions, up to second order, among all the spectral components of the surface roughness spectrum. Although the derivation is mathematically intricate, yet manageable, the final expression is compact and easy to implement numerically, involving only a series of two-dimensional integrations. Some special cases of depolarized backscattering are also derived and compared with a known analytical model to partly validate the updated AIEM model. Extensive comparisons with numerical simulations and field measurements are then conducted to illustrate the model's accuracy.

Seung-bum Kim;Motofumi Arii;Thomas Jackson; "Modeling L-Band Synthetic Aperture Radar Data Through Dielectric Changes in Soil Moisture and Vegetation Over Shrublands," vol.10(11), pp.4753-4762, Nov. 2017. L-band airborne synthetic aperture radar observations were made over California shrublands to better understand the effects of soil and vegetation parameters on the backscattering coefficient <inline-formula><tex-math notation="LaTeX">$(\sigma^{0})$</tex-math></inline-formula>. Temporal changes in <inline-formula><tex-math notation="LaTeX">$\sigma^{0}$</tex-math></inline-formula> of up to 3 dB were highly correlated with surface soil moisture but not with vegetation, even though vegetation water content (VWC) varied seasonally by a factor of two. HH was always greater than VV, suggesting the importance of double-bounce scattering by the woody parts. However, the geometric and dielectric properties of the woody parts did not vary significantly over time. Instead, the changes in VWC occurred primarily in thin leaves that may not meaningfully influence absorption and scattering. A physically based model for single scattering by discrete plant elements successfully simulated the magnitude of the temporal variations in HH, VV, and HH/VV with a difference of less than 0.9 dB for both the mean and the standard deviation when compared with the airborne data. In order to simulate the observations, the VWC input of the plant to the model was formulated as a function of the plant's dielectric property (water fraction) while the plant geometry remained static in time. In comparison, when the VWC input was characterized by the geometry of a growing plant, the model performed poorly in describing the observed patterns in the <inline-formula><tex-math notation="LaTeX">$\sigma^{0}$</tex-math></inline-formula> changes. The modeling results explain the observation that soil moisture correlated highly with <inline-formula><tex-math notation="LaTeX">$\sigma^{0}$</tex-math></inline-formula>: the dominant mechanisms for HH and VV are double-bounce scattering by trunks and soil surface scattering, respectively. The time-series inversion of the physical model was able to retrieve soil moisture with a difference of <inline-formula><tex-math notation="LaTeX">$-0.037\,\text{m}^{3}/\text{m}^{3}$</tex-math></inline-formula> (mean), <inline-formula><tex-math notation="LaTeX">$0.025\,\text{m}^{3}/\text{m}^{3}$</tex-math></inline-formula> (standard deviation), and 0.89 (correlation), which demonstrates the efficacy of model-based time-series soil moisture retrieval for shrublands.

Yang Zhang;Qinhuo Liu;Longfei Tan;Huaguo Huang;Wenjian Ni;Tiangang Yin;Wenhan Qin;Guoqing Sun; "A 3-D Joint Simulation Platform for Multiband Remote Sensing," vol.10(11), pp.4763-4778, Nov. 2017. Canopy radiation and scattering signals contain abundant vegetation information. Many biophysical parameters can be quantitatively retrieved with the help of canopy radiation and scattering models. Joint simulation with three-dimensional (3-D) models across multiple bands, combining the advantages of different spectral (frequency) domains, could be a useful tool for validation in remote sensing. This manuscript presents a 3-D joint simulation platform (3-DMultiSim) that simulates spectral responses from visible to microwave bands. We validated our platform with corn field experimental data at the Huailai testing site of the Chinese Academy of Sciences. The correlation coefficients between the validation data and the simulation results were higher than 0.92, while the relative mean deviation was 15%. For the thermal infrared band, the correlation coefficient was 0.91, but the variation of the simulated directional brightness temperature over the 2π space was less than 0.4 °C; the reason may be a model limitation at high leaf area index (LAI). For the microwave bands, the simulation data and the validation data agreed best at L-band, whereas the X- and C-bands showed the same trend but a larger deviation. As an application of the platform, we performed sensitivity analyses of the radiation and scattering responses to LAI and incident-observation geometries at multiple bands. The simulation results were analyzed quantitatively, and further applications of the joint simulation platform are proposed.

Stefan Auer;Isabel Hornig;Michael Schmitt;Peter Reinartz; "Simulation-Based Interpretation and Alignment of High-Resolution Optical and SAR Images," vol.10(11), pp.4779-4793, Nov. 2017. The successful alignment of optical and synthetic aperture radar (SAR) satellite data requires that we account for the effects of sensor-specific geometric distortion, which is a consequence of the different imaging concepts of the sensors. This paper introduces SimGeoI, a simulation framework for the object-related interpretation of optical and SAR images, as a solution to this problem. Using metainformation from the images and a digital surface model as input, the processor follows the steps of scene definition, ray tracing, image generation, geocoding, interpretation layer generation, and image part extraction. Thereby, for the first time, object-related sections of optical and SAR images are automatically identified and extracted in world coordinates under consideration of three-dimensional object shapes. A case study for urban scenes in Munich and London, based on WorldView-2 images and high-resolution TerraSAR-X data, confirms the potential of SimGeoI in the context of a perspective-independent and object-focused analysis of high-resolution satellite data.

Sanghui Han;John P. Kerekes; "Overview of Passive Optical Multispectral and Hyperspectral Image Simulation Techniques," vol.10(11), pp.4794-4804, Nov. 2017. The simulation of optical images can play key roles in the development of new instruments, the quantitative evaluation of algorithms and in the training of both image analysis software and human analysts. Methods for image simulation include surrogate data collections, operations on empirical imagery, statistical generation techniques, and full physical modeling approaches. Each method offers advantages or disadvantages in terms of time, cost, and realism. Current state of the art suggests three-dimensional radiative transfer models capture most of the significant characteristics of real imagery and find valuable use in system development and evaluation programs. Emerging computational power available from multithreading, graphical processing units, and techniques from deep learning will continue to enable even more realistic simulations in the near future.

Rajagopalan Rengarajan;John R. Schott; "Modeling and Simulation of Deciduous Forest Canopy and Its Anisotropic Reflectance Properties Using the Digital Image and Remote Sensing Image Generation (DIRSIG) Tool," vol.10(11), pp.4805-4817, Nov. 2017. Extraction of biophysical information from forest canopies using temporal analysis of multispectral and hyperspectral data can be significantly improved by understanding its anisotropic reflectance properties. However, limitations on the accessibility and data collection techniques in the field reduce the availability of high-resolution bidirectional reflectance measurements (BRDF) to a few datasets. These limitations can be mitigated in a virtual environment and this paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Image and Remote Sensing Image Generation (DIRSIG) tool. The three-dimensional geometries of the trees were modeled using forest inventory data and OnyxTree, while the spectral properties of the geometric elements were assigned based on the field collected spectra and PROSPECT inversion model. The DIRSIG tool was used as a virtual goniometer to measure the BRDF observations for varying sun-view geometries and a full hemispherical BRDF model was constructed by fitting the measurements to a semiempirical BRDF model. This paper discusses the methods involved in modeling the forest canopy scene, sensitivity of the radiative transfer, BRDF sampling and modeling strategies, model accuracy and its effect on real-world simulations. The model fit results indicate a root mean square error of less than 5% relative to the forests reflectance in the VIS-NIR-SWIR region. The simulated BRDF matched to within 2% of the Landsat-8 surface reflectance product in the red and NIR bands. The results can be used directly to evaluate BRDF modeling algorithms and the proposed method can be easily extended for other biomes.

Adam A. Goodenough;Scott D. Brown; "DIRSIG5: Next-Generation Remote Sensing Data and Image Simulation Framework," vol.10(11), pp.4818-4833, Nov. 2017. The digital imaging and remote sensing image generation model is a physics-based image and data simulation model that is primarily used to generate synthetic imagery across the visible to thermal infrared regions using engineering-driven descriptions of remote sensing systems. The model recently went through a major redesign and reimplementation effort to address changes in user requirements and numerical computation trends that have emerged in the 15 years since the last major development effort. The new model architecture adopts some of the latest light transport algorithms matured by the computer graphics community and features a framework that is easily parallelized at the microscale (multithreading) and macroscale (cluster-based computing). A detailed description of the framework is provided, including a novel method for efficiently storing, evaluating, integrating, and sampling spherical and hemispherical datasets appropriate for the representation of modeled or measured bidirectional scattering, reflectance, and transmission distribution functions. The capabilities of the model are then briefly demonstrated and cross-verified with scenarios of interest to the remote sensing community.

Jianbo Qi;Donghui Xie;Dashuai Guo;Guangjian Yan; "A Large-Scale Emulation System for Realistic Three-Dimensional (3-D) Forest Simulation," vol.10(11), pp.4834-4843, Nov. 2017. The realistic reconstruction and radiometric simulation of a large-scale three-dimensional (3-D) forest scene have potential applications in remote sensing. Although many 3-D radiative transfer models concerning forest canopy have been developed, they mainly focused on homogeneous or relatively small heterogeneous scenes, which are not compatible with the coarse-resolution remote sensing observations. Due to the huge complexity of forests and the inefficiency of collecting precise 3-D data of large areas, realistic simulation over large-scale forest areas remains challenging, especially in regions of complex terrain. In this study, a large-scale emulation system for realistic 3-D forest simulation is proposed. The 3-D forest scene is constructed from a representative single tree database (SDB) and airborne laser scanning (ALS) data. ALS data are used to extract tree height, crown diameter and position, which are linked to the individual trees in SDB. To simulate the radiometric properties of the reconstructed scene, a radiative transfer model based on a parallelized ray-tracing code was developed. This model has been validated with an abstract and an actual 3-D scene from the radiation transfer model intercomparison website and it showed comparable results with other models. Finally, a 1 km <inline-formula><tex-math notation="LaTeX">$\times$</tex-math></inline-formula> 1 km scene with more than 100 000 realistic individual trees was reconstructed and a Landsat-like reflectance image was simulated, which kept the same spatial pattern as the actual Landsat 8 image.

Julianne de Castro Oliveira;Jean-Baptiste Féret;Flávio Jorge Ponzoni;Yann Nouvellon;Jean-Philippe Gastellu-Etchegorry;Otávio Camargo Campoe;José Luiz Stape;Luiz Carlos Estraviz Rodriguez;Guerric le Maire; "Simulating the Canopy Reflectance of Different Eucalypt Genotypes With the DART 3-D Model," vol.10(11), pp.4844-4852, Nov. 2017. Finding suitable models of canopy reflectance in forward simulation mode is a prerequisite for their use in inverse mode to characterize canopy variables of interest, such as leaf area index (LAI) or chlorophyll content. In this study, the accuracy of the three-dimensional reflectance model DART (Discrete Anisotropic Radiative Transfer) was assessed for canopies of different genotypes of Eucalyptus, having distinct biophysical and biochemical characteristics, to improve the knowledge of how these characteristics influence the reflectance signal as measured by passive orbital sensors. The first step was to test the model's suitability for simulating reflectance images in the visible and near infrared. We parameterized the DART model using extensive measurements from Eucalyptus plantations including 16 contrasting genotypes. Forest inventories were conducted, and leaf, bark, and forest floor optical properties were measured. Simulation accuracy was evaluated by comparing the mean top of canopy (TOC) bidirectional reflectance of DART with TOC reflectance extracted from a Pleiades very high resolution satellite image. Results showed a good performance of DART, with a mean reflectance absolute error lower than 2%. Intergenotype reflectance variability was correctly simulated, but the model did not succeed in capturing the slight spatial variation within a given genotype, except when large gaps appeared due to tree mortality. The second step consisted of a sensitivity analysis to explore which biochemical or biophysical characteristics most influence the canopy reflectance differences between genotypes. Perspectives for using the DART model in inversion mode in these ecosystems were discussed.

Sahar Ben Hmida;Abdelaziz Kallel;Jean-Philippe Gastellu-Etchegorry;Jean-Louis Roujean; "Crop Biophysical Properties Estimation Based on LiDAR Full-Waveform Inversion Using the DART RTM," vol.10(11), pp.4853-4868, Nov. 2017. This paper presents the results of a three-dimensional (3-D) model inversion that demonstrates the potential of small-footprint light detection and ranging (LiDAR) waveforms for estimating crop biophysical properties. To this end, we consider the height, leaf area index (LAI), and ground spectral reflectance of maize and wheat fields. The crop structure spatial variability observed per measured waveform is a source of inaccuracy for the inversion of LiDAR small-footprint waveforms; for example, in the maize field, the standard deviation is 0.16 m for height and 0.6 for LAI. To mitigate this issue, all measured waveforms are first classified into maize and wheat clusters. Then, biophysical properties are assessed per cluster using a look-up table of waveforms simulated by the discrete anisotropic radiative transfer model in the LiDAR configuration with realistic crop 3-D mock-ups of varied properties. Results were tested against in situ measurements. Crop height is very well estimated, with root-mean-square error (RMSE) <inline-formula><tex-math notation="LaTeX">$\approx 0.07$</tex-math> </inline-formula> and 0.04 m for maize and wheat, respectively. The LAI estimate is also accurate (RMSE = 0.17) for maize, except for the last wheat growth stage (RMSE = 0.5), possibly due to the low LAI of wheat. Finally, the field spatial heterogeneity justifies the selection of many clusters to obtain accurate results.

Álvaro Ordóñez;Francisco Argüello;Dora B. Heras; "GPU Accelerated FFT-Based Registration of Hyperspectral Scenes," vol.10(11), pp.4869-4878, Nov. 2017. Registration is a fundamental preliminary task in many hyperspectral imaging applications. Most existing algorithms are designed to work with RGB images and disregard execution time. This paper presents a phase correlation algorithm on GPU to register two remote sensing hyperspectral images. The proposed algorithm is based on principal component analysis, the multilayer fractional Fourier transform, combination of log-polar maps, and peak processing. It is fully developed in CUDA for NVIDIA GPUs. Techniques such as the efficient use of the memory hierarchy, the use of CUDA libraries, and the maximization of occupancy have been applied to reach the best performance on GPU. The algorithm is robust, achieving GPU speedups of up to <inline-formula><tex-math notation="LaTeX">$240.6\times$</tex-math></inline-formula>.
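The core of FFT-based registration is phase correlation: the normalized cross-power spectrum of two images has an inverse transform that peaks at their relative translation. A minimal CPU sketch in NumPy, recovering an integer shift only; the paper's GPU pipeline (PCA, fractional Fourier transforms, log-polar maps for rotation/scale) is far more elaborate:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation mapping image `b` onto `a`
    via the peak of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)      # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap peaks beyond the half-size back to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# toy example: shift an image by (3, -2) and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -2), axis=(0, 1))
print(phase_correlation_shift(shifted, img))
```

Subpixel accuracy is usually obtained by interpolating around the correlation peak, which the sketch omits.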

Raúl Guerra;Ernestina Martel;Jehandad Khan;Sebastián López;Peter Athanas;Roberto Sarmiento; "On the Evaluation of Different High-Performance Computing Platforms for Hyperspectral Imaging: An OpenCL-Based Approach," vol.10(11), pp.4879-4897, Nov. 2017. Hyperspectral imaging systems are a powerful tool for obtaining surface information in many different spectral channels that can be used in many different applications. Nevertheless, the huge amount of information provided by hyperspectral images also has a downside, since it has to be processed and analyzed. For this purpose, parallel hardware devices, such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), are typically used, especially for hyperspectral imaging applications under real-time constraints. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level language can be used to efficiently develop applications for multiple, different hardware devices. In this work, the parallel Fast Algorithm for Linearly Unmixing Hyperspectral Images (pFUN) has been implemented on two different NVIDIA GPUs, the GeForce GTX 980 and the Tesla K40c, using OpenCL. The obtained results are compared with those of the previously developed NVIDIA CUDA implementation of the pFUN algorithm on the same GPU devices, in order to compare the efficiency of OpenCL against a more device-specific language such as CUDA. Moreover, the FUN algorithm has also been implemented on a BittWare Stratix V Altera FPGA using OpenCL, to compare the results that can be obtained with OpenCL when targeting different devices and architectures. The obtained results demonstrate the suitability of the followed methodology in the sense that it allows efficient FPGA and GPU implementations able to cope with the stringent requirements imposed by hyperspectral imaging systems.

Bo Zhong;Wuhan Chen;Shanlong Wu;Longfei Hu;Xiaobo Luo;Qinhuo Liu; "A Cloud Detection Method Based on Relationship Between Objects of Cloud and Cloud-Shadow for Chinese Moderate to High Resolution Satellite Imagery," vol.10(11), pp.4898-4908, Nov. 2017. Cloud detection in satellite imagery is very important for quantitative remote sensing research and applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the newly launched moderate to high spatial resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), only have four available bands (blue, green, red, and near infrared), which fall far short of the requirements of most cloud detection methods. To solve this problem, an improved and automated cloud detection method for Chinese satellite sensors, called the object-oriented cloud and cloud-shadow matching (OCM) method, is presented in this paper. First, the automatic cloud cover assessment (ACCA) method, originally developed for Landsat-7 data, is modified to obtain an initial cloud map. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold produces a high-confidence cloud map with a large amount of omission, whereas a loose threshold produces a low-confidence cloud map with a large amount of commission. Second, a corresponding cloud-shadow map is produced using a threshold on the near-infrared band. Third, the cloud maps and cloud-shadow map are converted into cloud objects and cloud-shadow objects. Clouds and cloud shadows usually occur in pairs; consequently, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested on almost 200 HJ-1/CCD and GF-1/WFV images across China, with an overall cloud detection accuracy close to 90%.
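One way to read the strict/loose dual-threshold idea is as hysteresis masking: high-confidence pixels act as seeds that are grown only into loose-threshold pixels connected to them. This toy sketch is our illustration of that reading, not the published OCM; the cloud-shadow pairing step and the actual ACCA band tests are omitted:

```python
import numpy as np

def hysteresis_cloud_mask(brightness, strict_t, loose_t, max_iter=100):
    """Dual-threshold cloud masking: the strict threshold gives high-confidence
    seeds; loose-threshold pixels are kept only if 4-connected to a seed."""
    strict = brightness >= strict_t
    loose = brightness >= loose_t
    mask = strict.copy()
    for _ in range(max_iter):
        grown = mask.copy()
        # dilate by one pixel in the four cardinal directions
        grown[1:, :] |= mask[:-1, :]; grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]; grown[:, :-1] |= mask[:, 1:]
        grown &= loose                 # never grow outside the loose mask
        if (grown == mask).all():
            break
        mask = grown
    return mask

# toy scene: a bright cloud core with a dimmer fringe, plus an isolated dim blob
scene = np.array([
    [0.2, 0.2, 0.2, 0.2, 0.2],
    [0.2, 0.6, 0.9, 0.6, 0.2],
    [0.2, 0.6, 0.6, 0.6, 0.2],
    [0.2, 0.2, 0.2, 0.2, 0.6],
])
print(hysteresis_cloud_mask(scene, strict_t=0.8, loose_t=0.5).astype(int))
```

The isolated dim blob at the bottom-right is rejected because it contains no high-confidence seed.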

Furkan Isikdogan;Alan C. Bovik;Paola Passalacqua; "Surface Water Mapping by Deep Learning," vol.10(11), pp.4909-4918, Nov. 2017. Mapping of surface water is useful in a variety of remote sensing applications, such as estimating the availability of water, measuring its change over time, and predicting droughts and floods. Using imagery acquired by currently active Landsat missions, a surface water map can be generated for any selected region as often as every 8 days. Traditional Landsat water indices require carefully selected threshold values that vary depending on the region being imaged and on the atmospheric conditions. They also suffer from many false positives, arising mainly from snow and ice, and from terrain and cloud shadows being mistaken for water. Systems that produce high-quality water maps usually rely on ancillary data and complex rule-based expert systems to overcome these problems. Here, we instead adopt a data-driven, deep-learning-based approach to surface water mapping. We propose a fully convolutional neural network trained to segment water in Landsat imagery. Our model, named DeepWaterMap, learns the characteristics of water bodies from data drawn from across the globe. The trained model separates water from land, snow, ice, clouds, and shadows using only Landsat bands as input. Our code and trained models are publicly available at http://live.ece.utexas.edu/research/deepwatermap/.
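The threshold-sensitive water indices that the abstract contrasts against can be illustrated with the classic NDWI of McFeeters, (green − NIR) / (green + NIR); the fixed zero threshold below is exactly the kind of scene-dependent choice the paper's learned model avoids:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Baseline water index: NDWI = (green - NIR) / (green + NIR) > threshold.
    The threshold is scene- and atmosphere-dependent, which is the weakness
    of index-based methods noted in the abstract."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-12)  # guard zero division
    return ndwi > threshold

# water reflects more green than NIR; vegetation the opposite
green = np.array([0.10, 0.08, 0.30])   # [water, water, vegetation]
nir = np.array([0.02, 0.03, 0.45])
print(ndwi_water_mask(green, nir))
```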

Nguyen Thi Thu Ha;Katsuaki Koike;Mai Trong Nhuan;Bui Dinh Canh;Nguyen Thien Phuong Thao;Michael Parsons; "Landsat 8/OLI Two Bands Ratio Algorithm for Chlorophyll-A Concentration Mapping in Hypertrophic Waters: An Application to West Lake in Hanoi (Vietnam)," vol.10(11), pp.4919-4929, Nov. 2017. Monitoring chlorophyll-a concentration (Chl-a) in inland waters, particularly hypertrophic lake waters in megacities, is a critically important environmental issue. To enable long-term Chl-a monitoring using Landsat series sensors, development of a Chl-a estimation algorithm for the new Landsat sensor is required. This study aims to identify the most accurate algorithm for Chl-a estimation in hypertrophic waters using Landsat 8 images and in situ Chl-a data from West Lake and nine other hypertrophic lakes in Hanoi (Vietnam's capital). The best estimation was obtained with the ratio of the two reflectances at 562 and 483 nm, corresponding to the ratio of OLI band 3 to band 2, termed the GrB2 algorithm. The GrB2 values computed from the reflectances of water samples and from the Landsat images were correlated with Chl-a by an exponential function (r2 = 0.64 to 0.82), and the estimated Chl-a was verified by its small standard error (below 10%) and its conformity with recent fish-kill events that commonly occur in these lakes in summer and early spring. Because the applicability of GrB2 is limited to waters with low levels of inorganic suspended matter, its extension to waters with much higher levels requires further investigation.
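The abstract relates Chl-a to the band-3/band-2 ratio by an exponential function. A minimal sketch of fitting such a relation, assuming the form Chl-a = a·exp(b·GrB2) and fitting it by linear regression on log(Chl-a); the synthetic data and coefficients below are illustrative, not the paper's values:

```python
import numpy as np

def fit_grb2(ratio, chl):
    """Fit Chl-a = a * exp(b * GrB2) by least squares on log(Chl-a).
    `ratio` is the OLI band-3 / band-2 reflectance ratio (GrB2)."""
    b, log_a = np.polyfit(ratio, np.log(chl), 1)
    return np.exp(log_a), b

def predict_chl(ratio, a, b):
    return a * np.exp(b * np.asarray(ratio, dtype=float))

# synthetic in-situ pairs generated from a known exponential relation
rng = np.random.default_rng(1)
ratio = rng.uniform(0.8, 1.6, 30)
chl = 5.0 * np.exp(2.0 * ratio)     # hypothetical "measured" Chl-a
a, b = fit_grb2(ratio, chl)
print(round(a, 3), round(b, 3))
```

Applied to an image, `predict_chl` would map each pixel's band ratio to a Chl-a estimate.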

Mengmeng Li;Kirsten M. de Beurs;Alfred Stein;Wietske Bijker; "Incorporating Open Source Data for Bayesian Classification of Urban Land Use From VHR Stereo Images," vol.10(11), pp.4930-4943, Nov. 2017. This study investigates the incorporation of open source data into a Bayesian classification of urban land use from very high resolution (VHR) stereo satellite images. The adopted classification framework starts from urban land cover classification, proceeds to building-type characterization, and results in urban land use. For urban land cover classification, a preliminary classification distinguishes tree, grass, and shadow objects using a random forest at a fine segmentation level. Fuzzy decision trees derived from hierarchical Bayesian models separate buildings from other man-made objects at a coarse segmentation level, where OpenStreetMap provides prior building information. A Bayesian network classifier combining commonly used land use indicators and spatial arrangement is used for the urban land use classification. The experiments were conducted on GeoEye stereo images over Oklahoma City, USA. Experimental results showed that urban land use classification using VHR stereo images performed better than that using a monoscopic VHR image, and that the integration of open source data improved the final urban land use classification. Our results also show a way of transferring the adopted urban land use classification framework, developed for a specific urban area in China, to other urban areas. The study concludes that incorporating open source data by Bayesian analysis improves urban land use classification. Moreover, a pretrained convolutional neural network fine-tuned on the UC Merced land use dataset offers a useful tool for extracting additional information for urban land use classification.

Caixia Gao;Shi Qiu;En-Yu Zhao;Chuanrong Li;Ling-Li Tang;Ling-Ling Ma;Xiaoguang Jiang;Yonggang Qian;Yongguang Zhao;Ning Wang;Lu Ren; "Land Surface Temperature Retrieval From FY-3C/VIRR Data and Its Cross-Validation With Terra/MODIS," vol.10(11), pp.4944-4953, Nov. 2017. Accurate inversion of land surface temperature (LST) from remote sensing data is an essential and challenging topic for earth observation applications. This paper retrieves the LST from FY-3C/VIRR data with a split-window method. With simulated data, the algorithm coefficients are derived with root-mean-square errors lower than 1.0 K for all subranges in which the view zenith angle (VZA) < 30° and the water vapor content (WVC) < 4.25 g/cm2, as well as those in which the VZA < 30° and the LST < 307.5 K. In addition, a detailed sensitivity analysis is carried out. It indicates that the total LST uncertainty caused by the standard error of the algorithm, the uncertainties of land surface emissivity and WVC, and the instrument noise would be 1.22 K and 0.94 K for dry and wet atmospheres, respectively. Furthermore, the LST retrieval method is applied to visible and infrared radiometer measurements over the study area, covering latitudes 31.671°N to 44.211°N and longitudes 10.739°W to 1.898°E, and the derived LST is cross-validated with the Terra/MODIS LST product. The preliminary validation shows that the split-window method determines the LST to within 2.0 K for vegetation and soil areas.
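Split-window retrieval combines two thermal brightness temperatures with emissivity terms. A sketch of the generalized split-window form (Wan and Dozier style), which we assume is close to what the paper fits; the coefficient values below are purely hypothetical placeholders, since the paper derives its own per-subrange coefficients from simulated data:

```python
def split_window_lst(t11, t12, emis_mean, emis_diff, coeffs):
    """Generalized split-window LST from two brightness temperatures (K) in
    the ~11 and ~12 micron channels; `coeffs` are regression coefficients."""
    a0, a1, a2, a3, b1, b2, b3 = coeffs
    e, de = emis_mean, emis_diff
    mean_t = (t11 + t12) / 2.0          # average brightness temperature
    diff_t = (t11 - t12) / 2.0          # half the channel difference
    return (a0
            + (a1 + a2 * (1 - e) / e + a3 * de / e**2) * mean_t
            + (b1 + b2 * (1 - e) / e + b3 * de / e**2) * diff_t)

# illustrative coefficients only; real ones come from radiative-transfer fits
coeffs = (1.0, 1.02, 0.2, -0.5, 4.0, 10.0, -30.0)
print(round(split_window_lst(295.0, 293.5, 0.97, 0.005, coeffs), 2))
```

In practice the coefficients are switched per subrange of VZA, WVC, and LST, as the abstract describes.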

Alexander S. Antonarakis;Alejandro Guizar Coutiño; "Regional Carbon Predictions in a Temperate Forest Using Satellite Lidar," vol.10(11), pp.4954-4960, Nov. 2017. Large uncertainties in terrestrial carbon stocks and sequestration predictions result from insufficient regional data characterizing forest structure. This study uses satellite waveform lidar from ICESat to estimate regional forest structure in central New England, where each lidar waveform estimates fine-scale forest heterogeneity. ICESat is a global sampling satellite and does not provide wall-to-wall coverage; comprehensive ecosystem state characterization is instead achieved through spatial extrapolation using the random forest machine-learning algorithm. This forest description allows for effective initialization of individual-based terrestrial biosphere models making regional carbon flux predictions. Within 42°N to 43.5°N and 73°W to 71.5°W, aboveground carbon was estimated at 92.47 TgC or 45.66 MgC ha−1, and net carbon fluxes were estimated at 4.27 TgC yr−1 or 2.11 MgC ha−1 yr−1. This carbon sequestration potential amounted to 47% of fossil fuel emissions in eight central New England counties. In preparation for new lidar and hyperspectral satellites, linking satellite data and terrestrial biosphere models is crucial to improving estimates of the carbon sequestration potential counteracting anthropogenic sources of carbon.

Khadije Kiapasha;Ali Asghar Darvishsefat;Yves Julien;Jose A. Sobrino;Nosratoallah Zargham;Pedram Attarod;Michael E. Schaepman; "Trends in Phenological Parameters and Relationship Between Land Surface Phenology and Climate Data in the Hyrcanian Forests of Iran," vol.10(11), pp.4961-4970, Nov. 2017. Vegetation activity may change in response to climate variability, affecting seasonality and phenological events. Monitoring land surface phenological changes plays a key role in understanding feedbacks of ecosystem dynamics. This study analyzes trends in parameters derived from land surface phenology using normalized difference vegetation index time series based on Global Inventory Monitoring and Mapping Studies data in the Hyrcanian forests of Iran, covering the period 1981–2012. First, we applied interpolation for data reconstruction in order to remove outliers and cloud contamination from the time series. Phenological parameters were retrieved using the midpoint approach, whereas trends were estimated using the Theil–Sen approach. Correlation coefficients were evaluated from multiple linear regression of the phenological parameters against temperature and precipitation time series. Significant Mann–Kendall tests indicate that, on average, the start of season (SOS) advanced by 0.16 days per year while the end of season (EOS) was delayed by 0.14 days per year. Significant trend analysis showed that a later EOS was associated with increasing temperature trends, and we found the strongest relationships between temperature and phenological parameters in the west of the Hyrcanian forests, where precipitation is abundant. Moreover, SOS correlated strongly with total precipitation and mean temperature. This study allows us to better estimate the drivers affecting vegetation dynamics in the Hyrcanian forests of Iran.
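The Theil–Sen estimator used for the trends is the median of the slopes over all point pairs, which makes it robust to the outliers that survive cleaning of NDVI series. A minimal sketch with synthetic SOS dates (the advance rate below mirrors the abstract's 0.16 days/yr but the data are invented):

```python
import numpy as np

def theil_sen_slope(t, y):
    """Theil-Sen estimator: median slope over all point pairs."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    i, j = np.triu_indices(len(t), k=1)     # all pairs i < j
    slopes = (y[j] - y[i]) / (t[j] - t[i])
    return np.median(slopes)

# synthetic SOS day-of-year advancing 0.16 days/yr, with one outlier year
years = np.arange(1981, 1991)
sos = 120.0 - 0.16 * (years - 1981)
sos[4] += 15.0                              # contaminated observation
print(round(theil_sen_slope(years, sos), 3))
```

An ordinary least-squares slope on the same data would be pulled noticeably toward the outlier; the median of pairwise slopes is not.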

Wenqiang Hua;Shuang Wang;Hongying Liu;Kun Liu;Yanhe Guo;Licheng Jiao; "Semisupervised PolSAR Image Classification Based on Improved Cotraining," vol.10(11), pp.4971-4986, Nov. 2017. To obtain good classification performance on polarimetric synthetic aperture radar (PolSAR) images, many labeled samples are needed for training. However, labeled samples are difficult, expensive, and time-consuming to obtain in practice, whereas unlabeled samples are substantially cheaper and more plentiful. To address this issue, semisupervised learning techniques have been proposed. In this paper, a novel semisupervised algorithm based on an improved cotraining process is proposed for PolSAR image classification. First, we propose an indirect analysis strategy to examine the sufficiency and independence of two different views for cotraining. Then, an improved cotraining process with a new sample selection strategy is presented, which can effectively exploit unlabeled samples to improve classification performance, particularly when labeled samples are limited. Finally, a new postprocessing method based on a similarity principle and a superpixel algorithm is developed to improve the consistency of the classification. Experimental results on three real PolSAR images show that the proposed method is effective and superior to other traditional methods.

Ning Cao;Hyongki Lee;Evan Zaugg;Ramesh Shrestha;William Carter;Craig Glennie;Guoquan Wang;Zhong Lu;Juan Carlos Fernandez-Diaz; "Airborne DInSAR Results Using Time-Domain Backprojection Algorithm: A Case Study Over the Slumgullion Landslide in Colorado With Validation Using Spaceborne SAR, Airborne LiDAR, and Ground-Based Observations," vol.10(11), pp.4987-5000, Nov. 2017. The major impediment to accurate airborne repeat-pass differential synthetic aperture radar (SAR) interferometry (DInSAR) is compensating for aircraft motion caused by air turbulence. Various motion compensation (MoCo) procedures have been used in airborne DInSAR processing to acquire reliable deformation mapping. In this paper, we present the use of the time-domain backprojection (BP) algorithm for SAR focusing in an airborne DInSAR survey: no MoCo procedure is needed because the BP algorithm inherently compensates for platform motion. We report a pilot study demonstrating the feasibility of deformation mapping with an airborne SAR system, based on monitoring of the Slumgullion landslide in Colorado, USA between July 3 and 10, 2015. The employed airborne SAR system, an Artemis SlimSAR, is a compact, modular, multi-frequency radar system. Airborne light detection and ranging and global navigation satellite system (GNSS) observations, as well as spaceborne DInSAR results using COSMO-SkyMed (CSK) images, were used to verify the performance of the airborne SAR system. The surface velocities of the landslide derived from the airborne DInSAR observations showed good agreement with the GNSS and spaceborne DInSAR estimates. A three-dimensional deformation map of the Slumgullion landslide was also generated, which displayed a distinct correlation between the landslide motion and topographic variation. This study shows that an inexpensive airborne L-band DInSAR system has the potential to measure centimeter-level deformation with flexible temporal and spatial baselines.

Wei Pu;Junjie Wu;Yulin Huang;Ke Du;Wenchao Li;Jianyu Yang;Haiguang Yang; "A Rise-Dimensional Modeling and Estimation Method for Flight Trajectory Error in Bistatic Forward-Looking SAR," vol.10(11), pp.5001-5015, Nov. 2017. Bistatic forward-looking synthetic aperture radar (BFSAR) is a kind of bistatic SAR system that can image forward-looking terrain in the flight direction of a moving platform. In BFSAR, compensation of flight trajectory errors is of great significance to obtaining a well-focused image. Accurate motion compensation in image processing requires a high-precision navigation system; however, in many cases, due to the accuracy limits of such systems, flight trajectory errors are hard to compensate correctly, mainly causing resolution loss in the final images. To cope with this problem, we propose a rise-dimensional modeling and estimation method for flight trajectory errors based on raw BFSAR data. We first carry out a preprocessing step, named azimuth slow-time decoupling, to handle the spatially variant flight trajectory error before estimation. An optimization model for flight trajectory estimation under the criterion of maximum image intensity is then built, whose solution is the accurate flight trajectory, and the block coordinate descent technique is used to solve it. Processing of BFSAR data shows that the algorithm obtains more accurate estimation results and generates better-focused images than the existing trajectory estimation method.

Jia Su;Haihong Tao;Mingliang Tao;Ling Wang;Jian Xie; "Narrow-Band Interference Suppression via RPCA-Based Signal Separation in Time–Frequency Domain," vol.10(11), pp.5016-5025, Nov. 2017. Narrow-band interference (NBI) is a critical issue for synthetic aperture radar (SAR), as it can severely degrade imaging quality. To suppress NBI effectively, a novel interference suppression algorithm using robust principal component analysis (RPCA) based signal separation in the time–frequency domain is proposed; the RPCA algorithm is introduced for signal separation in the time–frequency domain for the first time. The fundamental assumption of RPCA is that a matrix can be modeled as the combination of a low-rank matrix and a sparse counterpart, and the short-time Fourier transform (STFT) matrix of the mixed SAR echo (i.e., useful SAR signals and NBIs) fits this assumption well. Based on this property, radar echoes are first transformed into the time–frequency domain by the STFT to form an STFT matrix. Then, the RPCA algorithm is used to decompose the STFT matrix into a low-rank matrix (the NBIs) and a sparse matrix (the useful signals). Finally, the NBIs are reconstructed and subtracted from the echoes to realize the interference suppression. Experimental results on simulated and measured data demonstrate that the proposed algorithm not only suppresses interference effectively but also preserves the useful information as much as possible.

Gui Gao;Gongtao Shi;Gaosheng Li;Jianghua Cheng; "Performance Comparison Between Reflection Symmetry Metric and Product of Multilook Amplitudes for Ship Detection in Dual-Polarization SAR Images," vol.10(11), pp.5026-5038, Nov. 2017. The recently proposed reflection symmetry metric (RSM) and product of multilook amplitudes (PMA) detectors have been demonstrated to be promising methods for ship detection in dual-polarimetric synthetic aperture radar (SAR) data. This paper investigates the improvement in ship detection performance obtained by using the RSM rather than the PMA. As the ship-sea contrast, or signal-to-clutter ratio (SCR), is a central index for assessing the performance of a detection method, the SCRs of the RSM and PMA are first defined and compared. Next, a theoretical explanation of why the RSM outperforms the PMA is provided. The detection performance is then characterized by calculating receiver operating characteristic (ROC) curves. Preliminary experiments on measured RADARSAT-2, ALOS-PALSAR, and NASA/JPL AIRSAR images verify the accuracy of the theoretical analysis.

Hui Li;Linhai Jing; "Improvement of a Pansharpening Method Taking Into Account Haze," vol.10(11), pp.5039-5055, Nov. 2017. Pansharpening is an important technique used to generate high-quality high-spatial-resolution multispectral (MS) bands by fusing low-spatial-resolution MS bands and a panchromatic (PAN) band obtained by the same sensor. A PAN-modulation (PM)-based pansharpening method taking haze into account, referred to as the Haze- and Ratio-based (HR) method, has been demonstrated to yield good performance, indicating that the impact of haze should be considered in PM-based methods. The haze values used in the HR fusion influence the spectral vectors of fused pixels, and thus the spectral distortion of fused images. To achieve stable, good performance of the HR method, the determination of optimal haze values is discussed in this study. First, six approaches to haze value determination are compared, all variations of the histogram-minimum and darkest-pixel approaches employed by the image-based dark-object subtraction method for atmospheric correction of remotely sensed images. Then, an improved approach to haze value determination is proposed and shown to be effective in improving the performance of the HR method. This is very important for the adoption of the HR method in practical applications and by more researchers.
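The dark-object subtraction family of haze estimates that the compared approaches build on can be sketched very simply: take the darkest "reliable" value per band as the additive haze. A minimal illustration, assuming a low-percentile variant to guard against noisy pixels (the percentile choice is ours, not one of the paper's six approaches):

```python
import numpy as np

def dark_object_haze(band, percentile=0.01):
    """Per-band haze estimate as a very low percentile of the band values
    (np.percentile takes percent, so 0.01 means the 0.0001 quantile),
    in the spirit of image-based dark-object subtraction."""
    return float(np.percentile(band, percentile))

def haze_corrected(band, haze):
    """Subtract the haze value, clipping at zero."""
    return np.clip(band - haze, 0.0, None)

# toy band: true surface signal shifted up by an additive haze of 0.08
rng = np.random.default_rng(3)
surface = rng.uniform(0.0, 0.5, 10000)
band = surface + 0.08
print(round(dark_object_haze(band), 3))
```

The HR method's sensitivity to exactly these per-band values is what motivates the paper's comparison of estimation approaches.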

James C. Tilton;Robert E. Wolfe;Guoqing Lin; "On-Orbit Line Spread Function Estimation of the SNPP VIIRS Imaging System From Lake Pontchartrain Causeway Bridge Images," vol.10(11), pp.5056-5072, Nov. 2017. The visible infrared imaging radiometer suite (VIIRS) instrument was launched on October 28, 2011 onboard the Suomi National Polar-Orbiting Partnership (SNPP) satellite. The VIIRS instrument is a whiskbroom system with 22 spectral and thermal bands split between 16 moderate resolution bands (M-bands), five imagery resolution bands (I-bands), and a day–night band. In this study, we estimate the along-scan line spread function (LSF) of the I-bands and M-bands based on measurements performed on images of the Lake Pontchartrain Causeway Bridge. In doing so, we develop a model for the LSF that closely matches the prelaunch laboratory measurements. We use VIIRS images co-geolocated with a Landsat TM image to precisely locate the bridge in the VIIRS images as a best fit to a straight line. We then use nonlinear optimization to fit the VIIRS image measurements in the vicinity of the bridge to the developed model equation. From the resulting parameterization of the model equation, we derive the full-width at half-maximum as an approximation of the sensor field of view for all bands, and compare these on-orbit measured values with prelaunch laboratory results.

Yifan Zhang;Xiaoqin Xue;Ting Wang;Mingyi He; "A Hybrid Subpixel Mapping Framework for Hyperspectral Images Using Collaborative Representation," vol.10(11), pp.5073-5086, Nov. 2017. Subpixel mapping with a low-resolution hyperspectral image as the only input is widely applicable because an auxiliary image is not always available in practice. In this paper, the collaborative representation-based subpixel mapping (CRSPM) framework is proposed to acquire an improved classification map at subpixel scale with only a low-resolution hyperspectral (LHS) image available. To efficiently extract and utilize spatial information without an auxiliary image, the LHS image is processed in a hybrid framework in two different ways to generate two subpixel-scale classification maps. One is obtained by classifying the upsampled LHS image using a collaborative representation-based (CR-based) classifier. The other is obtained using CR-based classification combined with spectral unmixing and a subpixel spatial attraction model. Specifically, to incorporate contextual spatial information for higher classification accuracy, a spatially joint, post-partitioning CR-based classifier, the JCRT-based classifier, is proposed and applied in this work. To achieve better classification performance, decision fusion is applied to determine the class label for each subpixel from the two classification maps by the voting of neighboring subpixels. Experimental results illustrate that the proposed CRSPM approach clearly outperforms several state-of-the-art subpixel mapping approaches, producing a smoother classification map with less misclassification.

Weiwei Sun;Long Tian;Yan Xu;Dianfa Zhang;Qian Du; "Fast and Robust Self-Representation Method for Hyperspectral Band Selection," vol.10(11), pp.5087-5098, Nov. 2017. In this paper, a fast and robust self-representation (FRSR) method is proposed to select a proper band subset from hyperspectral imagery (HSI). The FRSR assumes the separability structure of the HSI band set and transforms the problem of separable nonnegative matrix factorization into the robust self-representation (RSR) model. The FRSR then incorporates structured random projections into the RSR model to improve computational efficiency. The solution of FRSR is formulated as a convex optimization problem, and augmented Lagrangian multipliers are adopted to estimate the factorization localizing matrix. The selected band subset consists of the bands corresponding to the r largest diagonal entries of the factorization localizing matrix. Experimental results show that FRSR outperforms state-of-the-art techniques in classification accuracy at lower computational cost.

Biplab Banerjee;Subhasis Chaudhuri; "Hierarchical Subspace Learning Based Unsupervised Domain Adaptation for Cross-Domain Classification of Remote Sensing Images," vol.10(11), pp.5099-5109, Nov. 2017. We address the problem of automatically updating land-cover maps from remote sensing images under the notion of domain adaptation (DA). Essentially, unsupervised DA techniques aim at adapting a classifier modeled on the source domain, where ground truth is available, and evaluating it on a related yet diverse target domain consisting only of test samples. Traditional subspace learning strategies in this respect inherently assume the existence of a single subspace spanning the data from both domains. However, such a constraint becomes rigid in many scenarios, given the diversity in the statistical properties of the underlying semantic classes and the problem of data overlap in the feature space. As a remedy, we propose an automated binary-tree based hierarchical organization of the semantic classes and subsequently introduce the notion of node-specific subspace learning from the learned tree. We validate the method on hyperspectral, medium-resolution, and very high resolution datasets, observing consistently improved performance in comparison to standard single-subspace learning strategies as well as other representative techniques from the literature.

Stefania Matteoli;Laura Zotta;Marco Diani;Giovanni Corsini; "POSEIDON: An Analytical End-to-End Performance Prediction Model for Submerged Object Detection and Recognition by Lidar Fluorosensors in the Marine Environment," vol.10(11), pp.5110-5133, Nov. 2017. An analytical end-to-end model is developed to predict the performance of underwater object recognition by means of light detection and ranging (lidar) fluorosensors, as an aid to underwater lidar mission planning and system design. The proposed Performance prediction mOdel for Submerged object dEtection and recognitIon by liDar fluOrosensors in the marine eNvironment (POSEIDON) reproduces the overall end-to-end fluorescence lidar system chain, from signal generation to signal propagation, acquisition, and processing. The goal is to assess the performance that may be obtained for spectral recognition of an underwater object in various operational scenarios, in terms of several different performance metrics. In addition to the performance prediction models developed in the literature for airborne lidar bathymetry, POSEIDON embeds a novel comprehensive signal simulator that accounts for inelastic scattering phenomena, as well as a signal processing module designed ad hoc to accomplish spectral recognition of an underwater object against a database of objects of interest characterized by their fluorescence spectral signatures. Test cases with a lidar system in two configurations and several objects submerged at various depths in different Case I and Case II waters were reproduced and explored. Results obtained within a Monte Carlo simulation framework provide a proof of concept of POSEIDON's performance forecasting capabilities for underwater object recognition.

Luz García;Isaac Álvarez;Manuel Titos;Alejandro Díaz-Moreno;M. Carmen Benítez;Ángel de la Torre; "Automatic Detection of Long Period Events Based on Subband-Envelope Processing," vol.10(11), pp.5134-5142, Nov. 2017. This work presents a novel approach to the automatic detection of long-period (LP) events in continuous seismic records. Without any supervised learning, the proposal is based on simple processing to search for the LP characteristic shape, duration, and band of activity. Continuous raw signals from the seismometer are first filtered into three frequency bands separating lower, central, and upper frequency components. These new signals are then processed in parallel to extract subband envelopes and create a characteristic function that enhances LP features. Experiments to test the proposal are presented using: 1) 2 h of continuous recordings of the Volcano of Deception Island, Antarctica, containing LP events artificially contaminated with seismic background noise to create low signal-to-noise ratio scenarios and 2) a set of earthquake-like computer-generated signals, randomly produced and inserted in the continuous records to recreate a testing environment as challenging as possible. A receiver operating characteristic (ROC) analysis of the results, compared with those of a classical short/long time average approach, leads to positive conclusions on the performance of the presented technique.
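The baseline mentioned in this abstract, the classical short/long time average (STA/LTA) trigger, is a standard seismology algorithm. As background only, a minimal NumPy sketch of that baseline (not the authors' subband-envelope method; the function name, sampling rate, and window lengths below are illustrative assumptions):

```python
import numpy as np

def sta_lta_ratio(signal, fs, sta_win=1.0, lta_win=30.0):
    """Short-term average / long-term average energy ratio.

    Returns the STA/LTA trigger function for short and long windows
    that end at the same sample (aligned at the end of the record).
    """
    sta_n = int(sta_win * fs)          # short window length, samples
    lta_n = int(lta_win * fs)          # long window length, samples
    energy = np.asarray(signal, dtype=float) ** 2
    # Moving averages via a cumulative sum (O(N) instead of O(N * window)).
    csum = np.cumsum(np.concatenate(([0.0], energy)))
    sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
    lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
    n = min(len(sta), len(lta))        # end-align both average series
    return sta[-n:] / np.maximum(lta[-n:], 1e-12)

# Toy record: 60 s of noise at 100 Hz with a 2-s, 5-Hz burst at t = 40 s.
rng = np.random.default_rng(0)
fs = 100
x = 0.1 * rng.standard_normal(60 * fs)
t = np.arange(2 * fs) / fs
x[40 * fs:42 * fs] += 2.0 * np.sin(2 * np.pi * 5 * t)
ratio = sta_lta_ratio(x, fs)
event_detected = ratio.max() > 5.0     # simple fixed-threshold trigger
```

An event declares itself when the short-window energy greatly exceeds the long-window background estimate; the paper's point is that such amplitude-ratio triggers struggle at low signal-to-noise ratios, which motivates the subband-envelope characteristic function.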

Allan A. Nielsen;Knut Conradsen;Henning Skriver; "Corrections to “Change Detection in Full and Dual Polarization, Single- and Multi-Frequency SAR Data” [Aug 15 4041-4048]," vol.10(11), pp.5143-5144, Nov. 2017. When the covariance matrix formulation is used for multi-look polarimetric synthetic aperture radar (SAR) data, the complex Wishart distribution applies. Based on this distribution, a test statistic for equality of two complex variance-covariance matrices and an associated asymptotic probability of obtaining a smaller value of the test statistic are given. In a case study, airborne EMISAR C- and L-band SAR images from the spring of 1998 covering agricultural fields and wooded areas near Foulum, Denmark, are used in single- and bi-frequency, bi-temporal change detection with full and dual polarimetry data.

* "Call for papers," vol.10(11), pp.5145-5145, Nov. 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Call for papers," vol.10(11), pp.5146-5146, Nov. 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Proceedings of the IEEE," vol.10(11), pp.5147-5147, Nov. 2017.* Advertisement: For over 100 years, Proceedings of the IEEE has been the leading journal for engineers looking for in-depth tutorial, survey, and review coverage of the technical developments that shape our world. Offering practical, fully referenced articles, Proceedings of the IEEE serves as a bridge to help readers understand important technologies in the areas of electrical engineering and computer science.

* "Become a published author in 4 to 6 weeks," vol.10(11), pp.5148-5148, Nov. 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "IEEE Geoscience and Remote Sensing Society," vol.10(11), pp.C3-C3, Nov. 2017.* Provides a listing of current committee members and society officers.

* "Institutional listings," vol.10(11), pp.C4-C4, Nov. 2017.* Advertisements.

IEEE Geoscience and Remote Sensing Magazine - new TOC (2017 November 23) [Website]

* "Front Cover," vol.5(3), pp.C1-C1, Sept. 2017.* Presents the front cover for this issue of the publication.

* "GRSM Call for Papers," vol.5(3), pp.C2-C2, Sept. 2017.* Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

* "Table of Contents," vol.5(3), pp.1-2, Sept. 2017.* Presents the table of contents for this issue of the publication.

* "Staff List," vol.5(3), pp.2-2, Sept. 2017.* Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Lorenzo Bruzzone; "IEEE GRSM Now Included in Thomson Reuters's Journal Citation Report [From the Editor]," vol.5(3), pp.3-4, Sept. 2017. Presents information on IEEE GRSM's inclusion in Thomson Reuters's Journal Citation Report.

* "ARSI-KEO," vol.5(3), pp.4-4, Sept. 2017.* Advertisement, IEEE.

Adriano Camps; "IEEE GRSS Accomplishes New Milestones [President's Message]," vol.5(3), pp.5-7, Sept. 2017. Presents the President's message for this issue of the publication.

* "STRATUS," vol.5(3), pp.7-7, Sept. 2017.* Advertisement, IEEE.

Arnau Fombuena; "Unmanned Aerial Vehicles and Spatial Thinking: Boarding Education With Geotechnology And Drones," vol.5(3), pp.8-18, Sept. 2017. The recent boom in the number and importance of unmanned aerial vehicles (UAVs), such as drones, unmanned aircraft systems (UASs), and remotely piloted aircraft systems (RPASs), has placed the geosciences and remote sensing (RS) community in a privileged position. But the increasing market demand for a geoenabled workforce contrasts markedly with the number of college-level students enrolling in the related disciplines. This article focuses on current and future opportunities for incorporating UAVs, geosciences, and RS as part of education programs to engage incoming students (and society more broadly) in this set of emerging technologies. Specifically, we will review the current status of geosciences and RS education involving UAVs, including a strengths, weaknesses, opportunities, and threats (SWOT) matrix and a vision toward the future. In short, it is essential that we systematize, disseminate, and universalize topics related to geosciences and RS education in terms of UAVs because the fields are growing exponentially, and the trend is expected to continue.

Hripsime Matevosyan;Ignasi Lluch;Armen Poghosyan;Alessandro Golkar; "A Value-Chain Analysis for the Copernicus Earth Observation Infrastructure Evolution: A Knowledgebase of Users, Needs, Services, and Products," vol.5(3), pp.19-35, Sept. 2017. This article reviews and analyzes the needs of Earth observation (EO) services' users, stakeholders, and beneficiaries. It identifies the key elements of the value chain of the European EO infrastructure and builds a comprehensive knowledgebase of those elements, represented as a relational database. The entities in the database are users, needs, services, and products. The database also includes connections between these entities (such as users to needs and products to services) via mapping tables. Leveraging data from the relevant policy and requirement documents as well as from research project reports, the database contains 63 users, 37 explicit needs, and 95 EO products across six Copernicus services.

Stephen L. Durden;Dragana Perkovic-Martin; "The RapidScat Ocean Winds Scatterometer: A Radar System Engineering Perspective," vol.5(3), pp.36-43, Sept. 2017. The NASA International Space Station (ISS)-RapidScat scatterometer operated on board the ISS from October 2014 into August 2016. It was developed using a combination of new subsystems and spare SeaWinds scatterometer engineering model subsystems to interface with the ISS. Using commercial (nonflight-qualified) parts in the new assemblies, developing RapidScat required a relatively small budget and short time schedule (just over two years). This article describes RapidScat's development from the perspective of radar system engineering, particularly in relation to performance requirements and testing.

Lionel Gourdeau;Bughsin Djath;Alexandre Ganachaud;Fernando Nino;Florence Birol;Jacques Verron;Nicolas Fuller; "Altimetry in a Regional Tropical Sea [Space Agencies]," vol.5(3), pp.44-52, Sept. 2017. The satellite for Argos and AltiKa (SARAL/AltiKa) is the first ocean altimeter mission to operate in the Ka-band frequency. The objective of this article is to investigate the extent to which SARAL/AltiKa sea-level measurements provide valuable information in a complex bathymetric region, i.e., the semienclosed Solomon Sea. The data-editing procedure is revisited, and we propose two new data-editing criteria. The first is based on the detection of erroneous sea-level values after computation, and the second directly analyzes the radar measurements and geophysical corrections. We show that both methods are significantly more efficient than the standard procedure used in operational processing chains.

Diane K. Davies;Molly E. Brown;Kevin J. Murphy;Karen A. Michael;Bradley T. Zavodsky;E. Natasha Stavros;Mark L. Caroll; "Workshop on Using NASA Data for Time-Sensitive Applications [Space Agencies]," vol.5(3), pp.52-58, Sept. 2017. Over the past decade, there has been an increase in the use of NASA's Earth Observing System (EOS) data and imagery for time-sensitive applications such as monitoring wildfires, floods, and extreme weather events. In September 2016, NASA sponsored a workshop for data users, producers, and scientists to discuss the needs of time-sensitive science applications.

Feng Xu;Feng Wang;Qiang Yin; "China Chapter Chairs Meeting in Shanghai [Chapters]," vol.5(3), pp.59-60, Sept. 2017. Presents information on various GRS Society chapters.

Colin Schwegmann; "The Reinvigorated South African GRSS Chapter [Chapters]," vol.5(3), pp.61-62, Sept. 2017. Presents information on various GRS Society chapters.

* "GRSS Chapters and Contact Information [Chapters[Name:_blank]]," vol.5(3), pp.63-64, Sept. 2017.* Presents information on various GRS Society chapters.

David Le Vine; "Three New GRSS Distinguished Lecturers Announced [Distinguished Lecturer Program]," vol.5(3), pp.65-68, Sept. 2017. Presents information on the GRSS Distinguished Lecturers series.

* "GRSS Members Elevated to IEEE Senior Member in April and June 2017 [GRSS Member Highlights[Name:_blank]]," vol.5(3), pp.69-69, Sept. 2017.* Present GRSS members who were elevated to the status of IEEE Senior Member.

Werner Wiesbeck;Martti Hallikainen;Mahta Moghaddam; "IEEE GRSS Awards 2018: Call for Nominations [GRSS Member Highlights]," vol.5(3), pp.70-71, Sept. 2017. Presents a call for nominations for select GRSS awards for 2018.

Yuqi Bai;Clifford A. Jacobs;Mei-Po Kwan;Christoph Waldmann; "Geoscience and the Technological Revolution [Perspectives]," vol.5(3), pp.72-75, Sept. 2017. The imperative for geoscience is to help society understand the Earth system and thus inform decision-making processes. This necessity has never been greater than it is today, nor have the challenges been more complex.

* "Calendar [Calendar[Name:_blank]]," vol.5(3), pp.76-76, Sept. 2017.* Presents the GRSS upcoming calendar of events.
