# Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation

By Marjolaine R.

Published: 03.05.2021

*Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation [1][2], is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered.*


## Non-negative matrix factorization

Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms.

Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources.

Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on the recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; when mixtures of acoustic or seismic signals are recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect.

By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose.

The work is made available under the Creative Commons CC0 public domain dedication. Funded by the Department of Energy Office of Science. Competing interests: The authors have declared that no competing interests exist.

Presently, large datasets are collected at various scales, from laboratory to planetary [1, 2].

For example, large amounts of data are gathered by distributed sets of asynchronous sensor arrays that, via remote sensing, can measure a considerable number of physical parameters. Usually, each record of a sensor in such an array represents a mixture of signals originating from an unknown number of physical sources with varying locations, speeds, and strengths, which are typically also unknown.

The analysis of such types of data, especially related to threat reduction, nuclear nonproliferation, and environmental safety, could be critical for emergency responses. One of the main goals of the analysis of such data is to identify and determine the locations of an unknown number of activities, for example by investigating signals of approximately non-dispersive waves, such as seismic waves, electromagnetic waves in unbounded free space, sound waves in air, radio waves with frequencies less than 15 GHz in air, etc.

One way to perform such analysis is to leverage some complex, poorly constrained and uncertain physics-based inverse-model methods where, typically, computationally-intensive numerical models are needed to simulate the governing process related to signal propagation through the medium of interest.

An alternative approach, which we will consider here, is the model-free analysis based on Blind Source Separation (BSS) techniques [3]. Two widely used BSS approaches are Independent Component Analysis (ICA) and NMF. The main idea behind ICA is that, while the probability distribution of a linear mixture of the sources in the observed data is expected to be close to a Gaussian (according to the Central Limit Theorem), the probability distribution of the original sources could be non-Gaussian.

Hence, ICA maximizes the non-Gaussian characteristics of the estimated sources, to find the maximum number of statistically independent original sources that, when mixed, reproduce the observation data (see Eq 1). The second approach, NMF, is a well-known unsupervised learning method, created for parts-based representation [8] in the field of image recognition [6, 7], and successfully leveraged for Blind Source Separation, that is, for decomposition of mixtures formed by various types of non-negative signals [9].

The nonnegativity constraints lead to strictly additive and naturally sparse components that are parts of the data and correspond to readily understandable features. NMF can successfully decompose large sets of nonnegative observations by leveraging the multiplicative update algorithm introduced by Lee and Seung [ 7 ], and we use here a modification of this algorithm. However, NMF requires a priori knowledge of the number of the original sources.

In another example, a shift in the onset of a frequency profile can be naturally induced by the Doppler effect. Therefore, for analyzing astronomical data, electroencephalography (EEG) data, positron emission tomography (PET) data, or fluorescence spectra, taking into account the presence of shifts is beneficial or even necessary. A natural extension of NMF is to take into account the potential delays of the signals at the sensors, caused by the different positions of the physical sources in space, combined with the finite speed of propagation of the signals in the considered medium.

Various factorization methods have been developed to deal with signals with possible delays or spectral shifts. Thus, how to find this unknown number of sources, based only on recorded data with potential delays, remains a largely untreated problem. Identifying the positions of the sources producing the delayed signals is another common problem for BSS methods, and it arises in many applications. Various methods and techniques for addressing this problem have been developed over time, but it is still an area of active research.

Here, we report a new algorithm, called ShiftNMFk, and demonstrate that it is capable of determining the unknown number of sources of delayed signals and estimating their delays, locations, and speeds based only on records of their mixtures. The main improvement of our new method is its ability to estimate the unknown number of sources with delays, and their locations. Here, n, i, and m index the sensors, the sources, and the observation moments.

These indices run from 1 to N, from 1 to K, and from 1 to M, respectively, where N is the number of sensors, K is the total number of unknown sources emitting the signals which form the mixtures in the observation data, and M is the number of discretized moments in time at which these mixtures are recorded by the sensors.

If the problem is solved in a temporally discretized framework, the goal of the BSS algorithm is to retrieve the K original signals that have produced N mixtures recorded by the given set of sensors.
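As a toy sketch of this setup — N sensors each recording a linear mixture of K nonnegative sources over M discretized moments — the following example builds the observation matrix V. The sizes and waveforms are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 8, 3, 500   # sensors, sources, discretized observation moments

# K nonnegative source waveforms H (K x M); rectified sinusoids as a toy choice
t = np.linspace(0.0, 1.0, M)
H = np.abs(np.sin(2.0 * np.pi * np.outer([3.0, 7.0, 11.0], t)))

# Nonnegative mixing matrix W (N x K): row n holds sensor n's mixing ratios
W = rng.uniform(0.1, 1.0, size=(N, K))

# Each row of V is the mixture recorded by one sensor: V = W @ H
V = W @ H
```

With delays, each source would be shifted before mixing; the instantaneous case above is the one the classic NMF algorithm assumes.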

The number of sensors has to be greater than the number of sources. The observations at different sensors are assumed to be taken at the same times; however, the temporal spacings between them need not be uniform.

If the data are not collected at the same times, interpolation techniques, such as cubic splines, can be used to bring them onto a common time grid. For NMF to work, the problem must be amenable to a nonnegativity constraint on the sources H and the mixing matrix W. This constraint leads to the reconstruction of the observations (the rows of matrix V) as linear combinations of the elements of H and W that cannot mutually cancel. The classic NMF algorithm starts with a random initial guess for H and W, and proceeds by minimizing, during each iteration, the cost (objective) function O, which in our case is the Frobenius norm (it can also be the Kullback-Leibler (KL) divergence, or another feasible norm):

$$O = \|V - WH\|_F^2 \qquad (2)$$

In order to minimize O, it is common to use the gradient descent approach based on the multiplicative updates proposed by Lee and Seung [7]. During each iteration, the algorithm first minimizes O by holding W constant and updating H, and then holds H constant while updating W. The norm (Eq 2) is non-increasing under these update rules and invariant when an accurate reconstruction of V is achieved [7]. A limitation of the classic NMF algorithm described above is the assumption of instantaneous mixing of the signals, which is equivalent to postulating an infinite speed of propagation of the signals in the medium.
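The alternating multiplicative updates described above can be sketched compactly in NumPy. This is a minimal sketch (function and variable names are our own; a small epsilon is added to the denominators for numerical safety):

```python
import numpy as np

def nmf_multiplicative(V, K, n_iter=500, seed=0, eps=1e-9):
    """Minimize O = ||V - W H||_F via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    N, M = V.shape
    W = rng.uniform(size=(N, K))
    H = rng.uniform(size=(K, M))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, holding W constant
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, holding H constant
    return W, H

# On data that truly has nonnegative rank K, the reconstruction gets close
rng = np.random.default_rng(1)
V = rng.uniform(size=(8, 3)) @ rng.uniform(size=(3, 50))
W, H = nmf_multiplicative(V, K=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates multiply by nonnegative ratios, W and H stay elementwise nonnegative throughout, which is exactly the constraint discussed above.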

A natural extension of NMF is to take into account the potential differences between the moments the same signal reaches different sensors [17], caused by the spatial distribution of the sensors, combined with the finite speed of propagation of the signals in the medium. One way of treating such problems is an NMF algorithm with shifts [12], designed specifically for signals with delays.

Below we describe the key features of this algorithm. A somewhat more succinct version of this equation, written in the frequency domain, is

$$\tilde{V}_{n,f} = \sum_{i=1}^{K} W_{n,i}\,\tilde{H}_{i,f}\,e^{-2\pi \mathrm{i}\, f\, \tau_{n,i}/M} \qquad (5)$$

where $\tau_{n,i}$ denotes the delay of source i at sensor n, and we have used the elementary property of the Discrete Fourier Transform (DFT) to convert time shifts into phase factors. Details of the minimization algorithm can be found in Ref. [12].
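The DFT property used here — a time shift becomes a multiplicative phase factor in the frequency domain — can be demonstrated directly. The helper below is our own illustration; it also supports fractional delays, which is what makes the frequency-domain formulation attractive:

```python
import numpy as np

def delay_signal(x, tau):
    """Circularly delay signal x by tau samples (possibly fractional) by
    multiplying its spectrum by the phase factor exp(-2*pi*i*f*tau)."""
    f = np.fft.fftfreq(len(x))          # frequencies in cycles per sample
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * tau)))

x = np.sin(2 * np.pi * 5 * np.arange(256) / 256)
y = delay_signal(x, 3.0)                # an integer delay matches np.roll
```

For an integer tau this reproduces a circular shift exactly; applying the opposite delay recovers the original signal.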

It is important to note that the NMF algorithm with delays determines only the relative delays (the time shifts) of the same signal arriving at different sensors. This ambiguity can be simply resolved by centering the delays at zero and splitting them into positive and negative values, which we use below. While powerful and producing results that are easy to interpret, the classic NMF method described in the previous section requires a priori knowledge of the number of original sources, which we denote by K.

To address the case when this number is unknown, we have developed an algorithm designed to estimate the number of sources based on the robustness of the minimization solutions. Our algorithm explores consecutively all possible numbers of sources producing signals with delays, from 1, 2, … to N (where N is the number of sensors), by obtaining a large number of NMF minimization solutions for each number of sources.

Then we use clustering to estimate the robustness of each set of solutions corresponding to the same number of sources, obtained with different random initial guesses for the minimization. Comparing the quality of the clusters and the accuracy of the minimization, we can determine the optimal estimate for K. A similar approach has been used for the decomposition of instantaneous mixtures of signals, i.e., signals without delays.

Below we present the clustering method for mixtures of signals with delays. We start by performing N sets of minimizations, which we call NMF runs: one for each possible number D of original sources, where D indexes the distinct NMF models (differing only by the number of sources) and runs from 1 to N.

In each of these runs we have P solutions. This custom clustering is similar to k-means clustering, but with an additional constraint holding the number of elements in each of the clusters equal. Note that we have to enforce this condition of having an equal number of points in each cluster, since each minimization solution, specified by a given combination, contributes one possible solution for each source and accordingly supplies exactly one element to each cluster.
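A greedy sketch of such an equal-size clustering is shown below. This is our own simplified construction of the idea (a k-means-like loop where each cluster is closed once it holds P members), not the paper's exact procedure:

```python
import numpy as np

def cosine_dist(a, b):
    """Cosine distance between two signal vectors."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def equal_size_clusters(signals, D, P, n_iter=20, seed=0):
    """Cluster D*P candidate signals into D clusters of exactly P members,
    assigning points to their nearest open centroid in order of confidence."""
    rng = np.random.default_rng(seed)
    X = np.asarray(signals, dtype=float)
    centroids = X[rng.choice(len(X), size=D, replace=False)]
    for _ in range(n_iter):
        dists = np.array([[cosine_dist(x, c) for c in centroids] for x in X])
        labels = -np.ones(len(X), dtype=int)
        counts = np.zeros(D, dtype=int)
        # visit points by how close they are to their best centroid
        for idx in np.argsort(np.min(dists, axis=1)):
            for c in np.argsort(dists[idx]):
                if counts[c] < P:       # cluster c still has room
                    labels[idx] = c
                    counts[c] += 1
                    break
        centroids = np.array([X[labels == c].mean(axis=0) for c in range(D)])
    return labels, centroids

# Toy usage: D*P = 10 candidate signals drawn around two distinct prototypes
rng = np.random.default_rng(1)
protos = np.eye(2, 8)                   # two orthogonal prototype signals
X = np.vstack([p + rng.uniform(0, 0.05, 8) for p in protos for _ in range(5)])
labels, cents = equal_size_clusters(X, D=2, P=5)
```

By construction, every cluster ends up with exactly P members, mirroring the constraint that each minimization run contributes one element per cluster.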

During the clustering, the similarity between two signals, a and b, is measured using the cosine distance [20], given by

$$d(a, b) = 1 - \frac{\sum_i a_i b_i}{\sqrt{\sum_i a_i^2}\,\sqrt{\sum_i b_i^2}} \qquad (9)$$

where $a_i$ and $b_i$ are the individual components of the vectors a and b. Specifically, the optimal number of sources is picked by selecting the value of D that leads to both (a) an acceptable reconstruction error, R, of the observation matrix V (the relative error between V and its reconstruction), and (b) a high average silhouette width, i.e., robust clusters.
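The selection logic combining the two criteria can be sketched mechanically. The diagnostic numbers and thresholds below are made up purely for illustration (the paper does not specify them here):

```python
def select_num_sources(rel_errors, silhouettes, err_tol=0.1, sil_tol=0.7):
    """Pick the number of sources D (1-indexed) as the largest candidate whose
    solutions both reconstruct V acceptably (relative error below err_tol)
    and cluster robustly (average silhouette width above sil_tol)."""
    candidates = [d + 1 for d, (e, s) in enumerate(zip(rel_errors, silhouettes))
                  if e < err_tol and s > sil_tol]
    return max(candidates) if candidates else None

# Hypothetical diagnostics for D = 1..5: error keeps dropping as D grows,
# but cluster robustness collapses once D exceeds the true number of sources.
rel_errors  = [0.40, 0.18, 0.03, 0.02, 0.02]
silhouettes = [0.95, 0.90, 0.85, 0.40, 0.20]
print(select_num_sources(rel_errors, silhouettes))   # -> 3
```

The intuition matches the text: under-fitting leaves a large reconstruction error, while over-fitting splits real sources into unstable, poorly separated clusters.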

The combination of these two criteria is easy to understand intuitively. By applying our clustering algorithm we determine the number of the sources, and from the minimization with the determined number of sources we obtain the waveforms, the mixing ratios and the delays associated with each of the sources.

With this information available, we can estimate the locations of the sources and the speed of propagation of the signals. Also, note that we have assumed a two-dimensional medium, but F in a more general form can be used in arbitrary dimensions, provided that $r_{i,j}$ is suitably modified. Further, the equation for $r_{i,j}$,

$$r_{i,j} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2},$$

gives the distance from source j to sensor i, where $(x_j, y_j)$ are the coordinates of source j and $(x_i, y_i)$ are the coordinates of sensor i.

The coordinates of all the sensors are known. The coordinates of the sources are the unknown variables, which, along with the speeds of signals propagation, can be found by minimizing the objective function F.
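A minimal sketch of this localization step for a single source is shown below, assuming only the centered relative delays are available. The least-squares residual compares speed times delay against centered sensor distances; this is our own simplification of the idea, not the paper's exact objective F:

```python
import numpy as np
from scipy.optimize import least_squares

def locate_source(sensor_xy, delays):
    """Estimate one source's 2-D position and the propagation speed from
    zero-centered relative delays. Both sides of the residual are centered,
    since only relative delays are known."""
    def residuals(p):
        x, y, v = p
        d = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
        return v * (delays - delays.mean()) - (d - d.mean())
    lb, ub = [-np.inf, -np.inf, 1e-6], [np.inf, np.inf, np.inf]  # keep v > 0
    return least_squares(residuals, x0=[0.0, 0.0, 1.0], bounds=(lb, ub)).x

# Synthetic check: 6 sensors, a known source position and speed, exact delays
rng = np.random.default_rng(2)
sensors = rng.uniform(-10, 10, size=(6, 2))
true_xy, v_true = np.array([2.0, -3.0]), 3.0
dist = np.hypot(sensors[:, 0] - true_xy[0], sensors[:, 1] - true_xy[1])
delays = dist / v_true
delays -= delays.mean()                 # center, mimicking the delay ambiguity
xy_v = locate_source(sensors, delays)   # [x, y, v] estimate
```

With more sensors than unknowns (here 6 residuals for 3 parameters), the fit is overdetermined, which is consistent with the requirement above that the number of sensors exceed the number of sources.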

Hence, by minimizing F, we are trying to find the coordinates of the sources and the propagation speed that best reproduce the observed delays. So far we have outlined the key parts of the proposed algorithm. In this section, we concentrate on providing a detailed description of its implementation. The starting point of our method is the modification of the NMF minimization designed for signals with delays [12].

We explored the optimal number of iterations, I, needed to obtain a reasonable reconstruction error. Our results demonstrate that beyond a certain number, I_max, further increasing the number of iterations does not lead to any improvement in the final results. This is because the algorithm often stops by its internal convergence criteria before reaching the maximum number of iterations.

To be able to determine the number of original sources, we need to perform a large number of simulations to build the clusters; hence, a proper understanding of the limitations of the minimization with different random initial guesses for the elements of W and H is required.

To unravel these limitations, we performed a large number of minimizations with different random initial guesses, for observation data generated with waveforms with different levels of pairwise correlation. Our results demonstrate that the minimization works better if the original signals are not very strongly correlated.

The reason is easy to understand. If the waveforms of two of the sources are strongly correlated, the algorithm can easily assume that the mixtures recorded by the sensors are produced by one source instead of two nearly identical sources, thus returning a wrong number of original signals that, nevertheless, provides a decent reconstruction, R. By changing the correlation between two of the three initially generated signals, we studied the success rate of the minimization's reconstruction, comparing the distance between the generated waveforms and those derived by the minimization.

For this comparison we used the cosine distance. Fig 1 presents the relationship between the similarity of the original sources (measured by cosine distance) and the success rate (the ratio of the number of successful recognitions to the total number of solutions) in groups of one hundred minimizations, for sources with the same correlations. The bars in Fig 1 represent the percentage of correct reconstructions obtained by the minimization, for the case of 3 source signals with different levels of cosine similarity between two of them.

When performed many times with a fixed number of sources, D, but with random initial guesses, the minimization either converges to different (sometimes very dissimilar) solutions or stops (fails) before reaching a good solution. Considering the ill-posedness of the minimization problem, this behavior is expected. The minimization behavior depends on various factors, such as the initial guesses, the ratio between the number of sources and the number of sensors, the specific shapes of the waveforms of the signals and delays, etc.

For example, achieving a good minimization and obtaining an accurate reconstruction depends on the level of correlation between the waveforms of the original signals (see Fig 1). As a result, when we perform a large number of consecutive minimizations with different initial conditions, the algorithm often returns solutions that demonstrate a poor reconstruction of the observation matrix, V.

To overcome this problem, we employ two elimination criteria to extract, from each reasonably sized pool of P minimizations, only those solutions that both accurately reproduce the observations and are physically meaningful. Specifically, we use these criteria to discard minimization outliers that do not reconstruct the observation matrix, V, sufficiently well.
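The first criterion — discarding runs with poor reconstruction — can be sketched as follows. For brevity this uses the plain V ≈ WH reconstruction without the delay terms, and the 5% threshold is illustrative, not taken from the paper:

```python
import numpy as np

def filter_solutions(solutions, V, max_rel_err=0.05):
    """Keep only the (W, H) pairs whose reconstruction of the observation
    matrix V has a relative Frobenius-norm error below max_rel_err."""
    kept = []
    for W, H in solutions:
        rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
        if rel_err < max_rel_err:
            kept.append((W, H))
    return kept

# One exact solution and one random outlier: only the first survives
rng = np.random.default_rng(3)
W_true = rng.uniform(size=(6, 2))
H_true = rng.uniform(size=(2, 40))
V = W_true @ H_true
outlier = (rng.uniform(size=(6, 2)), rng.uniform(size=(2, 40)))
kept = filter_solutions([(W_true, H_true), outlier], V)
```

Only the solutions passing this screen (and the physical-plausibility check, not shown) would then be handed to the clustering step.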

*Nonnegative Matrix and Tensor Factorizations* is by A. Cichocki, R. Zdunek, A. H. Phan, and S. Amari.



