
Clutter suppression in ultrasound: performance evaluation and review of low-rank and sparse matrix decomposition methods

Abstract

Vessel diseases are often accompanied by abnormalities of vascular shape and size. Therefore, a clear visualization of the vasculature is of high clinical significance. Ultrasound color flow imaging (CFI) is one of the prominent techniques for flow visualization. However, clutter signals originating from slow-moving tissue are one of the main obstacles to obtaining a clear view of the vascular network. Enhancing the vasculature by suppressing clutter is therefore an essential step in many applications of ultrasound CFI. Currently, this task is often performed by singular value decomposition (SVD) of the data matrix. This approach exhibits two well-known limitations. First, the performance of SVD is sensitive to the proper manual selection of the ranks corresponding to the clutter and blood subspaces. Second, SVD is prone to failure in the presence of large random noise in the dataset. A potential solution to these issues is the framework of decomposition into low-rank and sparse matrices (DLSM). SVD is only one of the algorithms for solving the minimization problems under the DLSM framework; many other DLSM algorithms avoid full SVD, using approximated SVD or SVD-free ideas that may offer better performance, higher robustness, and lower computing time. In practice, these models separate blood from clutter based on the assumption that steady clutter exhibits a low-rank structure whereas the moving blood component is sparse. In this paper, we present a comprehensive review of ultrasound clutter suppression techniques and explore the feasibility of low-rank and sparse decomposition schemes for ultrasound clutter suppression. We conduct this review by adapting 106 DLSM algorithms and validating them against simulation, phantom, and in vivo rat datasets. Two conventional quality metrics, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), are used for performance evaluation.
In addition, computation times required by different algorithms for generating clutter suppressed images are reported. Our extensive analysis shows that the DLSM framework can be successfully applied to ultrasound clutter suppression.

Background

Angiology, which concerns vessel-related diseases, is one of the most important branches of medical science, since vascular diseases are very common and cause a large number of deaths every year [1]. Vascular diseases can primarily be divided into several categories based on the type of vessel. Arterial diseases include aneurysms, thrombosis, vasculitides, and vasospastic disorders. Venous diseases include venous thrombosis, chronic venous insufficiency, and varicose veins. There are also diseases associated with capillaries, one example being the capillary hemangioma. Currently, the most accepted classification of vascular abnormalities divides them into tumors and malformations, a scheme adopted in 1996 by the International Society for the Study of Vascular Anomalies [2]. Moreover, many major clinical diseases have been shown to cause vascular growth abnormalities. For example, many cardiovascular diseases are related to aneurysms or other vascular variations [3, 4], and the growth of many tumors in cancer is highly dependent on angiogenesis [5, 6]. Similarly, angiogenesis is an important feature of diabetes-related diseases [7,8,9] and endometriosis [10]. Therefore, blood vessel imaging is indispensable in clinical practice and medical research [11], including but not limited to diagnosis, treatment planning, surgery, and follow-up of treatment results.

Several medical imaging modalities, such as duplex ultrasound (DUS), computed tomography (CT), magnetic resonance imaging (MRI), and digital subtraction angiography (DSA), have been employed thus far to ensure a proper visualization of blood vessels. Among these vascular imaging modalities, ultrasound has become the primary choice, for it is safe, economical, easy to use, and most importantly, real-time [11]. Duplex ultrasound is the combination of color flow imaging (CFI) and grayscale/brightness-mode (B-mode) imaging, where CFI is used to observe blood flow direction and velocity while B-mode ultrasound simultaneously visualizes the two-dimensional anatomy. By simultaneously processing the frequency, phase, and amplitude of the backscattered ultrasound signal, CFI can rapidly identify the flow direction and velocities in the region of interest. Moreover, CFI can be used to mark flow abnormalities, including stenoses and occlusions [12]. A comparison between ultrasound and other vascular imaging methods is shown in Table 1.

Table 1 A comparison of vessel imaging methods [11, 13]

Due to its excellent performance, ultrasound CFI has been increasingly used for the diagnosis of vessel-related diseases [14]. However, despite being one of the most promising and widely applicable methods, with low cost and no risk, CFI still has some obvious disadvantages. First, owing to tissue scattering of the ultrasound beam, the intensity of the blood backscatter is several orders of magnitude lower than that of the tissue backscatter, which makes it hard to image blood flow clearly [12, 15]. Second, more than three pulses are needed to estimate the velocity because of the stochastic behavior of blood signals and the impact of tissue clutter [12]. The requirement for multiple pulses limits the frame rate and the number of scan lines. Third, CFI is limited by the insonation angle, the angle between the ultrasound beam and the flow direction [16, 17]. Generally, an accurate measurement requires Doppler angles ranging from 30° to 60°: smaller angles will result in lower measured speeds, and greater angles will produce a significant overestimation of the velocity [16, 17]. Last but not least, blood and clutter signals overlap significantly when the blood flow is very slow (such as in small blood vessels) or when the tissue movement is pronounced. This overlap is harmful to blood vessel visualization [18, 19]. Most of these disadvantages are caused by clutter; as a consequence, clutter suppression is particularly important in ultrasound blood flow imaging. Figure 1 shows the clutter in two CFI images and illustrates the importance of clutter filtering.

Fig. 1

A set of comparison images showing CFI with and without clutter filtering. a Raw CFI data in brightness mode. b The same data after clutter suppression by SVD. In the upper right window, the raw CFI data contain a large amount of tissue clutter in the background, which is suppressed by SVD in the second image

The main purpose of clutter filtering is to suppress gross-moving tissue clutter and beam sidelobe leakage [19]. Efficient clutter suppression is a prerequisite for CFI to present accurate and clear blood flow maps. The most significant effect of clutter reduction is an increase in the signal-to-noise ratio (SNR) of the blood signal, which enables clearer blood flow maps and reduces erroneous moving-tissue signals. Meanwhile, pure blood flow signals also help reduce the number of pulses needed to estimate the velocity, thereby increasing the frame rate. In addition, the overlapping frequency spectra of slow blood flow and fast-moving tissue will no longer hinder microvascular flow detection or bias high-velocity flow estimates [19, 20].

However, perfect removal of clutter signals remains impossible for now, since clutter signals are 40 to 100 dB stronger than blood signals and the two exhibit similar properties [15].

In the early development of CFI clutter filtering, tissue signals and blood signals were assumed to have completely different frequency characteristics. This assumption holds that tissue and blood signals exhibit non-overlapping frequency spectra, since the tissue is considered nearly stationary whereas red blood cells move rapidly [18]. Based on this assumption, finite impulse response (FIR) and infinite impulse response (IIR) high-pass filters were used to filter clutter signals and enhance the sensitivity to blood flow [15, 18]. Nowadays, it is recognized that FIR and IIR filters have distinct drawbacks: FIR filters require a high order to separate blood from clutter, whereas IIR filters take a long time to settle [18, 21]. Furthermore, both types of high-pass filters suffer from the insufficient number of slow-time samples, which leads to inefficient suppression of clutter [19, 22]. Another clutter removal approach introduces linear regression filters [23,24,25]. The regression filter eliminates clutter signals by least-squares fitting of the signals to a signal model [18]. Studies suggest that polynomial regression filters and IIR filters perform better than FIR filters. For contrast-enhanced ultrasound vascular imaging, the pulse inversion technique has been introduced for clutter rejection [26,27,28]. In this approach, the linearity of the tissue echo is exploited to distinguish tissue from blood [26, 29, 30]. Although these methods significantly improve the SNR of blood signals, they are not considered in this paper because of their invasiveness (they require the injection of contrast agents).

The aforementioned traditional clutter suppression algorithms, such as FIR and IIR filters, have at least one of the following issues: (1) long settling time, (2) inability to adaptively suppress the clutter based on data properties, (3) inadequate temporal samples or resolution. In addition, two main factors make innovation in ultrasound clutter filtering imperative. First, new ultrasound technologies like plane-wave ultrasound have brought higher frame rates and imaging speeds. Although the high frame rate spares traditional filters from the settling-time problem, they cannot meet the higher clutter filtering performance requirements. Second, the underlying assumption of traditional filters does not hold in the presence of significant tissue motion stemming from the sonographer’s sinusoidal hand movement or the patient’s breathing and heartbeat [31, 32]. In such a scenario, the frequency bands corresponding to tissue and blood overlap with each other without a definite boundary between them. Hence, high-pass filters fail to separate blood from tissue when the clutter signal dominates with a non-zero Doppler frequency caused by substantial tissue movement.

To resolve these issues, eigen-based filters [33,34,35] have been proposed, which take both spatial and temporal samples into consideration to develop an adaptive clutter suppression scheme. The techniques underlying these eigen-based filters are widely applied in computer science, mainly for processing high-dimensional data, and they do not rely on the incomplete assumptions of traditional filters. Matrix decomposition is the principal idea behind these algorithms, with the assumption that clutter and blood signals lie in different subspaces. Therefore, eigen-based filters are considered adaptive to gross motions induced by the sonographer or the subject being examined. Studies show that eigen-based filters perform better than traditional methods [20, 22].

Most of the eigen-based filters for ultrasound clutter suppression are based on singular value decomposition (SVD) or eigenvalue decomposition and improve upon them [36,37,38,39]. To perform the subspace separation task, slow-time temporal ultrasound frames are stacked as columns of a data matrix, known as the Casorati matrix [40]. The SVD of this Casorati matrix provides the opportunity to distinguish blood from clutter. It has been reported in the literature that the most dominant singular values and vectors correspond to clutter, the next few represent blood, and the least significant ones correspond to noise [41]. In these eigen-based approaches, the singular or eigenvalues representing clutter and noise are set to zero to recover the blood component of the echo signal [41, 42].
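The filtering step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not an implementation from the reviewed papers; the rank thresholds `c` (end of the clutter subspace) and `b` (start of the noise subspace) are hypothetical parameters that would in practice be chosen per dataset:

```python
import numpy as np

def svd_clutter_filter(M, c, b=None):
    """Suppress clutter in a Casorati matrix M (pixels x frames) by zeroing
    the c most dominant singular components (assumed clutter) and, if b is
    given, the components from index b onward (assumed noise)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = s.copy()
    s[:c] = 0.0                 # dominant components -> tissue clutter
    if b is not None:
        s[b:] = 0.0             # weakest components -> noise
    return (U * s) @ Vt         # blood estimate from the retained band
```

Applied to a matrix dominated by a strong, temporally coherent (rank-one) component, zeroing that component leaves only the weak residual, which is the behavior the eigen-based filters exploit.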

Many SVD-based techniques have been proposed that work with conventional line-by-line scanning [20, 43,44,45]. These methods suffer from an inadequate number of temporal samples due to the low frame rate associated with focused ultrasound imaging [21]. Recent clutter suppression algorithms [18, 42, 46,47,48,49] have resolved this issue by incorporating ultrafast plane-wave imaging. However, the blood signal in plane-wave ultrasound is even weaker than in conventional ultrasound due to the unfocused wave [50, 51], and for the same reason the sidelobes in plane-wave imaging are much higher than in conventional imaging. Therefore, plane-wave ultrasound has a higher and more urgent filtering requirement than traditional CFI. Recent methods have extended SVD-based clutter suppression to a higher order by analyzing a data tensor instead of a two-dimensional matrix [42, 48, 52]. Since the first few singular values do not necessarily correspond to the clutter signal in the presence of a large temporal misalignment among the frames, a motion correction step has been introduced in SVD-based clutter rejection [53]. Since SVD was first combined with plane-wave imaging in 2015, almost all clutter suppression research has been based on plane-wave ultrasound, as SVD can reach its full potential on large datasets [18].

Although SVD-based techniques are promising for suppressing clutter optimally, they have two major drawbacks. First, there is still no uniform and efficient standard for selecting the ranks that delineate the boundaries between the tissue and blood subspaces [42]. Proper subspace rank selection, which requires extensive manual intervention, is crucial for the optimality of clutter rejection. Recent methods suggest different criteria for selecting the optimal ranks [54]. In addition, [21] proposes K-means clustering of the decomposed components as a criterion for selecting the singular values and vectors corresponding to clutter and blood. Though different ideas have been proposed for automatic rank selection [55], there is still no efficient and standard method. An example that illustrates the problem of SVD threshold selection is shown in Fig. 2. The selected rank affects the blood signals: a wide threshold range cannot effectively filter clutter and noise, and a narrow range loses part of the blood signals. The second drawback is that SVD is sensitive to noise; it fails to obtain the optimal result when processing data with large random noise [56].

Fig. 2

A set of pictures showing the threshold selection of SVD. a The original simulation data in brightness mode. b–e The images processed by SVD with different thresholds. Parameters b and e represent the selected ranks of the blood and noise signals, respectively. The full rank of the data is 20

The aforementioned issues can potentially be resolved by adopting the framework of decomposition into low-rank and sparse matrices (DLSM) [21]. SVD is only one of the calculation methods in the DLSM framework; approximate-SVD and SVD-free algorithms also exist. DLSM is a well-established framework in the field of computer vision due to its robustness to large noise and information corruption [56]. The underlying assumption of this approach is that steady tissue is a low-rank component while moving blood exhibits sparsity [50]. It has been noticed that both temporal and spatial information can be used to separate tissue and blood signals, since tissue signals have a higher temporal–spatial coherence than blood signals (the blood scatterers are unique and constantly changing). A convex optimization problem is usually solved to decompose the data matrix into low-rank clutter and sparse blood components. A recent technique has used this model for the concurrent removal of clutter and noise [57]. Furthermore, recent work has combined deep learning with low-rank and sparse decomposition for improved clutter suppression performance [21].

The main purpose of this work is to demonstrate the feasibility of 106 established low-rank and sparse decomposition algorithms for ultrasound clutter suppression and to provide suggestions for the most suitable DLSM models, optimization methods, and algorithms. The paper is organized as follows. “Decomposition into low-rank and sparse matrices (DLSM) framework” section illustrates the DLSM framework, including decomposition types, loss functions, and relationships with tensor decomposition. “Experiments” section includes detailed experimental settings and results on simulation, phantom, and in vivo rat datasets. Finally, the paper presents our findings in the “Discussion” and “Conclusion” sections, including the DLSM algorithms appropriate for clutter suppression and the shortcomings of the remaining algorithms. Comments and possible solutions are also proposed in response to the different shortcomings.

Decomposition into low-rank and sparse matrices (DLSM) framework

Low-rank and sparse structures are attractive since they usually capture the part of large, high-dimensional data that we are most interested in. Noise and data corruption can be removed by decomposing matrices into low-rank and sparse components. Methods like sparse representation and low-rank modeling have achieved great success in computer vision, natural language processing, system identification, bioinformatics, etc. [58,59,60]. So far, many different models, optimization methods, and algorithms have been proposed for solving low-rank and sparse matrix recovery problems. Meanwhile, many classifications have been proposed [58, 61,62,63] according to linearity, convexity, number of subspaces, or number of additive matrices.

Decomposition into low-rank and sparse matrices (DLSM) is one of the relatively detailed and comprehensive frameworks [61], which classifies various matrix decomposition models according to the number of constrained component matrices. The DLSM framework is well suited to signal processing, system identification, computer vision, machine learning, etc. This decomposition idea has become more popular and widely used in recent years, especially after robust principal component pursuit (RPCP) was proposed in the 2009 papers of Candes et al. [56] and Chandrasekharan et al. [64]. Originally, these algorithms were designed to deal with high-dimensional data, often arranged as an extremely high-dimensional data matrix. Since many dimensions are usually highly correlated, it is possible to recover the matrix from corruption or noise. These ideas are based on the assumption that the uncorrupted information matrix is highly correlated within the observation time window and therefore lies in a low-rank subspace, while the moving foreground objects, noise, or other special signals constitute sparse outliers.

Based on similar assumptions, several algorithms under the DLSM framework have been shown to apply successfully to ultrasound clutter suppression [18, 22, 36, 38, 55, 65]. In medical ultrasound, tissue and blood flow also lie in different subspaces. In terms of temporal information, tissue and blood signals have different spectral features due to their different movement patterns. As for spatial features, the blood signal has much lower spatial coherence than the tissue signal, because the irregular movement and arrangement of red blood cells produce constantly changing scatterers, whereas tissue movement follows an overall pattern. Therefore, tissue and blood signals exhibit low-rank and sparse characteristics, respectively, and lie in different subspaces. Owing to its robust and efficient separation of low-rank and sparse components, the DLSM framework shows great potential in the field of ultrasound clutter suppression.

Overall, the DLSM framework is divided into models (math formulations), decomposition problems, minimization problems, loss functions, and solvers (algorithms used to solve the optimization problems) [63], as Fig. 3 shows. The permutations and combinations of models, optimization methods, and solvers lead to a variety of algorithms, which is the origin of the richness of the DLSM framework. The DLSM framework and its application to ultrasound clutter suppression are briefly illustrated in the following subsections.

Fig. 3

The schematic diagram of the DLSM framework. The DLSM framework contains five branches: models (also called math formulations), decomposition problems, minimization problems, loss functions, and solvers (also called algorithms). Examples are shown beside the branches

Preprocessing and notations

Preprocessing of ultrasound data is necessary to arrange it into an input matrix or tensor of a specific shape before applying DLSM algorithms. In general, the input of a DLSM algorithm consists of a sequence of n consecutive ultrasound frames (\(F_1\dots F_n\)), each with the original size \(F\in \mathfrak {R}^{i_1\times i_2}\). For a two-dimensional DLSM algorithm, the input M (\(M\in \mathfrak {R}^{m\times n}, m=i_1\times i_2\)) is in most cases a matrix consisting of the n reshaped ultrasound frames (\(F\in \mathfrak {R}^{m\times 1}\)) arranged in order. For high-order DLSM algorithms, the input is typically an N-order tensor T (\(T\in \mathfrak {R}^{t_1\times t_2\dots t_n}\)). T is generally third order, formed by concatenating the original-size ultrasound frames: \(T=[F_1^{i_1\times i_2},\dots F_n^{i_1\times i_2}], T\in \mathfrak {R}^{i_1\times i_2\times n}\). Next, the input M (or T) is decomposed into several components through the DLSM algorithm as follows:

$$\begin{aligned} M = \sum \limits _{x=1}^X K_x \end{aligned}$$
(1)

where \(1\le X \le 3\) and \(K_1,K_2,K_3\) typically represent the low-rank component L, the sparse component S, and the noise component E, respectively. The specific components \(K_x\) and the value of X depend on the purpose (interest in the sparse or the low-rank component) and on the decomposition formulation.
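The stacking step that builds the matrix input M from the frame sequence can be sketched as follows (a minimal NumPy illustration under the notation above; function names are our own):

```python
import numpy as np

def to_casorati(frames):
    """Stack n frames of shape (i1, i2) into an (i1*i2) x n Casorati matrix M:
    each frame is flattened into a column vector and the columns kept in order."""
    return np.stack([f.reshape(-1) for f in frames], axis=1)

def from_casorati(M, shape):
    """Recover the list of (i1, i2) frames from the columns of M."""
    return [col.reshape(shape) for col in M.T]
```

A decomposed component (e.g., the sparse blood estimate S) is mapped back to an image sequence with `from_casorati`, using the original frame shape.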

Decomposition formulations

Implicit decomposition

Implicit decomposition \((X=1)\): When X equals 1, the matrix M is approximated by a single constrained target matrix, usually a low-rank matrix L, because in most cases the information of interest lies mainly in the low-rank component. The sparse matrix S can then be obtained as the difference between M and L (i.e., \(S=M-L\)). However, this arrangement is reversed in the application of ultrasound clutter suppression because the blood signal of interest is relatively sparse. The formulation of this problem is as follows:

$$\begin{aligned} \min _{L}\, f(M,L) \quad \text {s.t.}\; L \text { low-rank} \end{aligned}$$
(2)

where \(M\approx L\) and f(.) is a loss function for the minimization term, which depends on the specific solver or algorithm. Models like principal component analysis (PCA), non-negative matrix factorization (NMF), and matrix completion (MC) fall into this category.

For applications targeting sparse components, implicit decomposition sets the target matrix \(K_1\) to be a sparse matrix S. The low-rank matrix L is then the difference between M and S, calculated as \(L=M-S\). Sparse dictionary learning [66], sparse linear approximation, and compressive sensing [66,67,68] are built on the same idea.

$$\begin{aligned} \min _{S}\, f(M,S) \quad \text {s.t.}\; S \text { sparse} \end{aligned}$$
(3)

where \(M\approx S\) and the difference contains noise and other information. In this case, implicit decomposition can be used in compressed sensing and signal recovery, similar to unsupervised clustering [69] and image recognition [70].

Before the more robust explicit decomposition methods were proposed, the development of ultrasound clutter suppression was mainly based on PCA, SVD, or eigenvalue decomposition, which belong to implicit decomposition [19, 22, 35, 38, 39, 45]. Although many experiments have shown that these eigen-based filters greatly outperform traditional IIR and regression filters, many authors have realized that filtering based on implicit decomposition is not robust to accelerated tissue movements and to different kinds of noise [22, 35, 39]. Moreover, its high computational complexity is not suitable for real-time processing.

Explicit decomposition

Explicit decomposition \((X=2)\): Under this condition, M is usually decomposed into a low-rank matrix \(K_1=L\) and a sparse matrix \(K_2=S\) (\(M\approx L+S\)). This is called explicit decomposition because there are two constraints: a sparse constraint on S and a low-rank constraint on L. As a result, explicit decomposition is more robust than implicit decomposition. Its formulation is as follows:

$$\begin{aligned} \min _{L,S}\, f(L)+f(S) \quad \text {s.t.}\; L \text { low-rank},\; S \text { sparse} \end{aligned}$$
(4)

where \(M\approx L+S\) and f(.) represents a loss function. Explicit decomposition includes robust principal component analysis (RPCA), robust non-negative matrix factorization (RNMF), robust matrix completion (RMC), and robust subspace tracking (RST) [59, 71].

These methods generally work better and are more robust than implicit decomposition because of the additional constraints [71]. Accordingly, RPCA has been used as a powerful tool in MRI, CT, and ultrasound imaging [72,73,74], and many optimization algorithms based on RPCA and RMC have been proposed for clutter suppression in ultrasound imaging [21, 55, 75].

Stable decomposition

Stable decomposition \((X=3)\): Because real-world data always contain noise and corruption, an additional matrix \(K_3\) is added to represent unexpected components. \(K_3\) can represent distortion, shadows, or noise, depending on the situation (\(M\approx S+L+N\)). This decomposition is more stable than the explicit decomposition since more detailed information is separated and taken into account. Stable decomposition can handle more complex real-life situations, such as dynamic videos and maritime monitoring videos corrupted by complicated noise.

$$\begin{aligned} \min _{L,S,N}\, f(S)+f(L)+f(N) \quad \text {s.t.}\; L \text { low-rank},\; S \text { sparse} \end{aligned}$$
(5)

Stable decomposition methods include stable principal component analysis (stable PCA), stable non-negative matrix factorization (stable NMF), and three-term decomposition (TTD). These methods can deal with more complex situations. In ultrasound imaging, it is usually assumed that the signal M contains clutter signals L (low-rank), blood signals S (sparse), and noise N. Since ultrasound signals contain complex noise and dynamic clutter, the assumption \(M=S+L+N\) is more appropriate when there are meticulous requirements, such as in microvascular imaging. Although some literature mentions the stable decomposition of blood (with L, S, and E representing clutter signals, blood flow signals, and noise, respectively), it does not state whether constraints are added to the noise component. Therefore, the stable decomposition formulation is still a promising research area for ultrasound clutter suppression.

Models under DLSM framework

To date, many models, also called problem formulations, have been proposed. According to their mathematical formulations and features, methods are usually classified into families such as robust principal component analysis (RPCA), non-negative matrix factorization (NMF), matrix completion (MC), and subspace tracking (ST). Different models have different functions and aims. However, it has been proved that the solutions of many robust models can be mutually expressed in closed forms [76]. For instance, RPCA via principal component pursuit [56] can be considered an MC model using an \(l_1\)-norm loss function [63]. In addition, these models can be flexibly cast in any decomposition formulation. For example, adding constraints on the noise component of RPCA changes it from an explicit to a stable decomposition.

Robust principal component analysis

Principal component analysis (PCA) generates a set of linearly uncorrelated variables, called principal components, from a set of observations by orthogonal transformation. Similar mathematical tools include SVD and eigenvalue decomposition. RPCA is an extension of PCA (from implicit to explicit decomposition) that aims to recover the low-rank component and reduce the impact of grossly corrupted data. RPCA can be approached by principal component pursuit (PCP) [56, 64], Bayesian RPCA [77,78,79], and so on. The RPCA problem is generally expressed as follows:

$$\begin{aligned} M = L + S \end{aligned}$$
(6)

where L is the low-rank matrix and S is the sparse matrix. Given the nature of L and S, the most intuitive way to solve the RPCA problem is to minimize the rank of L and the \(l_0\)-norm of S:

$$\begin{aligned} \min _{L,S}\, rank(L) + \lambda \Vert S\Vert _{l_0}\quad \text {s.t.}\;M-L-S=0 \end{aligned}$$
(7)

where \(\lambda\) is a balancing parameter. However, this formulation is NP-hard, so convex relaxations such as PCP are needed.

The convex optimization method principal component pursuit (PCP) was first proposed by Candes et al. [56, 63, 80] to address the RPCA problem and has become one of the most popular methods for face recognition and background modeling in recent years. PCP convexly relaxes the RPCA problem with the following formula:

$$\begin{aligned} \min _{L,S}\, \Vert L\Vert _{*} + \lambda \Vert S\Vert _{l_1}\quad \text {s.t.}\;M-L-S=0 \end{aligned}$$
(8)

where \(\Vert .\Vert _{*}\) and \(\Vert .\Vert _{l_1}\) are the nuclear norm and the \(l_1\)-norm, respectively. Although this method excels in computer vision, it still has some limitations for sparse component recovery. First, it requires computationally expensive algorithms. Second, it is a batch method, which is not suitable for real-time applications, especially for plane-wave ultrasound with high frame rates. Third, it places strict requirements on the low-rank and sparse properties, and complex blood flow or noise may prevent ultrasound data from meeting them. To accelerate the algorithms and achieve higher precision, different solvers have been developed [81,82,83]. Solvers for real-time implementations have also been proposed [84, 85].
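As a sketch of how Eq. (8) can be solved, the following NumPy code uses an augmented Lagrange multiplier scheme that alternates singular value thresholding on L with soft thresholding on S. The helper names and the step-size heuristics (`mu` initialization and growth) are our own common-practice choices, not prescribed by any of the reviewed methods:

```python
import numpy as np

def _svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def _shrink(X, tau):
    """Soft thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L and sparse S by principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)   # common initial step size
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = _svt(M - S + Y / mu, 1.0 / mu)          # update low-rank part
        S = _shrink(M - L + Y / mu, lam / mu)       # update sparse part
        R = M - L - S
        Y += mu * R                                 # dual (multiplier) update
        mu = min(mu * 1.05, 1e7)                    # gradually tighten the penalty
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On synthetic low-rank-plus-sparse data this recovers both components; the per-iteration full SVD is exactly the cost the approximate-SVD and SVD-free DLSM algorithms reviewed here try to avoid.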

The stable principal component pursuit (SPCP) is a stable extension of PCP that mainly aims at reducing the impact of noise. SPCP adds a noise term E to PCP and constrains it by the Frobenius norm.
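For completeness, the SPCP problem is commonly written in the following standard form (supplied here from the SPCP literature rather than from an equation in this section), where \(\delta\) bounds the Frobenius norm of the noise term:

$$\begin{aligned} \min _{L,S}\, \Vert L\Vert _{*} + \lambda \Vert S\Vert _{l_1}\quad \text {s.t.}\;\Vert M-L-S\Vert _F\le \delta \end{aligned}$$

Setting \(\delta =0\) recovers the PCP problem of Eq. (8).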

Matrix completion

The main purpose of matrix completion (MC) is to recover the missing entries of a low-rank observation matrix. The Netflix movie-rating matrix recovery problem is one of the most classic examples. The classic low-rank matrix completion problem can be seen as finding the lowest-rank matrix L that matches the matrix M on all measured entries in the set \(\Omega\). The mathematical formulation of the MC problem is as follows:

$$\begin{aligned} \min _{L}\, rank(L)\quad \text {s.t.}\;L_{i,j}=M_{i,j}\;\forall (i,j)\in \Omega \end{aligned}$$
(9)

Because the implicit decomposition of MC is not robust to noise that affects even a small portion of the data [86, 87], MC is generally extended to explicit decomposition by adding restrictions, which is called robust matrix completion (RMC). The common RMC obtains stronger robustness than MC by adding sparse constraints, and its convexly relaxed formulation is as follows:

$$\begin{aligned} \min _{L,S}\, \Vert L\Vert _{*} + \lambda \Vert S\Vert _{l_1}\quad \text {s.t.}\;P_\Omega (L+S)=P_\Omega (M) \end{aligned}$$
(10)

where \(P_\Omega (M)\) is the projection of the complete dataset onto the measured entries \(\Omega\). Although the form of the decomposition is the same as in PCP, the unique constraints of RMC make it supervised whereas PCP is unsupervised [63], which is consistent with the purpose of RMC.
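A sketch of one way to handle the \(P_\Omega\) constraint in Eq. (10): run the same alternating thresholding as for PCP, but let the residual on unobserved entries be absorbed freely so that no constraint is imposed there. This is an illustrative simplification under our own heuristics, not a published RMC solver:

```python
import numpy as np

def _svt(X, tau):
    """Singular value thresholding (nuclear-norm proximal operator)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def _shrink(X, tau):
    """Soft thresholding (l1-norm proximal operator)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rmc(M, mask, lam=None, tol=1e-7, max_iter=500):
    """Robust matrix completion: enforce P_Omega(L + S) = P_Omega(M).
    mask is a boolean array, True on the measured entries Omega."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    Mo = np.where(mask, M, 0.0)                    # unobserved entries zeroed out
    mu = 0.25 * m * n / (np.abs(Mo).sum() + 1e-12)
    L = np.zeros_like(Mo); S = np.zeros_like(Mo); Y = np.zeros_like(Mo)
    for _ in range(max_iter):
        L = _svt(Mo - S + Y / mu, 1.0 / mu)
        R = Mo - L + Y / mu
        # on Omega: soft-threshold toward a sparse S;
        # off Omega: take the whole residual so no constraint acts there
        S = np.where(mask, _shrink(R, lam / mu), R)
        Y += mu * (Mo - L - S)
        mu = min(mu * 1.05, 1e7)
        if np.linalg.norm((Mo - L - S)[mask]) <= tol * np.linalg.norm(Mo):
            break
    return L, S
```

The returned L is defined on all entries, i.e., the unobserved part is completed by the low-rank structure learned from the measured entries.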

Nonnegative matrix factorization

The nonnegative matrix factorization (NMF) is also a widely used matrix factorization and dimension reduction model under the DLSM framework. The distinguishing feature of NMF is that the low-rank factor matrices are subject to nonnegativity constraints, which is consistent with physically natural features in many fields [88, 89]. The NMF problem is generally expressed as follows:

$$\begin{aligned} M \approx WH^\top \end{aligned}$$
(11)

where \(W\in \mathfrak {R}^{m\times k}\) and \(H\in \mathfrak {R}^{n\times k}\) are two nonnegative matrices, and \(k<\min \{m,n\}\) because the goal is dimension reduction. The most common formulation of the NMF optimization problem is as follows:

$$\begin{aligned} \min _{W,H}\, f(W,H) = \Vert M-WH^\top \Vert _F^2\quad \text {s.t.}\;W\ge 0,H\ge 0 \end{aligned}$$
(12)

where \(\Vert .\Vert _F^2\) is the squared Frobenius norm. Problem (12) is non-convex, and finding its global minimum is NP-hard [88, 90]. Consequently, optimization algorithms and solvers are developed to find a local minimum.
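A minimal sketch of how problem (12) is typically attacked in practice, using the classical Lee–Seung multiplicative updates, which decrease the Frobenius loss and preserve nonnegativity (our own illustrative code, not taken from the compared library):

```python
import numpy as np

def nmf(M, k, n_iter=500, eps=1e-9, seed=0):
    """Approximate a nonnegative M as W @ H.T with W, H >= 0
    via Lee-Seung multiplicative updates for the Frobenius loss."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, k)) + eps
    H = rng.random((n, k)) + eps
    for _ in range(n_iter):
        W *= (M @ H) / (W @ (H.T @ H) + eps)    # multiplicative: keeps W >= 0
        H *= (M.T @ W) / (H @ (W.T @ W) + eps)  # multiplicative: keeps H >= 0
    return W, H
```

Because the updates only multiply by nonnegative ratios, the nonnegativity constraints in (12) are satisfied automatically at every iteration.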

Subspace tracking

Subspace tracking (ST) can be regarded as dynamic RPCA, designed to handle newly arriving data or dynamic subspaces. The data at each moment t are processed incrementally and then discarded. This addresses the problem of new observations arriving asynchronously in online streaming environments and makes subspace tracking more efficient and less computationally expensive on extremely long data sequences [91]. Since ST can recover subspaces from incomplete frame vectors, it has the potential to further improve efficiency by downsampling the input frames [63]. The general formulation of the ST problem is as follows:

$$\begin{aligned} m_t = \sum _{x=1}^X k_x = l_t+s_t+e_t \quad \text {for}\; t=1,2,\dots ,n;\; X\in \{1,2,3\} \end{aligned}$$
(13)

where \(m_t\) is the input frame at time t, and \(l_t\), \(s_t\), \(e_t\) are the low-rank, sparse, and noise components of \(m_t\). The number of components X depends on the decomposition form, and the constraints and approximations imposed on each component depend on the chosen optimization method.
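The streaming idea can be caricatured as follows: estimate a low-dimensional clutter subspace once from a few warm-up frames, then split each incoming frame against it and discard the frame. This fixed-basis sketch (our own hypothetical code) omits the per-frame basis updates that real ST algorithms such as GROUSE perform, but it shows the per-frame cost structure:

```python
import numpy as np

def warmup_basis(frames, r):
    """Estimate an r-dimensional clutter subspace from warm-up frames
    (stacked as columns) via truncated SVD."""
    U, _, _ = np.linalg.svd(frames, full_matrices=False)
    return U[:, :r]

def process_frame(U, m_t):
    """Split one incoming frame into low-rank (in-subspace) and
    sparse (residual) parts; each frame is used once and discarded."""
    l_t = U @ (U.T @ m_t)   # projection onto the tracked subspace
    s_t = m_t - l_t         # residual: candidate sparse component
    return l_t, s_t
```

Per frame, the cost is two matrix-vector products with the n-by-r basis, which is what makes ST attractive for long plane-wave sequences compared with batch SVD.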

Low-rank representation

Low-rank representation (LRR) can also be called low-rank optimization or low-rank minimization. LRR is a minimization problem in which the cost function measures the fit between the input matrix M and an approximation matrix L [63]. The mathematical formulation of the LRR problem is as follows:

$$\begin{aligned} \min _{L}\, \Vert M-L\Vert _F\quad \text {s.t.}\;rank(L)\le r \end{aligned}$$
(14)

where M is the input matrix, L the low-rank approximation matrix, \(\Vert .\Vert _F\) the Frobenius norm, and r the rank bound. The basic form of LRR is similar to that of the other models; therefore, most otherwise unclassified models can be regarded as a category of LRR. For instance, RPCA and NMF can be obtained from similar architectures. Constraints other than the rank constraint can be added for specific applications, and LRR can be extended into an explicit or stable form by adding constraints on the sparse and noise components.
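Problem (14) has a closed-form solution via the truncated SVD (the Eckart–Young theorem), which is the workhorse behind many LRR-type methods; a minimal NumPy sketch:

```python
import numpy as np

def best_rank_r(M, r):
    """Closed-form solution of min ||M - L||_F s.t. rank(L) <= r:
    keep the r largest singular values (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

The approximation error equals the root sum of squares of the discarded singular values, which is why SVD-based clutter filters reduce to choosing how many leading singular components to keep or discard.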

The extension to tensor

In the DLSM framework, images are vectorized into the columns of the data matrix M, so only part of the single-dimensional information is used. This means that some multidimensional information is not taken into account in the decomposition. To improve the results, tensor decomposition has been proposed.

Tensor DLSM

For tensors, the most intuitive idea is to replace all matrices with tensors directly, since a tensor can be seen as a stack of several matrices. The resulting model is very similar to the DLSM framework and is subject to \(T=L+S+E\). The tensor DLSM extends all components to tensor form, as illustrated in Fig. 4:

$$\begin{aligned} T=L+S+E \end{aligned}$$
(15)

where T, L, S and E represent the data tensor, low-rank tensor, sparse tensor and noise tensor, respectively. Similar to the matrix DLSM framework, it can be optimized and solved as a minimization problem. Some other classic matrix decomposition optimization methods have also been extended to tensors. The tensor robust principal component method [92] has been proposed based on the tensor singular value decomposition (t-SVD) [93], and its effectiveness has been demonstrated in image denoising. Another robust low-rank tensor recovery model based on RPCA has also been published for complex multilinear data analysis [94]. Rank sparsity tensor decomposition (RSTD) [95] and other ideas based on stable principal component pursuit (SPCP) have also been utilized in image processing.

Fig. 4
figure 4

The illustration of tensor decomposition

Tensor decomposition

There are two classical tensor decomposition forms: the CANDECOMP/PARAFAC (CP) decomposition and the Tucker decomposition [96]. Given a tensor \(T\in \mathfrak {R}^{t_1\times t_2\times \cdots \times t_n}\), the Tucker and CP decompositions can be modeled as follows:

  • Tucker decomposition

    $$\begin{aligned} T=g\times _1 U_1\times _2 U_2 \cdots \times _N U_N+\varepsilon \end{aligned}$$
    (16)

where \(g\in \mathfrak {R}^{r_1\times r_2\times \cdots \times r_n}\) is the core tensor, which is multiplied along each mode i by the factor matrix \(U_i\in \mathfrak {R}^{t_i\times r_i}\) of rank \(r_i\), and \(\varepsilon\) represents the residual. Figure 5 is a schematic representation of the Tucker decomposition. The Tucker decomposition is usually regarded as a non-convex optimization problem [63]. The two most widely used solvers for it are Tucker-ALS, based on alternating least squares [96], and Tucker-ADAL, based on the alternating direction augmented Lagrangian [94]. The SVD based on the Tucker decomposition is generally called the higher-order singular value decomposition (HOSVD) [97, 98], which computes the SVDs of the three unfoldings of a three-dimensional tensor to obtain \(U_1,U_2,U_3\). HOSVD-based ultrasound clutter filtering has been proposed [52, 99] and shown to be more robust to low sampling rates than SVD.

  • CP decomposition

    $$\begin{aligned} T=\sum _{r=1}^{R} u_r^{(1)}\circ u_r^{(2)}\circ \cdots \circ u_r^{(N)}+\varepsilon \end{aligned}$$
    (17)

where R is the number of rank-one components, \(u_r^{(i)}\) is the r-th column of the factor matrix \(U_i\in \mathfrak {R}^{t_i\times R}\), \(\circ\) denotes the outer product, \(\varepsilon\) represents the residuals, and the sum of outer products is the CP model [71]. Figure 6 is a schematic representation of the CP decomposition. CP decomposition is similar in form to the Tucker decomposition, except that all factor matrices share the same number of components [96]. The original CP problem is NP-hard; therefore, the Frobenius norm is generally used to relax the low-rank constraint. Similar to the Tucker decomposition, the CP problem can be solved by alternating least squares, called CP-ALS. To the best of our knowledge, there is currently no well-known article applying CP decomposition to ultrasound clutter filtering.
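As a concrete illustration of the Tucker/HOSVD machinery, the sketch below (our own NumPy code, not the implementation of [52, 99]) computes a truncated HOSVD of a 3-way tensor by taking the SVD of each mode unfolding and projecting onto the resulting factors:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD of a 3-way tensor: one factor per
    mode from the SVD of its unfolding, core tensor by projection."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    g = T
    for mode, U in enumerate(Us):  # project each mode onto its factor
        g = np.moveaxis(np.tensordot(U.T, np.moveaxis(g, mode, 0), axes=1), 0, mode)
    return g, Us

def tucker_reconstruct(g, Us):
    """Multiply the core back along each mode to rebuild the tensor."""
    T = g
    for mode, U in enumerate(Us):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

With full multilinear ranks the reconstruction is exact; truncating the ranks yields the low-multilinear-rank approximation used for clutter filtering in the HOSVD-based methods cited above.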

Fig. 5
figure 5

The illustration of Tucker decomposition

Fig. 6
figure 6

The illustration of CANDECOMP/PARAFAC (CP) decomposition
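Although, to our knowledge, CP has not been applied to ultrasound clutter filtering, the CP-ALS iteration mentioned above is easy to sketch for a 3-way tensor. The code below is an illustrative NumPy implementation (our own, including a small `khatri_rao` helper), assuming the CP rank R is given:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, R, n_iter=200, seed=0):
    """CP decomposition of a 3-way tensor by alternating least squares:
    each factor matrix is updated in closed form with the others fixed."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T0 = T.reshape(I, -1)                     # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)  # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)  # mode-2 unfolding
    for _ in range(n_iter):
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def cp_reconstruct(A, B, C):
    """Rebuild the tensor from the CP factors."""
    return (A @ khatri_rao(B, C).T).reshape(A.shape[0], B.shape[0], C.shape[0])
```

Each ALS step is a linear least-squares problem, which is why CP-ALS avoids the full SVDs required by batch matrix methods; like Tucker-ALS, it converges only to a local optimum.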

Minimization problems

The decomposition problems generally turn into minimization or optimization problems, either in their original form or in Lagrangian form [63].

$$\begin{aligned} \min _{K_i} \, \sum _{i=1}^{x}{\lambda _if_i(K_i)}\quad \text {s.t.} \;C_i \end{aligned}$$
(18)

where \(\lambda _i\) are the regularization parameters, \(f_i(.)\) are the loss functions for the low-rank, sparse, and noise components, and \(C_i\) are the constraints on \(K_i\). Consistent with the decomposition problems, the minimization problems can be divided into three categories according to the number of loss functions and constraints imposed.

  • \(x=1\) is the case of implicit decomposition: \(\min _L \, \lambda _1f_1(L) \quad \text {s.t.} \; C_1\)

    where \(C_1\) is \(\Vert M-L\Vert _2=0\) or another form. For sparse decomposition, the low-rank component is replaced by a sparse component. This problem can be NP-hard or non-convex under specific constraints; therefore, other forms of the loss function are applied to relax the problem in such cases. For example, the loss function f is the rank function in the original MC model: \(\min _{L}\, rank(L) \quad \text {s.t.} \; \Vert M-L\Vert _2=0\).

  • \(x=2\) is the case of explicit decomposition: \(\min _{L,S} \; \lambda _1f_1(L)+\lambda _2f_2(S) \quad \text {s.t.} \; C_2\)

    where \(C_2\) is \(\Vert M-L-S\Vert _2=0\) or another form. For example, \(f_1\) and \(f_2\) are the rank and \(l_0\)-norm loss functions in the original RPCA model: \(\min _{L,S}\, rank(L)+\lambda \Vert S\Vert _{l_0} \quad \text {s.t.} \;\Vert M-L-S\Vert _2=0\).

  • \(x=3\) is the case of stable decomposition: \(\min _{L,S,E} \, \lambda _1f_1(L)+\lambda _2f_2(S)+\lambda _3f_3(E) \quad \text {s.t.} \; C_3\)

    where \(C_3\) is \(\Vert M-L-S-E\Vert _2=0\) or another form. The stable decomposition generally adds constraints on the noise component on top of the robust decomposition. For example, SPCP extends the RPCA model by bounding the noise with the Frobenius norm: \(\min _{L,S}\, \Vert L\Vert _{*}+\lambda \Vert S\Vert _{l_1} \quad \text {s.t.} \; \Vert M-L-S\Vert _F\le \epsilon\). The Frobenius norm loss function on the noise component is used in most cases.

Although some algorithms can solve non-convex problems through mathematical approximation [100], non-convex problems are in general difficult to solve and have weak convergence guarantees. Recasting the decompositions as tractable (typically convex) minimization problems therefore plays an important role.

Loss functions

The loss function can be seen as a constraint on the decomposed matrices. In the DLSM framework, loss functions are applied to the minimized matrices in the form of norms. For example, in implicit, explicit, and stable decomposition, the functions f(S), f(L), and f(E) represent the loss functions, or norms, on the sparse, low-rank, and noise components, respectively. In most cases, however, the original loss function is replaced by another form to make the problem tractable. The common loss functions (or norms) can be listed as follows:

  • \(l_0 \,\) norm loss function \((\Vert M\Vert _{l_0})\) is the number of non-zero entries.

  • \(l_1 \,\)norm loss function \((\Vert M\Vert _{l_1} = \sum _{i,j}|M_{i,j}|)\) is the Manhattan distance.

  • \(l_2 \,\)norm loss function \((\Vert M\Vert _{l_2} = \sqrt{\sum _{i,j}M^2_{i,j}})\) is also called the Frobenius norm \((\Vert M\Vert _{F} = \sqrt{\sum _{i,j}M^2_{i,j}})\).

  • \(l_{\infty } \,\)norm loss function \((\Vert M\Vert _{l_{\infty }} = \max _{i,j}|M_{i,j}|)\) is also called the max norm \((\Vert M\Vert _{max} = \max _{i,j}|M_{i,j}|)\).

  • \(l_{*} \,\)norm loss function \((\Vert M\Vert _{l_{*}})\), also called the nuclear norm, is the sum of the singular values.
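These norms are all one-liners in NumPy; the snippet below evaluates each of them on a small example matrix to make the definitions concrete:

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [0.0, -4.0]])

l0   = np.count_nonzero(M)                       # l0: non-zero entries -> 2
l1   = np.abs(M).sum()                           # l1: Manhattan distance -> 7
fro  = np.sqrt((M ** 2).sum())                   # l2 / Frobenius -> 5
linf = np.abs(M).max()                           # l_inf / max norm -> 4
nuc  = np.linalg.svd(M, compute_uv=False).sum()  # nuclear: sum of singular values -> 7
```

Note that for this matrix the \(l_1\) norm and the nuclear norm coincide only because M is diagonal; in general they constrain very different structures (entrywise sparsity versus low rank).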

Solvers

The models are solved by specific algorithms, which are called solvers in DLSM [63, 71] framework. Solvers are generally applied to the models after the minimization problem has been optimized and the loss function has been relaxed. Solvers can be broadly divided into regularization-based approaches and statistical-based approaches [101]. As for regularization approaches, the data matrices are regularized by convex surrogates with different features [63]. Typical regularization approaches include singular value thresholding (SVT) [102], accelerated proximal gradient (APG) [103], and augmented Lagrange multiplier (ALM) [83]. In terms of statistical-based approaches, prior distributions are used to capture low-rank or sparse properties and predict the joint distribution of the measured entries and unknown variables. Meanwhile, posterior distributions of the unknown variables can be approximated by Bayesian-based methods [63].
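The two proximal operators at the heart of regularization-based solvers such as SVT, APG, and ALM can each be written in a few lines; this is a generic sketch of the operators themselves (our own code), not any specific solver from the library:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, arg min_L 0.5*||L - X||_F^2 + tau*||L||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```

Note the cost asymmetry: `soft` is elementwise and cheap, while `svt` needs a full SVD per call, which is why SVD-free or approximate-SVD solvers are attractive for real-time use.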

Although many solvers have been proposed for the optimization problems under the DLSM framework, most mainstream algorithms for ultrasound clutter suppression are based on SVD. SVD-based clutter suppression algorithms were proposed and reviewed [19, 20, 22, 38, 43] for traditional CFI before 2011, and SVD is also used as a step or an iteration within many of the algorithms we evaluated. After 2015, with the rapid development of ultrasound technologies, SVD was combined with ultrafast plane-wave imaging, which provides a huge amount of data at a high frame rate, to improve the effectiveness of SVD and overcome the limitation of low frame rates [18, 51, 104, 105]. Owing to the excellent performance of SVD on large datasets, SVD-based clutter suppression on plane-wave ultrasound has become a popular and mainstream research area. SVD-based algorithms have been used in functional ultrasound [106, 107], super-resolution ultrasound localization microscopy [104, 108], and high-sensitivity microvessel perfusion imaging [18, 51] because of their excellent performance in ultrasound clutter suppression and microvascular imaging [21].

Because of these well-known disadvantages of SVD, the DLSM framework contains many approximate-SVD and SVD-free algorithms with higher efficiency and lower computational cost, which have the potential for real-time ultrasound clutter suppression.
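For reference, the baseline SVD clutter filter that the DLSM alternatives are compared against can be sketched in a few lines: stack the frames as columns of a Casorati matrix and zero out the largest singular components, which are assumed to span the tissue-clutter subspace. The manual choice of `clutter_rank` is exactly the rank-selection sensitivity discussed earlier (illustrative code, not the paper's implementation):

```python
import numpy as np

def svd_clutter_filter(frames, clutter_rank):
    """Basic SVD clutter filter. `frames` is an (n_pixels, n_frames)
    Casorati matrix; the `clutter_rank` largest singular components
    (assumed tissue clutter) are removed, leaving blood plus noise."""
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    s_f = s.copy()
    s_f[:clutter_rank] = 0.0        # suppress the clutter subspace
    return (U * s_f) @ Vt
```

Zeroing the smallest singular values as well (a second, upper threshold) is the usual extension for suppressing the noise subspace in addition to clutter.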

Experiments

The DLSM framework has been successfully applied to video surveillance, face recognition, texture modeling, video inpainting, audio separation, and latent semantic indexing [109]. However, only a few algorithms under the DLSM framework have been applied to ultrasound clutter suppression. Herein, we apply DLSM algorithms as the clutter filter for CFI. To that end, we test whether DLSM algorithms can be used for clutter suppression in simulation, phantom, and in vivo experiments. Finally, we conclude with a list of algorithms that are suitable for ultrasound clutter suppression.

Experiment data

Three datasets are used in this experiment: simulation data, phantom data, and in vivo rat data. For each dataset, raw RF data, complex envelope data, and B-mode data formats are analyzed. The acquisition parameters and procedures for the three datasets and a brief introduction to the three data formats are given in the following subsections.

Simulation data

The simulation data include a set of ultrasound simulation frames, as Fig. 7 shows. The ultrasound simulation data are generated by the Field II simulation program implemented in MATLAB [110, 111]. A cube \(A\in \mathfrak {R}^{60 \times 60 \times 60 }\) is built to represent the tissue filled with scatterers, with each voxel measuring \(1\, \text{mm}^3\). A vessel through the middle of the cube with a radius of \(20\, \text{mm}\) is generated by scatterers flowing to the right. The maximum velocity in the center of the vessel is \(15\, \text{mm}/\text{s}\). Sound waves travel from the top to the bottom and are focused at the center. The probe frequency and sampling frequency are set to \(7.27\, \text{MHz}\) and \(40\, \text{MHz}\), respectively. The frame rate is set to \(1000\, \text{fps}\), and 64 active elements are used for beamforming.

Fig. 7
figure 7

The illustration of the simulation data. a The simulation cube with tissue scatterers and blood scatterers. The red blood scatterers are in the middle and moving to the right. The simulated sound waves focus in the center. b A series of simulation data frames obtained from simulation experiments

Phantom data

The phantom was created to simulate a cube of tissue including one blood vessel that travels across the middle of the cube. Knox unflavored gelatin, water, and sugar-free Metamucil psyllium fiber supplement were gently heated and mixed to prepare the phantom gel, which represents soft tissue. An intra-venous (IV) tube simulating a venous structure runs through the gel cube. The probe frequency and sampling frequency are set to 10 MHz and 40 MHz, respectively. The Alpinion E-Cube R12 ultrasound system with an L3-12H linear array probe is used for data collection. Figure 8 briefly illustrates the phantom experiment.

Fig. 8
figure 8

The illustration of the phantom experiments. a The illustration of phantom data collection experiment. b The B-mode image of the first frame in phantom data

Rat data

The acquisition of the rat data was performed under the supervision of the Animal Care Facility of Concordia University. A 27-week-old male Sprague–Dawley rat was anesthetized for ultrasound scanning. The experiment followed the guidelines of the Canadian Council on Animal Care and was approved by the Animal Ethics Committee of Concordia University \((\#30000259)\). The probe frequency and the sampling frequency were set to 10 MHz and 40 MHz, respectively. As with the phantom data, the Alpinion E-Cube R12 research ultrasound system with an L3-12H linear array probe was used. The schematic diagram of the in vivo rat experiment is shown in Fig. 9.

Fig. 9
figure 9

The illustration of the in vivo rat experiments. a The illustration of the in vivo rat data collection experiment. b A schematic representation of sparse component of the in vivo rat data

Data formats

Both real and simulated ultrasound data are available in three formats: raw RF data, complex envelope data, and B-mode data. Common ultrasound probes generally consist of a piezoelectric transducer array that emits and receives signals. The backscattered signal, after preamplification and time-gain compensation, is referred to as the radio-frequency (RF) signal. The RF signal processed by an envelope detector becomes the complex envelope data. Finally, the complex envelope data are log-compressed into a grayscale format and further passed through intensity mapping and post-processing filtering; the final readable image is commonly called the brightness-mode (B-mode) image. RF frames generally have a very large size since the sampling rate of RF data is extremely high. This high sampling rate is not necessary for envelope data, which have no high-frequency content; therefore, envelope and B-mode images can be downsampled by a large factor. RF data may also be downsampled by a small factor, but the Nyquist rate should be respected to avoid aliasing.
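The RF-to-envelope-to-B-mode chain described above can be sketched with an FFT-based analytic signal (a discrete Hilbert transform) followed by log compression. This is a generic illustration with an assumed 60 dB display dynamic range, not the scanner's actual processing pipeline:

```python
import numpy as np

def analytic_signal(rf, axis=0):
    """FFT-based analytic signal (discrete Hilbert transform)
    of real RF data along the fast-time axis."""
    n = rf.shape[axis]
    F = np.fft.fft(rf, axis=axis)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0      # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * rf.ndim
    shape[axis] = n
    return np.fft.ifft(F * h.reshape(shape), axis=axis)

def to_bmode(rf, dyn_range_db=60.0, axis=0):
    """Envelope detection followed by log compression to a B-mode image."""
    env = np.abs(analytic_signal(rf, axis=axis))
    env = env / env.max()                      # normalize to [0, 1]
    floor = 10.0 ** (-dyn_range_db / 20.0)     # clip to display dynamic range
    return 20.0 * np.log10(np.maximum(env, floor))  # dB in [-dyn_range_db, 0]
```

The log compression is what shrinks the dynamic range so drastically, which is consistent with the differing algorithm behavior on RF, envelope, and B-mode inputs reported in the experiments below.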

Experiment methods

In the “Decomposition into low-rank and sparse matrices (DLSM) framework” section, the DLSM framework is introduced and built as Fig. 3 shows. The DLSM algorithms are classified into four groups: implicit decomposition, explicit decomposition, stable decomposition, and tensor decomposition. In this experiment, all algorithms are selected from LRSLibrary [61, 63, 71], which provides a group of low-rank and sparse matrix decomposition algorithms for moving object detection. In LRSLibrary, these algorithms are further subdivided by model into robust PCA (RPCA), subspace tracking (ST), matrix completion (MC), three-term decomposition (TTD), low-rank representation (LRR), nonnegative matrix factorization (NMF), nonnegative tensor factorization (NTF), and standard tensor decomposition (TD). Because the models can be flexibly converted into one another and share similar mathematical formulations, in this paper TTD is treated as a subcategory of stable PCA under stable decomposition. Similarly, NTF is treated as a subcategory under the tensor decomposition (TD) model.

In the first step, the DLSM algorithms are applied to the three formats of simulation data to verify the performance of all algorithms against the ground-truth sparse component and to compare computing times. Then, all algorithms are applied to phantom data to determine whether DLSM suits data with real ultrasound features. In the third step, rat data are used for verification and comparison. The acquired data have three formats: RF data, complex envelope data, and B-mode data. The results for the different data formats and datasets are grouped for comparison to find the optimal conditions for ultrasound clutter suppression.

All experiments are processed by a normal desktop computer with an i7-4770 CPU @ 3.40 GHz and 16.0 GB RAM.

Evaluation metrics

Two main indicators are used to evaluate the performance of various algorithms, which are signal-to-noise ratio (SNR) [112] and contrast-to-noise ratio (CNR). The SNR and CNR are calculated as follows:

$$\begin{aligned} \text{SNR}=\frac{\mu _1}{\sigma _1},\quad \text{CNR}=\frac{|\mu _1-\mu _2|}{\sqrt{\sigma _1^2+\sigma _2^2}/2} \end{aligned}$$
(19)

where \(\mu _1\) and \(\sigma _1\) are the mean intensity and standard deviation of the background window, and \(\mu _2\) and \(\sigma _2\) are the mean intensity and standard deviation of the target window.
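Computed directly from Eq. (19), the two metrics take only a few lines; the boolean masks defining the background and target windows are an assumption of this sketch (in practice the windows are chosen manually over homogeneous regions):

```python
import numpy as np

def snr_cnr(img, bg_mask, target_mask):
    """SNR and CNR as in Eq. (19): mu1/sigma1 are computed over the
    background window, mu2/sigma2 over the target window."""
    mu1, s1 = img[bg_mask].mean(), img[bg_mask].std()
    mu2, s2 = img[target_mask].mean(), img[target_mask].std()
    snr = mu1 / s1
    cnr = np.abs(mu1 - mu2) / (np.sqrt(s1 ** 2 + s2 ** 2) / 2.0)
    return snr, cnr
```

Because both metrics depend on the chosen windows, the same windows must be reused across all algorithms for the comparisons in the following sections to be meaningful.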

Experiment results

The results of the 106 DLSM algorithms on the three datasets and their three formats are reported in this section. The results include the SNR, CNR, computing times, and images for visual inspection. Since all the output images are sparse components of the same data and are very similar, we classify the results according to their performance and report the number of algorithms in each category instead of the SNR and CNR of every algorithm.

The results of all algorithms are divided into several categories. Results in the first category are considered good: they give the correct sparse matrix with a pure blank background, indicating high robustness to noise and dynamic background and strong decomposition ability. Cases in which the output sparse component is more than 100 times higher than the background pixel values are also regarded as good results. Results in the second category are considered defective: they either contain background noise that should belong to the low-rank components, or are so noisy that the algorithm failed to decompose the data. Results in the third category are not considered because the algorithms failed to run due to limitations such as nonnegative-input or real-input requirements; algorithms with such results are called restricted algorithms in this section.

All algorithms, including their model classifications, are from LRSLibrary [61, 63, 71], and all of them have been successfully applied to moving object detection in traffic video.

Simulation experiments

The experiments first used simulation data to verify the applicability and approximate performance of all algorithms, as well as their computational cost and running time for ultrasound clutter suppression. The first experiment applied all 106 DLSM algorithms to the RF simulation data. Among the 106 algorithms, 11 ran out of memory and failed. These algorithms cannot handle the large size of the simulation data because they use full singular value decomposition or QR decomposition and require a huge amount of memory to initialize (7.9 GB). Meanwhile, 6 algorithms require nonnegative input and cannot take RF data as input. Consequently, a total of 17 algorithms of these two kinds are classified as restricted algorithms.

Of the remaining 89 algorithms, only 19 are able to output relatively pure sparse components matching the ground truth without any preprocessing of the RF simulation data. To be precise, only three algorithms (LRR-ROSL, RPCA-IALM, and RPCA-IALM-BLWS) give a truly pure background as zero matrices (all entries in the sparse matrices except those representing the simulated vessel are 0). The other 16 results highlight the simulated vessel with a non-zero background; since the background pixel values are 1000–10,000 times smaller than the sparse component values, we consider these results pure, without low-rank components. A possible reason is the particularly small values and low dynamic range of the RF simulation data. In general, results with CNR values above 1.6 are classified as good results in Table 2. The results of another 44 algorithms are very noisy, with CNR values below 1.1; for these algorithms, the sparse parts of the simulation data are not clearly determined and the clutter is not well suppressed. The remaining 24 algorithms give blank output due to the low dynamic range and other reasons. Almost all of the DLSM algorithms give an SNR of about 0.759, so the SNR is not reported in detail here.

Table 2 The 19 algorithms with the CNR values above 1.6

Due to the extremely small data values and dynamic range, a large number of algorithms are invalidated. Therefore, in the second step, the order of magnitude and dynamic range of the RF simulation data are expanded to re-examine the performance of all algorithms.

After processing the RF data, 56 algorithms show good results. Among them, 16 give correct sparse components with a zero-valued background; the others give sparse components 1000–10,000 times greater than the background pixel values. The results of the remaining 33 algorithms are noisy: these algorithms either do not correctly isolate the sparse components or contain inseparable background noise with similar values. Most good results have a CNR greater than 1.3, while noisy results generally have a CNR less than 1. As before, almost all DLSM algorithms with good results give an SNR of about 0.759. A few good results have a CNR less than 1: the corresponding algorithms highlight only the sparsest parts, which reduces the mean intensity of the target window. These results are nevertheless considered good because the unhighlighted sparse components still have higher intensities than the background. The results after increasing the dynamic range are listed in Table 3. Examples of the different kinds of results in the simulation experiments are shown in Fig. 10.

Table 3 The 16 algorithms with pure background after increasing dynamic range

Fig. 10
figure 10

The output result images of simulation data. a The output of sparse component obtained by the IALM algorithm on original simulated RF data. It is a typical good result representing correct decomposition and pure sparse components. b The output of sparse component obtained by the ADM algorithm on original simulated RF data. It is a typical noisy result with background noise as sparse components. c The output of sparse component obtained by the OSTD algorithm on processed simulated RF data with larger dynamic range. The algorithms with a CNR less than 1 in Table 3 give such results with pure background because they only show the sparsest parts

The complex envelope simulation data are obtained from the RF data via the Hilbert transform. For this reason, the complex envelope data do not suffer from extremely small pixel values and low dynamic range. However, the SNR and CNR of the complex envelope simulation data are lower than those of the RF simulation data. Except for the 11 algorithms limited by frame size, 24 of the remaining algorithms show good results. In addition, 13 algorithms are affected by the complex numbers generated by the Hilbert transform and fail to run. The results of the other algorithms are noisy. After enlarging the dynamic range of the complex envelope simulation data, 8 algorithms that failed on the original simulation data give good results on the preprocessed data; these algorithms are sensitive to changes in dynamic range. The results on complex envelope simulation data are shown in Table 4. Clearly, the CNR values for the complex envelope simulation data are far lower than those for the RF data.

Table 4 The algorithms with good results on complex envelope simulation data

The third step of the simulation experiment uses B-mode data. For the B-mode simulation data, 40 DLSM algorithms successfully detect the simulated vessel in the original data. Meanwhile, 12 algorithms are affected by high peak values in the background and keep static peaks in the sparse components; these algorithms give pure sparse matrices after the peak values are suppressed. After enlarging the dynamic range of the original B-mode data, another 10 algorithms successfully detect the correct sparse components. In total, therefore, 62 algorithms can separate the correct sparse components. The other algorithms, which give very noisy results, may need parameter adjustment and thresholding. The results of the simulation experiment on B-mode data are reported in Table 5.

Table 5 The algorithms with good results on B-mode simulation data

Phantom experiments

The next set of experimental data used for testing is the phantom data, which test whether the DLSM framework is suitable for ultrasound clutter suppression in the presence of real ultrasound noise and other ultrasound features. The phantom data also consist of three formats: RF, complex envelope, and B-mode. Except for the 11 algorithms that are inapplicable due to the size limitation, the other 95 algorithms differ hugely in computing time, ranging from less than 0.1 s to more than 500 s.

For the RF phantom data, the order of magnitude of all pixels is first adjusted into the range of \(10^{\pm 3}\). However, the structured peak pixels caused by the bright structure generated at the rebound reflection interface still affect many algorithms. 36 algorithms clearly show the simulated vessel with a pure background, with an average CNR of 3.5, while 43 algorithms only highlight bright edges as the sparse components, with an average CNR of 0.4. The structured peak pixels of the RF phantom data can compromise the calculations of some algorithms because these bright edges have pixel values \(10^3\) times larger than the remaining pixels. Therefore, the peak values are processed logarithmically to achieve gray balance and reduce the dynamic range. After logarithmic processing, 22 additional algorithms are able to display the sparse components correctly, excluding the bright static edges. Among them, 19 algorithms were previously affected by the peaks, and 3 were defective on the original data. The results of the remaining algorithms are still noisy, and parameter adjustment should be applied for better performance. The results of the RF phantom experiments are shown in Table 6.

Table 6 The algorithms with good results in RF phantom experiments

The complex envelope phantom data are used next. Fewer algorithms give good results on the complex envelope phantom data than on the RF phantom data: 26 algorithms successfully detect the simulated vessel, and 33 algorithms only show the bright edges, which correspond to the intra-venous (IV) tube representing the vessel wall. The CNR of all results is less than 0.4. After suppressing the edge brightness logarithmically, 11 of the algorithms affected by the edges can separate pure sparse components, showing that extremely bright structures can reduce the sensitivity of many algorithms to sparse components. Some algorithms still give noisy results, and 19 algorithms cannot take complex numbers as input. The results on the complex envelope phantom data are shown in Table 7.

Table 7 The algorithms with good results on complex envelope phantom data

The last data format to be applied is B-mode. For the B-mode phantom data, 49 algorithms give good results with a CNR higher than 2, although the results of four of them include bright edges that should be low-rank components. The results of 35 algorithms only show the edges; among them, a few sensitive algorithms can also partly detect sparse components with obvious motion, but only a few bright moving pixels are detected. After reducing the dynamic range, no algorithm is affected by the edges, and 75 algorithms give sparse components with a pure background. Examples of good and noisy results in the phantom experiments are shown in Fig. 11. The results on the B-mode phantom data are shown in Table 8.

Fig. 11

Three typical outputs of the phantom experiments. a A typical good result showing pure sparse components without noise, obtained by the ALM algorithm on the original phantom data. b A typical output affected by bright edge structures, obtained by the APG algorithm on the original phantom data; because the pixel values of the bright edges are 1000 times larger than those in the rest of the image, the flow sparse component in the middle of the tube cannot be observed. c A typical noisy result showing sparse components with inseparable noise, obtained by the RSTD algorithm on the original phantom data

Table 8 The algorithms with good results on B-mode phantom data

In vivo experiments

In the third step, rat data are used to test the performance of these algorithms on real ultrasound data containing small vessel-like structures. The RF, complex envelope, and B-mode rat data are compared.

Fig. 12

Examples of the results of the rat experiments. a The B-mode image of the rat data, shown for comparison. b Result of the ALM algorithm on the original rat data; the dynamic background and noise are filtered out relatively well. c Result of the APG algorithm on the original rat data; large areas of dynamic tissue are classified as sparse components. Since there is no ground truth for the in vivo rat data, the results are described as relatively good or relatively noisy

For the RF rat data, 82 algorithms give very good and similar results, while 7 algorithms produce noisy, meaningless results; the other 17 algorithms are restricted by the size limitation or the non-negativity requirement. The complex envelope rat data yield results similar to those of the RF rat data. For the B-mode rat data, 92 algorithms successfully detect vessel-like structures and only 3 fail to show any part of the sparse components. Examples of good and noisy results in the in vivo rat experiments are shown in Fig. 12. Since most algorithms give results with similar SNR and CNR, the evaluation combines subjective observation with numerical analysis; because the in vivo structure is unknown, there is no ground truth against which to assess accuracy. Owing to the similarity of the results and the limited space, only the algorithms with pure backgrounds are shown in Table 9.

Table 9 The algorithms with pure backgrounds on in vivo data

Discussion

A total of 106 algorithms were tested in this paper. Analyzing the results obtained from the simulation, phantom, and in vivo experiments, we found that 11 algorithms require huge amounts of memory (7.9 GB for a frame size of \(250\times 125\) and 20 frames) because of their singular value decomposition or QR decomposition steps. Since typical ultrasound frames are large, the left unitary matrix of a full singular value decomposition demands an excessive amount of memory (e.g., in ADM). There are two possible solutions to this problem. First, an approximate SVD can be computed and stored in every iteration instead of the full SVD [55, 113]. Second, small overlapping patches of the ultrasound frames can be used to form the data matrix, which substantially reduces the size of the Casorati matrix and, eventually, the memory footprint. Another advantage of this windowing technique is that it can automatically equalize uneven noise distributions by normalizing the power locally [51]. The 11 algorithms with the size limitation are listed in Table 10.
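The memory figure above can be checked directly: for a \(250\times 125\times 20\) acquisition, the left unitary matrix of a full SVD of the Casorati matrix alone needs on the order of the reported 7.9 GB in double precision, while the economy-size factorization that rank truncation actually requires needs only a few megabytes. A small NumPy sketch (sizes illustrative only):

```python
import numpy as np

# T frames of size (nz, nx) stacked as a Casorati matrix of shape (nz*nx, T).
nz, nx, T = 250, 125, 20
frames = np.random.randn(T, nz, nx)
casorati = frames.reshape(T, nz * nx).T            # shape (31250, 20)

# Economy SVD: U is (nz*nx, T) -- all that singular value truncation needs.
U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
assert U.shape == (nz * nx, T)

# A full SVD would instead materialize a (nz*nx, nz*nx) left unitary matrix.
full_U_bytes = (nz * nx) ** 2 * 8                  # ~7.8 GB in float64
econ_U_bytes = (nz * nx) * T * 8                   # 5 MB
```

Working on small overlapping patches shrinks \(n_z n_x\) further, so the full-SVD footprint falls quadratically with the patch size.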

Table 10 The 11 algorithms with size limitation

Moreover, 6 algorithms require non-negative input. Since ultrasound RF data usually contain both positive and negative values, these algorithms are not suitable for clutter suppression on RF data. To quantify the impact of the non-negativity requirement, the 20 algorithms giving good results on the RF simulation data were re-tested with the absolute value of the RF data. All of these algorithms are still capable of showing high-contrast vessel structures; the SNR obtained with absolute-value RF data (0.81) is slightly greater than the original SNR (0.76), indicating a slight increase of background noise in the sparse components, while the CNR obtained with absolute-value RF data (1.69) is slightly lower than the original CNR (1.73). This suggests that the non-negativity requirement has only a limited effect on the accuracy of the DLSM decomposition. The 6 algorithms with the non-negativity requirement are listed in Table 11.
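For reference, the SNR and CNR figures quoted throughout are computed from vessel and background regions of interest in the power image. The exact formulas are not restated in this section, so the helper below assumes the commonly used definitions \(\mathrm{SNR}=\mu_v/\sigma_b\) and \(\mathrm{CNR}=|\mu_v-\mu_b|/\sigma_b\):

```python
import numpy as np

def snr_cnr(power_img, vessel_mask, background_mask):
    """Compute (SNR, CNR) from a power image and two boolean ROI masks.

    Assumed (commonly used) definitions: SNR = mu_v / sigma_b and
    CNR = |mu_v - mu_b| / sigma_b, where mu_v is the mean over the vessel
    ROI and mu_b, sigma_b are the mean and standard deviation over the
    background ROI.
    """
    mu_v = power_img[vessel_mask].mean()
    mu_b = power_img[background_mask].mean()
    sigma_b = power_img[background_mask].std()
    return mu_v / sigma_b, abs(mu_v - mu_b) / sigma_b
```

Both metrics are dimensionless ratios here; some papers additionally report them on a dB scale.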

Table 11 The algorithms with non-negative requirement

Another group of algorithms is restricted by complex inputs. The experimental results show that complex envelope data are not suitable for ultrasound clutter suppression, since they require longer calculation times and give poor performance. In particular, 13 algorithms cannot take complex numbers as input, and some others become stuck in long loops requiring more than 300 s, so they cannot separate the low-rank and sparse components well. The algorithms that failed because of complex numbers are listed in Table 12. In addition, the extremely small CNR obtained from the envelope data, only one-hundredth of that obtained from the other datasets, further indicates that envelope data are not a suitable input format for ultrasound clutter suppression.

Table 12 The 13 algorithms that cannot take complex numbers as input

Among the remaining algorithms, 17 are easily affected by outliers. These algorithms cannot suppress the peak values when the dynamic range, i.e., the base-2 logarithm of the ratio between the maximum and minimum pixel magnitudes, roughly exceeds \(11.5\, \text{bits}\). These 17 algorithms performed well on the pre-processed data and showed good results in the simulation and phantom experiments; however, they are not robust to outliers. In the simulation experiments, they assigned the background peak pixels to the sparse components, resulting in a noisy background; similarly, they are not robust to large structured outliers and assigned the bright static edges to the sparse components in the phantom experiments. These 17 outlier-susceptible algorithms are listed in Table 13. Since complex envelope data are not suitable for ultrasound clutter suppression, the performance of the algorithms on complex envelope data has not been considered.
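Taking dynamic range in bits as the base-2 logarithm of the max/min magnitude ratio (the convention assumed here), the check against the empirical ~11.5-bit limit is straightforward:

```python
import numpy as np

def dynamic_range_bits(img, eps=1e-12):
    """Dynamic range in bits: log2 of the ratio of the largest to the
    smallest non-zero pixel magnitude."""
    mag = np.abs(img)
    mag = mag[mag > eps]  # ignore exact zeros
    return float(np.log2(mag.max() / mag.min()))

def needs_peak_compression(img, limit_bits=11.5):
    # `limit_bits` reflects the empirical threshold reported in the text,
    # not a hard theoretical limit.
    return dynamic_range_bits(img) > limit_bits
```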

Table 13 The algorithms not robust to the outliers

In addition, a pure background (0 dB) is of great significance for vascular image segmentation and for the processing and analysis of other medical images [46, 114]. However, this is a difficult goal because of probe jitter, dynamic backgrounds, noise, shadows, and many other factors. Consequently, only a few results have a pure background on the simulation and phantom data, and no result has a pure background on the in vivo rat data, where the tissue motions are complex and the conditions harsh. The algorithms that deal well with these challenges and give pure backgrounds on the simulation and phantom data are listed in Table 14.

Table 14 The algorithms with the potential to give a pure background

Overall, in terms of calculation time, DLSM algorithms take the longest to run on complex envelope data compared with RF and B-mode data; because of the large computational load, complex envelope data take twice as long to process as RF data, confirming again that complex envelope data are not suitable for ultrasound clutter suppression. Meanwhile, RF data require slightly less computation time than B-mode data, even though RF data carry more information. DLSM algorithms also spend slightly more time on the preprocessed data than on the original data, but they separate the sparse components more accurately. Table 15 lists the average time taken by the fastest 20 algorithms on the different datasets and data formats.

Table 15 The average time taken by the fastest 20 algorithms

The experimental results show that ultrasound data differ substantially from ordinary video surveillance frames. All DLSM algorithms can be applied successfully to surveillance images, yet some of them are unsuitable for ultrasound data. Quite a few algorithms fail on ultrasound RF data and complex envelope data, which may be due to the complexity of the RF data and the complex-valued nature of the envelope data. The simulation experiments show that some algorithms are still not robust to ultrasonic clutter and are insensitive to data with overall small pixel values (\(<10^{-3}\)); for these algorithms, the low-rank components of the results often contain inseparable background flicker, noise, and tiny motions. These algorithms have been listed in Table 13. Meanwhile, the phantom experiments show that some DLSM algorithms are not robust or stable at dynamic ranges greater than \(10\, \text{bits}\). In ultrasound data, regions with small values often occur within uniform tissue, while edges much brighter than the surrounding tissue are common because of strong reflections at interfaces. The CNR after preprocessing the ultrasound data is generally higher than that of the raw data, and the results on peak-removed data are also significantly better than those on raw data. It is therefore necessary to preprocess ultrasound images when applying DLSM algorithms, and parameter adjustment or other mathematical refinements are needed for some DLSM algorithms to reach their best filtering performance on ultrasound data.
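As a concrete instance of the DLSM framework, the sketch below implements the classical inexact augmented Lagrange multiplier (ALM) solver for robust PCA [83], alternating singular value thresholding (low-rank clutter) with soft thresholding (sparse flow) on a Casorati matrix. It is a generic textbook implementation with default parameters, not a reproduction of any specific benchmarked algorithm:

```python
import numpy as np

def shrink(x, tau):
    """Element-wise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_ialm(D, lam=None, max_iter=200, tol=1e-7):
    """Inexact ALM for robust PCA: D ~ L (low-rank clutter) + S (sparse flow).

    Minimal sketch of min ||L||_* + lam * ||S||_1  s.t.  D = L + S, with
    the standard default lam = 1/sqrt(max(m, n)).  Production algorithms
    in the review add approximate SVDs, patching, and parameter tuning.
    """
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual init
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular value thresholding -> low-rank update.
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding -> sparse update.
        S = shrink(D - L + Y / mu, lam / mu)
        R = D - L - S
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R) / norm_D < tol:
            break
    return L, S
```

In the flow-imaging setting, the columns of `D` are vectorized frames, `L` collects the slowly varying tissue clutter, and `|S|` is rendered as the power Doppler image; the regularizer `lam` is exactly the kind of parameter whose adjustment the experiments show to matter.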

On the other hand, in terms of ultrasound data formats, the experiments show that B-mode data allow more algorithms to succeed at vascular detection. Although B-mode data may lose information, the outliers that affect DLSM algorithms are also weakened by the Hilbert transform and the absolute-value operation, which might be why more DLSM algorithms work on B-mode data. Although B-mode data yield more good results than RF data, RF data require slightly less average calculation time and are better suited to real-time requirements. The algorithms in Table 16 are relatively stable on all three datasets; each requires less than 1 s of computation while giving the correct sparse components. The experiments suggest that they may be the most suitable for ultrasound clutter suppression.

Table 16 The algorithms requiring less than 1 s of calculation time

Finally, the 22 algorithms that are most robust to noise and perform best across all the experiments are listed in Table 17. These algorithms appear to be strong candidates for ultrasound clutter suppression.

Table 17 The most robust algorithms with the best performance

In this paper, we adapted techniques originally proposed for natural images in the field of computer vision to ultrasound color flow imaging. Because of the physics of sound propagation, ultrasound images have unique characteristics, most notably the so-called "speckle noise". We believe that the results of this paper can be generalized to other imaging modalities affected by diffraction, such as optical coherence tomography (OCT).

Conclusion

The performance of 106 established low-rank and sparse decomposition algorithms for clutter filtering has been tested in this work. Our results show that a few robust matrix decomposition techniques can overcome the limitations of SVD-based ultrasound clutter suppression methods, such as sensitivity to large noise. In addition, several matrix decomposition techniques show potential for real-time implementation on commercial ultrasound machines owing to their low computational complexity. Furthermore, some preprocessing is necessary when applying this framework to ultrasound data. Finally, some of the algorithms studied in this work can automatically estimate the optimal power Doppler images without requiring extensive manual tuning, which may pave the way for easier commercial and clinical translation of ultrasound clutter suppression (Additional file 1).

Availability of data and materials

The datasets generated and analyzed during the current study are available from http://data.sonography.ai/.

Abbreviations

B-mode:

Brightness-mode

CP:

CANDECOMP/PARAFAC

CFI:

Color flow imaging

CNR:

Contrast-to-noise ratio

CTA:

Computed tomography angiography

CT:

Computed tomography

DLSM:

Decomposition into low-rank and sparse matrices

DSA:

Digital subtraction angiography

DUS:

Duplex ultrasonography

FIR:

Finite impulse response

IIR:

Infinite impulse response

LRR:

Low-rank representation

MRA:

Magnetic resonance angiography

MRI:

Magnetic resonance imaging

MC:

Matrix completion

NMF:

Non-negative matrix factorization

OCT:

Optical coherence tomography

PCA:

Principal component analysis

PCP:

Principal component pursuit

RSTD:

Rank sparsity tensor decomposition

RMC:

Robust matrix completion

RNMF:

Robust non-negative matrix factorization

RPCA:

Robust principal component analysis

RPCP:

Robust principal component pursuit

RST:

Robust subspace tracking

SNR:

Signal-to-noise ratio

SVD:

Singular value decomposition

Stable NMF:

Stable non-negative matrix factorization

Stable RPCA:

Stable robust principal component analysis

ST:

Subspace tracking

TD:

Tensor decomposition

t-SVD:

Tensor singular value decomposition

TTD:

Three-term decomposition

References

  1. Wang H, Naghavi M, Allen C, Barber RM, Bhutta ZA, Carter A, Casey DC, Charlson FJ, Chen AZ, Coates MM, et al. Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980–2015: a systematic analysis for the global burden of disease study 2015. Lancet. 2016;388(10053):1459–544.

  2. Dasgupta R, Fishman SJ. ISSVA classification. In: Seminars in pediatric surgery, vol. 23. Elsevier; 2014. p. 158–61.

  3. Camici PG, Crea F. Coronary microvascular dysfunction. N Engl J Med. 2007;356(8):830–40.

  4. Marinescu MA, Löffler AI, Ouellette M, Smith L, Kramer CM, Bourque JM. Coronary microvascular dysfunction, microvascular angina, and treatment strategies. JACC Cardiovasc Imaging. 2015;8(2):210–20.

  5. McDonald DM, Baluk P. Significance of blood vessel leakiness in cancer. AACR. 2002.

  6. Padera TP, Stoll BR, Tooredman JB, Capen D, di Tomaso E, Jain RK. Pathology: cancer cells compress intratumour vessels. Nature. 2004;427(6976):695.

  7. Solomon SD, Chew E, Duh EJ, Sobrin L, Sun JK, VanderBeek BL, Wykoff CC, Gardner TW. Diabetic retinopathy: a position statement by the american diabetes association. Diabetes Care. 2017;40(3):412–8.

  8. Association AD, et al. Peripheral arterial disease in people with diabetes. Diabetes Care. 2003;26(12):3333–41.

  9. Dolan NC, Liu K, Criqui MH, Greenland P, Guralnik JM, Chan C, Schneider JR, Mandapat AL, Martin G, McDermott MM. Peripheral artery disease, diabetes, and reduced lower extremity functioning. Diabetes Care. 2002;25(1):113–20.

  10. Healy D, Rogers P, Hii L, Wingfield M. Angiongenesis: a new theory for endometriosis. Hum Reprod Update. 1998;4(5):736–40.

  11. Horsch AD, Weale AR. Imaging in vascular disease. Surgery. 2018.

  12. Evans DH, Jensen JA, Nielsen MB. Ultrasonic colour Doppler imaging. Interface Focus. 2011;1(4):490–502.

  13. Demchuk AM, Menon BK, Goyal M. Comparing vessel imaging: noncontrast computed tomography/computed tomographic angiography should be the new minimum standard in acute disabling stroke. Stroke. 2016;47(1):273–81.

  14. Rübenthaler J, Reiser M, Clevert D-A. Diagnostic vascular ultrasonography with the help of color Doppler and contrast-enhanced ultrasonography. Ultrasonography. 2016;35(4):289.

  15. Bjaerum S, Torp H, Kristoffersen K. Clutter filter design for ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(2):204–16.

  16. Oglat AA, Matjafri M, Suardi N, Oqlat MA, Abdelrahman MA, Oqlat AA. A review of medical Doppler ultrasonography of blood flow in general and especially in common carotid artery. J Med Ultrasound. 2018;26(1):3.

  17. Gerhard-Herman M, Gardin JM, Jaff M, Mohler E, Roman M, Naqvi TZ. Guidelines for noninvasive vascular laboratory testing: a report from the american society of echocardiography and the society for vascular medicine and biology. Vasc Med. 2006;11(3):183–200.

  18. Demené C, Deffieux T, Pernot M, Osmanski B-F, Biran V, Gennisson J-L, Sieu L-A, Bergel A, Franqui S, Correas J-M, et al. Spatiotemporal clutter filtering of ultrafast ultrasound data highly increases Doppler and fUltrasound sensitivity. IEEE Trans Med Imaging. 2015;34(11):2271–85.

  19. Alfred C, Lovstakken L. Eigen-based clutter filter design for ultrasound color flow imaging: a review. IEEE Trans Ultrason Ferroelectr Freq Control. 2010;57(5):1096–111.

  20. Mauldin FW, Lin D, Hossack JA. The singular value filter: a general filter design strategy for pca-based signal separation in medical ultrasound imaging. IEEE Trans Med Imaging. 2011;30(11):1951–64.

  21. Solomon O, Cohen R, Zhang Y, Yang Y, He Q, Luo J, van Sloun RJ, Eldar YC. Deep unfolded robust pca with application to clutter suppression in ultrasound. IEEE Trans Med Imaging. 2019.

  22. Bjaerum S, Torp H, Kristoffersen K. Clutter filters adapted to tissue motion in ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(6):693–704.

  23. Kadi AP, Loupas T. On the performance of regression and step-initialized iir clutter filters for color Doppler systems in diagnostic medical ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control. 1995;42(5):927–37.

  24. Hoeks A, Van de Vorst J, Dabekaussen A, Brands P, Reneman R. An efficient algorithm to remove low frequency Doppler signals in digital Doppler systems. Ultrason Imaging. 1991;13(2):135–44.

  25. Torp H. Clutter rejection filters in color flow imaging: a theoretical approach. IEEE Trans Ultrason Ferroelectr Freq Control. 1997;44(2):417–24.

  26. Eckersley RJ, Chin CT, Burns PN. Optimising phase and amplitude modulation schemes for imaging microbubble contrast agents at low acoustic power. Ultrasound Med Biol. 2005;31(2):213–9.

  27. Bruce M, Averkiou M, Tiemann K, Lohmaier S, Powers J, Beach K. Vascular flow and perfusion imaging with ultrasound contrast agents. Ultrasound Med Biol. 2004;30(6):735–43.

  28. Simpson DH, Burns PN, Averkiou MA. Techniques for perfusion imaging with microbubble contrast agents. IEEE Trans Ultrason Ferroelectr Freq Control. 2001;48(6):1483–94.

  29. Hwang J-J, Simpson DH. Two pulse technique for ultrasonic harmonic imaging. Google Patents. US Patent 5,951,478. 1999.

  30. Simpson DH, Chin CT, Burns PN. Pulse inversion Doppler: a new method for detecting nonlinear echoes from microbubble contrast agents. IEEE Trans Ultrason Ferroelectr Freq Control. 1999;46(2):372–82.

  31. Thomas L, Hall A. An improved wall filter for flow imaging of low velocity flow. In: 1994 Proceedings of IEEE ultrasonics symposium, vol. 3. New York: IEEE; 1994. p. 1701–4.

  32. Yoo YM, Managuli R, Kim Y. Adaptive clutter filtering for ultrasound color flow imaging. Ultrasound Med Biol. 2003;29(9):1311–20.

  33. Allam ME, Kinnick RR, Greenleaf JF. Isomorphism between pulsed-wave Doppler ultrasound and direction-of-arrival estimation. II. Experimental results. IEEE Trans Ultrason Ferroelectr Freq Control. 1996;43(5):923–35.

  34. Vaitkus PJ, Cobbold RS, Johnston KW. A new time-domain narrowband velocity estimation technique for Doppler ultrasound flow imaging. II. Comparative performance assessment. IEEE Trans Ultrason Ferroelectr Freq Control. 1998;45(4):955–71.

  35. Ledoux LA, Brands PJ, Hoeks AP. Reduction of the clutter component in Doppler ultrasound signals based on singular value decomposition: a simulation study. Ultrasonic Imaging. 1997;19(1):1–18.

  36. Kargel C, Hobenreich G, Trummer B, Insana MF. Adaptive clutter rejection filtering in ultrasonic strain-flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2003;50(7):824–35.

  37. Song F, Zhang D, Gong X. Performance evaluation of eigendecomposition-based adaptive clutter filter for color flow imaging. Ultrasonics. 2006;44:67–71.

  38. Kruse DE, Ferrara KW. A new high resolution color flow system using an eigendecomposition-based adaptive filter for clutter rejection. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(10):1384–99.

  39. Alfred C, Cobbold RS. Single-ensemble-based eigen-processing methods for color flow imaging-part I: the Hankel-SVD filter. IEEE Trans Ultrason Ferroelectr Freq Control. 2008;55(3):559–72.

  40. Candes EJ, Sing-Long CA, Trzasko JD. Unbiased risk estimates for singular value thresholding and spectral estimators. IEEE Trans Signal Process. 2013;61(19):4643–57.

  41. Bar-Zion A, Tremblay-Darveau C, Solomon O, Adam D, Eldar YC. Fast vascular ultrasound imaging with enhanced spatial resolution and background rejection. IEEE Trans Med Imaging. 2016;36(1):169–80.

  42. Kim M, Abbey CK, Hedhli J, Dobrucki LW, Insana MF. Expanding acquisition and clutter filter dimensions for improved perfusion sensitivity. IEEE Trans Ultrason Ferroelectr Freq Control. 2017;64(10):1429–38.

  43. Lovstakken L, Bjaerum S, Kristoffersen K, Haaverstad R, Torp H. Real-time adaptive clutter rejection filtering in color flow imaging using power method iterations. IEEE Trans Ultrason Ferroelectr Freq Control. 2006;53(9):1597–608.

  44. Mauldin FW, Viola F, Walker WF. Complex principal components for robust motion estimation. IEEE Trans Ultrason Ferroelectr Freq Control. 2010;57(11):2437–49.

  45. Gallippi CM, Nightingale KR, Trahey GE. Bss-based filtering of physiological and arfi-induced tissue and blood motion. Ultrasound Med Biol. 2003;29(11):1583–92.

  46. Bayat M, Fatemi M, Alizad A. Background removal and vessel filtering of noncontrast ultrasound images of microvasculature. IEEE Trans Biomed Eng. 2019;66(3):831–42.

  47. Adabi S, Ghavami S, Fatemi M, Alizad A. Non-local based denoising framework for in vivo contrast-free ultrasound microvessel imaging. Sensors. 2019;19(2):245.

  48. Olleros GG, Stuart MB, Jensen JA, Hoyos CAV, Hansen KL. Spatiotemporal filtering for synthetic aperture slow flow imaging. In: 2018 IEEE international ultrasonics symposium (IUS). New York: IEEE; 2018. p. 1–4.

  49. Bergqvist G, Larsson EG. The higher-order singular value decomposition: theory and an application [lecture notes]. IEEE Signal Process Mag. 2010;27(3):151–4.

  50. Du Y, Zhang M, Alfred C, Yu W. Low-rank adaptive clutter filtering for robust ultrasound vector flow imaging. In: 2018 IEEE international ultrasonics symposium (IUS). New York: IEEE; 2018. p. 1–9.

  51. Song P, Manduca A, Trzasko JD, Chen S. Ultrasound small vessel imaging with block-wise adaptive local clutter filtering. IEEE Trans Med Imaging. 2016;36(1):251–62.

  52. Kim M, Zhu Y, Hedhli J, Dobrucki LW, Insana MF. Multidimensional clutter filter optimization for ultrasonic perfusion imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2018;65(11):2020–9.

  53. Nayak R, Kumar V, Webb J, Gregory A, Fatemi M, Alizad A. Non-contrast agent based small vessel imaging of human thyroid using motion corrected power Doppler imaging. Sci Rep. 2018;8(1):15318.

  54. Baranger J, Arnal B, Perren F, Baud O, Tanter M, Demené C. Adaptive spatiotemporal svd clutter filtering for ultrafast Doppler imaging using similarity of spatial singular vectors. IEEE Trans Med Imaging. 2018;37(7):1574–86.

  55. Ashikuzzaman M, Belasso C, Kibria MG, Bergdahl A, Gauthier CJ, Rivaz H. Low rank and sparse decomposition of ultrasound color flow images for suppressing clutter in real-time. IEEE Trans Med Imaging. 2019.

  56. Candès EJ, Li X, Ma Y, Wright J. Robust principal component analysis? J ACM. 2011;58(3):11.

  57. Bayat M, Fatemi M. Concurrent clutter and noise suppression via low rank plus sparse optimization for non-contrast ultrasound flow Doppler processing in microvasculature. In: 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). New York: IEEE; 2018. p. 1080–4.

  58. Lin Z. A review on low-rank models in data analysis. Big Data Inf Anal. 2016;1(2&3):139–61.

  59. Davenport MA, Romberg J. An overview of low-rank matrix recovery from incomplete observations. IEEE J Sel Top Signal Process. 2016;10(4):608–22.

  60. Lauritzen SL. Graphical models, vol. 17. Oxford: Clarendon Press; 1996.

  61. Sobral A, Bouwmans T, Zahzah E-h. LRSLibrary: low-rank and sparse tools for background modeling and subtraction in videos. In: Handbook of robust low-rank and sparse matrix decomposition: applications in image and video processing. CRC Press; 2016.

  62. Zhou X, Yang C, Zhao H, Yu W. Low-rank modeling and its applications in image analysis. ACM Comput Surv. 2015;47(2):36.

  63. Bouwmans T, Sobral A, Javed S, Jung SK, Zahzah E-H. Decomposition into low-rank plus additive matrices for background/foreground separation: a review for a comparative evaluation with a large-scale dataset. Comput Sci Rev. 2017;23:1–71.

  64. Chandrasekaran V, Sanghavi S, Parrilo PA, Willsky AS. Rank-sparsity incoherence for matrix decomposition. SIAM J Optim. 2011;21(2):572–96.

  65. Li P, Yang X, Zhang D, Bian Z. Adaptive clutter filtering based on sparse component analysis in ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2008;55(7):1582–96.

  66. Chen G, Needell D. Compressed sensing and dictionary learning. Finite Frame Theory Complet Introd Overcompleteness. 2016;73:201.

  67. Cevher V, Duarte MF, Hegde C, Baraniuk R. Sparse signal recovery using Markov random fields. In: Adv Neural Inf Process Syst. 2009. p. 257–64.

  68. Cevher V, Sankaranarayanan A, Duarte MF, Reddy D, Baraniuk RG, Chellappa R. Compressive sensing for background subtraction. In: European conference on computer vision. Berlin: Springer; 2008. p. 155–68.

  69. Ramirez I, Sprechmann P, Sapiro G. Classification and clustering via dictionary learning with structured incoherence and shared features. In: 2010 IEEE computer society conference on computer vision and pattern recognition. New York: IEEE; 2010. p. 3501–3508.

  70. Tong T, Wolz R, Coupé P, Hajnal JV, Rueckert D, Initiative ADN, et al. Segmentation of mr images via discriminative dictionary learning and sparse coding: application to hippocampus labeling. NeuroImage. 2013;76:11–23.

  71. Sobral AC. Robust low-rank and sparse decomposition for moving object detection: from matrices to tensors. PhD thesis, Université de La Rochelle. 2017.

  72. Bouwmans T, Javed S, Zhang H, Lin Z, Otazo R. On the applications of robust pca in image and video processing. Proc IEEE. 2018;106(8):1427–57.

  73. Otazo R, Candes E, Sodickson DK. Low-rank plus sparse matrix decomposition for accelerated dynamic mri with separation of background and dynamic components. Magn Reson Med. 2015;73(3):1125–36.

  74. Gao H, Yu H, Osher S, Wang G. Multi-energy ct based on a prior rank, intensity and sparsity model (prism). Inverse Probl. 2011;27(11):115012.

  75. Cohen R, Zhang Y, Solomon O, Toberman D, Taieb L, van Sloun RJ, Eldar YC. Deep convolutional robust pca with application to ultrasound imaging. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). new York: IEEE; 2019. p. 3212–6.

  76. Zhang H, Lin Z, Zhang C, Gao J. Relations among some low-rank subspace recovery models. Neural Comput. 2015;27(9):1915–50.

  77. Ding X, He L, Carin L. Bayesian robust principal component analysis. IEEE Trans Image Process. 2011;20(12):3419–30.

  78. Babacan SD, Luessi M, Molina R, Katsaggelos AK. Sparse Bayesian methods for low-rank matrix estimation. IEEE Trans Signal Process. 2012;60(8):3964–77.

  79. Zhao Q, Meng D, Xu Z, Zuo W, Zhang L. Robust principal component analysis with complex noise. In: International conference on machine learning. 2014. p. 55–63.

  80. Wright J, Ganesh A, Rao S, Peng Y, Ma Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In: Advances in neural information processing systems. 2009. p. 2080–8.

  81. Liu R, Lin Z, Wei S, Su Z. Solving principal component pursuit in linear time via \(l\_1\) filtering. 2011. arXiv preprint arXiv:1108.5359.

  82. Mu Y, Dong J, Yuan X, Yan S. Accelerated low-rank visual recovery by random projection. In: CVPR 2011. New York: IEEE; 2011. p. 2609–16.

  83. Lin Z, Chen M, Ma Y. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. 2010. arXiv preprint arXiv:1009.5055.

  84. Anderson M, Ballard G, Demmel J, Keutzer K. Communication-avoiding qr decomposition for gpus. In: 2011 IEEE international parallel & distributed processing symposium. New York: IEEE; 2011. p. 48–58.

  85. Tang G, Nehorai A. Robust principal component analysis based on low-rank and block-sparse matrix decomposition. In: 2011 45th annual conference on information sciences and systems. New York: IEEE; 2011. p. 1–5.

  86. Klopp O, Lounici K, Tsybakov AB. Robust matrix completion. Probab Theory Related Fields. 2017;169(1–2):523–64.

  87. Xu H, Caramanis C, Sanghavi S. Robust pca via outlier pursuit. In: Advances in neural information processing systems. 2010. p. 2496–504.

  88. Kim J, He Y, Park H. Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework. J Glob Optim. 2014;58(2):285–319.

  89. Lee DD, Seung HS. Learning the parts of objects by non-negative matrix factorization. Nature. 1999;401(6755):788–91.

  90. Van Benthem MH, Keenan MR. Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems. J Chemom J Chemom Soc. 2004;18(10):441–50.

  91. Javed S, Narayanamurthy P, Bouwmans T, Vaswani N. Robust PCA and robust subspace tracking: a comparative evaluation. In: 2018 IEEE statistical signal processing workshop (SSP). New York: IEEE; 2018. p. 836–40.

  92. Lu C, Feng J, Chen Y, Liu W, Lin Z, Yan S. Tensor robust principal component analysis: exact recovery of corrupted low-rank tensors via convex optimization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 5249–57.

  93. Zhang Z, Ely G, Aeron S, Hao N, Kilmer M. Novel methods for multilinear data completion and de-noising based on tensor-SVD. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. p. 3842–9.

  94. Goldfarb D, Qin Z. Robust low-rank tensor recovery: models and algorithms. SIAM J Matrix Anal Appl. 2014;35(1):225–53.

  95. Li Y, Yan J, Zhou Y, Yang J. Optimum subspace learning and error correction for tensors. In: European conference on computer vision. Berlin: Springer; 2010. p. 790–803.

  96. Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Rev. 2009;51(3):455–500.

  97. De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM J Matrix Anal Appl. 2000;21(4):1253–78.

  98. De Lathauwer L, De Moor B, Vandewalle J. On the best rank-1 and rank-(R1, R2, …, RN) approximation of higher-order tensors. SIAM J Matrix Anal Appl. 2000;21(4):1324–42.

  99. Bayat M, Alizad A, Fatemi M. Multi-rate higher order singular value decomposition for enhanced non-contrast ultrasound Doppler imaging of slow flow. In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). New York: IEEE; 2018. p. 1178–81.

  100. Zhang H, Cai J-F, Cheng L, Zhu J. Strongly convex programming for exact matrix completion and robust principal component analysis. 2011. arXiv preprint arXiv:1112.3946.

  101. Chen Z. Multidimensional signal processing for sparse and low-rank problems. PhD thesis, Citeseer. 2014.

  102. Cai J-F, Candès EJ, Shen Z. A singular value thresholding algorithm for matrix completion. SIAM J Optim. 2010;20(4):1956–82.

  103. Lin Z, Ganesh A, Wright J, Wu L, Chen M, Ma Y. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. Coordinated Science Laboratory Report no. UILU-ENG-09-2214, DC-246. 2009.

  104. Errico C, Pierre J, Pezet S, Desailly Y, Lenkei Z, Couture O, Tanter M. Ultrafast ultrasound localization microscopy for deep super-resolution vascular imaging. Nature. 2015;527(7579):499.

  105. Chee AJ, Yu ACH. Receiver-operating characteristic analysis of eigen-based clutter filters for ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2017;65(3):390–9.

  106. Urban A, Dussaux C, Martel G, Brunner C, Mace E, Montaldo G. Real-time imaging of brain activity in freely moving rats using functional ultrasound. Nat Methods. 2015;12(9):873.

  107. Errico C, Osmanski B-F, Pezet S, Couture O, Lenkei Z, Tanter M. Transcranial functional ultrasound imaging of the brain using microbubble-enhanced ultrasensitive Doppler. NeuroImage. 2016;124:752–61.

  108. Christensen-Jeffries K, Browning RJ, Tang M-X, Dunsby C, Eckersley RJ. In vivo acoustic super-resolution and super-resolved velocity mapping using microbubbles. IEEE Trans Med Imaging. 2014;34(2):433–40.

  109. Hintermüller M, Wu T. Robust principal component pursuit via inexact alternating minimization on matrix manifolds. J Math Imaging Vis. 2015;51(3):361–77.

  110. Jensen JA. Field: a program for simulating ultrasound systems. In: 10th Nordic-Baltic conference on biomedical imaging, vol. 4, supplement 1, part 1. 1996. p. 351–3.

  111. Jensen JA, Svendsen NB. Calculation of pressure fields from arbitrarily shaped, apodized, and excited ultrasound transducers. IEEE Trans Ultrason Ferroelectr Freq Control. 1992;39(2):262–7.

  112. Ashikuzzaman M, Gauthier CJ, Rivaz H. Global ultrasound elastography in spatial and temporal domains. IEEE Trans Ultrason Ferroelectr Freq Control. 2019;66(5):876–87.

  113. Freund RM, Grigas P, Mazumder R. An extended Frank-Wolfe method with “in-face” directions, and its application to low-rank matrix completion. SIAM J Optim. 2017;27(1):319–46.

  114. Nayak R, Fatemi M, Alizad A. Adaptive background noise bias suppression in contrast-free ultrasound microvascular imaging. Phys Med Biol. 2019;64(24):245015.

Acknowledgements

This work was supported in part by NSERC Discovery Grant RGPIN 04136. The authors also wish to acknowledge the LRSLibrary, developed by Andrews Sobral, for collecting and sharing open-source DLSM algorithms.

Funding

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant RGPIN 04136.

Author information

Authors and Affiliations

Authors

Contributions

NZ designed the experiments, generated the simulation data, and analyzed the results. MA provided the phantom and in vivo rat data and gave important comments for the revision of the manuscript. HR contributed to the completion of the manuscript and provided critical guidance. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hassan Rivaz.

Ethics declarations

Ethics approval and consent to participate

The in vivo experiment on the rat was carried out at the Animal Care Facility (ACF) of Concordia University. All procedures were approved by the Animal Ethics Committee of Concordia University (#30000259) and were conducted in accordance with the guidelines of the Canadian Council on Animal Care.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1: Table S1.

Information and abbreviations of DLSM algorithms.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Zhang, N., Ashikuzzaman, M. & Rivaz, H. Clutter suppression in ultrasound: performance evaluation and review of low-rank and sparse matrix decomposition methods. BioMed Eng OnLine 19, 37 (2020). https://doi.org/10.1186/s12938-020-00778-z
