【Notes】NLS Algorithms for TOA-Based Positioning
Published: 2019-04-28


NLS 

An earlier post 【】 gave the measurement model of TOA-based positioning. In vector form:

\bold{r_{TOA}} = \bold{f_{TOA}(x)} + \bold{n_{TOA}}                (1)

and in scalar form:

r_{TOA,l} = \sqrt{(x - x_l)^2 + (y - y_l)^2} + n_{TOA,l}, \quad l = 1, 2, \dots, L                (2)

The nonlinear methodology attempts to find the source location directly from Equation (1); it includes the NLS and ML estimators.

Global convergence of these schemes may not be guaranteed because their optimization cost functions are multimodal.

Moreover, the NLS method is simpler and is a practical choice when the noise information is unavailable. On the other hand, the ML estimator can be considered as a weighted version of the NLS method that utilizes the noise covariance, and it is optimum in the sense that its estimation performance can attain the CRLB.

 

The NLS approach minimizes the LS cost function directly constructed from Equation (2), which is presented as follows.

Detailed discussion is provided for TOA-based positioning, while the results can be straightforwardly applied to the remaining three cases, namely, TDOA, RSS, and DOA.

TOA-Based Positioning:

Based on Equations (1) and (2), the NLS cost function, denoted by \bold{J_{NLS,TOA}(\tilde x)}, is

\bold{J_{NLS,TOA}(\tilde x)} = \sum_{l=1}^{L} \left( r_{TOA,l} - \sqrt{(\tilde x - x_l)^2 + (\tilde y - y_l)^2} \right)^2                (3)

                             = \bold{(r_{TOA} - f_{TOA}(\tilde x))^T (r_{TOA} - f_{TOA}(\tilde x))}

 

The NLS position estimate is the \bold{\tilde x} that corresponds to the smallest value of \bold{J_{NLS,TOA}(\tilde x)}; that is,

\bold{\hat x} = \arg \underset{\bold{\tilde x}}{\min}\ \bold{J_{NLS,TOA}(\tilde x)}                (4)

Finding \bold{\hat x} is not a simple task, as there are local minima apart from the global minimum in the 2-D surface of \bold{J_{NLS,TOA}(\tilde x)}.

Basically, there are two directions for solving Equation (4). The first one attempts to perform a global exploration by using grid search or random search techniques, such as the genetic algorithm [14] and particle swarm optimization [15].
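As a quick illustration of the global-exploration direction, here is a minimal Python sketch of a brute-force grid search over the NLS cost of Equation (3). The anchor positions, the source location, the search region, and the 0.1-spacing grid are all assumed values chosen for illustration, and the range measurements are taken noiseless:

```python
import math

# Assumed setup: four anchors (x_l, y_l) and a source at (2.0, 3.0)
# inside a 10 x 10 area.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
source = (2.0, 3.0)
# Noiseless TOA ranges r_l, i.e. Equation (2) with n_{TOA,l} = 0.
r = [math.hypot(source[0] - xl, source[1] - yl) for xl, yl in anchors]

def j_nls(x, y):
    """NLS cost function of Equation (3) at trial position (x, y)."""
    return sum((rl - math.hypot(x - xl, y - yl)) ** 2
               for rl, (xl, yl) in zip(r, anchors))

# Exhaustive evaluation on a uniform 0.1 grid; the grid point with the
# smallest cost is the (coarse) NLS estimate of Equation (4).
best_cost, best_xy = min((j_nls(0.1 * i, 0.1 * j), (0.1 * i, 0.1 * j))
                         for i in range(101) for j in range(101))
```

A grid search like this cannot miss the global minimum by more than the grid spacing, which is why it (and its randomized relatives) is used when the multimodal cost surface defeats local iterations; its price is the exhaustive sweep over the region of interest.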


On the other hand, the second direction corresponds to a local search, which is an iterative algorithm based on an initial position estimate, denoted by \bold{\hat{x}^0}, where 0 refers to the 0th iteration. If \bold{\hat{x}^0} is sufficiently close to \bold{x}, it is expected that \bold{\hat{x}} can be obtained in the iterative procedure.

In this chapter, three commonly used local search schemes, namely, Newton-Raphson, Gauss-Newton, and steepest descent methods, are presented. For more advanced local search techniques of source localization, the interested reader is referred to [16].



Newton-Raphson

The iterative Newton-Raphson procedure for \bold{\hat{x}} is

\bold{\hat{x}^{k+1}} = \bold{\hat{x}^k} - \bold{H(J_{NLS,TOA}(\hat{x}^k))}^{-1} \bold{\triangledown(J_{NLS,TOA}(\hat{x}^k))}                (5)

where \bold{H(J_{NLS,TOA}(\hat{x}^k))} and \bold{\triangledown(J_{NLS,TOA}(\hat{x}^k))} are the corresponding Hessian matrix and gradient vector computed at the kth iteration estimate, namely, \bold{\hat{x}^k}, and they have the forms of

\bold{H(J_{NLS,TOA}(\tilde{x}))} = \begin{bmatrix} \frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{x}^2} & \frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{x} \partial \tilde{y}} \\ \frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{x} \partial \tilde{y}} & \frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{y}^2} \end{bmatrix}, \quad \bold{\triangledown(J_{NLS,TOA}(\tilde{x}))} = \begin{bmatrix} \frac{\partial J_{NLS,TOA}}{\partial \tilde{x}} \\ \frac{\partial J_{NLS,TOA}}{\partial \tilde{y}} \end{bmatrix}                (6)

with, writing d_l = \sqrt{(\tilde{x} - x_l)^2 + (\tilde{y} - y_l)^2},

\frac{\partial J_{NLS,TOA}}{\partial \tilde{x}} = -2 \sum_{l=1}^{L} \frac{(r_{TOA,l} - d_l)(\tilde{x} - x_l)}{d_l}, \quad \frac{\partial J_{NLS,TOA}}{\partial \tilde{y}} = -2 \sum_{l=1}^{L} \frac{(r_{TOA,l} - d_l)(\tilde{y} - y_l)}{d_l}                (7)

\frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{x}^2} = 2 \sum_{l=1}^{L} \left( 1 - \frac{r_{TOA,l}(\tilde{y} - y_l)^2}{d_l^3} \right), \quad \frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{y}^2} = 2 \sum_{l=1}^{L} \left( 1 - \frac{r_{TOA,l}(\tilde{x} - x_l)^2}{d_l^3} \right)                (8)

\frac{\partial^2 J_{NLS,TOA}}{\partial \tilde{x} \partial \tilde{y}} = 2 \sum_{l=1}^{L} \frac{r_{TOA,l}(\tilde{x} - x_l)(\tilde{y} - y_l)}{d_l^3}                (9)
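A minimal Python sketch of the Newton-Raphson iteration of Equation (5), using the gradient and Hessian entries of Equations (7)-(9). The anchor layout, the true source, and the initial estimate \bold{\hat{x}^0} = (3, 4) are assumed values, and the ranges are noiseless, so the iteration should fall into the global minimum:

```python
import math

# Assumed setup: anchors, a true source, and noiseless TOA ranges.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_xy = (2.0, 3.0)
r = [math.hypot(true_xy[0] - xl, true_xy[1] - yl) for xl, yl in anchors]

def grad_hess(x, y):
    """Gradient (7) and Hessian entries (8)-(9) of J_NLS,TOA at (x, y)."""
    gx = gy = hxx = hyy = hxy = 0.0
    for rl, (xl, yl) in zip(r, anchors):
        a, b = x - xl, y - yl
        d = math.hypot(a, b)
        gx += -2.0 * (rl - d) * a / d
        gy += -2.0 * (rl - d) * b / d
        hxx += 2.0 * (1.0 - rl * b * b / d**3)
        hyy += 2.0 * (1.0 - rl * a * a / d**3)
        hxy += 2.0 * rl * a * b / d**3
    return gx, gy, hxx, hyy, hxy

def newton_raphson(x, y, n_iter=20):
    """Iterate Equation (5): x^{k+1} = x^k - H^{-1} * gradient."""
    for _ in range(n_iter):
        gx, gy, hxx, hyy, hxy = grad_hess(x, y)
        det = hxx * hyy - hxy * hxy        # closed-form 2x2 Hessian inverse
        x -= (hyy * gx - hxy * gy) / det
        y -= (hxx * gy - hxy * gx) / det
    return x, y

est = newton_raphson(3.0, 4.0)   # initial guess near the source
```

Note that the 2x2 inverse is written out explicitly; with a poor \bold{\hat{x}^0} the Hessian can be indefinite and the same code may walk toward a local minimum instead.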


Gauss-Newton

For the Gauss-Newton method, the updating rule is

\bold{\hat{x}^{k+1}} = \bold{\hat{x}^k} + \left( \bold{G^T(f_{TOA}(\hat{x}^k)) G(f_{TOA}(\hat{x}^k))} \right)^{-1} \bold{G^T(f_{TOA}(\hat{x}^k))} \left( \bold{r_{TOA}} - \bold{f_{TOA}(\hat{x}^k)} \right)                (10)

where \bold{G(f_{TOA}(\hat{x}^k))} is the Jacobian matrix of \bold{f_{TOA}(\hat{x}^k)} computed at \bold{\hat{x}^k} and has the following expression:

\bold{G(f_{TOA}(\tilde{x}))} = \begin{bmatrix} \frac{\tilde{x} - x_1}{d_1} & \frac{\tilde{y} - y_1}{d_1} \\ \vdots & \vdots \\ \frac{\tilde{x} - x_L}{d_L} & \frac{\tilde{y} - y_L}{d_L} \end{bmatrix}                (11)
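The update of Equation (10) can be sketched in a few lines of Python; the rows of the Jacobian of Equation (11) and the normal equations G^T G are accumulated term by term, and the 2x2 system is solved in closed form. As before, the anchors, true source, noiseless ranges, and initial estimate are assumed illustration values:

```python
import math

# Assumed setup: anchors, a true source, and noiseless TOA ranges.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_xy = (2.0, 3.0)
r = [math.hypot(true_xy[0] - xl, true_xy[1] - yl) for xl, yl in anchors]

def gauss_newton(x, y, n_iter=20):
    """Iterate Equation (10) with the Jacobian rows of Equation (11)."""
    for _ in range(n_iter):
        g11 = g12 = g22 = e1 = e2 = 0.0
        for rl, (xl, yl) in zip(r, anchors):
            d = math.hypot(x - xl, y - yl)
            gx, gy = (x - xl) / d, (y - yl) / d   # one row of G
            err = rl - d                          # entry of r_TOA - f_TOA
            g11 += gx * gx; g12 += gx * gy; g22 += gy * gy
            e1 += gx * err; e2 += gy * err
        det = g11 * g22 - g12 * g12               # invert 2x2 G^T G
        x += (g22 * e1 - g12 * e2) / det
        y += (g11 * e2 - g12 * e1) / det
    return x, y

est = gauss_newton(3.0, 4.0)
```

Comparing with the Newton-Raphson sketch makes the "no second-order differentiation" point concrete: only the first-order rows of G are ever formed here.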


Steepest Descent Method

Finally, the iterative procedure for the steepest descent method is

\bold{\hat{x}^{k+1}} = \bold{\hat{x}^k} - \mu \bold{\triangledown(J_{NLS,TOA}(\hat{x}^k))}                (12)

where μ is a positive constant, which controls the convergence rate and stability.

Generally speaking, a larger value of μ increases the convergence speed and vice versa. In practice, we should choose a sufficiently small μ to ensure algorithm stability.

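A Python sketch of Equation (12), again under the assumed anchor layout, noiseless ranges, and initial estimate used in the earlier sketches; the step size μ = 0.05 and the threshold eps are likewise assumed values, small enough for stability in this geometry:

```python
import math

# Assumed setup: anchors, a true source, and noiseless TOA ranges.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_xy = (2.0, 3.0)
r = [math.hypot(true_xy[0] - xl, true_xy[1] - yl) for xl, yl in anchors]

def gradient(x, y):
    """Gradient of J_NLS,TOA at (x, y), as in Equation (7)."""
    gx = gy = 0.0
    for rl, (xl, yl) in zip(r, anchors):
        d = math.hypot(x - xl, y - yl)
        gx += -2.0 * (rl - d) * (x - xl) / d
        gy += -2.0 * (rl - d) * (y - yl) / d
    return gx, gy

def steepest_descent(x, y, mu=0.05, eps=1e-9, max_iter=10000):
    """Iterate Equation (12) until the step length drops below eps."""
    for _ in range(max_iter):
        gx, gy = gradient(x, y)
        x_new, y_new = x - mu * gx, y - mu * gy
        if math.hypot(x_new - x, y_new - y) < eps:   # stopping criterion
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

est = steepest_descent(3.0, 4.0)
```

Running this needs on the order of a hundred iterations where the two Newton-type sketches need a handful, which illustrates the speed/stability trade-off described above; doubling μ roughly halves the iteration count until μ becomes too large and the iteration diverges.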

Starting with \bold{\hat{x}^0}, the iterative procedure of Equation (5), (10), or (12) is terminated according to a stopping criterion, which indicates convergence. Typical choices of stopping criteria include the number of iterations and

\left\| \bold{\hat{x}^{k+1}} - \bold{\hat{x}^k} \right\| < \varepsilon

where ε is a sufficiently small positive constant.

As a brief comparison, both the Newton-Raphson and Gauss-Newton methods provide fast convergence, but a matrix inverse is required, and the latter is simpler in the sense that second-order differentiation of \bold{J_{NLS,TOA}(\tilde x)} is not involved.

On the other hand, the steepest descent method is stable, but its convergence rate is slow; it can be considered as an approximate form of Equation (5) in which the Hessian matrix is omitted.



This post ends here. It covered three local search methods and pointed to some global methods; the relevant papers can be consulted for the latter.

The next post will discuss the convergence and positioning accuracy of the three methods above and provide some MATLAB code. A reader who is flexible and willing to explore can start from the basic code and write other related variants.

These are fundamental simulation skills. My advisor criticized my MATLAB programming ability as too weak, which I admit; but shame can be turned into resolve, and I will settle down and study quietly.

Reprinted from: http://nijaf.baihongyu.com/
