000 | 05223nam a2200505 i 4500 | ||
---|---|---|---|
001 | 6276887 | ||
003 | IEEE | ||
005 | 20190220121650.0 | ||
006 | m o d | ||
007 | cr |n||||||||| | ||
008 | 151223s2003 maua ob 001 eng d | ||
010 | _z 67016501 (print) | ||
020 | _a9780262310925 _qelectronic | ||
020 | _z9780262511483 _qprint | ||
035 | _a(CaBNVSL)mat06276887 | ||
035 | _a(IDAMS)0b000064818c2028 | ||
040 | _aCaBNVSL _beng _erda _cCaBNVSL _dCaBNVSL | ||
050 | 4 | _aQA276 _b.A528 1967eb | |
082 | 0 | 0 | _a519 _219
100 | 1 | _aAlbert, Arthur E., _eauthor. | |
245 | 1 | 0 | _aStochastic approximation and nonlinear regression / _c[by] Arthur E. Albert [and] Leland A. Gardner, Jr.
264 | 1 | _aCambridge, Massachusetts : _bMIT Press, _c[1967] | |
264 | 2 | _a[Piscataway, New Jersey] : _bIEEE Xplore, _c[2003] | |
300 | _a1 PDF (xv, 204 pages) : _billustrations. | ||
336 | _atext _2rdacontent | ||
337 | _aelectronic _2isbdmedia | ||
338 | _aonline resource _2rdacarrier | ||
490 | 1 | _aMIT Press research monograph ; _vno. 42 | |
500 | _a"The MIT Press Classics"--Cover | ||
504 | _aIncludes bibliographical references (p. 200-201). | ||
506 | 1 | _aRestricted to subscribers or individual electronic text purchasers. | |
520 | _aThis monograph addresses the problem of "real-time" curve fitting in the presence of noise, from the computational and statistical viewpoints. It examines the problem of nonlinear regression, where observations are made on a time series whose mean-value function is known except for a vector parameter. In contrast to the traditional formulation, data are imagined to arrive in temporal succession. The estimation is carried out in real time so that, at each instant, the parameter estimate fully reflects all available data. Specifically, the monograph focuses on estimator sequences of the so-called differential correction type. The term "differential correction" refers to the fact that the difference between the components of the updated and previous estimators is proportional to the difference between the current observation and the value that would be predicted by the regression function if the previous estimate were in fact the true value of the unknown vector parameter. The vector of proportionality factors (which is generally time varying and can depend upon previous estimates) is called the "gain" or "smoothing" vector. The main purpose of this research is to relate the large-sample statistical behavior of such estimates (consistency, rate of convergence, large-sample distribution theory, asymptotic efficiency) to the properties of the regression function and the choice of smoothing vectors. Furthermore, consideration is given to the tradeoff that can be effected between computational simplicity and statistical efficiency through the choice of gains. Part I deals with the special case of an unknown scalar parameter, discussing probability-one and mean-square convergence, rates of mean-square convergence, and asymptotic distribution theory of the estimators for various choices of the smoothing sequence. Part II examines the probability-one and mean-square convergence of the estimators in the vector case for various choices of smoothing vectors. Examples are liberally sprinkled throughout the book. Indeed, the last chapter is devoted entirely to the discussion of examples at varying levels of generality. If one views the stochastic approximation literature as a study in the asymptotic behavior of solutions to a certain class of nonlinear first-order difference equations with stochastic driving terms, then the results of this monograph also serve to extend and complement many of the results in that literature, which accounts for the authors' choice of title. The book is written at the first-year graduate level, although this level of maturity is not required uniformly. Certainly the reader should understand the concept of a limit both in the deterministic and probabilistic senses (i.e., almost sure and quadratic mean convergence). This much will assure a comfortable journey through the first fourth of the book. Chapters 4 and 5 require an acquaintance with a few selected central limit theorems. A familiarity with the standard techniques of large-sample theory will also prove useful but is not essential. Part II, Chapters 6 through 9, is couched in the language of matrix algebra, but none of the "classical" results used are deep. The reader who appreciates the elementary properties of eigenvalues, eigenvectors, and matrix norms will feel at home. MIT Press Research Monograph No. 42. | ||
530 | _aAlso available in print. | ||
538 | _aMode of access: World Wide Web | ||
588 | _aDescription based on PDF viewed 12/23/2015. | ||
650 | 0 | _aTime-series analysis. | |
650 | 0 | _aRegression analysis. | |
655 | 0 | _aElectronic books. | |
700 | 1 | _aGardner, Leland A., _ejoint author. | |
710 | 2 | _aIEEE Xplore (Online Service), _edistributor. | |
710 | 2 | _aMIT Press, _epublisher. | |
776 | 0 | 8 | _iPrint version _z9780262511483
830 | 0 | _aMIT Press research monograph ; _vno. 42. | |
856 | 4 | 2 | _3Abstract with links to resource _uhttp://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=6276887
999 | _c39542 _d39542 | ||
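The 520 abstract describes a "differential correction" recursion: each new estimate moves from the previous one in proportion to the residual between the current observation and the value the regression function predicts under the previous estimate. The following Python sketch illustrates that idea in the simplest scalar setting; the function names and the choice of a constant mean-value function with gain 1/n are illustrative assumptions, not code from the monograph.

```python
import random

def differential_correction(observations, regression, gain, theta0=0.0):
    """Scalar differential-correction estimator (illustrative sketch).

    At step n the estimate is updated by gain(n) times the residual
    between the new observation y_n and regression(n, theta), the value
    predicted if the current estimate theta were the true parameter.
    """
    theta = theta0  # hypothetical initial guess
    for n, y in enumerate(observations, start=1):
        residual = y - regression(n, theta)
        theta = theta + gain(n) * residual
    return theta

# Simplest case discussed in Part I: constant mean-value function
# f(n, theta) = theta with gain 1/n, for which the recursion reduces
# exactly to the running sample mean of the observations.
random.seed(0)
obs = [3.0 + random.gauss(0.0, 0.5) for _ in range(5000)]
est = differential_correction(obs, lambda n, th: th, lambda n: 1.0 / n)
```

With the 1/n gain the estimator is consistent for the true mean (here 3.0); other gain sequences trade convergence rate against computational simplicity, which is the tradeoff the abstract highlights.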