
Handbook of learning and approximate dynamic programming / [edited by] Jennie Si ... [et al.].

Contributor(s): Si, Jennie | John Wiley & Sons [publisher] | IEEE Xplore (Online service) [distributor].
Material type: Book
Series: IEEE Press series on computational intelligence ; 2
Publisher: Hoboken, New Jersey : IEEE Press, c2004
Distributor: [Piscataway, New Jersey] : IEEE Xplore, [2004]
Description: 1 PDF (xxi, 644 pages) : illustrations
Content type: text
Media type: electronic
Carrier type: online resource
ISBN: 9780470544785
Subject(s): Dynamic programming | Automatic programming (Computer science) | Machine learning | Control theory | Systems engineering | Adaptation model | Aerospace control | Aerospace electronics | Algorithm design and analysis | Analytical models | Approximation algorithms | Approximation methods | Argon | Artificial neural networks | Atmospheric modeling | Automatic test pattern generation | Benchmark testing | Books | Cities and towns | Coils | Communities | Concurrent computing | Conferences | Control systems | Convergence | Data structures | Decision making | Driver circuits | Dynamic scheduling | Eigenvalues and eigenfunctions | Equations | Estimation | Focusing | Function approximation | Fuzzy control | Generators | Helicopters | Heuristic algorithms | Hidden Markov models | History | Humans | Indexes | Learning | Learning systems | Linear programming | Load flow | Loss measurement | Machine learning algorithms | Markov processes | Mathematical model | Measurement | Missiles | Optimal control | Optimization | Power system dynamics | Power system stability | Process control | Programming | Proposals | Propulsion | Recurrent neural networks | Resource management | Roads | Robots | Robust control | Robustness | Rotors | Sections | Security | Sensitivity | Stability analysis | Stability criteria | State estimation | Steady-state | Stochastic systems | Supervised learning | Training | Trajectory | Uncertainty | Vectors | Water heating
Genre/Form: Electronic books
DDC classification: 519.7/03
Online resources: Abstract with links to resource
Also available in print.
Contents:
Foreword.
1. ADP: goals, opportunities and principles.
Part I: Overview.
2. Reinforcement learning and its relationship to supervised learning.
3. Model-based adaptive critic designs.
4. Guidance in the use of adaptive critics for control.
5. Direct neural dynamic programming.
6. The linear programming approach to approximate dynamic programming.
7. Reinforcement learning in large, high-dimensional state spaces.
8. Hierarchical decision making.
Part II: Technical advances.
9. Improved temporal difference methods with linear function approximation.
10. Approximate dynamic programming for high-dimensional resource allocation problems.
11. Hierarchical approaches to concurrency, multiagency, and partial observability.
12. Learning and optimization - from a system theoretic perspective.
13. Robust reinforcement learning using integral-quadratic constraints.
14. Supervised actor-critic reinforcement learning.
15. BPTT and DAC - a common framework for comparison.
Part III: Applications.
16. Near-optimal control via reinforcement learning.
17. Multiobjective control problems by reinforcement learning.
18. Adaptive critic based neural network for control-constrained agile missile.
19. Applications of approximate dynamic programming in power systems control.
20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings.
21. Helicopter flight control using direct neural dynamic programming.
22. Toward dynamic stochastic optimal power flow.
23. Control, optimization, security, and self-healing of benchmark power systems.
Summary: A complete resource on approximate dynamic programming (ADP), including online simulation code. Provides a tutorial that readers can use to start implementing the learning algorithms presented in the book. Includes ideas, directions, and recent results on current research issues, and addresses applications where ADP has been successfully implemented. The contributors are leading researchers in the field.
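As a purely illustrative aside (not taken from the book or its simulation code), the kind of learning algorithm surveyed in the chapters on reinforcement learning and temporal difference methods can be sketched in a few lines of Python. The random-walk environment, one-hot features, step size, and episode count below are assumptions made only for this example: TD(0) policy evaluation with linear function approximation.

import numpy as np

rng = np.random.default_rng(0)

n_states = 5                     # states 0..4; states 0 and 4 are terminal (assumed toy problem)
features = np.eye(n_states)      # one-hot features, a simple case of linear function approximation
w = np.zeros(n_states)           # weights: V(s) is approximated by w @ features[s]
alpha, gamma = 0.1, 1.0          # step size and discount factor (assumed values)

def step(s):
    # Random-walk dynamics: move left or right with equal probability;
    # reward +1 only on reaching the right terminal state.
    s_next = s + rng.choice([-1, 1])
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next in (0, n_states - 1)
    return s_next, reward, done

for episode in range(2000):
    s = n_states // 2            # every episode starts in the middle state
    done = False
    while not done:
        s_next, r, done = step(s)
        v_next = 0.0 if done else w @ features[s_next]
        td_error = r + gamma * v_next - w @ features[s]
        w += alpha * td_error * features[s]    # TD(0) weight update
        s = s_next

print("Estimated values of states 1-3:", np.round(w[1:-1], 3))  # approx 0.25, 0.5, 0.75

This is only a minimal sketch of the technique; the book's chapters develop improved temporal difference methods, actor-critic schemes, and the applications listed in the contents above.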
No physical items for this record

Includes bibliographical references and index.

Restricted to subscribers or individual electronic text purchasers.

Also available in print.

Mode of access: World Wide Web

Description based on PDF viewed 12/21/2015.
