Neural networks : (Record no. 12987)

MARC details
000 - Leader
Fixed-length control field 07165nam a2200265 a 4500
003 - Control Number Identifier
Control number identifier AR-sfUTN
008 - Fixed-Length Data Elements - General Information
Fixed-length data elements 170717s1999 ||||| |||| 00| 0 eng d
020 ## - ISBN
ISBN 0132733501
040 ## - Cataloging Source
Transcribing agency AR-sfUTN
041 ## - Language Code
Language code of text eng
080 ## - Universal Decimal Classification Number
Universal Decimal Classification number 004.85 H331
UDC edition 2000
100 1# - Main Entry - Personal Name
Personal name Haykin, Simon
245 10 - Title Statement
Title Neural networks :
Remainder of title a comprehensive foundation /
Statement of responsibility Simon Haykin.
250 ## - Edition Statement
Edition statement 2nd ed.
260 ## - Publication, Distribution, etc. (Imprint)
Place of publication, distribution, etc. Upper Saddle River, New Jersey :
Name of publisher, distributor, etc. Prentice-Hall,
Date of publication, distribution, etc. 1999
300 ## - Physical Description
Extent 842 p.
336 ## - Content Type
Source rdacontent
Content type term text
Content type code txt
337 ## - Media Type
Source rdamedia
Media type term unmediated
Media type code n
338 ## - Carrier Type
Source rdacarrier
Carrier type term volume
Carrier type code nc
505 80 - Formatted Contents Note
Formatted contents note CONTENTS:
1. Introduction 1: What Is a Neural Network? 1 -- Human Brain 6 -- Models of a Neuron 10 -- Neural Networks Viewed as Directed Graphs 15 -- Feedback 18 -- Network Architectures 21 -- Knowledge Representation 23 -- Artificial Intelligence and Neural Networks 34 -- Historical Notes 38
2. Learning Processes 50: Error-Correction Learning 51 -- Memory-Based Learning 53 -- Hebbian Learning 55 -- Competitive Learning 58 -- Boltzmann Learning 60 -- Credit Assignment Problem 62 -- Learning with a Teacher 63 -- Learning without a Teacher 64 -- Learning Tasks 66 -- Memory 75 -- Adaptation 83 -- Statistical Nature of the Learning Process 84 -- Statistical Learning Theory 89 -- Probably Approximately Correct Model of Learning 102
3. Single Layer Perceptrons 117: Adaptive Filtering Problem 118 -- Unconstrained Optimization Techniques 121 -- Linear Least-Squares Filters 126 -- Least-Mean-Square Algorithm 128 -- Learning Curves 133 -- Learning Rate Annealing Techniques 134 -- Perceptron 135 -- Perceptron Convergence Theorem 137 -- Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment 143
4. Multilayer Perceptrons 156: Some Preliminaries 159 -- Back-Propagation Algorithm 161 -- Summary of the Back-Propagation Algorithm 173 -- XOR Problem 175 -- Heuristics for Making the Back-Propagation Algorithm Perform Better 178 -- Output Representation and Decision Rule 184 -- Computer Experiment 187 -- Feature Detection 199 -- Back-Propagation and Differentiation 202 -- Hessian Matrix 204 -- Generalization 205 -- Approximations of Functions 208 -- Cross-Validation 213 -- Network Pruning Techniques 218 -- Virtues and Limitations of Back-Propagation Learning 226 -- Accelerated Convergence of Back-Propagation Learning 233 -- Supervised Learning Viewed as an Optimization Problem 234 -- Convolutional Networks 245
5. Radial-Basis Function Networks 256: Cover's Theorem on the Separability of Patterns 257 -- Interpolation Problem 262 -- Supervised Learning as an Ill-Posed Hypersurface Reconstruction Problem 265 -- Regularization Theory 267 -- Regularization Networks 277 -- Generalized Radial-Basis Function Networks 278 -- XOR Problem (Revisited) 282 -- Estimation of the Regularization Parameter 284 -- Approximation Properties of RBF Networks 290 -- Comparison of RBF Networks and Multilayer Perceptrons 293 -- Kernel Regression and Its Relation to RBF Networks 294 -- Learning Strategies 298 -- Computer Experiment 305
6. Support Vector Machines 318: Optimal Hyperplane for Linearly Separable Patterns 319 -- Optimal Hyperplane for Nonseparable Patterns 326 -- How to Build a Support Vector Machine for Pattern Recognition 329 -- Example: XOR Problem (Revisited) 335 -- Computer Experiment 337 -- ε-Insensitive Loss Function 339 -- Support Vector Machines for Nonlinear Regression 340
7. Committee Machines 351: Ensemble Averaging 353 -- Computer Experiment I 355 -- Boosting 357 -- Computer Experiment II 364 -- Associative Gaussian Mixture Model 366 -- Hierarchical Mixture of Experts Model 372 -- Model Selection Using a Standard Decision Tree 374 -- A Priori and A Posteriori Probabilities 377 -- Maximum Likelihood Estimation 378 -- Learning Strategies for the HME Model 380 -- EM Algorithm 382 -- Application of the EM Algorithm to the HME Model 383
8. Principal Components Analysis 392: Some Intuitive Principles of Self-Organization 393 -- Principal Components Analysis 396 -- Hebbian-Based Maximum Eigenfilter 404 -- Hebbian-Based Principal Components Analysis 413 -- Computer Experiment: Image Coding 419 -- Adaptive Principal Components Analysis Using Lateral Inhibition 422 -- Two Classes of PCA Algorithms 430 -- Batch and Adaptive Methods of Computation 430 -- Kernel-Based Principal Components Analysis 432
9. Self-Organizing Maps 443: Two Basic Feature-Mapping Models 444 -- Self-Organizing Map 446 -- Summary of the SOM Algorithm 453 -- Properties of the Feature Map 454 -- Computer Simulations 461 -- Learning Vector Quantization 466 -- Computer Experiment: Adaptive Pattern Classification 468 -- Hierarchical Vector Quantization 470 -- Contextual Maps 474
10. Information-Theoretic Models 484: Entropy 485 -- Maximum Entropy Principle 490 -- Mutual Information 492 -- Kullback-Leibler Divergence 495 -- Mutual Information as an Objective Function To Be Optimized 498 -- Maximum Mutual Information Principle 499 -- Infomax and Redundancy Reduction 503 -- Spatially Coherent Features 506 -- Spatially Incoherent Features 508 -- Independent Components Analysis 510 -- Computer Experiment 523 -- Maximum Likelihood Estimation 525 -- Maximum Entropy Method 529
11. Stochastic Machines and Their Approximates Rooted in Statistical Mechanics 545: Statistical Mechanics 546 -- Markov Chains 548 -- Metropolis Algorithm 556 -- Simulated Annealing 558 -- Gibbs Sampling 561 -- Boltzmann Machine 562 -- Sigmoid Belief Networks 569 -- Helmholtz Machine 574 -- Mean-Field Theory 576 -- Deterministic Boltzmann Machine 578 -- Deterministic Sigmoid Belief Networks 579 -- Deterministic Annealing 586
12. Neurodynamic Programming 603: Markovian Decision Processes 604 -- Bellman's Optimality Criterion 607 -- Policy Iteration 610 -- Value Iteration 612 -- Neurodynamic Programming 617 -- Approximate Policy Iteration 618 -- Q-Learning 622 -- Computer Experiment 627
13. Temporal Processing Using Feedforward Networks 635: Short-term Memory Structures 636 -- Network Architectures for Temporal Processing 640 -- Focused Time Lagged Feedforward Networks 643 -- Computer Experiment 645 -- Universal Myopic Mapping Theorem 646 -- Spatio-Temporal Models of a Neuron 648 -- Distributed Time Lagged Feedforward Networks 651 -- Temporal Back-Propagation Algorithm 652
14. Neurodynamics 664: Dynamical Systems 666 -- Stability of Equilibrium States 669 -- Attractors 674 -- Neurodynamical Models 676 -- Manipulation of Attractors as a Recurrent Network Paradigm 680 -- Hopfield Models 680 -- Computer Experiment I 696 -- Cohen-Grossberg Theorem 701 -- Brain-State-in-a-Box Model 703 -- Computer Experiment II 709 -- Strange Attractors and Chaos 709 -- Dynamic Reconstruction of a Chaotic Process 714 -- Computer Experiment III 718
15. Dynamically Driven Recurrent Networks 732: Recurrent Network Architectures 733 -- State-Space Model 739 -- Nonlinear Autoregressive with Exogenous Inputs Model 746 -- Computational Power of Recurrent Networks 747 -- Learning Algorithms 750 -- Back-Propagation Through Time 751 -- Real-Time Recurrent Learning 756 -- Kalman Filters 762 -- Decoupled Extended Kalman Filters 765 -- Computer Experiment 770 -- Vanishing Gradients in Recurrent Networks 773 -- System Identification 776 -- Model-Reference Adaptive Control 780
Epilogue 790 -- Bibliography 796 -- Index 837
650 ## - Subject Added Entry - Topical Term
Topical term NEURAL NETWORKS
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type Book
Classification scheme Universal Decimal Classification
Holdings
Not for loan Reference only
Home library Facultad Regional Santa Fe - Biblioteca "Rector Comodoro Ing. Jorge Omar Conca"
Current library Facultad Regional Santa Fe - Biblioteca "Rector Comodoro Ing. Jorge Omar Conca"
Date acquired 02/02/2018
Source of acquisition Purchase, Exp. 23/2010
Inventory number 10439
Koha full call number 004.85 H331
Barcode 10439
Date last seen 02/02/2018
Price effective from 02/02/2018
Koha item type Book