Dynamic programming algorithm for training functional networks

Emad A. El-Sebakhy*, Salahadin A. Mohammed, Moustafa A. Elshafei

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

The paper proposes a dynamic programming algorithm for training functional networks. The algorithm treats each node as a state, and the training problem is formulated as finding the sequence of states that minimizes the sum of squared approximation errors. Each node is optimized with respect to its corresponding neural functions and its estimated neuron functions. The dynamic programming algorithm searches for the best path from the final-layer nodes back to the input layer that minimizes an optimization criterion. Finally, in a pruning stage, unused nodes are deleted. The output layer can be taken as a summation node over a linearly independent family of functions, such as polynomial, exponential, or Fourier bases. The algorithm is demonstrated on two examples and compared with other algorithms common in the computer science and statistics communities.
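To illustrate the general idea of treating each node as a state and searching for a minimum-cost path across layers, the following is a minimal, generic layered dynamic program. It is only a sketch of the path-finding step, not the paper's algorithm: the layer contents, the transition-cost function, and the example values below are all hypothetical.

```python
def best_path(layers, cost):
    """Generic layered dynamic program (illustrative sketch only).

    layers: list of lists, one list of candidate states ("nodes") per layer.
    cost(prev_state, state): transition cost, e.g. a squared-error term.
    Returns (total_cost, path) minimizing the summed transition costs.
    """
    # best[i][s] = (minimum cost to reach state s in layer i, backpointer)
    best = [{s: (0.0, None) for s in layers[0]}]
    for i in range(1, len(layers)):
        row = {}
        for s in layers[i]:
            # Pick the cheapest predecessor in the previous layer.
            prev = min(layers[i - 1],
                       key=lambda p: best[i - 1][p][0] + cost(p, s))
            row[s] = (best[i - 1][prev][0] + cost(prev, s), prev)
        best.append(row)
    # Start from the cheapest final-layer state and backtrack to the input.
    i = len(layers) - 1
    end = min(layers[-1], key=lambda s: best[-1][s][0])
    total = best[-1][end][0]
    path = [end]
    while i > 0:
        end = best[i][end][1]
        path.append(end)
        i -= 1
    return total, path[::-1]


# Hypothetical usage: choose between two candidate middle-layer functions,
# with made-up costs standing in for squared-error contributions.
layers = [["x"], ["sin", "poly"], ["sum"]]
costs = {("x", "sin"): 0.5, ("x", "poly"): 0.2,
         ("sin", "sum"): 0.1, ("poly", "sum"): 0.3}
total, path = best_path(layers, lambda p, s: costs[(p, s)])
# path is ["x", "poly", "sum"] with total 0.5
```

A pruning stage, as described in the abstract, would then delete any candidate nodes that do not appear on the selected path.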

Original language: English
Title of host publication: Proceedings of the 2007 International Conference on Artificial Intelligence, ICAI 2007
Pages: 801-805
Number of pages: 5
State: Published - 2007

Publication series

Name: Proceedings of the 2007 International Conference on Artificial Intelligence, ICAI 2007
Volume: 2

Keywords

  • Dynamic programming
  • Functional networks
  • Interpolation
  • Minimum description length

ASJC Scopus subject areas

  • Artificial Intelligence
