Matches in SemOpenAlex for { <https://semopenalex.org/work/W268759760> ?p ?o ?g. }
Showing items 1 to 78 of 78, with 100 items per page.
- W268759760 startingPage "17" @default.
- W268759760 abstract "In most forecasting problems the optimal number of nodes appears to be between two and five ... in econometric model selection, some qualitative criteria such as signs of parameters and absolute magnitudes of elasticities may be used, whereas in neural networks, it is usually based on the best fit ... neural networks provide a flexible nonlinear modeling framework that can have significant advantages. Artificial neural network models are beginning to be used in the electric utility industry for short-term forecasting. The neural network framework provides a flexible function that can approximate a wide range of nonlinear processes. In forecasting problems where nonlinearities and variable interactions are important, neural networks can provide significant advantages. Despite these advantages, the topic of neural networks is surrounded by confusion and controversy. In part, this reflects the fact that a different language is used for neural networks than is used in the more familiar (to forecasters) area of econometrics. The main purpose of this article is to bridge this language gap. The discussion is put in Q & A format. Each question focuses on a specific issue or concept involved with specifying, estimating, or understanding neural networks. And the answers draw parallels, where possible, to elements of time series and econometric analysis. So, let's start with some basic questions. NEURAL NETWORK TERMINOLOGY Q. What exactly is an artificial neural network? A. Artificial neural network models are flexible nonlinear models. In the most general form, the type of neural network model typically used in forecasting can be written as follows: Y = F[ H1(X), H2(X), ..., HN(X) ] + u, where Y is a dependent variable, X is a set of explanatory variables, F and the H's are the neural network functions, and u is the model error term. 
In the neural network language: the X's are called inputs; Y is called the output; the H functions are called the hidden layer activation functions; F is called the output layer activation function. Q. Is there a more specific form? A. Yes. In the specific form that is normally used, F is linear in the H functions. The H functions are specified to be S-shaped curves using the logistic function. In this case, the neural network model is described as follows: 1. Single output feed forward neural network 2. With one hidden layer and with multiple nodes in the hidden layer 3. With logistic activation functions in the hidden layer 4. With a linear activation function at the output layer Q. How does this relate to a linear regression model? A. The two main differences are that a linear regression model is linear in its parameters and there are no hidden layer functions in a regression model. The linear model takes the form: Y = XB + u. In neural network terms, this is a single output feed forward system with no hidden layer and with a linear activation function at the output layer. In this sense, the linear regression model is a severely limited special case of the neural network framework. Q. Why is it called a feed forward neural network? A. The best way to answer this question is to draw the classic neural net diagram. As shown in Figure 1, the explanatory variables (X) enter at the bottom in the input layer. The logistic functions (H1 and H2) appear in the hidden layer. And the result (Y) appears in the output layer. The idea is that the inputs feed into the functions in the hidden layer, and there is no feedback. Further, the functions in the hidden layer do not feed sideways into each other. Instead, they feed onward to the output layer. And there is no feedback, delayed or otherwise, from the output layer to the hidden layer. The absence of feedback and the absence of interaction between hidden-layer functions makes it a feed forward system. …" @default.
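The model described in the abstract above — a single-output feed forward network with one hidden layer of logistic activation functions and a linear output layer, Y = F[ H1(X), ..., HN(X) ] + u — can be sketched as follows. This is a minimal illustration, not code from the article; all function and variable names (`feedforward`, `W_hidden`, etc.) and the weight values are assumed for the example.

```python
import numpy as np

def logistic(z):
    """S-shaped hidden-layer activation: 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(X, W_hidden, b_hidden, w_out, b_out):
    """Single-output feed forward network with one hidden layer.

    Hidden layer: H_n(X) = logistic(X @ W_hidden[:, n] + b_hidden[n])
    Output layer: linear in the H's, Y_hat = H @ w_out + b_out
    Inputs feed only onward (input -> hidden -> output); no feedback
    and no sideways links between hidden nodes.
    """
    H = logistic(X @ W_hidden + b_hidden)  # hidden-layer outputs H1..HN
    return H @ w_out + b_out               # linear output activation

# Tiny example: 2 inputs, 3 hidden nodes, 5 observations (illustrative values)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
W_hidden = rng.normal(size=(2, 3))
b_hidden = rng.normal(size=3)
w_out = rng.normal(size=3)
b_out = 0.5
y_hat = feedforward(X, W_hidden, b_hidden, w_out, b_out)
print(y_hat.shape)  # one prediction per observation: (5,)
```

Dropping the hidden layer entirely (Y_hat = X @ B + b) recovers the linear regression special case the abstract describes.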
- W268759760 created "2016-06-24" @default.
- W268759760 creator A5038721760 @default.
- W268759760 date "1997-10-01" @default.
- W268759760 modified "2023-09-26" @default.
- W268759760 title "A Primer on Neural Networks for Forecasting" @default.
- W268759760 cites W2157350782 @default.
- W268759760 cites W2159219354 @default.
- W268759760 hasPublicationYear "1997" @default.
- W268759760 type Work @default.
- W268759760 sameAs 268759760 @default.
- W268759760 citedByCount "8" @default.
- W268759760 countsByYear W2687597602014 @default.
- W268759760 countsByYear W2687597602021 @default.
- W268759760 crossrefType "journal-article" @default.
- W268759760 hasAuthorship W268759760A5038721760 @default.
- W268759760 hasConcept C119857082 @default.
- W268759760 hasConcept C121332964 @default.
- W268759760 hasConcept C138885662 @default.
- W268759760 hasConcept C14036430 @default.
- W268759760 hasConcept C154945302 @default.
- W268759760 hasConcept C158622935 @default.
- W268759760 hasConcept C173079777 @default.
- W268759760 hasConcept C175202392 @default.
- W268759760 hasConcept C177973122 @default.
- W268759760 hasConcept C41008148 @default.
- W268759760 hasConcept C41895202 @default.
- W268759760 hasConcept C50644808 @default.
- W268759760 hasConcept C547195049 @default.
- W268759760 hasConcept C62520636 @default.
- W268759760 hasConcept C78458016 @default.
- W268759760 hasConcept C86803240 @default.
- W268759760 hasConceptScore W268759760C119857082 @default.
- W268759760 hasConceptScore W268759760C121332964 @default.
- W268759760 hasConceptScore W268759760C138885662 @default.
- W268759760 hasConceptScore W268759760C14036430 @default.
- W268759760 hasConceptScore W268759760C154945302 @default.
- W268759760 hasConceptScore W268759760C158622935 @default.
- W268759760 hasConceptScore W268759760C173079777 @default.
- W268759760 hasConceptScore W268759760C175202392 @default.
- W268759760 hasConceptScore W268759760C177973122 @default.
- W268759760 hasConceptScore W268759760C41008148 @default.
- W268759760 hasConceptScore W268759760C41895202 @default.
- W268759760 hasConceptScore W268759760C50644808 @default.
- W268759760 hasConceptScore W268759760C547195049 @default.
- W268759760 hasConceptScore W268759760C62520636 @default.
- W268759760 hasConceptScore W268759760C78458016 @default.
- W268759760 hasConceptScore W268759760C86803240 @default.
- W268759760 hasIssue "3" @default.
- W268759760 hasLocation W2687597601 @default.
- W268759760 hasOpenAccess W268759760 @default.
- W268759760 hasPrimaryLocation W2687597601 @default.
- W268759760 hasRelatedWork W1489327613 @default.
- W268759760 hasRelatedWork W1508957419 @default.
- W268759760 hasRelatedWork W1551520385 @default.
- W268759760 hasRelatedWork W1968049629 @default.
- W268759760 hasRelatedWork W1993498260 @default.
- W268759760 hasRelatedWork W1994275350 @default.
- W268759760 hasRelatedWork W2090965351 @default.
- W268759760 hasRelatedWork W2749307223 @default.
- W268759760 hasRelatedWork W2912753314 @default.
- W268759760 hasRelatedWork W2932914515 @default.
- W268759760 hasRelatedWork W2951603627 @default.
- W268759760 hasRelatedWork W2970769658 @default.
- W268759760 hasRelatedWork W3087166003 @default.
- W268759760 hasRelatedWork W3094965745 @default.
- W268759760 hasRelatedWork W3099056690 @default.
- W268759760 hasRelatedWork W3110762630 @default.
- W268759760 hasRelatedWork W3120371886 @default.
- W268759760 hasRelatedWork W3199255997 @default.
- W268759760 hasRelatedWork W3213639461 @default.
- W268759760 hasRelatedWork W6336603 @default.
- W268759760 hasVolume "16" @default.
- W268759760 isParatext "false" @default.
- W268759760 isRetracted "false" @default.
- W268759760 magId "268759760" @default.
- W268759760 workType "article" @default.