Matches in SemOpenAlex for { <https://semopenalex.org/work/W4367694585> ?p ?o ?g. }
Showing items 1 to 73 of 73, with 100 items per page.
- W4367694585 abstract "Researchers commonly believe that neural networks model a high-dimensional space but cannot give a clear definition of this space. What is this space? What is its dimension? And does it have finitely many dimensions? In this paper, we develop a plausible theory on interpreting neural networks in terms of the role of activation functions, and we define a high-dimensional (more precisely, an infinite-dimensional) space that neural networks, including deep-learning networks, could create. We show that the activation function acts as a magnifying function that maps the low-dimensional linear space into an infinite-dimensional space, which can distinctly identify the polynomial approximation of any multivariate continuous function whose variables are the features of the given dataset. Given a dataset in which each example has $d$ features $f_1$, $f_2$, $\cdots$, $f_d$, we believe that neural networks model a special space with infinite dimensions, each of which is a monomial $$\prod_{i_1, i_2, \cdots, i_d} f_1^{i_1} f_2^{i_2} \cdots f_d^{i_d}$$ for some non-negative integers $i_1, i_2, \cdots, i_d \in \mathbb{Z}_{0}^{+} = \{0, 1, 2, 3, \ldots\}$. We term such an infinite-dimensional space a $\textit{Super Space (SS)}$. We see each such dimension as the minimum information unit. Every neuron node that has passed through an activation layer in a neural network is a $\textit{Super Plane (SP)}$, which is actually a polynomial of infinite degree. This $\textit{Super Space}$ is something like a coordinate system, in which every multivariate function can be represented by a $\textit{Super Plane}$. We also show that training NNs could at least be reduced to solving a system of nonlinear equations." @default.
- W4367694585 created "2023-05-03" @default.
- W4367694585 creator A5002732282 @default.
- W4367694585 date "2023-05-01" @default.
- W4367694585 modified "2023-10-14" @default.
- W4367694585 title "Activation Functions Not To Active: A Plausible Theory on Interpreting Neural Networks" @default.
- W4367694585 doi "https://doi.org/10.48550/arxiv.2305.00663" @default.
- W4367694585 hasPublicationYear "2023" @default.
- W4367694585 type Work @default.
- W4367694585 citedByCount "0" @default.
- W4367694585 crossrefType "posted-content" @default.
- W4367694585 hasAuthorship W4367694585A5002732282 @default.
- W4367694585 hasBestOaLocation W43676945851 @default.
- W4367694585 hasConcept C111919701 @default.
- W4367694585 hasConcept C11252640 @default.
- W4367694585 hasConcept C114614502 @default.
- W4367694585 hasConcept C118615104 @default.
- W4367694585 hasConcept C121332964 @default.
- W4367694585 hasConcept C134306372 @default.
- W4367694585 hasConcept C14036430 @default.
- W4367694585 hasConcept C154945302 @default.
- W4367694585 hasConcept C17825722 @default.
- W4367694585 hasConcept C202444582 @default.
- W4367694585 hasConcept C24890656 @default.
- W4367694585 hasConcept C2524010 @default.
- W4367694585 hasConcept C2775997480 @default.
- W4367694585 hasConcept C2778572836 @default.
- W4367694585 hasConcept C33676613 @default.
- W4367694585 hasConcept C33923547 @default.
- W4367694585 hasConcept C38365724 @default.
- W4367694585 hasConcept C41008148 @default.
- W4367694585 hasConcept C50644808 @default.
- W4367694585 hasConcept C78458016 @default.
- W4367694585 hasConcept C86803240 @default.
- W4367694585 hasConcept C90119067 @default.
- W4367694585 hasConceptScore W4367694585C111919701 @default.
- W4367694585 hasConceptScore W4367694585C11252640 @default.
- W4367694585 hasConceptScore W4367694585C114614502 @default.
- W4367694585 hasConceptScore W4367694585C118615104 @default.
- W4367694585 hasConceptScore W4367694585C121332964 @default.
- W4367694585 hasConceptScore W4367694585C134306372 @default.
- W4367694585 hasConceptScore W4367694585C14036430 @default.
- W4367694585 hasConceptScore W4367694585C154945302 @default.
- W4367694585 hasConceptScore W4367694585C17825722 @default.
- W4367694585 hasConceptScore W4367694585C202444582 @default.
- W4367694585 hasConceptScore W4367694585C24890656 @default.
- W4367694585 hasConceptScore W4367694585C2524010 @default.
- W4367694585 hasConceptScore W4367694585C2775997480 @default.
- W4367694585 hasConceptScore W4367694585C2778572836 @default.
- W4367694585 hasConceptScore W4367694585C33676613 @default.
- W4367694585 hasConceptScore W4367694585C33923547 @default.
- W4367694585 hasConceptScore W4367694585C38365724 @default.
- W4367694585 hasConceptScore W4367694585C41008148 @default.
- W4367694585 hasConceptScore W4367694585C50644808 @default.
- W4367694585 hasConceptScore W4367694585C78458016 @default.
- W4367694585 hasConceptScore W4367694585C86803240 @default.
- W4367694585 hasConceptScore W4367694585C90119067 @default.
- W4367694585 hasLocation W43676945851 @default.
- W4367694585 hasOpenAccess W4367694585 @default.
- W4367694585 hasPrimaryLocation W43676945851 @default.
- W4367694585 hasRelatedWork W1512363145 @default.
- W4367694585 hasRelatedWork W1516407058 @default.
- W4367694585 hasRelatedWork W1557101139 @default.
- W4367694585 hasRelatedWork W1585475679 @default.
- W4367694585 hasRelatedWork W2027742133 @default.
- W4367694585 hasRelatedWork W2053955420 @default.
- W4367694585 hasRelatedWork W2090726416 @default.
- W4367694585 hasRelatedWork W2989875800 @default.
- W4367694585 hasRelatedWork W4247080134 @default.
- W4367694585 hasRelatedWork W1592122239 @default.
- W4367694585 isParatext "false" @default.
- W4367694585 isRetracted "false" @default.
- W4367694585 workType "article" @default.
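The abstract's central claim, that an activation function "magnifies" a low-dimensional linear combination of features into an infinite family of monomials $f_1^{i_1} f_2^{i_2} \cdots f_d^{i_d}$, can be sketched with a small symbolic computation. This is an illustrative reconstruction, not code from the paper: the weights `w1`, `w2` and the choice of `exp` as the analytic activation are assumptions made for the example.

```python
# Illustrative sketch (assumed, not from the paper): Taylor-expanding an
# analytic activation applied to a neuron's pre-activation shows how
# monomials f1^i1 * f2^i2 of the abstract's "Super Space" appear.
import sympy as sp

f1, f2 = sp.symbols('f1 f2')   # two dataset features (d = 2)
t = sp.symbols('t')

w1, w2 = 2, 3                  # hypothetical neuron weights
z = w1 * f1 + w2 * f2          # pre-activation: a point in the low-dimensional linear space

# exp(t) stands in for any analytic activation; truncate its series at degree 3.
series = sum(t**k / sp.factorial(k) for k in range(4))  # 1 + t + t^2/2 + t^3/6

# Substituting the linear combination and expanding scatters the neuron's
# output across monomials f1^i1 * f2^i2 -- a truncated "Super Plane".
poly = sp.expand(series.subs(t, z))
print(poly)
```

Each truncation degree adds further monomial dimensions; an untruncated analytic activation would produce the infinite-degree polynomial the abstract calls a Super Plane.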