Deep Learning

Revision as of 17:21, 1 December 2018 by Hdridder.

From a DataCamp course.

Concepts

Layers

  1. Input layer
  2. Hidden layer(s): these capture interactions between inputs. There can be several hidden layers, each taking its input from the previous layer. The more nodes a layer has, the more interactions can be analyzed.
  3. Output layer
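The stacking of hidden layers described above can be sketched as repeated matrix-vector products; the function name, layer sizes, and all-ones weights below are illustrative, not from the course:

```python
import numpy as np

def forward(layer_weights, input_data):
    """Propagate input through successive layers: each node's value is
    the weighted sum (input * weight) of the previous layer's values."""
    value = input_data
    for w in layer_weights:   # one weight matrix per layer
        value = w @ value     # each row of w holds one node's weights
    return value

# 2 inputs -> 3 hidden nodes -> 2 hidden nodes -> 1 output
layers = [np.ones((3, 2)), np.ones((2, 3)), np.ones((1, 2))]
result = forward(layers, np.array([2, 3]))
```

With input [2, 3] and all weights 1, each layer sums the previous layer's values into every node, giving a final value of 30.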

Forward propagation

Each hidden node derives a value from the input data based on its weights (input value * weight).

Example:

The number of children has weight 2 on a hidden node; if there are 3 children, 6 is added to that node.
Another attribute on the input layer adds a value to the hidden node too.
On another node in the hidden layer the number of children may have weight -1, so -3 is added to that node.
The values of the nodes in one layer are in turn weighted by the nodes of the next layer; again value * weight is put into the next layer's node.
In short: a node assigns a weight to each of its input nodes, the result per input is input * weight, and all the results add up to the value of the node.
import numpy as np

input_data = np.array([2, 3])             # there are 2 input attributes
weights = {'node_0': np.array([1, 1]),    # both input attributes have weight 1
           'node_1': np.array([-1, 1]),   # the first attribute has weight -1, the other 1
           'output': np.array([2, -1])}   # weights for the hidden nodes (node_0 and node_1)
node_0_value = (input_data * weights['node_0']).sum()  # 2*1 + 3*1 = 5
node_1_value = (input_data * weights['node_1']).sum()  # 2*-1 + 3*1 = 1
hidden_layer_values = np.array([node_0_value, node_1_value])
output = (hidden_layer_values * weights['output']).sum()  # 5*2 + 1*-1 = 9
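The children example from the text can be checked numerically the same way; the second attribute and its zero weights are illustrative placeholders, only the children weights (2 and -1) come from the text:

```python
import numpy as np

# input attributes: [number_of_children, some_other_attribute]
input_data = np.array([3, 1])
weights = {'node_0': np.array([2, 0]),    # children weighted 2 on this hidden node
           'node_1': np.array([-1, 0])}   # children weighted -1 on this hidden node
node_0_contrib = (input_data * weights['node_0']).sum()  # 3 children * 2 = 6
node_1_contrib = (input_data * weights['node_1']).sum()  # 3 children * -1 = -3
```

As described above, 3 children add 6 to the first hidden node and -3 to the second.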