Deep Learning

Notes from a DataCamp course.

Concepts

Layers

  1. Input layer
  2. Hidden layer(s): these find interactions between the inputs. There can be several hidden layers, each taking its input from the previous layer. The more nodes a layer has, the more interactions it can capture (see the sketch after this list).
  3. Output layer
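
A minimal sketch of this stacking, assuming a hypothetical network with 2 inputs, two hidden layers (3 and 2 nodes) and 1 output; the random weights stand in for values that would normally be learned:

import numpy as np

layer_sizes = [2, 3, 2, 1]  # input, hidden, hidden, output
rng = np.random.default_rng(0)
# One weight matrix per pair of consecutive layers.
layer_weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
for i, w in enumerate(layer_weights):
    print(f"layer {i} -> layer {i + 1}: weight matrix {w.shape}")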

Forward propagation

Each hidden node derives a value from the input data based on its weights (input value * weight).

Example:

Suppose the number of children is an input attribute with weight 2 on a hidden node: with 3 children, 6 is added to that node's value. Every other input attribute contributes to the node in the same way. On another node in the hidden layer, the number of children may have weight -1, so -3 is added there. In general, a node assigns a weight to each of its input nodes, multiplies each input value by its weight, and sums the results to get its own value. The values of the nodes in one layer feed into the next layer the same way: again, value * weight goes into each next-layer node.
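
In code, the arithmetic of this example looks like the following sketch (the variable names are hypothetical):

children = 3
node_a_contribution = children * 2   # weight 2: adds 6 to one hidden node
node_b_contribution = children * -1  # weight -1: adds -3 to another hidden node
print(node_a_contribution, node_b_contribution)  # 6 -3

The block below does the same for a small network with two input attributes, two hidden nodes and one output node: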
import numpy as np
input_data = np.array([2, 3])            # there are 2 input attributes
weights = {'node_0': np.array([1, 1]),   # both input attributes have weight 1
           'node_1': np.array([-1, 1]),  # the first attribute has weight -1, the other 1
           'output': np.array([2, -1])}  # weights for the hidden nodes (node_0 and node_1)
node_0_value = np.sum(input_data * weights['node_0'])  # 2*1 + 3*1 = 5
node_1_value = np.sum(input_data * weights['node_1'])  # 2*-1 + 3*1 = 1
layer1_value = np.array([node_0_value, node_1_value])
output = np.sum(layer1_value * weights['output'])      # 5*2 + 1*-1 = 9
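
Because every layer repeats the same multiply-and-sum step, forward propagation through any number of layers is just a loop over layers. A sketch continuing from the block above (the forward helper is my own, not from the course):

def forward(values, layer_weights):
    # Each node's value is the sum of its input values times its weights.
    return np.array([np.sum(values * w) for w in layer_weights])

hidden = forward(input_data, [weights['node_0'], weights['node_1']])
out = forward(hidden, [weights['output']])
print(hidden, out)  # [5 1] [9]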

Rectified Linear Activation Function

A ReLU can be used to modify a node's value before it is passed to the next layer: relu(x) = max(0, x). The node only fires (has a non-zero output) when its value is positive, and fires with more force the larger that value is.
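
A minimal sketch of that definition; the relu helper is my own name for it:

import numpy as np

def relu(x):
    # Element-wise max(0, x): negative values become 0, positives pass through.
    return np.maximum(0, x)

print(relu(np.array([5, -2])))  # [5 0] -- the node with value -2 does not fire

# Applied to the hidden layer from the example above before the output step:
hidden = relu(np.array([5, 1]))            # both values already positive, so unchanged
print(np.sum(hidden * np.array([2, -1])))  # 5*2 + 1*-1 = 9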