AI:Regression Problems

Revision as of 22:32, 8 April 2019


Learning from a training set: a training set has m samples of x's (input variables, or features) and the corresponding y's (output/target variables).

The learning algorithm finds the hypothesis that best maps the input values to the output values.
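As a concrete sketch of such a training set, here is a toy example for the multiplication table of 4 (the example data and variable names are illustrative, not from the original):

```python
# Toy training set for the table of 4: m samples of inputs x
# and their target outputs y (here y = 4 * x for every sample).
x = [1, 2, 3, 4, 5]      # input variables (features)
y = [4, 8, 12, 16, 20]   # output/target variables
m = len(x)               # number of training samples (m = 5)
print(m)
```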

The hypothesis can be:

Linear regression with 1 variable (Univariate linear regression)

h(x) = θ(0) + θ(1)*x
θ(0) and θ(1) are the hypothesis parameters; each is the weight a feature gets. For the multiplication table, θ(1) is just the table you are working on, so for the table of 4 the formula above has θ(1) = 4.
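The hypothesis above can be sketched as a small function (a minimal sketch; the parameter values for the table-of-4 example are taken from the text):

```python
def h(x, theta0, theta1):
    """Univariate linear hypothesis: h(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

# Table of 4: theta0 = 0, theta1 = 4, so h(3) should give 3 * 4.
print(h(3, 0, 4))  # 12
```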

The aim of the learning algorithm is to choose θ(0) and θ(1) so that the result for all input values is as close as possible to the given output values.

J(θ(0),θ(1)) = 1/(2m) * ∑ (i = 1..m) ( h( x(i) ) − y(i) )²
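This squared-error cost function can be sketched in a few lines (assuming the univariate hypothesis h(x) = θ(0) + θ(1)·x from above; the sample data is illustrative):

```python
def cost(theta0, theta1, xs, ys):
    """Squared-error cost: J = 1/(2m) * sum of (h(x_i) - y_i)^2."""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1, 2, 3]
ys = [4, 8, 12]                   # table of 4
print(cost(0.0, 4.0, xs, ys))     # 0.0 -- a perfect fit gives zero cost
print(cost(0.0, 3.0, xs, ys))     # a worse hypothesis gives a higher cost
```

A lower J means the hypothesis output is closer to the target values over all m samples, which is exactly what choosing θ(0) and θ(1) aims to minimize.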