Andrew Ng's Machine Learning Notes - (1) Linear Regression with One Variable
Model Representation
Notation
- $x^{(i)}$ denotes the input variables, called input features
- $y^{(i)}$ denotes the output variable, called the target variable
- $(x^{(i)}, y^{(i)})$ is called a training example
- $\{(x^{(i)}, y^{(i)});\ i = 1, \dots, m\}$ is called a training set

⚠️【Note】The superscript "$(i)$" in the notation is simply an index into the training set.

- $X$ denotes the space of input values
- $Y$ denotes the space of output values

In many examples, $X = Y = \mathbb{R}$.
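To make the notation concrete, here is a minimal sketch of mine (the names `training_set` and `m` as Python variables are not from the course) showing a training set of $m$ examples:

```python
# a training set of m training examples; each pair is (x^(i), y^(i))
training_set = [(1.0, 1.5), (2.0, 2.9), (3.0, 4.4)]

m = len(training_set)       # m: the number of training examples
x_1, y_1 = training_set[0]  # the superscript (i) is just an index
                            # (the course counts from 1, Python from 0)
print(m, x_1, y_1)          # 3 1.0 1.5
```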
A more formal description of supervised learning
Given a training set, to learn a function $h: X \to Y$ so that $h(x)$ is a "good" predictor for the corresponding value of $y$. For historical reasons, this function $h$ is called a hypothesis.
Pictorially:
With our new notation, the categories of supervised learning problems can be described like this:

- regression problem: when the target variable $y$ is continuous.
- classification problem: when $y$ can take on only a small number of discrete values.
Hypothesis & Linear Regression
To represent the hypothesis $h$, for example, if we want to fit a data set like this:
Simply, we may take $h_\theta(x) = \theta_0 + \theta_1 x$. This means that we are going to predict that $y$ is a linear function of $x$.
Plotting this in the picture, it will be something like this:
And what this function is doing is predicting that $y$ is some straight-line function of $x$.
In this case, this model is called linear regression with one variable (or univariate linear regression).
P.S. We can also fit more complicated (perhaps non-linear) functions, but this linear case is the simplest.
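As a quick sketch (my own helper, not code from the course), this hypothesis is a one-line Python function:

```python
def hypothesis(x, theta_0, theta_1):
    # h_theta(x) = theta_0 + theta_1 * x
    return theta_0 + theta_1 * x

# e.g. with theta_0 = 1 and theta_1 = 2: h(3) = 1 + 2 * 3 = 7
print(hypothesis(3, 1, 2))  # 7
```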
Cost Function
Well, now we have got this:
To get a "good" hypothesis function, we need to choose $\theta_0$ and $\theta_1$ so that $h_\theta(x)$ is close to $y$ for our training examples $(x, y)$. A cost function is of great help with this goal.
Cost function
A cost function can help us measure the accuracy of our hypothesis function.
It takes an average difference of all the results of the hypothesis with inputs from the x's and the actual output y's:

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
The function $J(\theta_0, \theta_1)$ is exactly our cost function. Take a look at it:

- $h_\theta(x^{(i)}) - y^{(i)}$ shows the difference between the predicted value and the actual value.
- $\frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$ offers the mean of the squares of $h_\theta(x^{(i)}) - y^{(i)}$.
- The mean is halved ($\times \frac{1}{2}$) as a convenience for the computation of the gradient descent, as the derivative term of the square function will cancel out the $\frac{1}{2}$.
P.S. This function is otherwise called the "Squared error function", or "Mean squared error".
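Translated directly into Python, the cost function might look like this minimal sketch (the function name `cost` is mine):

```python
def cost(dataset, theta_0, theta_1):
    # J(theta_0, theta_1) = (1 / 2m) * sum_i (h(x^(i)) - y^(i))^2
    m = len(dataset)
    squared_error = sum((theta_0 + theta_1 * x - y) ** 2 for x, y in dataset)
    return squared_error / (2 * m)

# a hypothesis that fits every point perfectly gives J = 0:
print(cost([(1, 1), (2, 2), (3, 3)], 0, 1))  # 0.0
```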
Goal
To make $h_\theta(x)$ close to $y$, we are just expected to minimize our cost function by adjusting the values of $\theta_0$, $\theta_1$.
We describe this goal like this:

$$\underset{\theta_0,\ \theta_1}{\text{minimize}}\ J(\theta_0, \theta_1)$$

Or, more directly:

$$\underset{\theta_0,\ \theta_1}{\text{minimize}}\ \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

And if we are working with a linear regression with one variable, the hypothesis is $h_\theta(x) = \theta_0 + \theta_1 x$.
⚠️【Note】Hypothesis function vs. cost function

- For fixed $\theta_0, \theta_1$, $h_\theta(x)$ is a function of $x$.
- $J(\theta_0, \theta_1)$ is a function of the parameters $\theta_0, \theta_1$.
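A small illustration of this distinction (my own, not from the course): with the parameters held fixed, $h$ varies with $x$; with the data held fixed, $J$ varies with the parameters:

```python
data = [(1, 1), (2, 2), (3, 3)]

# for fixed theta_0 = 0, theta_1 = 1, h is a function of x:
h = lambda x: 0 + 1 * x

# J is a function of the parameters; here the data set is what is fixed:
J = lambda t0, t1: sum((t0 + t1 * x - y) ** 2 for x, y in data) / (2 * len(data))

print(h(2.5), J(0, 1))  # 2.5 0.0
```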
Cost Function - Intuition I
Let's draw some pictures for a better understanding of what the values of the cost function mean.
To get started, we are going to work with a simplified hypothesis function: $h_\theta(x) = \theta_1 x$ (i.e. $\theta_0 = 0$).
Our training data set is scattered on the x-y plane. We are trying to make a straight line (defined by $h_\theta(x) = \theta_1 x$) which passes through these scattered data points.
Our objective is to get the best possible line. The best possible line will be such that the average squared vertical distance of the scattered points from the line (i.e. $J(\theta_1)$) will be the least.
Ideally, the line should pass through all the points of our training data set. In such a case, the value of the cost function $J(\theta_1)$ will be 0.
E.g. an ideal situation where $J(\theta_1) = 0$:
Let this be our training set: only three points, (1, 1), (2, 2) and (3, 3).
Now, we try setting $\theta_1$ to different values: -0.5, 0, 0.5, 1, 1.5, ...
When $\theta_1 = 1$, we get a slope of 1 which goes through every single data point in our model. Conversely, when $\theta_1 = 0.5$, we see the vertical distance from our fit to the data points increase:
By doing this, we get a series of graphs of $h_\theta(x)$ in the x-y plane, which yield the following $J(\theta_1)$ vs. $\theta_1$ graph:
Thus, as a goal, we should try to minimize the cost function. In this case, $\theta_1 = 1$ is our global minimum.
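These values can be checked numerically; here is a small sketch of mine that evaluates the simplified cost $J(\theta_1)$ on the three-point training set above:

```python
data = [(1, 1), (2, 2), (3, 3)]

def J(theta_1):
    # simplified cost with theta_0 fixed to 0, i.e. h(x) = theta_1 * x
    return sum((theta_1 * x - y) ** 2 for x, y in data) / (2 * len(data))

for t in (-0.5, 0, 0.5, 1, 1.5):
    print(t, round(J(t), 3))
# -0.5 5.25
# 0 2.333
# 0.5 0.583
# 1 0.0        <- the global minimum
# 1.5 0.583
```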
Cost Function - Intuition II
Unlike before, this time we won't continue with the simplified hypothesis; we are going to keep both of the parameters $\theta_0$ and $\theta_1$. So the hypothesis function will be $h_\theta(x) = \theta_0 + \theta_1 x$.
Here's our problem formulation as usual:
Same as last time, we want to understand the hypothesis and the cost function via a series of graphs. However, this time we'd like to use a contour plot to describe our $J(\theta_0, \theta_1)$.
A contour plot is a graph that contains many contour lines.
A contour line of a two variable function has a constant value at all points of the same line.
An example:
Taking any color and going along the 'circle', one would expect to get the same value of the cost function.
To approach our optimization objective, we can try setting the parameters to different values.
As $\theta_0$ and $\theta_1$ improve, the value of $J(\theta_0, \theta_1)$ in the contour plot gets closer to the center, thus reducing the cost function error. Now we get a result that fits the data better:
Minimizing the cost function as much as possible, the results of $\theta_1$ and $\theta_0$ tend to be around 0.12 and 250 respectively. Plotting those values on the graph to the right seems to put our point in the center of the innermost 'circle'.
Obviously, we don't want to write software that just plots a contour plot and then manually read off the numbers to reach our goal. We want an efficient algorithm for automatically finding the values of $\theta_0$ and $\theta_1$ that minimize the cost function $J$. Actually, the gradient descent algorithm that we will talk about next works great on this problem.
Gradient Descent
There is an algorithm called gradient descent for minimizing the cost function $J$. And we can use it not only in linear regression; it's actually used all over the place in machine learning.
Let's talk about gradient descent for minimizing some arbitrary function $J(\theta_0, \theta_1)$. So here's the problem setup:
We put $\theta_0$ on the x axis and $\theta_1$ on the y axis, with the cost function $J(\theta_0, \theta_1)$ on the vertical z axis. The points on our graph will be the result of the cost function using our hypothesis with those specific theta parameters. The graph below depicts such a setup:
We will know that we have succeeded when our cost function is at the very bottom of the pits in our graph, i.e. when its value is the minimum. The red arrows show the minimum points in the graph.
Imagine that we are physically standing at a point on a hill. In gradient descent, what we're going to do is spin 360 degrees around, look all around us, and ask: "If I were to take a little step in some direction, and I want to go down the hill as quickly as possible, what direction should I take?" Then we take a step in that direction, and repeat until we converge to a local minimum, like the black line in the picture above shows.
Notice that if we start gradient descent from different points, we may reach different local optima.
Mathematically, this is the definition of the gradient descent algorithm:
Gradient Descent Algorithm
repeat until convergence {

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1) \quad (\text{for } j = 0 \text{ and } j = 1)$$

}
The $\alpha$ is a number called the learning rate. It basically controls how big a step we take downhill with gradient descent.
At each iteration $j$, one should simultaneously update the parameters $\theta_1, \theta_2, \dots, \theta_n$. Updating a specific parameter prior to calculating another one on the $j^{(th)}$ iteration would yield a wrong implementation:
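In code, the correct implementation evaluates every partial derivative at the old parameter values before assigning any of them; here is a minimal sketch with stand-in derivative functions (hypothetical, just to show the update order):

```python
alpha = 0.1
theta_0, theta_1 = 1.0, 2.0

def d_theta_0(t0, t1):
    # stand-in for the partial derivative dJ/dtheta_0 (hypothetical)
    return t0 + t1

def d_theta_1(t0, t1):
    # stand-in for the partial derivative dJ/dtheta_1 (hypothetical)
    return t0 * t1

# correct: both derivatives are evaluated at the OLD (theta_0, theta_1)
temp_0 = theta_0 - alpha * d_theta_0(theta_0, theta_1)
temp_1 = theta_1 - alpha * d_theta_1(theta_0, theta_1)
theta_0, theta_1 = temp_0, temp_1

# wrong: assigning theta_0 first would make d_theta_1 see a half-updated state
```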
Gradient Descent Intuition
Let's explore the scenario where we use one parameter $\theta_1$ and plot its cost function to implement gradient descent. Our formula for a single parameter is: repeat until convergence:

$$\theta_1 := \theta_1 - \alpha \frac{d}{d\theta_1} J(\theta_1)$$
Regardless of the sign of the slope $\frac{d}{d\theta_1} J(\theta_1)$, $\theta_1$ eventually converges to its minimum value.
The following graph shows that when the slope is negative, the value of $\theta_1$ increases, and when it is positive, the value of $\theta_1$ decreases:
On a side note, we should adjust our parameter $\alpha$ to ensure that the gradient descent algorithm converges in a reasonable time. Failure to converge, or taking too much time to obtain the minimum value, implies that our step size is wrong.
How does gradient descent converge with a fixed step size $\alpha$? The intuition behind the convergence is that $\frac{d}{d\theta_1} J(\theta_1)$ approaches 0 as we approach the bottom of our convex function. At the minimum, the derivative will always be 0, and thus we get:

$$\theta_1 := \theta_1 - \alpha \cdot 0 = \theta_1$$
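A quick numerical illustration of my own (using the toy function $J(\theta_1) = \theta_1^2$, whose derivative is $2\theta_1$): the steps shrink on their own as $\theta_1$ approaches the minimum, even though $\alpha$ stays fixed:

```python
alpha = 0.1
theta_1 = 4.0  # an arbitrary starting point

for _ in range(5):
    step = alpha * 2 * theta_1  # dJ/dtheta_1 = 2 * theta_1 for J = theta_1 ** 2
    theta_1 -= step
    print(round(theta_1, 4), round(step, 4))
# 3.2 0.8
# 2.56 0.64
# 2.048 0.512
# 1.6384 0.4096
# 1.3107 0.3277
```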
Gradient Descent for Linear Regression
Now we have learnt gradient descent, the linear regression model, and the squared error cost function. This time, we are going to put gradient descent together with our cost function; that will give us an algorithm for linear regression, fitting a straight line to our data.
This is what we worked out:
We are going to apply the gradient descent algorithm to minimize our squared error cost function.
The key term is the derivative term:

$$\frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$

$$\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) \cdot x^{(i)}$$

Plug them back into our gradient descent algorithm:
repeat until convergence {

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) \cdot x^{(i)}$$

}
Notice: update $\theta_0$ and $\theta_1$ simultaneously.
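Before the full implementation in the Experiment section below, here is a minimal sketch of mine running these two update rules (with a crude fixed iteration count instead of a real convergence test):

```python
data = [(1, 1), (2, 2), (3, 3)]
m = len(data)
alpha = 0.1
theta_0, theta_1 = 0.0, 0.0

for _ in range(1000):  # "repeat until convergence", crudely bounded
    errors = [(theta_0 + theta_1 * x - y, x) for x, y in data]
    grad_0 = sum(e for e, _ in errors) / m
    grad_1 = sum(e * x for e, x in errors) / m
    # simultaneous update of both parameters
    theta_0, theta_1 = theta_0 - alpha * grad_0, theta_1 - alpha * grad_1

print(round(theta_0, 3), round(theta_1, 3))  # approaches 0.0 and 1.0
```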
The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate. So, this is simply gradient descent on the original cost function J.
This method looks at every example in the entire training set on every step, and is called batch gradient descent.
Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global optimum and no other local optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.
The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48, 30). The x's in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through as it converged to its minimum.
Experiment
As a supplement, I wrote a very easy-to-understand, but surprisingly low-performance, univariate gradient descent in Python:
(Limited by space, I removed many comments; the full code is here: Gist)
```python
import math
import random


class LinearRegressionWithOneVariable(object):
    def __init__(self, training_set):
        self.training_set = training_set  # [(x: int, y: int), ...]
        self.theta = [0, 0]

    def _hypothesis(self, x, theta):
        # h(x) = theta_0 + theta_1 * x
        return theta[0] + theta[1] * x

    def _cost(self, dataset, theta):
        # J(theta) = (1 / 2m) * sum of squared errors
        s = 0
        for i in dataset:
            s += (self._hypothesis(i[0], theta) - i[1]) ** 2
        return s / (2 * len(dataset))

    def _gradient_descent(self, dataset, starting_theta, learning_rate, epsilon, max_count=4294967296):
        theta = list.copy(starting_theta)
        last_theta = list.copy(starting_theta)
        cost = self._cost(dataset, theta)
        last_cost = cost * 2
        count = 0
        epsilon = abs(epsilon)
        diff = epsilon + 1  # max parameter change of the last accepted step
        while diff > epsilon:
            count += 1
            if count >= max_count:
                raise RuntimeError("no convergence after {mc} iterations.".format(mc=max_count))
            try:
                # compute both gradient components with the current theta,
                # then update theta[0] and theta[1] simultaneously
                t_sum_0 = sum((self._hypothesis(i[0], theta) - i[1] for i in dataset))
                t_sum_1 = sum(
                    ((self._hypothesis(i[0], theta) - i[1]) * i[0] for i in dataset)
                )
                theta[0] = theta[0] - learning_rate * t_sum_0 / len(dataset)
                theta[1] = theta[1] - learning_rate * t_sum_1 / len(dataset)
                last_cost = cost
                cost = self._cost(dataset, theta)
                if not any((math.isnan(x) or math.isinf(x) for x in theta)) and abs(cost) <= abs(last_cost):
                    # the step lowered the cost: accept it and grow the learning rate
                    diff = max((abs(last_theta[i] - theta[i]) for i in range(len(theta))))
                    last_theta = list.copy(theta)
                    learning_rate += learning_rate * 4
                else:
                    # the step raised the cost (or produced nan/inf):
                    # roll back and shrink the learning rate
                    theta = list.copy(last_theta)
                    learning_rate /= 10
                    if learning_rate == 0:
                        learning_rate = self._get_learning_rate(self.training_set)
            except OverflowError:
                theta = list.copy(last_theta)
                learning_rate /= 10
                if learning_rate == 0:
                    learning_rate = self._get_learning_rate(self.training_set)
        return theta, count

    def _get_learning_rate(self, dataset):
        # a crude initial guess for the learning rate, scaled by the largest y
        return 1 / max((i[1] for i in dataset))

    def regress(self, epsilon=1, learning_rate=0, verbose=False):
        init_theta = [random.random(), random.random()]
        if learning_rate == 0:
            learning_rate = self._get_learning_rate(self.training_set)
        self.theta, count = self._gradient_descent(self.training_set, init_theta, learning_rate, epsilon)
        if verbose:
            print('Gradient descent finished after {count} iterations:\ntheta_0:\t{t0}\ntheta_1:\t{t1}\ncost:\t\t{cost}\nhypothesis:\th(x) = {t0} + {t1}*x'.format(
                count=count, t0=self.theta[0], t1=self.theta[1], cost=self._cost(self.training_set, self.theta)))

    def hypothesis(self, x):
        return self._hypothesis(x, self.theta)
```
Usage example:
```python
>>> model = LinearRegressionWithOneVariable([(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)])
>>> model.regress(verbose=True, epsilon=0.0001)
theta_0:	0.025486336182825836
theta_1:	0.9940305813471573
cost:		9.99815680487604e-05
hypothesis:	h(x) = 0.025486336182825836 + 0.9940305813471573*x
>>> model.hypothesis(10)
9.9657921496544
```