Andrew Ng's Machine Learning Notes - (3) Multivariate Linear Regression
Multiple Features
Linear regression with multiple variables is also known as "multivariate linear regression".
Notations
We now introduce notation for equations where we can have any number of input variables (Multiple Features, i.e. Multivariate):
- $m$: the number of training examples.
- $n$: the number of features.
- $x^{(i)}$: the input (features) of the $i^{th}$ training example, an $n$-dimensional vector.
- $x_j^{(i)}$: the value of feature $j$ in the $i^{th}$ training example.
Hypothesis
The multivariable form of the hypothesis function accommodating these multiple features is as follows:

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n$$
In order to develop intuition about this function, we can think about $\theta_0$ as the basic price of a house, $\theta_1$ as the price per square meter, $\theta_2$ as the price per floor, etc. $x_1$ will be the number of square meters in the house, $x_2$ the number of floors, etc.
Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:

$$h_\theta(x) = \begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_n \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = \theta^T x$$

This is a vectorization of our hypothesis function for one training example.
Remark: Note that for convenience reasons, we assume $x_0^{(i)} = 1$ for $i \in 1, \dots, m$. This allows us to do matrix operations with $\theta$ and $x$, making the two vectors $\theta$ and $x^{(i)}$ match each other element-wise (that is, have the same number of elements: $n+1$).
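As a minimal Octave sketch (the data and parameter values here are made up for illustration), prepending the $x_0 = 1$ column and computing the hypothesis for all training examples at once looks like this:

```octave
% Illustrative data: m = 3 training examples, n = 2 features each
X_raw = [2104 5; 1416 3; 1534 3];      % m x n matrix of raw features
theta = [1; 0.5; 2];                   % (n+1) x 1 parameter vector

X = [ones(size(X_raw, 1), 1), X_raw];  % prepend the x0 = 1 column
h = X * theta                          % m x 1 vector: theta' * x for every example
```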
Gradient Descent For Multiple Variables
Here is the setup for linear regression with multiple variables:
Hypothesis: $h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \cdots + \theta_n x_n$.
Parameters: $\theta_0, \theta_1, \dots, \theta_n$.
Cost Function: $J(\theta_0, \dots, \theta_n) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$.
Or, in vectorized form:
Hypothesis: $h_\theta(x) = \theta^T x$.
Parameters: $\theta$ (an $(n+1)$-dimensional vector).
Cost Function: $J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$.
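A minimal Octave sketch of this vectorized cost function (the name `computeCost` follows the course exercises, but treat the details as illustrative):

```octave
function J = computeCost(X, y, theta)
%COMPUTECOST Compute the linear regression cost J(theta)
%   X includes the x0 = 1 column; y is the m x 1 vector of targets.
m = length(y);                    % number of training examples
errors = X * theta - y;           % m x 1 vector of prediction errors
J = (errors' * errors) / (2 * m); % same as sum(errors .^ 2) / (2*m)
end
```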
The gradient descent algorithm is then:

repeat until convergence {

$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \qquad \text{(simultaneously update for } j = 0, \dots, n\text{)}$$

}
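A sketch of the vectorized update in Octave, reusing the `computeCost` helper from above (`alpha` is the learning rate, `num_iters` the number of iterations):

```octave
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Learn theta by taking num_iters gradient steps
m = length(y);
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    % Simultaneous update of every theta_j:
    % theta := theta - alpha * (1/m) * X' * (X*theta - y)
    theta = theta - (alpha / m) * (X' * (X * theta - y));
    J_history(iter) = computeCost(X, y, theta);  % record J(theta) each iteration
end
end
```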
The following image compares gradient descent with one variable to gradient descent with multiple variables:
Feature Scaling
We can speed up gradient descent by having each of our input values in roughly the same range. This is because $\theta$ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:

$$-1 \le x_i \le 1 \quad \text{or} \quad -0.5 \le x_i \le 0.5$$

These aren't exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.
In practice, we often consider it acceptable for variables to fall in the range $-3 \le x_i \le 3$.
Two techniques to help with this are feature scaling and mean normalization.
Feature scaling
Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1.
Mean normalization
Mean normalization involves subtracting the average value for an input variable from the values for that input variable, resulting in a new average value for the input variable of just zero.
Implementation
We can implement both of these techniques by adjusting our input values as shown in this formula:

$$x_i := \frac{x_i - \mu_i}{s_i}$$

- $\mu_i$ is the average of all the values for feature $(i)$.
- $s_i$ is the range of values ($\max - \min$), or could also be the standard deviation.
For example, if $x_i$ represents housing prices with a range of 100 to 2000 and a mean value of 1000, then $x_i := \frac{\text{price} - 1000}{1900}$.
In Octave
In Octave, the function `mean` gives us the average of the values for feature $(i)$, while the function `std` gives us the standard deviation of the values for feature $(i)$. So we can write the program like this:
```octave
function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X
%   Each column (feature) of X_norm ends up with mean 0 and std 1.
mu = mean(X);                % 1 x n row vector of per-feature means
sigma = std(X);              % 1 x n row vector of per-feature standard deviations
X_norm = (X - mu) ./ sigma;  % broadcasting: subtract mu, divide by sigma column-wise
end
```
`featureNormalize(X)` returns a normalized version of `X` where the mean value of each feature is 0 and the standard deviation is 1. This is often a good preprocessing step to do when working with learning algorithms.

First, for each feature dimension, compute the mean of the feature and subtract it from the dataset, storing the mean value in `mu`. Next, compute the standard deviation of each feature and divide each feature by its standard deviation, storing the standard deviation in `sigma`.

Note that `X` is a matrix where each column is a feature and each row is an example. You need to perform the normalization separately for each feature.
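For example, a quick illustrative call (the numbers are made up):

```octave
X = [2104 3; 1600 3; 2400 4];              % 3 examples, 2 features
[X_norm, mu, sigma] = featureNormalize(X); % each column of X_norm: mean 0, std 1

% New inputs must be normalized with the SAME mu and sigma:
x_new = ([1650 3] - mu) ./ sigma;
```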
Learning Rate
Debugging gradient descent
Make a plot with the number of iterations on the x-axis. Now plot the cost function $J(\theta)$ over the number of iterations of gradient descent. If $J(\theta)$ ever increases, then you probably need to decrease the learning rate $\alpha$.
Automatic convergence test
Declare convergence if $J(\theta)$ decreases by less than $\epsilon$ in one iteration, where $\epsilon$ is some small value such as $10^{-3}$. However, in practice it's difficult to choose this threshold value.
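Inside the gradient descent loop, such a check could look like this sketch (assuming `J_history` stores the cost per iteration, as in the `gradientDescent` sketch above):

```octave
epsilon = 1e-3;  % convergence threshold; hard to choose well in practice
if iter > 1 && abs(J_history(iter - 1) - J_history(iter)) < epsilon
    fprintf('Converged at iteration %d\n', iter);
end
```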
Making sure gradient descent is working correctly
It has been proven that if the learning rate $\alpha$ is sufficiently small, then $J(\theta)$ will decrease on every iteration.
To summarize:
- If $\alpha$ is too small: slow convergence.
- If $\alpha$ is too large: $J(\theta)$ may not decrease on every iteration and thus may not converge.
Implementation
We should try different values of $\alpha$ to find one that fits by drawing plots of $J(\theta)$ against the number of iterations.
E.g., to choose $\alpha$, try values such as:

..., 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, ...

(each candidate is roughly three times the previous one).
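A sketch of such a sweep in Octave, reusing the `gradientDescent` helper from earlier (plot details are illustrative):

```octave
alphas = [0.001 0.003 0.01 0.03 0.1 0.3 1];
num_iters = 100;
figure; hold on;
for alpha = alphas
    theta = zeros(size(X, 2), 1);  % restart from theta = 0 for each alpha
    [~, J_history] = gradientDescent(X, y, theta, alpha, num_iters);
    plot(1:num_iters, J_history);  % one J(theta) curve per learning rate
end
xlabel('Number of iterations');
ylabel('J(\theta)');
hold off;
% A curve that decreases quickly and flattens out indicates a good alpha;
% a curve that increases means alpha is too large.
```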
Features and Polynomial Regression
Combine Features
We can improve our features and the form of our hypothesis function in a couple different ways.
For example, we can combine multiple features into one, such as combining $x_1$ and $x_2$ into a new feature $x_3$ by taking $x_3 = x_1 \cdot x_2$.
Polynomial Regression
To fit the data well, our hypothesis function may need to be non-linear. So we can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).
For example, if our hypothesis function is $h_\theta(x) = \theta_0 + \theta_1 x_1$, then we can create additional features based on $x_1$ to get the quadratic function $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2$ or the cubic function $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^3$. To make it a square root function, we could do: $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 \sqrt{x_1}$.
In the cubic version, we can create new features $x_2$ and $x_3$, where $x_2 = x_1^2$ and $x_3 = x_1^3$; then we can get a set of thetas via gradient descent for multiple variables.
⚠️ Note: if you choose your features this way, then feature scaling becomes very important:
E.g., if $x_1$ has range $1 \sim 1000$:

- then the range of $x_1^2$ becomes $1 \sim 1000000$;
- and the range of $x_1^3$ becomes $1 \sim 1000000000$.
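A minimal Octave sketch of building such polynomial features and scaling them (names and data are illustrative), reusing `featureNormalize` from above:

```octave
x1 = (1:1000)';                                  % original feature, range 1 ~ 1000
X_poly = [x1, x1 .^ 2, x1 .^ 3];                 % new features x2 = x1^2, x3 = x1^3
[X_poly, mu, sigma] = featureNormalize(X_poly);  % scaling is essential here
X = [ones(length(x1), 1), X_poly];               % prepend the x0 = 1 column as usual
```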