
Machine Learning: Andrew Ng's Machine Learning Course Notes (3)

Course: Machine Learning on Coursera
Instructor: Andrew Ng

Classification and Representation

Classification

To attempt classification, one method is to use linear regression and map all predictions greater than 0.5 as a 1 and all less than 0.5 as a 0. However, this method doesn’t work well because classification is not actually a linear function.

Hypothesis Representation

In a classification problem, we know that y belongs to {0, 1}. It doesn't make sense for the hypothesis hθ(x) to take values larger than 1 or smaller than 0. So we change the form of hθ(x) to satisfy 0 <= hθ(x) <= 1. This is accomplished by plugging θ^Tx into the Logistic Function.

Our new form uses the "Sigmoid Function", also called the "Logistic Function":

hθ(x) = g(θ^Tx)
z = θ^Tx
g(z) = 1 / (1 + e^(-z))
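
As a quick sanity check, here is a minimal sketch of these two definitions in Python (NumPy assumed; the names sigmoid and hypothesis are my own, not from the course):

```python
import numpy as np

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    # h_theta(x) = g(theta^T x) for a single feature vector x.
    return sigmoid(np.dot(theta, x))

print(sigmoid(0.0))    # 0.5 -- g(0) is exactly one half
print(sigmoid(10.0))   # ~0.99995 -- large positive z pushes g toward 1
print(sigmoid(-10.0))  # ~0.00005 -- large negative z pushes g toward 0
```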

The following image shows us what the sigmoid function looks like:

[Figure: the S-shaped sigmoid curve g(z), rising from 0 toward 1 and crossing 0.5 at z = 0]

The function g(z), shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.
hθ(x) gives us the probability that our output is 1. For example, hθ(x) = 0.7 gives a probability of 70% that the output is 1 (and, correspondingly, a 30% probability that it is 0):

hθ(x) = P(y=1|x;θ) = 1 - P(y=0|x;θ)
P(y=0|x;θ) + P(y=1|x;θ) = 1

Decision Boundary

In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:

hθ(x) >= 0.5 -> y = 1
hθ(x) < 0.5 -> y = 0

The way our logistic function g behaves is that when its input is greater than or equal to zero, its output is greater than or equal to 0.5:

g(z) >= 0.5 when z >= 0

So if our input to g is θ^Tx, then that means:

hθ(x) = g(θ^Tx) >= 0.5 when θ^Tx >= 0

From these statements we can now say:

θ^Tx >= 0 => y = 1
θ^Tx < 0 => y = 0

The decision boundary is the line that separates the area where y = 0 and where y = 1. It is created by our hypothesis function.
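
To make this concrete, here is a small sketch (my own, in Python with NumPy; θ = [-3, 1, 1] is a hand-picked illustrative parameter vector, not a learned one). With these values, θ^Tx >= 0 becomes -3 + x1 + x2 >= 0, so the decision boundary is the line x1 + x2 = 3:

```python
import numpy as np

def predict(theta, x):
    # y = 1 exactly when theta^T x >= 0, i.e. when h_theta(x) >= 0.5.
    return 1 if np.dot(theta, x) >= 0 else 0

theta = np.array([-3.0, 1.0, 1.0])  # hand-picked example values, not learned

# Feature vectors are [1, x1, x2]; the boundary -3 + x1 + x2 = 0 is the line x1 + x2 = 3.
print(predict(theta, np.array([1.0, 1.0, 1.0])))  # x1 + x2 = 2 < 3  ->  0
print(predict(theta, np.array([1.0, 2.5, 2.5])))  # x1 + x2 = 5 > 3  ->  1
```

Note that the discrete prediction only needs the sign of θ^Tx; the sigmoid itself never has to be evaluated for the 0/1 decision.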

Cost Function

We cannot use the same cost function that we use for linear regression because the Logistic Function will cause the output to be wavy, causing many local optima. In other words, it will not be a convex function.

Instead, our cost function for logistic regression looks like:

J(θ) = (1/m) Σ Cost(hθ(x^(i)), y^(i)), summing over i = 1..m

Cost(hθ(x), y) = -log(hθ(x))      if y = 1
Cost(hθ(x), y) = -log(1 - hθ(x))  if y = 0

When y = 1, we get the following plot for Cost(hθ(x), y) vs. hθ(x):

[Figure: Cost = -log(hθ(x)) for y = 1; the cost is 0 when hθ(x) = 1 and tends to infinity as hθ(x) approaches 0]

Similarly, when y = 0:

[Figure: Cost = -log(1 - hθ(x)) for y = 0; the cost is 0 when hθ(x) = 0 and tends to infinity as hθ(x) approaches 1]

If our correct answer ‘y’ is 0, then the cost function will be 0 if our hypothesis function also outputs 0. If our hypothesis approaches 1, then the cost function will approach infinity.

If our correct answer ‘y’ is 1, then the cost function will be 0 if our hypothesis function outputs 1. If our hypothesis approaches 0, then the cost function will approach infinity.

Note that writing the cost function in this way guarantees that J(θ) is convex for logistic regression.
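
As a sketch of how this cost might be computed (my own vectorized Python version, assuming a design matrix X of shape (m, n) with a leading column of ones and a 0/1 label vector y; it uses the single-expression form -y·log(h) - (1-y)·log(1-h), which is equivalent to the piecewise definition above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # J(theta) = (1/m) * sum over all examples of
    #   -y * log(h) - (1 - y) * log(1 - h),
    # which picks out -log(h) when y = 1 and -log(1 - h) when y = 0.
    m = len(y)
    h = sigmoid(X @ theta)  # h_theta(x^(i)) for every training example at once
    return (1.0 / m) * np.sum(-y * np.log(h) - (1.0 - y) * np.log(1.0 - h))

# Toy check: with theta = 0, h = 0.5 for every example and the cost is log(2) ~ 0.693.
X = np.array([[1.0, 2.0], [1.0, -1.0]])  # column of ones + one made-up feature
y = np.array([1.0, 0.0])
print(cost(np.zeros(2), X, y))  # ~0.693
```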
