Coursera Deep Learning 1: Neural Networks and Deep Learning, Week 2 Quiz


Neural Network Basics

1

What does a neuron compute?

A neuron computes a linear function (z = Wx + b) followed by an activation function

A neuron computes a function g that scales the input x linearly (Wx + b)

A neuron computes the mean of all features before applying the output to an activation function

A neuron computes an activation function followed by a linear function (z = Wx + b)
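
As a quick illustration of the first option (a linear function z = Wx + b followed by an activation g(z)), here is a minimal numpy sketch, assuming a sigmoid activation and toy shapes:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.random.randn(4, 1)   # 4 input features for one example
W = np.random.randn(1, 4)   # weights of a single neuron
b = 0.5                     # bias

z = np.dot(W, x) + b        # linear step: z = Wx + b
a = sigmoid(z)              # activation step: a = g(z)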

2

Which of these is the "Logistic Loss"?
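
For reference, the logistic loss for a single example with prediction ŷ and label y is:

L(ŷ, y) = -( y log(ŷ) + (1 - y) log(1 - ŷ) )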

3

Suppose img is a (32,32,3) array, representing a 32x32 image with 3 color channels red, green and blue. How do you reshape this into a column vector?

x = img.reshape((32*32*3,1))

x = img.reshape((3,32*32))

x = img.reshape((1,32*32,3))

x = img.reshape((32*32,3))
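
A quick sketch of the reshape into a column vector, using a toy random array in place of an actual image:

import numpy as np

img = np.random.randn(32, 32, 3)   # toy "image" with 3 color channels
x = img.reshape((32*32*3, 1))      # flatten into a single column vector
print(x.shape)                     # (3072, 1)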

4

Consider the two following random arrays "a" and "b":

a = np.random.randn(2, 3) # a.shape = (2, 3)
b = np.random.randn(2, 1) # b.shape = (2, 1)
c = a + b

What will be the shape of "c"?

c.shape = (2, 3)

Yes! This is broadcasting. b (a column vector) is copied 3 times so that it can be added to each column of a.

c.shape = (2, 1)

c.shape = (3, 2)

The computation cannot happen because the sizes don't match. It's going to be "Error"!
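
A quick check of the broadcasting described above, with b compared against an explicitly copied version:

import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b
print(c.shape)                                   # (2, 3)
print(np.allclose(c, a + np.tile(b, (1, 3))))    # True: b acts as if copied across the columns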

5

Consider the two following random arrays "a" and "b":

a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
c = a*b

What will be the shape of "c"?

The computation cannot happen because the sizes don't match. It's going to be "Error"!

Indeed! In numpy the "*" operator indicates element-wise multiplication. It is different from "np.dot()". If you tried "c = np.dot(a,b)", you would get c.shape = (4, 2).

c.shape = (4, 3)

c.shape = (3, 3)

c.shape = (4, 2)
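
A short sketch contrasting "*" (which fails for these shapes) with np.dot:

import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)

try:
    c = a * b                # element-wise: shapes (4, 3) and (3, 2) cannot broadcast
except ValueError as e:
    print("Error:", e)

c = np.dot(a, b)             # matrix multiplication is defined for these shapes
print(c.shape)               # (4, 2)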

6

Suppose you have nx input features per example. Recall that X = [x^(1) x^(2) ... x^(m)]. What is the dimension of X?

(nx,m)

(m,1)

(1,m)

(m,nx)
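
A small sketch of stacking m example columns into X, using toy values nx = 5 and m = 10:

import numpy as np

nx, m = 5, 10
examples = [np.random.randn(nx, 1) for _ in range(m)]   # m column vectors x^(i)
X = np.hstack(examples)                                  # stack them side by side
print(X.shape)                                           # (5, 10), i.e. (nx, m)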

7

Recall that "np.dot(a,b)" performs a matrix multiplication on a and b, whereas "a*b" performs an element-wise multiplication.

Consider the two following random arrays "a" and "b":

a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a,b)

What is the shape of c?

c.shape = (12288, 45)

Correct! Remember that np.dot(a, b) has shape (number of rows of a, number of columns of b). The sizes match because:

"number of columns of a = 150 = number of rows of b"

The computation cannot happen because the sizes don't match. It's going to be "Error"!

c.shape = (150,150)

c.shape = (12288, 150)

8

Consider the following code snippet:

# a.shape = (3,4)
# b.shape = (4,1)
for i in range(3):
  for j in range(4):
    c[i][j] = a[i][j] + b[j]

How do you vectorize this?

c = a.T + b.T

c = a.T + b

c = a + b.T

c = a + b
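
A quick sketch (with toy random inputs) verifying that c = a + b.T reproduces the loop:

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

# loop version
c_loop = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c_loop[i][j] = a[i][j] + b[j, 0]   # b[j, 0] is the single entry in row j (b[j] in the snippet)

# vectorized version: b.T has shape (1, 4) and broadcasts across the 3 rows of a
c_vec = a + b.T
print(np.allclose(c_loop, c_vec))          # True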

9

Consider the following code:

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a*b

What will c be? (If you're not sure, feel free to run this in python to find out.)

This will invoke broadcasting, so b is copied three times to become (3, 3), and "*" is an element-wise product, so c.shape will be (3, 3).

This will invoke broadcasting, so b is copied three times to become (3, 3), and "*" invokes a matrix multiplication of two 3x3 matrices, so c.shape will be (3, 3).

This will multiply a 3x3 matrix a with a 3x1 vector, thus resulting in a 3x1 vector. That is, c.shape = (3,1).

This will lead to an error, since you cannot use "*" to operate on these two matrices. You need to use np.dot(a,b) instead.
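
Running it, as the question suggests, confirms the broadcasting behaviour; a short sketch:

import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b
print(c.shape)              # (3, 3): b broadcasts across the columns, "*" stays element-wise
print(np.dot(a, b).shape)   # (3, 1): matrix product, a different operation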

10

Consider the following computation graph (with u = a*b, v = a*c, w = b + c, and output J = u + v - w).


What is the output J?

J = (c - 1)*(b + a)

J = (a - 1) * (b + c)

Yes. J = u + v - w = a*b + a*c - (b + c) = a * (b + c) - (b + c) = (a - 1) * (b + c).

J = a*b + b*c + a*c

J = (b - 1) * (c + a)
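
A tiny numeric check of the simplification above, assuming the graph computes u = a*b, v = a*c, w = b + c, and J = u + v - w, with arbitrary example values:

a, b, c = 3.0, 2.0, 5.0
u = a * b                  # 6.0
v = a * c                  # 15.0
w = b + c                  # 7.0
J = u + v - w
print(J)                   # 14.0
print((a - 1) * (b + c))   # 14.0, matching the simplified form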