TensorFlow implementation of cost minimization for Linear Regression (new)
import tensorflow as tf

X = [1, 2, 3]
Y = [1, 2, 3]

W = tf.Variable(5.0)  # start from a deliberately wrong value

hypothesis = X * W
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# hand-derived gradient of the cost with respect to W
gradient = tf.reduce_mean((W * X - Y) * X) * 2

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
gvs = optimizer.compute_gradients(cost, [W])
apply_gradients = optimizer.apply_gradients(gvs)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(101):
    print(step, sess.run([gradient, W, gvs]))
    sess.run(apply_gradients)
Fix: changed gvs = optimizer.compute_gradients(cost) to gvs = optimizer.compute_gradients(cost, [W]).
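To check that the hand-derived gradient above really matches what the optimizer applies, here is a minimal NumPy-only sketch (not the original TF1 session code) of the same update rule. It assumes the analytic gradient d(cost)/dW = 2 * mean((W*X - Y)*X) and the same learning rate of 0.01; running it shows W moving from the deliberately wrong 5.0 toward the optimum 1.0.

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])
Y = np.array([1.0, 2.0, 3.0])
W = 5.0                # deliberately wrong starting value
learning_rate = 0.01

for step in range(101):
    # analytic gradient of cost = mean((W*X - Y)^2) with respect to W
    gradient = 2 * np.mean((W * X - Y) * X)
    # same update that optimizer.apply_gradients performs
    W -= learning_rate * gradient

print(W)  # close to 1.0 after 101 steps
```

Because Y = X here, the gradient simplifies to 2 * (W - 1) * mean(X**2), so each step shrinks the error (W - 1) by a constant factor, which is why the convergence is fast and monotonic.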