donut

Tensorflow 다중선형회귀 공부

Tensorflow/ML

 

 

 

다중 선형회귀 복습

지난번포스팅했던 다중 선형회귀를 복습하는 시간을 가지겠습니다.
자세한 포스팅은 이전에 했기때문에 주석을 최소화 하였습니다.

In [280]:
import tensorflow as tf
import numpy as np
print(tf.__version__)
 
2.1.0
 

변수 만들기(컴프리헨션 복습)

In [281]:
test1 = [i for i in range(0,10) if i>2]
test1
Out[281]:
[3, 4, 5, 6, 7, 8, 9]
In [282]:
test1 = list(range(0,10))
test2 = list(range(10,20))
print(test1)
print(test2)
 
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
In [283]:
test3 = []
for i,v in zip(test1,test2):
    test3.append([i,v])
test3
Out[283]:
[[0, 10],
 [1, 11],
 [2, 12],
 [3, 13],
 [4, 14],
 [5, 15],
 [6, 16],
 [7, 17],
 [8, 18],
 [9, 19]]
 

x,y, 값 나눠주기

In [284]:
x1 = [x[0] for x in test3]
x2 = [x[1] for x in test3]
y = [i for i in range(100,110)]
In [285]:
print(x1)
print(x2)
print(y)
 
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]
In [288]:
#x변수가 2개이기 때문에 W도 2개 설정
tf.random.set_seed(0)
W1 = tf.Variable(tf.random.uniform((1,),1., 10.0 ))
W2 = tf.Variable(tf.random.uniform((1,),1., 10.0 ))
b = tf.Variable(tf.random.uniform((1,),1., 10.0 ))
In [287]:
learning_rate = tf.Variable(0.001)
for i in range(1001):
    with tf.GradientTape() as tape:
        hypothesis = W1 * x1 + W2* x2 + b
        cost = tf.reduce_mean(tf.square(hypothesis - y))
    W1_grad, W2_grad, b_grad = tape.gradient(cost, [W1,W2,b])
    W1.assign_sub(learning_rate * W1_grad)
    W2.assign_sub(learning_rate * W2_grad)
    b.assign_sub(learning_rate * b_grad)
    
    if i % 50 ==0:
        print("{:5} | {:10.6f} | {:10.4f} | {:10.4f} | {:10.6f}".format(
        i, cost.numpy(), W1.numpy()[0], W2.numpy()[0], b.numpy()[0]))
 
    0 | 616.380981 |     3.4714 |     5.8110 |   2.753797
   50 | 282.371643 |    -0.0978 |     6.7484 |   3.204454
  100 | 141.552185 |    -2.5705 |     7.5713 |   3.534010
  150 |  70.959732 |    -4.3213 |     8.1539 |   3.767345
  200 |  35.571896 |    -5.5608 |     8.5664 |   3.932551
  250 |  17.832087 |    -6.4385 |     8.8585 |   4.049521
  300 |   8.939185 |    -7.0599 |     9.0652 |   4.132339
  350 |   4.481170 |    -7.4998 |     9.2117 |   4.190976
  400 |   2.246401 |    -7.8113 |     9.3153 |   4.232491
  450 |   1.126115 |    -8.0319 |     9.3887 |   4.261886
  500 |   0.564514 |    -8.1880 |     9.4407 |   4.282698
  550 |   0.282984 |    -8.2986 |     9.4775 |   4.297431
  600 |   0.141857 |    -8.3769 |     9.5035 |   4.307865
  650 |   0.071112 |    -8.4323 |     9.5220 |   4.315252
  700 |   0.035649 |    -8.4715 |     9.5350 |   4.320483
  750 |   0.017869 |    -8.4993 |     9.5443 |   4.324186
  800 |   0.008958 |    -8.5190 |     9.5508 |   4.326808
  850 |   0.004490 |    -8.5329 |     9.5554 |   4.328665
  900 |   0.002251 |    -8.5428 |     9.5587 |   4.329979
  950 |   0.001129 |    -8.5498 |     9.5610 |   4.330909
 1000 |   0.000566 |    -8.5547 |     9.5627 |   4.331569
 

변수 3개 메트릭스 활용

변수만들기

In [289]:
x1 = [x for x in range(0,10)]
x2 = [x for x in range(10,20)]
x3 = [x for x in range(30,40)]
y = [x for x in range(100,110)]
In [290]:
print(x1)
print(x2)
print(x3)
print(y)
 
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]
In [291]:
#변수를 한 메트릭스로 만들기
data =np.array([[i,v,z,h] for i,v,z,h in zip(x1,x2,x3,y)],dtype =np.float32)
data
Out[291]:
array([[  0.,  10.,  30., 100.],
       [  1.,  11.,  31., 101.],
       [  2.,  12.,  32., 102.],
       [  3.,  13.,  33., 103.],
       [  4.,  14.,  34., 104.],
       [  5.,  15.,  35., 105.],
       [  6.,  16.,  36., 106.],
       [  7.,  17.,  37., 107.],
       [  8.,  18.,  38., 108.],
       [  9.,  19.,  39., 109.]], dtype=float32)
In [321]:
#X, Y 로 데이터 나눠주기
X = data[:,:-1]
Y = data[:,[-1]]
print(X)
print(Y)
 
[[ 0. 10. 30.]
 [ 1. 11. 31.]
 [ 2. 12. 32.]
 [ 3. 13. 33.]
 [ 4. 14. 34.]
 [ 5. 15. 35.]
 [ 6. 16. 36.]
 [ 7. 17. 37.]
 [ 8. 18. 38.]
 [ 9. 19. 39.]]
[[100.]
 [101.]
 [102.]
 [103.]
 [104.]
 [105.]
 [106.]
 [107.]
 [108.]
 [109.]]
In [322]:
print(X.shape)
print(X.shape[1])
 
(10, 3)
3
In [323]:
#W, b 설정해주기
W = tf.Variable(tf.random.normal((X.shape[1],1)))
b = tf.Variable(tf.random.normal((1,)))
In [324]:
# 예측 모델 및 경사하강법 적용
def predict(X):
    return tf.matmul(X,W) + b

learning_rate = 0.00001


for i in range(1001):
    with tf.GradientTape() as tape:
        cost =tf.reduce_mean((tf.square(predict(X) - y)))
        
    W_grad, b_grad = tape.gradient(cost, [W,b])
    
    W.assign_sub(learning_rate * W_grad)
    b.assign_sub(learning_rate * b_grad)
    
    if i % 500 ==0:
        print("{:5} | {:10.6f} | {:10.4f} | {:10.4f} | {:10.6f}".format(
            i, cost.numpy(), W.numpy()[0][0], W.numpy()[1][0], b.numpy()[0]))
 
    0 | 3581.468018 |     1.3541 |     0.4929 |  -2.126617
  500 | 186.300980 |     1.2123 |     0.8905 |  -2.072671
 1000 | 159.831528 |     0.9086 |     0.7253 |  -2.058817
In [325]:
#W값 확인
print(W)
 
<tf.Variable 'Variable:0' shape=(3, 1) dtype=float32, numpy=
array([[0.9085694 ],
       [0.72533375],
       [2.6267443 ]], dtype=float32)>
In [307]:
#X값으로 해 도출
predict(X)
Out[307]:
<tf.Tensor: shape=(10, 1), dtype=float32, numpy=
array([[ 89.7837  ],
       [ 92.84181 ],
       [ 95.899925],
       [ 98.958046],
       [102.01615 ],
       [105.07428 ],
       [108.13239 ],
       [111.190506],
       [114.24863 ],
       [117.30674 ]], dtype=float32)>
In [308]:
#임의의 값으로 해 도출
predict([[ 1.,  1.,  4.],[ 145.,  50.,  50.]]).numpy()
Out[308]:
array([[11.6773615],
       [87.66279  ]], dtype=float32)

'Tensorflow > ML' 카테고리의 다른 글

Python Logistic Regression  (0) 2020.07.28
Python Tensorflow 다중선형회귀  (0) 2020.07.25
Python Tensorflow 단순선형회귀  (0) 2020.07.20