
pip install gensim

After installing gensim, importing it can fail with:

module 'numpy.random' has no attribute 'default_rng'

 

Solution:

1. Upgrade numpy

conda update numpy

 

The numpy version must be 1.17 or higher, since default_rng was added in numpy 1.17.

To check the numpy version:

import numpy as np

np.__version__

 

 

stackoverflow.com/questions/62077194/can%C2%B4t-import-qiskit-attribute-error-in-numpy-numpy-random-has-no-attribute

 


 

2. Downgrade gensim

pip install -U gensim==3.7.x
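
After either fix, a quick sanity check confirms that default_rng exists and gensim imports cleanly (a minimal sketch; the printed versions will vary by environment):

import numpy as np
import gensim

# default_rng was added in numpy 1.17; this raises AttributeError on older versions
rng = np.random.default_rng(seed=0)
print(np.__version__, gensim.__version__)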


tensorflow mnist LeNet-5

import tensorflow as tf
from tensorflow.keras.utils import to_categorical
import numpy as np

from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input, AveragePooling2D
from tensorflow.keras import Model

# Download the MNIST dataset using Keras
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel values to [0, 1]
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# One-hot encode the labels
num_classes = 10
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# Add a channel dimension: (N, 28, 28) -> (N, 28, 28, 1)
X_train = X_train[:, :, :, np.newaxis]
X_test = X_test[:, :, :, np.newaxis]

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

# LeNet-5-style architecture (ReLU here instead of the original tanh)
image_input = Input(shape=(28, 28, 1))
x = Conv2D(6, kernel_size=(5, 5), strides=1, activation='relu')(image_input)
x = AveragePooling2D(2)(x)
x = Conv2D(16, kernel_size=(5, 5), strides=1, activation='relu')(x)
x = AveragePooling2D(2)(x)

x = Flatten()(x)
x = Dense(units=120, activation='relu')(x)
x = Dense(units=84, activation='relu')(x)
output = Dense(10, activation='softmax')(x)

model = Model(image_input, output)
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=8,
          validation_data=(X_test, y_test), verbose=1)

pytorch mnist LeNet-5 => to be revised later ***

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader

from torchvision import datasets, transforms

# check device
device = 'cuda' if torch.cuda.is_available() else 'cpu'
random_seed = 42
torch.manual_seed(random_seed)
lr = 0.001
batch_size = 32
n_epochs = 15

num_class = 10
kwargs = {'num_workers': 2, 'pin_memory': True}

mnist_train = datasets.MNIST(root='MNIST_data/',
                             train=True,
                             transform=transforms.ToTensor(),
                             download=True)

mnist_test = datasets.MNIST(root='MNIST_data/',
                            train=False,
                            transform=transforms.ToTensor(),
                            download=True)

train_loader = DataLoader(dataset=mnist_train,
                          batch_size=batch_size,
                          shuffle=True, **kwargs)

test_loader = DataLoader(dataset=mnist_test,
                         batch_size=batch_size,
                         shuffle=False, **kwargs)


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        # padding=2 keeps the 28x28 size after conv1
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)

        # 28x28 -> conv1 -> 28x28 -> pool -> 14x14 -> conv2 -> 10x10 -> pool -> 5x5
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.dens1 = nn.Linear(in_features=120, out_features=84)
        self.dens2 = nn.Linear(in_features=84, out_features=10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
        # flatten every dimension except the batch dimension
        x = x.view(x.shape[0], -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.dens1(x))
        x = self.dens2(x)
        return x


net = MyModel().to(device)
print(net)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)

for _epoch in range(n_epochs):
    net.train()
    for idx, (train_x, train_label) in enumerate(train_loader):
        train_x, train_label = train_x.to(device), train_label.to(device)
        optimizer.zero_grad()
        predict_y = net(train_x)
        _error = criterion(predict_y, train_label)
        if idx % 10 == 0:
            print('idx: {}, _error: {}'.format(idx, _error.item()))
        _error.backward()
        optimizer.step()

    correct = 0
    _sum = 0

    net.eval()
    with torch.no_grad():
        for idx, (test_x, test_label) in enumerate(test_loader):
            test_x, test_label = test_x.to(device), test_label.to(device)
            predict_y = net(test_x)
            predict_ys = torch.argmax(predict_y, dim=-1)
            correct += (predict_ys == test_label).sum().item()
            _sum += test_label.shape[0]

    print('accuracy: {:.2f}'.format(correct / _sum))

 


1. Install konlpy

pip install konlpy

 

2. Install the JDK

 

3. Add the JDK to the environment variables

Set JAVA_HOME to the JDK installation path.

 

4. Download JPype

www.lfd.uci.edu/~gohlke/pythonlibs/#jpype

 


Download the wheel that matches your Python version and architecture from the page above, place it in your working directory, then install it with pip:

pip install JPype1-1.2.0-cp38-cp38-win_amd64.whl
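
Once the install succeeds, a short test confirms that the Java bridge works (a minimal sketch using konlpy's Okt tagger; the sample sentence is arbitrary):

from konlpy.tag import Okt

okt = Okt()
# Split a Korean sentence into morphemes
print(okt.morphs('아버지가 방에 들어가신다'))
# Extract only the nouns
print(okt.nouns('아버지가 방에 들어가신다'))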

 


Receptive Field:

Even when a filter has the same size, applying it to a feature map that has been shrunk by pooling makes it cover a wider range of the original image.

This range is called the receptive field.

Example: input image -> pooling -> 3x3 kernel. The kernel is still 3x3 on the pooled map, but seen from the original image it covers a 6x6 region.

Each pooling step widens what a kernel sees, from a small area to a broad one.
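
This growth can be computed layer by layer with the standard recurrences r_out = r_in + (k - 1) * j_in and j_out = j_in * s, where r is the receptive field, j the jump (spacing of a layer's outputs in input pixels), k the kernel size, and s the stride. A minimal sketch reproducing the 6x6 figure above (the layer list is illustrative):

# Each layer is (kernel_size, stride)
layers = [(2, 2),   # 2x2 pooling, stride 2
          (3, 1)]   # 3x3 conv applied to the pooled map

r, j = 1, 1  # receptive field and jump of the raw input
for k, s in layers:
    r = r + (k - 1) * j   # the kernel reaches (k-1)*j extra input pixels
    j = j * s             # striding multiplies the jump
    print(f'kernel={k}, stride={s} -> receptive field {r}x{r}')
# Final line prints 6x6: a 3x3 kernel after one 2x2 pooling sees 6x6 of the input.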

 


loss function

 

regression:

mean_squared_error

 

classification:

Cross-Entropy Loss:

Binary classification:  binary_crossentropy

multi-class classification: categorical_crossentropy
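
In Keras these losses are passed as strings to compile(); a minimal sketch of the mapping (the model variables are hypothetical placeholders, definitions omitted):

# Regression: predict a continuous value
reg_model.compile(optimizer='adam', loss='mean_squared_error')

# Binary classification: a single sigmoid output
bin_model.compile(optimizer='adam', loss='binary_crossentropy')

# Multi-class classification: softmax output with one-hot labels
multi_model.compile(optimizer='adam', loss='categorical_crossentropy')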

 


Optimizer

SGD (Stochastic Gradient Descent): updates the weights using the gradient of each mini-batch

Adam (Adaptive Moments): combines momentum with per-parameter adaptive learning rates

RMSprop (Root Mean Squared Propagation): scales the learning rate by a running average of squared gradients
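
A minimal Keras sketch of instantiating each one (the hyperparameter values are arbitrary illustrations, not recommendations):

from tensorflow.keras import optimizers

sgd = optimizers.SGD(learning_rate=0.01, momentum=0.9)
adam = optimizers.Adam(learning_rate=0.001)
rmsprop = optimizers.RMSprop(learning_rate=0.001)

# Any of these can be passed to model.compile(optimizer=...)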


Activation Function

An activation function takes a neuron's output y as input and computes f(y).

It is what turns a purely linear system into a non-linear system.

Sigmoid: output between 0 and 1

tanh (Hyperbolic Tangent): restricts the output to the range [−1.0, 1.0]

Step function: outputs either 0 or 1

ReLU (Rectified Linear Unit): maps values below 0 to 0 and passes values above 0 through unchanged

LeakyReLU: for negative input, ReLU returns 0 while LeakyReLU returns ax (a small slope a)

Softmax: produces one-hot-style class scores => all outputs sum to 1
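
A minimal NumPy sketch of these functions (the LeakyReLU slope a=0.01 is a common but arbitrary choice):

import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))        # output in (0, 1)

def step(y):
    return (y > 0).astype(float)           # 0 or 1

def relu(y):
    return np.maximum(0.0, y)              # clamp negatives to 0

def leaky_relu(y, a=0.01):
    return np.where(y > 0, y, a * y)       # a*y instead of 0 for y < 0

def softmax(y):
    e = np.exp(y - np.max(y))              # subtract max for numerical stability
    return e / e.sum()                     # outputs sum to 1

y = np.array([-2.0, 0.0, 3.0])
print(softmax(y), softmax(y).sum())        # the sum prints 1.0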

 


Deep Belief Network (DBN)

blog.skby.net/%EC%8B%AC%EC%B8%B5%EC%8B%A0%EB%A2%B0%EB%A7%9D-dbn-deep-belief-network/

 


 

I. Deep Belief Networks as a remedy for vanishing gradients

A. Concept of the Deep Belief Network (DBN)

  • A neural network built by stacking RBMs (each consisting of an input layer and a hidden layer) like blocks into multiple layers (a form of deep learning)
  • RBM: Restricted Boltzmann Machine

 

An RBM (Restricted Boltzmann Machine) is a model proposed by Geoff Hinton that can be used for dimensionality reduction, classification, linear regression, collaborative filtering, feature learning, and topic modelling.

 

medium.com/@ahnchan2/%EC%B4%88%EB%B3%B4%EC%9E%90%EC%9A%A9-rbm-restricted-boltzmann-machines-%ED%8A%9C%ED%86%A0%EB%A6%AC%EC%96%BC-791ce740a2f0

 

 

Deep Belief Networks are stochastic algorithms, meaning that the algorithm utilizes random variables; thus, it is normal to obtain slightly different results when running the learning algorithm multiple times. To account for this, it is normal to obtain multiple sets of results and average them together prior to reporting final accuracies.
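
A minimal sketch of that averaging procedure (train_and_eval is a hypothetical stand-in for a full training run that returns a test accuracy; the numbers here are placeholders):

import numpy as np

def train_and_eval(seed):
    """Hypothetical: set all RNG seeds, train the model, return test accuracy."""
    rng = np.random.default_rng(seed)
    return 0.95 + rng.normal(0, 0.01)  # placeholder for a real training run

accuracies = [train_and_eval(seed) for seed in range(5)]
print(f'mean accuracy: {np.mean(accuracies):.4f} +/- {np.std(accuracies):.4f}')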

 

stochastic algorithms: stochastic optimization (SO) methods are optimization methods that generate and use random variables.

en.wikipedia.org/wiki/Stochastic_optimization

 

#“stochastic” means that the model has some kind of randomness in it

#https://machinelearningmastery.com/stochastic-in-machine-learning/
