
python - Keras CNN model won't improve accuracy

I'm trying to implement a CNN using Keras on a scikit-learn dataset for handwritten digit recognition (load_digits). I have got the model to run, but the accuracy does not improve from one epoch to the next. I'm guessing it's because my labels are incorrect. I have tried encoding my Y values with 'to_categorical', but that raises the following error:

    C:\Users\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\backend.py:4979 binary_crossentropy
        return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
    C:\Users\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
        return target(*args, **kwargs)
    C:\Users\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\nn_impl.py:173 sigmoid_cross_entropy_with_logits
        raise ValueError("logits and labels must have the same shape (%s vs %s)" %

    ValueError: logits and labels must have the same shape ((None, 1) vs (None, 10))
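
For reference, the two shapes in the error can be reproduced directly (a minimal check, assuming the labels are one-hot encoded with to_categorical while the model ends in a single-unit Dense layer):

import numpy as np
from tensorflow.keras.utils import to_categorical

y = np.arange(10)                             # integer digit labels 0-9
y_onehot = to_categorical(y, num_classes=10)
print(y_onehot.shape)                         # (10, 10) -> labels of shape (None, 10)
# A final Dense(1) layer produces outputs of shape (None, 1), so
# binary_crossentropy compares (None, 1) logits with (None, 10) labels,
# which is exactly the mismatch reported above.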

When I run my code without encoding the Y values it does go through the CNN model, but the accuracy is poor and never increases. This is my code:

import tensorflow as tf
from sklearn import datasets
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D

#from keras.utils.np_utils import to_categorical

X,y = datasets.load_digits(return_X_y = True)
X = X/16
#X = X.reshape(1797,8,8,1)

train_x, test_x, train_y, test_y = train_test_split(X, y)

train_x = train_x.reshape(1347,8,8,1)
#test_x = test_x.reshape()

#train_y = to_categorical(train_y, num_classes = 10)

model = Sequential()

model.add(Conv2D(32, (2, 2), input_shape=( 8, 8, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors

model.add(Dense(64))

model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(train_x, train_y, batch_size=32, epochs=6, validation_split=0.3)

print(train_x[0])

And this gives me the following output:

Epoch 1/6

 1/30 [>.............................] - ETA: 13s - loss: 1.1026 - accuracy: 0.0938
 6/30 [=====>........................] - ETA: 0s - loss: 0.2949 - accuracy: 0.0652 
30/30 [==============================] - 1s 33ms/step - loss: -5.4832 - accuracy: 0.0893 - val_loss: -49.9462 - val_accuracy: 0.1012
Epoch 2/6

 1/30 [>.............................] - ETA: 0s - loss: -52.2145 - accuracy: 0.0625
30/30 [==============================] - 0s 3ms/step - loss: -120.6972 - accuracy: 0.0961 - val_loss: -513.0211 - val_accuracy: 0.1012
Epoch 3/6

 1/30 [>.............................] - ETA: 0s - loss: -638.2873 - accuracy: 0.1250
30/30 [==============================] - 0s 3ms/step - loss: -968.3621 - accuracy: 0.1006 - val_loss: -2804.1062 - val_accuracy: 0.1012
Epoch 4/6

 1/30 [>.............................] - ETA: 0s - loss: -3427.3135 - accuracy: 0.0000e+00
30/30 [==============================] - 0s 3ms/step - loss: -4571.7894 - accuracy: 0.0934 - val_loss: -10332.9727 - val_accuracy: 0.1012
Epoch 5/6

 1/30 [>.............................] - ETA: 0s - loss: -12963.2559 - accuracy: 0.0625
30/30 [==============================] - 0s 3ms/step - loss: -15268.3010 - accuracy: 0.0887 - val_loss: -29262.1191 - val_accuracy: 0.1012
Epoch 6/6

 1/30 [>.............................] - ETA: 0s - loss: -30990.6758 - accuracy: 0.1562
30/30 [==============================] - 0s 3ms/step - loss: -40321.9540 - accuracy: 0.0960 - val_loss: -68548.6094 - val_accuracy: 0.1012

Any guidance is greatly appreciated, thanks!



1 Answer


When you have a CNN you want the last layer to have as many nodes as there are classes. So if you have 10 digits, you want the last layer to have an output size of 10. It usually uses the 'softmax' activation, which turns the outputs into a probability distribution over the classes: all values lie between 0 and 1 and sum to 1, with the predicted class getting the largest value.

model.add(Dense(10))
model.add(Activation('softmax'))
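
With a 10-way output, the loss also has to match the label encoding. A minimal sketch of the remaining changes, assuming the labels are one-hot encoded with to_categorical (alternatively, keep the integer labels and use 'sparse_categorical_crossentropy'):

from tensorflow.keras.utils import to_categorical

# one-hot encode the digit labels so they match the (None, 10) softmax output
train_y = to_categorical(train_y, num_classes=10)

model.compile(loss='categorical_crossentropy',  # multi-class loss instead of binary_crossentropy
              optimizer='adam',
              metrics=['accuracy'])

model.fit(train_x, train_y, batch_size=32, epochs=6, validation_split=0.3)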
