The CIFAR10 dataset comes with Keras. It has 50,000 training images and 10,000 test images in 10 classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The complete source code for this tutorial: https://github.com/fgafarov/learn-neural-networks/blob/master/image_recognition_cifar10.py .

Using the following program code, you can download and prepare CIFAR10 data for further processing using neural networks:

from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D

import os

batch_size = 32
num_classes = 10
epochs = 100
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# The data, split between train and test sets:
(x_tr, y_tr), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_tr.shape)
print(x_tr.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_tr = keras.utils.to_categorical(y_tr, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
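To see what `to_categorical` does to the integer labels, here is a minimal pure-Python sketch of one-hot encoding (an illustration of the idea, not the Keras implementation):

```python
def to_one_hot(labels, num_classes):
    # Each integer label becomes a vector of zeros with a 1.0
    # at the index of its class, mirroring keras.utils.to_categorical.
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

# Label 3 maps to a vector whose 1.0 sits at index 3, and so on.
print(to_one_hot([3, 0], 5))
```

Each row of the resulting matrix sums to 1, which is exactly the target format the softmax output layer and the categorical cross-entropy loss below expect.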

Each image is 32 × 32 pixels. The figure shows a few examples.

The convolutional neural network will consist of convolutional layers and MaxPooling layers. We will also include dropout layers to avoid overfitting. At the output of the network, we add a fully connected (Dense) layer followed by a softmax activation. Here is the program code for creating the model structure.

convNetModel = Sequential()
convNetModel.add(Conv2D(32, (3, 3), padding='same', input_shape=x_tr.shape[1:]))
convNetModel.add(Activation('relu'))
convNetModel.add(Conv2D(32, (3, 3)))
convNetModel.add(Activation('relu'))
convNetModel.add(MaxPooling2D(pool_size=(2, 2)))
convNetModel.add(Dropout(0.25))
convNetModel.add(Conv2D(64, (3, 3), padding='same'))
convNetModel.add(Activation('relu'))
convNetModel.add(Conv2D(64, (3, 3)))
convNetModel.add(Activation('relu'))
convNetModel.add(MaxPooling2D(pool_size=(2, 2)))
convNetModel.add(Dropout(0.25))
convNetModel.add(Flatten())
convNetModel.add(Dense(512))
convNetModel.add(Activation('relu'))
convNetModel.add(Dropout(0.5))
convNetModel.add(Dense(num_classes))
convNetModel.add(Activation('softmax'))

In the above code, we use four convolutional layers and two fully connected layers. First, we add two convolutional layers with 32 filters and a window size of 3 × 3; next, we add two convolutional layers with 64 filters. After each pair of convolutional layers, a max pooling layer with a window size of 2 × 2 is added. Dropout layers with rates of 0.25 and 0.5 are also added in order to prevent overfitting. In the final lines, we add a dense layer, which classifies among the 10 classes using the softmax activation function.
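We can trace the spatial size of the feature maps through this stack by hand. With stride 1, `padding='same'` keeps the size, `padding='valid'` (the default) shrinks it by kernel − 1, and non-overlapping 2 × 2 pooling halves it (rounding down). A short sketch of that arithmetic:

```python
def conv_out(size, kernel=3, padding='valid'):
    # 'same' keeps the spatial size (stride 1); 'valid' shrinks it by kernel - 1
    return size if padding == 'same' else size - kernel + 1

def pool_out(size, pool=2):
    # Non-overlapping pooling divides each spatial dimension, rounding down
    return size // pool

s = 32                            # CIFAR10 input is 32 x 32
s = conv_out(s, padding='same')   # Conv2D(32, 3x3, same)  -> 32
s = conv_out(s)                   # Conv2D(32, 3x3, valid) -> 30
s = pool_out(s)                   # MaxPooling2D(2x2)      -> 15
s = conv_out(s, padding='same')   # Conv2D(64, 3x3, same)  -> 15
s = conv_out(s)                   # Conv2D(64, 3x3, valid) -> 13
s = pool_out(s)                   # MaxPooling2D(2x2)      -> 6
print(s, 64 * s * s)              # Flatten feeds 64*6*6 = 2304 values to Dense(512)
```

So the `Flatten` layer hands 2,304 values to the 512-unit dense layer; `convNetModel.summary()` reports the same shapes.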

Because this is a classification problem with 10 classes, we will use the categorical cross-entropy loss and the RMSProp optimizer to train the network.
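Categorical cross-entropy is just the negative log of the probability the network assigns to the true class. A minimal pure-Python sketch (for intuition; Keras computes this over batches with numerical safeguards):

```python
import math

def categorical_crossentropy(y_true, y_pred):
    # y_true is one-hot, y_pred is a probability distribution over classes;
    # only the term for the true class contributes: -log(p_true_class)
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

y_true = [0, 0, 1, 0]
confident = [0.05, 0.05, 0.85, 0.05]   # high probability on the true class
uncertain = [0.25, 0.25, 0.30, 0.20]   # nearly uniform prediction
print(categorical_crossentropy(y_true, confident))
print(categorical_crossentropy(y_true, uncertain))
```

The more probability mass the softmax puts on the correct class, the smaller the loss, which is what gradient descent pushes the network toward.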

# initiate RMSprop optimizer
opt = keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
# Let's train the model using RMSprop
convNetModel.compile(loss='categorical_crossentropy',optimizer=opt,  metrics=['accuracy'])
x_tr = x_tr.astype('float32')
x_test = x_test.astype('float32')
x_tr /= 255
x_test /= 255
convNetModel.fit(x_tr, y_tr, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test),   shuffle=True)
# Save model and weights
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
model_ph = os.path.join(save_dir, model_name)
convNetModel.save(model_ph)
print('Saved trained model at %s ' % model_ph)

# Score trained model.
scores = convNetModel.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
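The RMSProp optimizer used above keeps a running average of squared gradients and divides each update by its square root, so the effective step size adapts per parameter. A minimal pure-Python sketch of that update rule on a toy one-dimensional problem (a simplified illustration with assumed constants, not the Keras implementation):

```python
def rmsprop_step(w, grad, cache, lr=0.01, rho=0.9, eps=1e-7):
    # Running average of squared gradients, then a gradient step
    # scaled by its square root
    cache = rho * cache + (1 - rho) * grad ** 2
    w = w - lr * grad / (cache ** 0.5 + eps)
    return w, cache

# Minimize f(w) = w^2, whose gradient is 2*w
w, cache = 5.0, 0.0
for _ in range(2000):
    w, cache = rmsprop_step(w, 2 * w, cache)
print(w)  # ends up close to the minimum at 0
```

Because the update is normalized by the gradient's recent magnitude, steep and shallow directions get comparable step sizes, which is why RMSProp trains deep networks with a small fixed learning rate like the 0.0001 used above.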