Referring to the exercise on creating a CNN model for the fashion_mnist dataset.

1. I used EarlyStopping together with validation_split when fitting. For EarlyStopping, the monitor argument was specified as either 'val_loss' or 'val_accuracy'. However, it seems that EarlyStopping monitors 'loss' or 'accuracy' (which I assume are computed on the training data), not 'val_loss' or 'val_accuracy' on the validation data. Is this understanding correct?
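For what it's worth, when validation_split is supplied, fit() does report 'val_loss' and 'val_accuracy', and EarlyStopping can monitor them (it only warns that the metric is unavailable when no validation data exists). A minimal sketch on made-up data (the toy arrays and layer sizes are just for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

# Toy data, just to exercise the callback
x = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# With validation_split set, 'val_loss' is computed each epoch
# and is available for EarlyStopping to monitor
es = EarlyStopping(monitor='val_loss', patience=2,
                   restore_best_weights=True)
history = model.fit(x, y, epochs=5, validation_split=0.2,
                    callbacks=[es], verbose=0)
print(sorted(history.history.keys()))
```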

2. I was experimenting with the layers, filters and dense units to use. I noted that iterations of the same model definition can produce somewhat different results. When trying out model architectures/parameters, should I have specified a seed right at the beginning, for example np.random.seed(1)? Will specifying this also fix the random_state of any function that has a random_state parameter?
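A NumPy-only sketch of how seeding makes draws repeatable. Note the caveats: np.random.seed only pins NumPy's global generator, so TensorFlow's own randomness needs tf.random.set_seed as well, and a function with an explicit random_state argument manages its own state unless it defaults to the global NumPy generator:

```python
import numpy as np

# Seeding NumPy's global generator makes subsequent draws repeatable
np.random.seed(1)
a = np.random.rand(3)

# Re-seeding with the same value reproduces the same draw
np.random.seed(1)
b = np.random.rand(3)

print(np.array_equal(a, b))
```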

3. The CNN illustration given has this:

misclassified_idx = np.where(p_test != y_test)[0]

What is the meaning of the [0] at the end?
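For context, np.where called with just a condition returns a tuple of index arrays (one per dimension of the input); for a 1-D array the [0] pulls the actual array of indices out of that 1-tuple. A small sketch with made-up predictions:

```python
import numpy as np

p_test = np.array([0, 1, 2, 1])  # toy predictions
y_test = np.array([0, 2, 2, 1])  # toy true labels

# np.where(condition) returns a tuple: one index array per dimension
result = np.where(p_test != y_test)
print(result)

# [0] extracts the index array itself from that 1-tuple
misclassified_idx = np.where(p_test != y_test)[0]
print(misclassified_idx)
```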

4. I noted the CNN illustration used 3 convolutional layers with an increasing number of filters. Is this "increasing number of filters" through the Conv2D layers a best practice, or obtained by experimentation? What about Dense layers? The limited illustrations I came across seem to go narrower (a decreasing number of units/nodes) through the Dense layers. Any comments here?
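A sketch of that common widening-conv / narrowing-dense pattern, for concreteness. The specific filter and unit counts here (32/64/128 and 128/10) are illustrative choices, not prescribed values:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Sequential

# Conv filters typically widen (32 -> 64 -> 128) as pooling shrinks the
# spatial size, then Dense layers narrow toward the output classes
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), activation='relu'),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
print(model.output_shape)
```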

Lastly, just a comment. I used 1 Conv2D layer with 64 filters, MaxPooling and BatchNormalization, then 1 Dense layer with 128 units, with Dropout. This gives a val_accuracy of 90-91%. If someone has a good model (beyond the given illustration), I'd appreciate your sharing it. Thanks, ym.

1) Why is there no argument attached to the lambda (before the ':')?

2) Where does the function 'print_results()' come from? It's not a built-in function the way that print() is. Correct me if I'm wrong.
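On the first question: a lambda with nothing before the ':' is simply a function of zero arguments. It closes over the surrounding variables and defers evaluating its body until it is called, which is why optimizers accept one as a loss. A minimal sketch:

```python
# A lambda with no parameters before ':' takes zero arguments;
# it just defers evaluation of its body until it is called
params = 3

loss = lambda: params * 2   # no arguments; closes over `params`
print(loss())
```

As for print_results(), it is presumably a helper defined elsewhere in the exercise's workspace rather than a Python built-in.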

exercise here: https://campus.datacamp.com/courses/introduction-to-tensorflow-in-python/63343?ex=10

Why is it not:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Define the linear regression model
def linear_regression(params, feature1 = size_log, feature2 = bedrooms):
    return params[0] + feature1*params[1] + feature2*params[2]

# Define the loss function
def loss_function(params, targets = price_log, feature1 = size_log, feature2 = bedrooms):
    # Set the predicted values
    predictions = linear_regression(params, feature1, feature2)
    # Use the mean absolute error loss
    return keras.losses.mae(targets, predictions)

# Define the optimize operation
opt = keras.optimizers.Adam()

# Perform minimization and print trainable variables
for j in range(10):
    opt.minimize(lambda: loss_function(params), var_list=[params])
    print_results(params)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
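For reference, a self-contained sketch of the same training pattern on made-up 1-D data (the variable names and data here are placeholders, not the exercise's). The explicit GradientTape loop below does what opt.minimize(lambda: loss_function(params), var_list=[params]) does in one call: re-evaluate the zero-argument loss, compute gradients for the variables in var_list, and apply them:

```python
import tensorflow as tf

# Made-up data: y = 1 + 2*x
x = tf.constant([0.0, 1.0, 2.0, 3.0])
y = tf.constant([1.0, 3.0, 5.0, 7.0])

# params[0] is the intercept, params[1] the slope
params = tf.Variable([0.0, 0.0])

def loss_function(params):
    predictions = params[0] + x * params[1]
    return tf.reduce_mean(tf.abs(y - predictions))  # mean absolute error

opt = tf.keras.optimizers.Adam(learning_rate=0.1)

# Equivalent of opt.minimize(lambda: loss_function(params),
# var_list=[params]), spelled out step by step
for _ in range(200):
    with tf.GradientTape() as tape:
        loss = loss_function(params)
    grads = tape.gradient(loss, [params])
    opt.apply_gradients(zip(grads, [params]))

print(float(loss_function(params)))
```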

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-35-73e444a33ce1> in <module>()
      5     Dense(128, activation='relu'),
      6     Dropout(0.2),
----> 7     Dense(10, activation = 'softmax')
      8
      9 ])

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in __hash__(self)
    724
    725     *N.B.* Before invoking `Tensor.eval()`, its graph must have been
--> 726     launched in a session, and either a default session must be
    727     available, or `session` must be specified explicitly.
    728
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Install TensorFlow
!pip install -q tensorflow-gpu==2.0.0-beta1
import tensorflow as tf
print(tf.__version__)

# Additional imports
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.models import Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

####excluded other code that I assumed did not contribute to the error###

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
modelCNN = Sequential([
    Conv2D(32, activation='relu', kernel_size=(3, 3),
           input_shape=(reshaped_x_train.shape[1],
                        reshaped_x_train.shape[2],
                        reshaped_x_train.shape[3])),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax')
])
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
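One thing worth checking in code like the above: input_shape should be the per-sample dimensions (everything after the batch axis) as plain integers, and with a NumPy array that is exactly what indexing .shape gives you. A NumPy-only sketch (the array is a small made-up stand-in for the reshaped training data):

```python
import numpy as np

# Small stand-in shaped like reshaped fashion_mnist data:
# (num_samples, height, width, channels)
reshaped_x_train = np.zeros((100, 28, 28, 1))

# .shape is a plain tuple; index it for individual dimensions
print(reshaped_x_train.shape[1])

# Everything after the batch axis, i.e. what input_shape expects
print(reshaped_x_train.shape[1:])
```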