
# values attribute and reshape method


Hi,

I'd like to check a couple of concepts.

1. Is it correct to say that converting X and y to numpy arrays using the `values` attribute is only required when we are using a single feature (X)? And that we then reshape the array into a column with the `reshape` method?

Another way to put it: in what situations are `values` and `reshape` actually needed for ML?

2. I noted that the Supervised Learning course reshaped y, the target, as well. Is this necessary? It is not done in the BiasVariance exercise notebook.

thanks, ym.


Hey @yongyuetmei!

1. Is it correct to say that converting X and y to numpy arrays using the `values` attribute is only required when we are using a single feature (X)? And that we then reshape the array into a column with the `reshape` method?

Another way to put it: in what situations are `values` and `reshape` actually needed for ML?

Actually, conversion to numpy arrays is not usually required; scikit-learn estimators accept pandas DataFrames and Series directly. Reshaping with numpy is mainly useful for getting the right number of columns into X, for example when a single feature needs to become a 2-D column, when feature engineering, or when applying new data to a pre-trained model.
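To make the "right number of columns" point concrete, here is a minimal sketch (assuming numpy and scikit-learn are installed, with toy data): scikit-learn's `fit` expects X to be 2-D with shape `(n_samples, n_features)`, so a single feature extracted as a 1-D array needs `reshape(-1, 1)`, while y can stay 1-D.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X_1d = np.array([1.0, 2.0, 3.0, 4.0])   # shape (4,) -- one feature, but 1-D
y = np.array([2.0, 4.0, 6.0, 8.0])      # target can stay 1-D

# LinearRegression().fit(X_1d, y) would raise a ValueError about a 1-D array.
X_2d = X_1d.reshape(-1, 1)              # shape (4, 1) -- what .fit() expects
model = LinearRegression().fit(X_2d, y)

# predict() also expects 2-D input, one row per sample:
prediction = model.predict(np.array([[5.0]]))
```

With more than one feature, slicing a DataFrame with a list of columns already gives you a 2-D structure, which is why the reshape only tends to appear in single-feature examples.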

2. I noted that the Supervised Learning course reshaped y, the target, as well. Is this necessary? It is not done in the BiasVariance exercise notebook.

Do you have a link or a screenshot of the code you are talking about? That would make it easier for me to assist. In general, it's not necessary to reshape y unless the data or the estimator somehow requires it.
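As a quick illustration of why reshaping y is usually optional (a minimal sketch with made-up data, assuming scikit-learn): `LinearRegression` happily fits either a 1-D target or a `(n, 1)` column vector, and the learned coefficients are the same; only the shapes of the outputs differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0]])
y_1d = np.array([1.0, 2.0, 3.0])   # the usual 1-D target
y_2d = y_1d.reshape(-1, 1)         # a (3, 1) column vector also works here

m1 = LinearRegression().fit(X, y_1d)
m2 = LinearRegression().fit(X, y_2d)

# Same fitted line, but coef_ is shape (1,) for m1 and (1, 1) for m2.
```

Note that some estimators (e.g. classifiers) warn about a column-vector y and ask for a 1-D array, so leaving y un-reshaped is the safer default.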


Hi, re: "Do you have a link or screenshot of the code you are talking about?"

Referring to the Ex_BiasVariance_start.ipynb file, quoted below:

"To help us illustrate our concepts, let's filter the dataset down to make things simpler. Here, we select only 2 columns and keep only 10 samples:"

```python
features = ['OverallQual', 'SalePrice']
data = input_data[features].sample(n=10, random_state=42)
data.columns = ['X', 'Y']  # rename the columns to make it easier to reference

X = data.X.values.reshape(-1, 1)
y = data.Y.values

# Here we define 2 models: Linear and a higher order polynomial
model1 = PolynomialRegression(1).fit(X, y)
model20 = PolynomialRegression(20).fit(X, y)
```

Referring to Supervised Learning in DataCamp, ch2, Regression, "Predicting house value from a single feature":

```python
X = boston.drop('MEDV', axis=1).values
X_rooms = X[:, 5]            # selecting 1 feature: the number of rooms
y = boston['MEDV'].values    # target: the price of the house

y = y.reshape(-1, 1)
X_rooms = X_rooms.reshape(-1, 1)
```

thanks, ym.
