HAL8999 7/100
I was sick yesterday but did spend some time looking over some “cheat sheets” that people had put together for various machine learning topics. Some were good, some were just stupid (I’m looking at you, Machine Learning in Emoji). I also went through a very simple k-nearest neighbors classifier built on the iris data set.
```python
from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# load the data and split it into training and test sets
iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=13)

# scale the data
scaler = preprocessing.StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# train a k-nearest neighbors model
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# test the model
y_pred = knn.predict(X_test)
print(accuracy_score(y_test, y_pred))
```
```
0.8157894736842105
```
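One thing to note: the code above keeps only the first two columns of the data (sepal length and sepal width), which is part of why the accuracy tops out where it does. A quick variation worth trying is running the same pipeline on all four features; this is just a sketch, I haven’t logged the resulting score here, but the petal measurements are known to separate the species much better.

```python
from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# same pipeline as above, but keep all four iris features instead of the first two
iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=13)

scaler = preprocessing.StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

knn = neighbors.KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# expect this to beat the two-feature score, though I haven't recorded a number here
print(accuracy_score(y_test, knn.predict(X_test)))
```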
*[Image: Model Selection]*
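Picking n_neighbors = 5 above was arbitrary, which is really what the model selection material is about. Here is a minimal sketch of doing it with cross-validation via scikit-learn’s GridSearchCV (my own quick example on the same two-feature iris split, not something lifted from the cheat sheets):

```python
from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split, GridSearchCV

# same scaled two-feature iris split as above
iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=13)

scaler = preprocessing.StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# cross-validated grid search over the number of neighbors
param_grid = {"n_neighbors": [1, 3, 5, 7, 9, 11]}
search = GridSearchCV(neighbors.KNeighborsClassifier(), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)           # the k that did best in cross-validation
print(search.score(X_test, y_test))  # held-out accuracy with that k
```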
|