Make sure you check the syllabus for the due date.

A) Run an SVM from a package or library of your choice on the Spambase
dataset. Try several kernels, including the polynomial and the RBF ones.
Report the results. Use one of these packages: SVMlight, SGDSVM, OSU SVM,
LIBSVM, MATLAB svmtrain, or other software.
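One possible shape for this experiment, sketched with scikit-learn (an assumption; any of the listed packages works) and synthetic data standing in for Spambase — swap in the real 57-feature dataset:

```python
# Sketch: compare several SVM kernels on one dataset.
# Assumes scikit-learn; make_classification is a stand-in for Spambase.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=57, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_tr)            # SVMs are scale-sensitive
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

results = {}
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, degree=3, C=1.0).fit(X_tr, y_tr)
    results[kernel] = clf.score(X_te, y_te)    # test accuracy to report
print(results)
```

The `degree` and `C` values here are placeholders; part of the exercise is tuning them per kernel.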

B) Run an SVM from a package or library of your choice on the
Digits dataset (training data and labels; testing data and labels).
Use your extracted HAAR features from HW5. If you choose an SVM
package that does not provide a multi-class implementation, you
should write a wrapper that runs the code for each class versus the
others, separately.

If you did not extract HAAR features for HW5, you can use our version of the MNIST_HAAR_dataset.

Since the data has 10 labels (multiclass) while your SVM is a binary classifier, you will have to implement a wrapper on top of the SVM. You can choose one of the following:

- One-vs-the-rest approach: train 10 SVM classifiers (one per class)

- Run ECOC on top of SVMs (similar to the HW5 setup, only with SVMs
instead of boosting)

- We suggest a voting scheme: train all possible one-vs-one SVM
classifiers, for a total of (10 choose 2) = 45 models.
Each of these trains/tests only on data labeled with the two
particular classes it is made for: for example, the 7-vs-9 SVM
only trains/tests on datapoints labeled 7 or 9. To obtain a
multiclass classifier: for a given test datapoint, first run
all 45 models and get their scores; then use a voting strategy
to decide a prediction or a ranking among all 10 classes. One
such strategy is to predict the class with the most wins and,
if there is a tie for the most wins, to use the direct one-vs-one
"match" between the tied classes to break the tie.
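The suggested voting scheme could be sketched as follows (assumes scikit-learn; random data stands in for your HAAR features, and the simple tie-break only consults the direct match between the first two tied classes):

```python
# Sketch of one-vs-one voting: train all C(10,2)=45 pairwise SVMs,
# predict by majority vote, break ties with the direct pairwise match.
# Random data is a stand-in for the HAAR features from HW5.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # stand-in features
y = rng.integers(0, 10, size=500)              # stand-in digit labels

models = {}
for a, b in combinations(range(10), 2):        # 45 class pairs
    mask = (y == a) | (y == b)                 # train only on those two classes
    models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])

def predict_one(z):
    wins = np.zeros(10, dtype=int)
    pair_pred = {}
    for (a, b), clf in models.items():
        p = int(clf.predict(z.reshape(1, -1))[0])
        pair_pred[(a, b)] = p
        wins[p] += 1
    tied = np.flatnonzero(wins == wins.max())
    if len(tied) == 1:
        return int(tied[0])
    a, b = sorted(int(t) for t in tied[:2])    # tie-break: direct a-vs-b match
    return pair_pred[(a, b)]

print(predict_one(X[0]))
```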

Explain why $0 \le \alpha_i \le C/m$ is a constraint in the dual
optimization with slack variables. (HINT: read Chris Burges's
tutorial first.) Distinguish three cases, and explain them in terms
of the classification and constraints: a) $\alpha_i = 0$;
b) $0 < \alpha_i < C/m$; c) $\alpha_i = C/m$.

This has been discussed in class and in the SVM notes; a detailed, rigorous explanation is expected.
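As a starting point, the three cases correspond to the KKT conditions of the soft-margin dual (a sketch, using this assignment's $C/m$ normalization, decision function $f(x) = w \cdot x + b$, and slacks $\xi_i$; verify against the Burges tutorial):

```latex
\begin{align*}
\alpha_i = 0 &\;\Rightarrow\; y_i f(x_i) \ge 1
  && \text{correct side, outside the margin; not a support vector}\\
0 < \alpha_i < C/m &\;\Rightarrow\; y_i f(x_i) = 1,\ \xi_i = 0
  && \text{exactly on the margin; support vector}\\
\alpha_i = C/m &\;\Rightarrow\; y_i f(x_i) \le 1,\ \xi_i \ge 0
  && \text{inside the margin or misclassified; bound support vector}
\end{align*}
```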

A optional) Fix values of k (the number of closest neighbors used), i.e. k=1, k=3, and k=7.
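A minimal sketch of these runs, assuming scikit-learn, with synthetic data standing in for Spambase/Digits:

```python
# Sketch: k-NN accuracy for the three fixed k values.
# Assumes scikit-learn; synthetic data is a stand-in for the real datasets.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=57, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accs = {}
for k in (1, 3, 7):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    accs[k] = clf.score(X_te, y_te)
    print(f"k={k}: accuracy={accs[k]:.3f}")
```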

B optional) Fixed window. Instead of using the closest k neighbors, now we are going to use all neighbors within a window. Note that a test point z may then have a varying number of neighbors, possibly none.

Fix an appropriate window size around the test datapoint by defining a window "radius" R, i.e., the maximum distance allowed. Predict the label as the majority or average of the training neighbors within the window.

- Run on Spambase dataset with Euclidean distance.
- Run on Digits dataset with cosine distance.
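One way to sketch the fixed-window classifier (assumes NumPy/SciPy; synthetic binary data stands in for the real sets, and falling back to a default label on an empty window is a design choice you may replace):

```python
# Sketch: vote among all training points within distance R of the test point.
# Assumes NumPy/SciPy; synthetic data is a stand-in for Spambase/Digits.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(200, 10))
y_tr = rng.integers(0, 2, size=200)

def window_predict(X_te, R, metric="euclidean", default=0):
    # metric="cosine" for the Digits run
    D = cdist(X_te, X_tr, metric=metric)
    preds = []
    for row in D:
        labels = y_tr[row <= R]                # neighbors inside the window
        # empty window: fall back to a default label (a design choice)
        preds.append(np.bincount(labels).argmax() if len(labels) else default)
    return np.array(preds)

print(window_predict(X_tr[:5], R=4.0))
```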

C required) Kernel density estimation. Separately for each label class, estimate P(z|C) using the density estimator given by the kernel K restricted to the training points from class C. There is no need for physical windows, as the kernel weights each point by similarity:

$$P(z|C) = \frac{1}{m_C} \sum_{x_i \in \text{class } C} K(z, x_i)$$

where $m_C$ is the number of training points in class C.

- Run on Spambase dataset with Gaussian kernel.
- (optional) Run on Digits dataset with Gaussian kernel.
- (optional) Run on Digits dataset with polynomial kernel.

Consider the following 6 points in 2D, for two classes:

class 0: (1,1) (2,2) (2,0)

class 1: (0,0) (1,0) (0,1)

a) Plot these 6 points, construct the optimal hyperplane by inspection and intuition (give W, b), and calculate the margin.

b) Which points are support vectors?

c) [Extra Credit] Construct the hyperplane by solving the dual
optimization problem using the Lagrangian. Compare with part (a).
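After working parts (a)-(c) by hand, one way to sanity-check your answer is to fit a (nearly) hard-margin linear SVM on the six points and read off w, b, the margin, and the support vectors (assumes scikit-learn; a very large C approximates the hard-margin solution):

```python
# Sanity check for the 6-point problem: fit a linear SVM with large C
# and compare w, b, margin, and support vectors with your hand solution.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 2], [2, 0],          # class 0
              [0, 0], [1, 0], [0, 1]], float)  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("margin =", 2 / np.linalg.norm(w))       # full margin width
print("support vectors:\n", clf.support_vectors_)
```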

Extra points will be given for implementations of both PB2 and PB3 that work with kernels (for example, the Gaussian kernel).

Run your SMO-SVM on other datasets.