CPSC 633 - Spring 2008
Project 3
Due: Tues Apr 29, 2008

1) Implement a k-Nearest Neighbor (k-NN) method. (Don't forget to normalize your data; try different values of k, or experiment with distance-weighting.)

2) Implement a Naive Bayes (NB) classifier. (You might want to compare different ways of treating continuous attributes, such as discretization versus modeling them with Gaussian distributions.)

3) Implement a method for feature selection, weighting, or construction (for example, FRINGE, RELIEF, a wrapper method, mutual information...). Show that it improves the performance of one of your learning algorithms on at least one database.

Test your algorithms on some representative databases from the UCI Repository. Compare your results with your decision tree and neural network. Use proper evaluation methodology (cross-validation, report standard errors) and statistical tests (e.g., paired t-tests).

What to turn in: a written report that describes your implementation, the algorithmic variations you tested, the testing methodology (e.g., cross-validation), and the results on several (at least 3) different databases. Be sure to interpret your results. Are the accuracies good? Why or why not?
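For part 1, one possible shape of a k-NN implementation is sketched below (Python). The min-max normalization, the inverse-distance weighting scheme, and all function names are my own choices, not requirements of the assignment:

```python
import math
from collections import Counter

def normalize(X):
    """Min-max scale each attribute to [0, 1] so no attribute dominates the distance."""
    cols = list(zip(*X))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in X]

def knn_predict(X_train, y_train, x, k=3, weighted=False):
    """Classify x by a majority (optionally inverse-distance-weighted) vote
    of its k nearest training instances under Euclidean distance."""
    dists = sorted((math.dist(row, x), label)
                   for row, label in zip(X_train, y_train))
    votes = Counter()
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d + 1e-9) if weighted else 1.0
    return votes.most_common(1)[0][0]
```

Distance-weighting (the `weighted=True` variant mentioned in the hint) lets more distant neighbors count less, which can make the choice of k less critical.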
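For part 2, handling continuous attributes with per-class Gaussians could look roughly like this sketch (Python; the variance floor and the function names are my own assumptions):

```python
import math
from collections import defaultdict

def gnb_fit(X, y):
    """Estimate class priors and per-attribute Gaussian (mean, variance) per class."""
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    model = {}
    for label, rows in by_class.items():
        cols = list(zip(*rows))
        means = [sum(c) / len(c) for c in cols]
        # a small variance floor avoids division by zero on constant attributes
        vars_ = [max(sum((v - m) ** 2 for v in c) / len(c), 1e-9)
                 for c, m in zip(cols, means)]
        model[label] = (len(rows) / len(y), means, vars_)
    return model

def gnb_predict(model, x):
    """Pick the class maximizing log prior plus summed log Gaussian likelihoods."""
    best, best_lp = None, -math.inf
    for label, (prior, means, vars_) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The discretization alternative mentioned in the hint would instead bin each continuous attribute and estimate multinomial probabilities per bin; comparing the two on the same database is a natural "algorithmic variation" for the report.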
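For part 3, RELIEF (one of the suggested options) can be sketched in its basic binary-class form as below (Python). This assumes inputs already normalized to [0, 1] and iterates over every instance rather than sampling; those simplifications are mine:

```python
import math

def relief(X, y):
    """Basic RELIEF feature weighting for a binary-class problem.
    Each instance's nearest hit (same class) decreases an attribute's weight
    by their difference on that attribute; the nearest miss (other class)
    increases it. Relevant attributes end up with large positive weights."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for i in range(n):
        hit = miss = None
        hit_d = miss_d = math.inf
        for j in range(n):
            if j == i:
                continue
            dist = math.dist(X[i], X[j])
            if y[j] == y[i] and dist < hit_d:
                hit, hit_d = X[j], dist
            elif y[j] != y[i] and dist < miss_d:
                miss, miss_d = X[j], dist
        for a in range(d):
            w[a] += abs(X[i][a] - miss[a]) - abs(X[i][a] - hit[a])
    return [v / n for v in w]
```

The resulting weights could either select a feature subset (keep attributes above a threshold) or directly scale the distance metric in k-NN, which is one way to show the required improvement on a database.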
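The evaluation methodology the assignment asks for (cross-validation with standard errors, plus a paired t-test between two learners) might be organized along these lines (Python; the fold-assignment scheme and function signatures are illustrative assumptions):

```python
import math
import random

def kfold_scores(X, y, predict_fn, k=10, seed=0):
    """Return per-fold accuracies; predict_fn(X_train, y_train, x) classifies one instance."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        held_out = set(fold)
        X_tr = [X[i] for i in idx if i not in held_out]
        y_tr = [y[i] for i in idx if i not in held_out]
        correct = sum(predict_fn(X_tr, y_tr, X[i]) == y[i] for i in fold)
        scores.append(correct / len(fold))
    return scores

def paired_t(scores_a, scores_b):
    """Paired t statistic over per-fold accuracy differences (df = k - 1).
    Compare the result against a t table to decide significance."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    k = len(diffs)
    mean = sum(diffs) / k
    var = sum((d - mean) ** 2 for d in diffs) / (k - 1)
    se = math.sqrt(var / k)  # standard error of the mean difference
    return mean / se if se > 0 else 0.0
```

Because both learners are scored on identical folds, the per-fold differences are paired, which is what justifies the paired (rather than unpaired) t-test; the same standard-error computation gives the error bars to report for each algorithm.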