Comp 135: Introduction to Machine Learning
Department of Computer Science
Tufts University
Fall 2016

Course Web Page (redirects to current page): http://www.cs.tufts.edu/comp/135/

Announcement(s):
  • (12/7) Review session / office hours: Khardon will hold office hours / a review session on Friday 12/9, 6-7pm, and Sunday 12/11, 12-1pm.
  • (12/3) Information for the exam posted.
  • Previous announcements are here

What is this course about?

Machine learning is the science of collecting and analyzing data and turning it into predictions, encapsulated knowledge, or actions. There are many ways and scenarios by which data can be obtained, many different models and algorithms for data analysis, and many potential applications. In recent years machine learning has attracted attention due to successful applications in science and commerce, leading to widespread use.
The course gives a broad introduction to machine learning aimed at upper-level undergraduates and beginning graduate students. Some mathematical aptitude is required, but generally speaking we focus on baseline algorithms, practical aspects, and breadth, leaving more sophisticated aspects and detailed analysis to more advanced courses: Statistical Pattern Recognition (Fall 2016), Machine Learning Seminar (Fall 2016), Computational Learning Theory (Fall 2015), and Learning, Planning and Acting in Complex Environments (Fall 2014).

Syllabus:

An overview of methods whereby computers can learn from data or experience and make decisions accordingly. Topics include supervised learning, unsupervised learning, reinforcement learning, and knowledge extraction from large databases with applications to science, engineering, and medicine.

Prerequisites:

Comp 15 and COMP/MATH 61 or consent of instructor. Comp 160 is highly recommended. You will also need a minimal amount of calculus.

Class Times:

(K+ Block) MW 4:30-5:45, Halligan 111A

Instructor:

Roni Khardon
Office Hours: Monday 6-7, Halligan 230.
Email: roni@cs.tufts.edu

Teaching Assistant:

Daniel Kasenberg, Email: Daniel.Kasenberg@tufts.edu
Office Hours: Tuesday 10-11, Wednesday 6-7, Thursday 1:30-2:30, Friday 2-3 (all in Halligan 121).

Course Work and Marking

The course grade will be determined by a combination of:
Written homework assignments (25%):
These assignments exercise and reinforce class material.
Experimental/Programming projects (25%):
These assignments exercise and reinforce class material. Projects will include both programming assignments and the use of existing machine learning software.
Rules for late submissions:
All work must be turned in on the date specified. Unless there is a last-minute emergency, please notify Roni Khardon of special circumstances at least two days in advance. If you haven't finished an assignment by the due date, please turn in the work you have done (even partial work) on time, and it will be evaluated for partial credit.
Two class exams (each worth 25%):
Exam1: Wednesday 10/26, and Exam2: Monday 12/12.

Collaboration:

On homework assignments and projects: You may discuss the problems and general ideas about their solutions with other students, and similarly you may consult other textbooks or the web. However, you must work out the details on your own and code/write out the solution on your own. Every such collaboration (either getting or giving help) and every use of text or electronic sources must be clearly cited and acknowledged in the submitted homework.
On exams: no collaboration is allowed.
Failure to follow these guidelines may result in disciplinary action for all parties involved. Any questions? For this and other issues concerning academic integrity, please consult the detailed guidelines and policy available from the Office of the Dean of Student Affairs.

Tentative List of Topics

[We may not cover all sub-topics]

Textbooks and Material Covered

No single text covers all the material for this course at the right level. We have the following texts on reserve in the library, and I will try to use [M] first and then [F] as the preferred sources for reading when possible. Other material will be selected from research and survey articles or other notes. Detailed reading assignments and links to material will be posted.

Software

Reading, References, and Assignments

Lecture | Topic | Reading/Assignments/Notes | Due Date
9/7 Introduction to Machine Learning Read the introductory chapter of [M].
See also lecture slides.
Alternate reading: Read the introductory chapter of one of [WF], [F] or [A].
 
  Supervised Learning Basics:    
9/12 Instance based learning [M] Chapter 8.
See also lecture slides.
See also Andrew Moore's tutorial on kd-trees
See also original paper describing the Relief Method
Alternate reading: [RN] 18.8 or [DHS] 4.4-4.6.
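For orientation, a minimal Python sketch of k-nearest-neighbor classification, the basic instance-based method (Euclidean distance, majority vote); the toy data, function name, and choice of k are illustrative, not taken from the readings:

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x_query, k=3):
        # Distances from the query point to every stored training example.
        dists = np.linalg.norm(X_train - x_query, axis=1)
        # Labels of the k closest examples, combined by majority vote.
        nearest = np.argsort(dists)[:k]
        return Counter(y_train[nearest]).most_common(1)[0][0]

    X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y_train = np.array([0, 0, 1, 1])
    print(knn_predict(X_train, y_train, np.array([0.95, 0.9])))   # -> 1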
 
9/14; 9/19 Decision Trees [M] Chapter 3.
See also lecture slides.
Alternate reading: [RN] 18.1-4 or [F] Chapter 5.
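As a small companion to the reading, a sketch of the entropy and information-gain computations that ID3-style decision tree learners use to pick a split; the tiny dataset is made up for illustration:

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(labels, feature_values):
        # Gain = H(labels) minus the weighted entropy of each value's subset.
        gain = entropy(labels)
        n = len(labels)
        for v in set(feature_values):
            subset = [y for y, x in zip(labels, feature_values) if x == v]
            gain -= (len(subset) / n) * entropy(subset)
        return gain

    labels  = ['+', '+', '-', '-', '+', '-']
    outlook = ['sunny', 'sunny', 'rain', 'rain', 'overcast', 'sunny']
    print(round(information_gain(labels, outlook), 3))   # -> 0.541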
 
  Optional Reading (decision tree learning and boosting): T. Dietterich, M. Kearns, and Y. Mansour, Applying the Weak Learning Framework to Understand and Improve C4.5. International Conference on Machine Learning, 1996.
9/19 Probability Basics Lecture provides a basic and brief introduction to probability theory and working with random variables.
Please review relevant material from your discrete math, algorithms, or probability and statistics course.
 
  Written Assignment 1: Assignment 1 (due 9/28)
  Empirical/Programming Assignment 1: Project 1 and corresponding Data (due 9/28)
9/21, 9/26 Maximum Likelihood Estimation and the Naive Bayes Algorithm [M] 6.1-6.2, and 6.9-6.10.
See also new book chapter from [M]
See also lecture slides.
Alternative reading: [DHS] Section 2.9; [F] 9.2; [WF] 4.2.
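A minimal sketch of Naive Bayes for binary features, with maximum-likelihood class priors and Laplace-smoothed conditional probabilities; the data and function names are illustrative only:

    import numpy as np

    def train_nb(X, y):
        classes = np.unique(y)
        priors = {c: np.mean(y == c) for c in classes}                      # MLE of P(c)
        cond = {c: (X[y == c].sum(axis=0) + 1.0) / (np.sum(y == c) + 2.0)   # P(x_j = 1 | c), Laplace smoothed
                for c in classes}
        return priors, cond

    def predict_nb(priors, cond, x):
        # Pick the class with the largest log posterior under the independence assumption.
        def log_post(c):
            p = cond[c]
            return np.log(priors[c]) + np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        return max(priors, key=log_post)

    X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])
    y = np.array([1, 1, 0, 0])
    priors, cond = train_nb(X, y)
    print(predict_nb(priors, cond, np.array([1, 0, 1])))   # -> 1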
 
  Written Assignment 2: Assignment 2 (due 10/5)
  Empirical/Programming Assignment 2: Project 2 and corresponding Data (due 10/19)
9/28 Evaluating Machine Learning Outcomes [M] Ch 5.
See also lecture slides.
Alternative reading: [F] Ch 12
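To make the evaluation mechanics concrete, a sketch of k-fold cross-validation (train on k-1 folds, test on the held-out fold, average the accuracies); it assumes scikit-learn is available, and the dataset and model choice are illustrative:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import KFold
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    accuracies = []
    for train_idx, test_idx in kf.split(X):
        # Fit on the training folds, score on the held-out fold.
        model = KNeighborsClassifier(n_neighbors=3).fit(X[train_idx], y[train_idx])
        accuracies.append(model.score(X[test_idx], y[test_idx]))
    print("mean accuracy: %.3f (+/- %.3f)" % (np.mean(accuracies), np.std(accuracies)))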
 
  Additional Optional Reading: Foster Provost, Tom Fawcett, and Ron Kohavi, The Case Against Accuracy Estimation for Comparing Induction Algorithms. Proc. 15th International Conf. on Machine Learning, 1998.
T. Dietterich, Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Computation 10(7), 1998.
Stephen Salzberg, On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach. Data Mining and Knowledge Discovery, 1997.
 
10/3 Class Canceled      
10/5 Features (selection, transformation, discretization) None of our sources is a perfect match for this lecture.
Relevant reading includes some portions of [F] Chapter 10 and [A] Chapter 6.
See also lecture slides.
 
  Additional Optional Reading: Ron Kohavi and George H. John, Wrappers for Feature Subset Selection. Artificial Intelligence, 1996. (Read up to and including section 3.2.)
James Dougherty, Ron Kohavi, and Mehran Sahami, Supervised and Unsupervised Discretization of Continuous Features. International Conference on Machine Learning, 1995.
 
  Written Assignment 3: Assignment 3 (due 10/19)
10/10 University Holiday No classes      
10/12 Class Canceled      
10/17; 10/19 Linear Threshold Units [M] 4.1-4.4
See also new book chapter from [M]
See also lecture slides.
Alternative reading: [DHS] 5.5.
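A minimal sketch of the perceptron training rule for a linear threshold unit: cycle through the data and, on each mistake, add (or subtract) the example to the weight vector; the toy linearly separable data is illustrative:

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=1.0):
        # Append a constant feature so the bias is folded into the weight vector.
        Xb = np.hstack([X, np.ones((len(X), 1))])
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):            # labels are +1 / -1
                if yi * np.dot(w, xi) <= 0:      # misclassified (or on the boundary)
                    w += lr * yi * xi            # perceptron update
        return w

    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w = train_perceptron(X, y)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    print(w, np.sign(Xb @ w))                    # predictions on the training data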
 
  Written Assignment 4: Assignment 4 (due 11/2)
Wednesday 10/26 Exam I Material for the exam includes everything covered up to 10/19 but excluding clustering. Everything discussed in class is included for the exam. The reading assignments are supporting materials that should be useful in review and study but I will not hold you responsible for details in the reading that were not discussed in class.
The Exam is closed book; no notes or books are allowed; no calculators or other machines of any sort are allowed.
The exam will aim to test whether you have grasped the main concepts, problems, ideas, and algorithms that we have covered, including the intuition behind these. Generally speaking, the exam will not test your technical wizardry with overly long equations or calculations, but, on the other hand, it is sure to include some shorter ones.
 
10/24 Clustering [F] 8.4-5
See also lecture slides.
Alternative reading: [DHS] 10.6-7,10.9.
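A minimal sketch of Lloyd's algorithm for k-means clustering, alternating between assigning points to the nearest centroid and recomputing centroids as cluster means; the synthetic data and k are illustrative:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assignment step: nearest centroid for every point.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            # Update step: recompute each centroid (keep the old one if its cluster is empty).
            new = np.array([X[assign == j].mean(axis=0) if np.any(assign == j) else centroids[j]
                            for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return centroids, assign

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(5.0, 1.0, (20, 2))])
    centroids, assign = kmeans(X, k=2)
    print(centroids)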
 
10/31 Unsupervised and Semi-Supervised Learning with EM [M] Section 6.12
Text Classification from Labeled and Unlabeled Documents Using EM, Nigam et al., Machine Learning, Volume 39, pages 103-134, 2000. (The entire paper is relevant; you can skip section 5.3.)
See also lecture slides.
Alternative reading: [A] 7.4; [F] 9.4; [DHS] 3.9
 
11/2 Association Rules [F] 6.3
Rakesh Agrawal, Tomasz Imielinski, and Arun Swami, Mining Association Rules between Sets of Items in Large Databases. Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, 1993.
See also lecture slides.
Alternative reading: [WF] 4.5
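To make the basic quantities concrete, a sketch of computing the support and confidence of an association rule over a list of transactions (market baskets); the transactions and the rule are made up for illustration:

    def support(itemset, transactions):
        # Fraction of transactions that contain every item in the itemset.
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(antecedent, consequent, transactions):
        # Estimate of P(consequent | antecedent) from the transaction counts.
        return support(antecedent | consequent, transactions) / support(antecedent, transactions)

    transactions = [
        {'bread', 'milk'},
        {'bread', 'diapers', 'beer'},
        {'milk', 'diapers', 'beer'},
        {'bread', 'milk', 'diapers', 'beer'},
        {'bread', 'milk', 'diapers'},
    ]
    lhs, rhs = {'diapers'}, {'beer'}
    print("support =", support(lhs | rhs, transactions))        # -> 0.6
    print("confidence =", confidence(lhs, rhs, transactions))   # -> 0.75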
 
  Optional Reading: Zheng et al., Real World Performance of Association Rule Algorithms. KDD 2001.
Bayardo et al., Mining the Most Interesting Rules. KDD 1999.
Brin et al., Dynamic Itemset Counting and Implication Rules for Market Basket Data. SIGMOD 1997.
Gunopulos et al., Discovering All Most Specific Sentences. TODS, 2003.
 
11/7 Neural Networks [M] Chapter 4.
See also lecture slides.
Alternative reading: [RN] 18.7, [DHS] 6.1-5.
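A minimal sketch of a one-hidden-layer network trained with backpropagation (sigmoid units, cross-entropy loss, batch gradient descent) on the XOR problem; the network size, learning rate, and number of epochs are illustrative choices:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input -> hidden weights and biases
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output weights and biases
    lr = 0.5

    for _ in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: gradients of the cross-entropy loss at each layer's pre-activation.
        d_out = out - y
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))   # typically close to [0, 1, 1, 0]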
 
  Written Assignment 5: Assignment 5 (due 11/16)
  Empirical/Programming Assignment 3: Project 3 and corresponding Data (due 11/21)
11/9; 11/14 Kernels, Dual Perceptron, Support Vector Machines [F] 7.3.
C.-W. Hsu, C.-C. Chang, and C.-J. Lin, A Practical Guide to Support Vector Classification. Technical report, Department of Computer Science, National Taiwan University, July 2003.
See also lecture slides.
Alternative reading: [CST] pages 9-19 and 26-32; [RN] 18.9.
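A minimal sketch of the dual (kernelized) perceptron: keep a mistake count alpha_i per training example and predict with the sign of sum_i alpha_i * y_i * K(x_i, x). A Gaussian (RBF) kernel makes the XOR-like toy data separable; the data and gamma are illustrative:

    import numpy as np

    def rbf(a, b, gamma=1.0):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def train_dual_perceptron(X, y, epochs=10, gamma=1.0):
        n = len(X)
        alpha = np.zeros(n)
        K = np.array([[rbf(xi, xj, gamma) for xj in X] for xi in X])   # Gram matrix
        for _ in range(epochs):
            for i in range(n):
                if y[i] * np.sum(alpha * y * K[:, i]) <= 0:            # mistake on example i
                    alpha[i] += 1.0
        return alpha

    def predict(alpha, X, y, x, gamma=1.0):
        return np.sign(np.sum(alpha * y * np.array([rbf(xi, x, gamma) for xi in X])))

    X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])             # XOR-like layout
    y = np.array([1, 1, -1, -1])
    alpha = train_dual_perceptron(X, y, gamma=2.0)
    print([predict(alpha, X, y, x, gamma=2.0) for x in X])             # -> [1.0, 1.0, -1.0, -1.0]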
 
11/16 Active Learning Active Learning Literature Survey
See also lecture slides.
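As a concrete instance of the pool-based setting discussed in the survey, a sketch of uncertainty sampling: repeatedly query the label of the pool example the current model is least confident about. It assumes scikit-learn; the dataset, model, seed set, and query budget are illustrative:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    # Seed the labeled set with a few examples from each class.
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    pool = [i for i in range(len(X)) if i not in labeled]

    for _ in range(20):                                   # query budget
        model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[pool])
        # Least confident pool example: smallest maximum class probability.
        query = pool[int(np.argmin(proba.max(axis=1)))]
        labeled.append(query)                             # "ask the oracle" for y[query]
        pool.remove(query)

    print("labeled examples after querying:", len(labeled))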
 
  Optional Reading: The Robot Scientist Adam. IEEE Computer, Volume 42, Issue 8, 2009.
Simon Tong and Daphne Koller, Support Vector Machine Active Learning with Applications to Text Classification. JMLR 2(Nov):45-66, 2001.
 
  Written Assignment 6: Assignment 6 (due 11/30)
  Empirical/Programming Assignment 4: Project 4 and corresponding Data (due 12/7)
11/21 Aggregation Methods [F] Chapter 11.
See also lecture slides.
Alternative reading: [A] Chapter 17
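A minimal sketch of bagging: train each base classifier on a bootstrap sample of the training set and combine them by majority vote. It assumes scikit-learn; the base learner, ensemble size, and dataset are illustrative:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    ensemble = []
    for _ in range(25):
        idx = rng.integers(0, len(X_tr), size=len(X_tr))          # bootstrap sample (with replacement)
        ensemble.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

    votes = np.array([tree.predict(X_te) for tree in ensemble])
    majority = (votes.mean(axis=0) > 0.5).astype(int)             # majority vote over 0/1 predictions
    print("bagged test accuracy:", (majority == y_te).mean())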
 
  Optional Reading: Explaining AdaBoost. In: Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik, Springer, 2013.
Dietterich, T., An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization. Machine Learning, 40(2):139-158, 2000.
Useful information about Random Forests
Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee, Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods. The Annals of Statistics, 26(5):1651-1686, 1998. (Source of the margin graphs in the slides; the introduction is informative and accessible.)
Robert E. Schapire and Yoram Singer, Improved Boosting Algorithms Using Confidence-Rated Predictions. Machine Learning, 37, 1999. (Source of the confidence-rated AdaBoost version in the slides.)
 
11/28; 11/30 Computational learning theory [M] 7.1, 7.2, 7.3, 7.5, and (for perceptron) [DHS] 5.5.2
Topics covered: on-line learning, the Perceptron convergence theorem, weighted majority, PAC learning, Agnostic PAC learning.
Alternative reading: [RN] 18.5 and (for perceptron) [CST] 2.1.1
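A minimal sketch of the (deterministic) weighted majority algorithm for on-line prediction with expert advice: predict with the weighted vote, then halve the weight of every expert that was wrong. The binary outcome sequence and the "experts" are made up for illustration:

    import numpy as np

    def weighted_majority(expert_predictions, outcomes, beta=0.5):
        w = np.ones(expert_predictions.shape[0])   # one weight per expert
        mistakes = 0
        for t, outcome in enumerate(outcomes):
            preds = expert_predictions[:, t]       # each expert predicts 0 or 1
            guess = 1 if w[preds == 1].sum() >= w[preds == 0].sum() else 0
            mistakes += int(guess != outcome)
            w[preds != outcome] *= beta            # penalize every expert that erred
        return mistakes, w

    outcomes = np.array([1, 0, 1, 1, 0, 1, 1, 0])
    experts = np.vstack([outcomes,                 # a perfect expert
                         1 - outcomes,             # an always-wrong expert
                         np.ones_like(outcomes)])  # an expert that always predicts 1
    print(weighted_majority(experts, outcomes))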
 
  Written Assignment 7: Assignment 7 (due 12/7)
12/5; 12/7 Overview of MDPs and Reinforcement Learning [RN] Sections 17.1-3 and Chapter 21, [M] 13.1-3
See also lecture slides.
Topics covered: MDPs; planning in MDPs: policies, policy evaluation (as linear equations), policy improvement, the policy iteration algorithm, the Bellman equation, and the value iteration algorithm; bandit problems and the exploration/exploitation problem; model-free algorithms in MDPs: policy evaluation via Monte Carlo value updates and temporal-difference value updates; model-free planning: the SARSA algorithm and Q-learning.
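A minimal sketch of value iteration on a made-up two-state MDP, repeatedly applying the Bellman optimality update V(s) <- max_a sum_s' P(s'|s,a) [ R(s,a,s') + gamma * V(s') ]; the transition probabilities, rewards, and discount factor are illustrative:

    import numpy as np

    gamma = 0.9
    # P[a, s, s'] = transition probability; R[a, s, s'] = reward for that transition.
    P = np.array([[[0.9, 0.1],     # action 0, from state 0
                   [0.1, 0.9]],    # action 0, from state 1
                  [[0.5, 0.5],     # action 1, from state 0
                   [0.2, 0.8]]])   # action 1, from state 1
    R = np.array([[[1.0, 0.0],
                   [0.0, 2.0]],
                  [[0.0, 5.0],
                   [0.0, 2.0]]])

    V = np.zeros(2)
    for _ in range(1000):
        # Q[a, s]: expected immediate reward plus discounted value of the next state.
        Q = (P * (R + gamma * V[None, None, :])).sum(axis=2)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    print("V* =", V, "greedy policy =", Q.argmax(axis=0))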
 
Monday, 12/12 Exam II Material for the exam includes everything covered during the semester (i.e., it is cumulative). Everything discussed in class and homework assignments is included for the exam.
Lecture slides are not comprehensive and I expect you to read the assigned materials which should be useful in review and study. But I will not hold you responsible for details in the reading that were not discussed in class or assignments.
The Exam is closed book; no notes or books are allowed; no calculators or other machines of any sort are allowed.
The exam will aim to test whether you have grasped the main concepts, problems, ideas, and algorithms that we have covered, including the intuition behind these. Generally speaking, the exam will not test your technical wizardry with overly long equations or calculations, but, on the other hand, it is sure to include some shorter ones.