Exam 2 Study Guide
Exam 2 will cover all material on the schedule since Exam 1. The exam will include 30 questions. You will have 30 minutes to complete the exam. The exam is closed book, closed notes. No calculator is allowed. The topics and readings are as follows:
Topics
- MC2 Lesson 6, Technical analysis
- MC2 Lesson 7, Dealing with data
- MC2 Lesson 8, The Efficient Markets Hypothesis
- MC2 Lesson 9, The fundamental law
- MC2 Lesson 10, Portfolio optimization and the efficient frontier
- MC3 Lesson 5, Reinforcement Learning
- MC3 Lesson 6, Q-Learning (Part 1)
- MC3 Lesson 7, Q-Learning (Part 2) & Dyna
- Options
- Black-Scholes (see the pricing sketch after this list)
- Movie: The Big Short
- ML methods for time series data
- Technical trading
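For the Black-Scholes topic above, here is a minimal Python sketch of the European call pricing formula, assuming the standard notation (S = spot price, K = strike, r = risk-free rate, sigma = annualized volatility, T = years to expiration); the function name and example numbers are illustrative, not from the course materials.
 # Minimal sketch of Black-Scholes pricing for a European call.
 # Standard notation assumed: S = spot, K = strike, r = risk-free rate,
 # sigma = annualized volatility, T = time to expiration in years.
 from math import exp, log, sqrt
 from scipy.stats import norm

 def black_scholes_call(S, K, r, sigma, T):
     d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
     d2 = d1 - sigma * sqrt(T)
     # Call value: S * N(d1) - K * exp(-r*T) * N(d2)
     return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

 # Example: at-the-money call, one year to expiry, 20% vol, 2% rate
 print(black_scholes_call(S=100.0, K=100.0, r=0.02, sigma=0.2, T=1.0))
The corresponding put value can be recovered from put-call parity.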
Readings
- "What Hedge Funds really do", Chapter 12: Overcoming data quirks to design trading strategies
- "What Hedge Funds really do", Chapter 8: The Efficient Market Hypothesis(EMH) - its three versions
- "What Hedge Funds really do", Chapter 9: The fundamental law of active portfolio management
- "Machine Learning", Chapter 13, Reinforcement Learning
Legacy
- Comparison of the performance characteristics of different regression learners: decision trees, random forests, KNN, linear regression
- Comparison of learner types: Regression, Classification, RL
- Overfitting: Definition, how to identify, what might prevent it, what might cause it?
- Bootstrap aggregating.
- Boosting.
- Decision trees: random versus information-based construction, and the advantages of one over the other
- Reinforcement learning: How is it defined? Questions about State, Action, Transitions, Reward
- Q-Learning: the update equation and the definition of Q (see the sketch after this list)
- Dyna-Q
- Things you should know from having done the projects: in-sample versus out-of-sample testing; the Istanbul problem and why shuffling helped
- Options
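For the Q-Learning item above, here is a minimal Python sketch of the tabular update rule, assuming a learning rate alpha and discount factor gamma; the variable names and table dimensions are illustrative, not taken from the course code.
 import numpy as np

 # Minimal sketch of the tabular Q-Learning update.
 # Q is a (num_states x num_actions) table; alpha is the learning rate,
 # gamma is the discount factor. Names and sizes here are illustrative.
 def q_update(Q, s, a, r, s_prime, alpha=0.2, gamma=0.9):
     # Blend the old estimate with the immediate reward plus the
     # discounted value of the best action available in the next state.
     Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * np.max(Q[s_prime, :]))
     return Q

 # Example: 10 states, 4 actions, one update for the experience tuple
 # (s=3, a=1, r=1.0, s'=4)
 Q = np.zeros((10, 4))
 Q = q_update(Q, s=3, a=1, r=1.0, s_prime=4)
Dyna-Q adds a learned model of the transitions and rewards and replays simulated experience tuples through this same update between real interactions.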
Readings:
- "Machine Learning", Chapter 1, Introduction
- "Machine Learning", Chapter 8, Instance-based Learning
- "Machine Learning", Chapter 3, Decision Tree Learning
- "Machine Learning", Chapter 3, Decision Tree Learning
- Paper: "Perfect Random Tree Ensembles" by Adele Cutler
- "Machine Learning", Chapter 13, Reinforcement Learning