ML4T Software Setup


Notice

The repository has been made private as of the Fall 2017 semester, so the repository links below are no longer visible to you. A zip file containing the grading script and any template code or data is linked from each assignment's individual wiki page. A zip file containing the grading and util modules, as well as the data, is available here: Media:ML4T_2018Spring.zip. The instructions below on running the test scripts still apply.

Overview

Most of the projects in this class will be graded automatically. As of the Summer 2017 semester, we provide the grading scripts along with the template code for each project, so that students can test their code and make sure it is API-compatible. Georgia Tech also provides access to four servers that have been configured to be identical to the grading environment, specifically in terms of operating system and library versions. Since these servers already have all of the necessary libraries installed, setup has been greatly simplified.

Important Notes

  • Your code MUST run properly on the Georgia Tech-provided servers, and your code must be submitted to T-square. If you do not test your code on the provided machines, it may not run correctly when we test it. If your code fails to run on the provided servers, you will not get credit for the assignment, so it is very important that you ensure you have access to, and that your code runs correctly on, these machines. If you would like to develop on your personal machine and are comfortable installing libraries by hand, you can follow the instructions here: ML4T_Software_Installation. Note that these instructions are from an earlier version of the class, but they should work reasonably well.
  • We use a specific, static dataset for this course, which is provided in the zip file linked above. If you download your own data from Yahoo (or elsewhere), you will get wrong answers on assignments.
  • We reserve the right to modify the grading script while maintaining API compatibility with what is described on the project pages. This includes modifying or withholding test cases, changing point values to match the given rubric, and changing timeout limits to accommodate grading deadlines. The scripts are provided as a convenience to help students avoid common pitfalls or mistakes, and are intended to be used as a sanity check. Passing all tests does not guarantee full credit on the assignment, and should be considered a necessary but not sufficient condition for completing an assignment.
  • Using github.gatech.edu to back up your work is a very good idea, and we encourage it; however, make sure that you do not make your solutions to the assignments public. It is easy to do this accidentally, so please be careful:
    • Do not put your solutions in a public repository. Repositories on github.com are public by default. The Georgia Tech GitHub, github.gatech.edu, provides the same interface and offers free private repositories for students; a minimal backup workflow is sketched after this list.
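For illustration only, here is a minimal sketch of backing up work to a private repository on github.gatech.edu. The repository name ml4t-work, the username gtname, and the file being committed are placeholders, not an official course repository or required workflow. Create the repository through the github.gatech.edu web interface first, confirm it is marked private, and then, from your working directory (e.g., ML4T_2018Spring/):

git init
git add assess_portfolio/analysis.py                       # add only files you wrote or modified
git commit -m "Back up assess_portfolio work"
git remote add origin git@github.gatech.edu:gtname/ml4t-work.git
git push -u origin master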

Access to machines at Georgia Tech

There are 4 machines that will be accessible to students enrolled in the ML4T class via ssh. These machines may not be available until the second week of class; we will make an announcement once they are ready, and if at that time you are still unable to log in, please contact us. If you are using a Unix-based operating system, such as Ubuntu or Mac OS X, you already have an ssh client, and you can connect to one of the servers by opening up a terminal and typing:

xhost +
ssh -X gtname@buffet0X.cc.gatech.edu

replacing the X in buffet0X with 1-4, as detailed below. You will then be asked for your password and be logged in. Windows users may have to install an ssh client such as PuTTY. In order to distribute workload across the machines, please use the specific machines as follows:

  • buffet01.cc.gatech.edu if your last name begins with A-G
  • buffet02.cc.gatech.edu if your last name begins with H-N
  • buffet03.cc.gatech.edu if your last name begins with O-U
  • buffet04.cc.gatech.edu if your last name begins with V-Z

These machines use your GT login credentials.

The xhost command and the -X argument to ssh are only necessary if you want to interactively draw plots directly to your screen while running code remotely on the buffet machines. If you have any problems doing this, simply forgo xhost and the -X argument and instead plot to a file using the Agg backend of matplotlib and the savefig() function; these require no "screen" access.
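As a minimal sketch of file-based plotting (the data, plot labels, and output filename here are placeholders, not part of any assignment):

import matplotlib
matplotlib.use("Agg")  # select the non-interactive backend before importing pyplot
import matplotlib.pyplot as plt
import pandas as pd

# Placeholder data; in an assignment this would be, e.g., your portfolio values
df = pd.DataFrame({"value": [1.0, 1.01, 0.99, 1.03]})
ax = df.plot(title="Example plot")
ax.set_xlabel("Day")
ax.set_ylabel("Normalized value")
plt.savefig("example_plot.png")  # writes the figure to disk; no display is needed

You can then copy the resulting image file to your local machine (for example with scp) to view it.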

NOTE: We reserve the right to limit login access or terminate processes to avoid resource contention during grading, although we will endeavor to limit such interruptions.

Getting code templates

As of Spring 2018, the code for each individual assignment is provided in a zip file linked from that assignment's project page. The data, the grading module, and util.py, which are common across all assignments, are available here: Media:ML4T_2018Spring.zip (the same file as above).

Running the grading scripts

The zip file above contains the grading scripts, data, and util.py for all assignments. Each project page will also link to a zip file containing a directory with some template code, which you should extract into the same directory that contains the data/ and grading/ directories and util.py (ML4T_2018Spring/ by default). To complete the assignments you will need to modify the templates according to the assignment description. You can do this on the buffet0X machines directly, using a text editor such as gedit, nano, or vim, or you can copy the files to your local machine, edit them in your favorite text editor or IDE, and upload them back to the server; both steps are sketched below. Make sure to test-run your code on the server after making changes to catch any typos or other bugs.
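For example, assuming both zip files have been downloaded to your home directory on the server, that the common zip extracts into ML4T_2018Spring/ as described above, and that the project zip is named assess_portfolio.zip (a placeholder; check the project page for the actual file and directory names), the extraction and a local edit round-trip might look like this:

unzip ML4T_2018Spring.zip                        # creates ML4T_2018Spring/ with data/, grading/, and util.py
unzip assess_portfolio.zip -d ML4T_2018Spring/   # the template directory ends up next to data/ and grading/

scp gtname@buffet01.cc.gatech.edu:ML4T_2018Spring/assess_portfolio/analysis.py .   # run on your local machine
scp analysis.py gtname@buffet01.cc.gatech.edu:ML4T_2018Spring/assess_portfolio/    # upload the edited file back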

To test your code, you'll need to set up your PYTHONPATH to include the grading module and the utility module util.py, which are both one directory up from the project directories. Here's an example of how to run the grading script for the first assignment:

PYTHONPATH=../:. python grade_analysis.py

which assumes you are running the command from the folder ML4T_2018Spring/assess_portfolio/. This will print out a lot of information and will also produce two text files: points.txt and comments.txt. It will probably be helpful to scan through all of the printed output in order to trace errors back to your code, while comments.txt will contain a succinct summary of which test cases failed and the specific errors (without the backtrace). Here's an example of the contents of comments.txt for the first assignment, using the unchanged template:

--- Summary ---
Tests passed: 0 out of 3

--- Details ---
Test #0: failed
Test case description: Wiki example 1
IncorrectOutput: One or more stats were incorrect.
  Inputs:
    start_date: 2010-01-01 00:00:00
    end_date: 2010-12-31 00:00:00
    symbols: ['GOOG', 'AAPL', 'GLD', 'XOM']
    allocs: [0.2, 0.3, 0.4, 0.1]
    start_val: 1000000
  Wrong values:
    cum_ret: 0.25 (expected: 0.255646784534)
    avg_daily_ret: 0.001 (expected: 0.000957366234238)
    sharpe_ratio: 2.1 (expected: 1.51819243641)

Test #1: failed
Test case description: Wiki example 2
...

The comments.txt file contains a summary of which tests passed or failed, along with any error messages. The points.txt file reports the score from the autograder; it is used by the teaching staff to grade submitted code in batch runs and can be safely ignored by students.
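If you run the grading scripts repeatedly, you can also export PYTHONPATH once per shell session instead of prefixing every command. A minimal sketch, assuming a bash-style shell and the same working directory as above:

export PYTHONPATH=../:.
python grade_analysis.py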