MS2: Release
Your task in this milestone is to complete your second and final sprint, and to submit official peer evaluations.
Build and Test
As with the first sprint, you’ll have another assignment during the first week of the sprint. We suggest using that time to assess where things stand and address any deficiencies. Then use the second week to work full-time on the sprint.
Deliverables
Demos, progress reports, submission: These will all work the same as MS1.
Rubric
Total: 50 points.
- 10 points: demo, source code, progress report. Same as MS1.
- 10 points: code quality and documentation. This will be graded much as it has been throughout the semester. You must provide a working `make docs` command that the grader can run to extract HTML documentation. A detailed rubric can be found here.
- 10 points: testing. Your source code must include an OUnit test suite that the grader can run with a `make test` command that you provide. You are also encouraged to use Bisect, but that is not required. At the top of your test file, which should be named `test.ml` or something very similar so that the grader can find it, please write a (potentially lengthy) comment describing your approach to testing: what you tested, anything you omitted from testing, and why you believe that your test suite demonstrates the correctness of your system. A detailed rubric can be found here.
- 20 points: system size. With over 100 projects being built, it’s quite difficult to compare the effort expended by teams or the functionality they achieved. So we will instead use an objective metric as a proxy for effort and achievement: physical lines of code (LOC), meaning non-blank, non-comment lines. It includes any testing code you have written. You can easily measure LOC with a tool called `cloc`: just run `cloc .` in your source directory and look at the count reported for OCaml. But first, run `make clean` so that the code generated in `_build` is not included in the count. We expect there to be at least 300 physical lines of OCaml code per person on your team. (Why 300? The median measurement for past projects submitted in this course, before we instituted this rubric, was around 500 lines of code per person. We picked 300 to signal that we are reducing expectations about scope for Spring 2020.) We will assess scope compassionately. But systems that are undersized may lose points: the penalties will be extremely small at first (so the 300 number is not a hard requirement), but as system size decreases, the penalties will increase. Oversized systems will not receive any bonus points, though they could be excellent entries in your portfolio to show off to potential employers.
LOC is a metric that could be gamed: a team could artificially increase its LOC count by adding some “dead code” that isn’t really needed in the system. Bear in mind that graders will be reading and evaluating your source code for this milestone. If evidence were discovered that a team had unethically inflated its LOC, it would likely result in an Academic Integrity case.
You are welcome to add a file named `LOC.txt` to your submission to explain anything you think we should know about the measurement of your system or how you think it should be interpreted. These files will be read before we make any large deductions.
Peer Evaluations
See this page for how peer evaluations will work.